"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Shedding light on the relationships between protein sequences and functions is a challenging task with many implications in protein evolution, diseases understanding, and protein design. The protein sequence space mapping to specific functions is however hard to comprehend due to its complexity. Generative models help to decipher complex systems thanks to their abilities to learn and recreate data specificity. Applied to proteins, they can capture the sequence patterns associated with functions and point out important relationships between sequence positions. By learning these dependencies between sequences and functions, they can ultimately be used to generate new sequences and navigate through uncharted area of molecular evolution.</ns0:p><ns0:p>Results: This study presents an Adversarial Auto-Encoder (AAE) approached, an unsupervised generative model, to generate new protein sequences. AAEs are tested on three protein families known for their multiple functions the sulfatase, the HUP and the TPP families. Clustering results on the encoded sequences from the latent space computed by AAEs display high level of homogeneity regarding the protein sequence functions. The study also reports and analyzes for the first time two sampling strategies based on latent space interpolation and latent space arithmetic to generate intermediate protein sequences sharing sequential properties of original sequences linked to known functional properties issued from different families and functions. Generated sequences by interpolation between latent space data points demonstrate the ability of the AAE to generalize and produce meaningful biological sequences from an evolutionary uncharted area of the biological sequence space. Finally, 3D structure models computed by comparative modelling using generated sequences and templates of different subfamilies point out to the ability of the latent space arithmetic to successfully transfer protein sequence properties linked to function between different sub-families.</ns0:p><ns0:p>All in all this study confirms the ability of deep learning frameworks to model biological complexity and bring new tools to explore amino acid sequence and functional spaces.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div> <ns0:div><ns0:head>39</ns0:head><ns0:p>Protein diversity, regarding sequence, structure, or function, is the result of a long evolutionary process. its complexity.</ns0:p><ns0:p>Many resources have been developed over the years to group amino acid sequences into families whose members share sequence and structural similarities <ns0:ref type='bibr' target='#b14'>Dawson et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b39'>Pandurangan et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>El-Gebali et al. (2018)</ns0:ref>. Thus, protein families permit to organize the sequence space. The sequence space area between these families is however mostly uncharted <ns0:ref type='bibr' target='#b13'>Das et al. (2015)</ns0:ref> in spite of very remote evolutionary relationships between families <ns0:ref type='bibr' target='#b1'>Alva et al. (2010)</ns0:ref>. Navigating the sequence space with respect to the functional diversity of a family is therefore a difficult task. This difficulty is even increased by the low number of proteins with experimentally confirmed function. 
In this regard, computer models are needed to explore the relationships between sequence space and functional space of the protein families <ns0:ref type='bibr' target='#b20'>Goldstein and Pollock (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Tian et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Copp et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b48'>Salinas and Ranganathan (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b57'>Tubiana et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b40'>Poelwijk et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b47'>Russ et al. (2020)</ns0:ref>. Perfect modeling of the sequence space could have applications in molecular engineering, functional annotation, or evolutionary biology. It may for example be possible to understand completely the relationships between amino acid positions of a family responsible of a molecular function or to navigate the sequence space between families of different functions.</ns0:p><ns0:p>In this study, tools and strategies based on an unsupervised deep learning approach are proposed to model and navigate the current evolutionary uncharted area of the amino acid sequence space.</ns0:p><ns0:p>Previous deep learning generative models such as variational autoencoders (VAE) have been applied on biological and chemical data. They have for example been used to explore and classify gene expression in single-cell transcriptomics data <ns0:ref type='bibr' target='#b32'>Lopez et al. (2018)</ns0:ref>, or to explore the chemical space of small molecules for drug discovery and design <ns0:ref type='bibr' target='#b42'>Rampasek et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>G&#243;mez-Bombarelli et al. (2018)</ns0:ref>. Their ability to reduce input data complexity in a latent space and perform inference on this reduced representation make them highly suitable to model, in an unsupervised manner, complex systems. Regarding protein science, VAE have been able to accurately model amino acid sequence and functional spaces <ns0:ref type='bibr' target='#b53'>Sinai et al. (2017)</ns0:ref>, to predict mutational impact <ns0:ref type='bibr' target='#b26'>Hopf et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Riesselman et al. (2018)</ns0:ref>, to decipher protein evolution and fitness landscape <ns0:ref type='bibr' target='#b16'>Ding et al. (2019)</ns0:ref> or to design new proteins <ns0:ref type='bibr' target='#b22'>Greener et al. (2018)</ns0:ref>. In this study, Adversarial AutoEncoder (AAE) network <ns0:ref type='bibr' target='#b34'>Makhzani et al. (2015)</ns0:ref> is proposed as a new and efficient way to represent and navigate the functional space of a protein family. Unlike VAE, AAE networks constrain the latent space over a prior distribution. Prior distribution allows better inference to explore the whole latent distribution which is particularly useful for travelling through uncharted areaKadurin et al. <ns0:ref type='bibr'>(2017)</ns0:ref>.</ns0:p><ns0:p>Like VAE and other autoencoder architectures, AAEs reduce high dimensional data by projection, using an encoder, into a lower dimensional space. This space is known as a latent space, or embedding representation. The latent space can, in turn, be used by the decoder to reconstruct the initial data. AAE <ns0:ref type='bibr' target='#b34'>Makhzani et al. (2015)</ns0:ref> architecture corresponds to a probabilistic autoencoder but with a constraint on the latent space of the encoder. 
The latent space is constrained to follow a defined prior distribution. This constraint is applied using a generative adversarial network (GAN) <ns0:ref type='bibr' target='#b21'>Goodfellow et al. (2014)</ns0:ref> trained to discriminate between the latent space and the prior distribution. It ensures that meaningful samples can be generated from anywhere in the latent space defined by the prior distribution. Applied to biological sequences of a protein domain family, it is then possible to encode the sequence diversity to any prior distribution. Thus, the model is able to sample and generate new amino acid sequences of the family from any point of the prior distribution. Ideally, the learned latent space should be representative of the functions of the protein domain family and even able to dissociate protein sequences with different sub-functions.</ns0:p><ns0:p>Protein sequences can cluster in the latent space of AAE network. These clusters were analyzed to verify their ability to group sequences according to function as observed with VAE networks. Three protein families including different sub-families were used to train AAE models. The protein functional annotations of these families were used to analyze the clustered sequences. The three different protein families selected were the sulfatases, the HUP (HIGH-signature proteins, UspA, and PP-ATPase) and the TPP (Thiamin diphosphate (ThDP)-binding fold, Pyr/PP domains) families. The sulfatases are a group of proteins acting on sulfated biomolecules. This family have been manually curated into sub-family with specific functions according to substrate specificity <ns0:ref type='bibr' target='#b6'>Barbeyron et al. (2016)</ns0:ref>. They are found in various protein family databases, such as in Pfam (PF00884). The SulfAtlas database <ns0:ref type='bibr' target='#b6'>Barbeyron et al. (2016)</ns0:ref> The two other protein families, HUP and TPP families are not manually curated but were selected as they are known to have multiple functions <ns0:ref type='bibr' target='#b13'>Das et al. (2015)</ns0:ref>. Proteins of the HUP family are a very diverse group with functions linked to particular motifs such as HIGH and KMSKS (nucleotidyl transferases and t-RNA synthetases activities), ATP PyroPhosphatase motif, or sequence motifs responsible of the hydrolysis of the alpha-beta phosphate bond of <ns0:ref type='bibr'>ATP Bork and Koonin (1994)</ns0:ref>; <ns0:ref type='bibr' target='#b61'>Wolf et al. (1999);</ns0:ref><ns0:ref type='bibr' target='#b2'>Aravind et al. (2002)</ns0:ref>. The TPP family is made of very similar protein domains which are probably evolutionary related <ns0:ref type='bibr' target='#b37'>Muller et al. (1993)</ns0:ref>; <ns0:ref type='bibr' target='#b7'>Berthold et al. (2005)</ns0:ref>. They have pyruvate dehydrogenases, decarboxylate, and binding functions <ns0:ref type='bibr' target='#b37'>Muller et al. (1993)</ns0:ref>.</ns0:p><ns0:p>The VAE architecture has previously been used to cluster protein sequences and interpret the resulting clusters regarding their function or evolutionary history <ns0:ref type='bibr' target='#b53'>Sinai et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Hopf et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Riesselman et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b16'>Ding et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Greener et al. (2018)</ns0:ref>. These experiments have not studied the quality of the architecture generative ability for protein sequences. 
Particularly, the performances of the architecture is not known for the tasks of navigating the sequence space and transferring features between clusters. In this study, two experiments were carried out in this direction using latent space interpolation and latent space arithmetic operations. These experiments were designed as new tools and frameworks for the amino acid sequence space exploration.</ns0:p><ns0:p>Data point interpolations between protein sequences of different sulfatase sub-families was used to analyze the latent space coverage of the protein domain family functional space. The interpolated data points correspond therefore to unseen proteins, i.e. evolutionary uncharted area between groups of amino acid sequences. A good model should be able to produce realistic protein sequences from these data points.</ns0:p><ns0:p>This study also explored arithmetic operations with protein sequences encoded in their latent space to generate new protein sequences. Arithmetic operations on latent space have previously been reported to transfer features between images of different classes <ns0:ref type='bibr' target='#b41'>Radford et al. (2015)</ns0:ref>. These operations may therefore have interesting potential for molecular design and for exploration of the amino acid sequence space. Four different strategies were explored to combine latent spaces of different sulfatase sub-families.</ns0:p><ns0:p>The generated proteins from the combined latent spaces were analysed in term of sequences and structures, after being built by comparative modelling.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Protein Families</ns0:head><ns0:p>The sulfatase family. An initial seed protein multiple sequence alignment (MSA) was computed from sequences of the protein structures of SulfAtlas <ns0:ref type='bibr' target='#b6'>Barbeyron et al. (2016)</ns0:ref> database sub-families 1 to 12. This seed was used to search for homologous sequences on the UniRef90 <ns0:ref type='bibr' target='#b55'>Suzek et al. (2014)</ns0:ref> protein sequence database using hmmsearch <ns0:ref type='bibr' target='#b18'>Eddy (2011)</ns0:ref> with reporting and inclusion e-values set at 1e &#8722;3 .</ns0:p><ns0:p>A label was assigned to each retrieved protein if the protein belonged to one of the 12 known sub-families.</ns0:p><ns0:p>The MSA computed with hmmsearch was filtered to remove columns and sequences with more than 90% and 75% gap characters respectively. Proteins with multiple hits on different parts of their sequences were also merged into a single entry. From 105181 initial protein sequences retrieved by hmmsearch, the filtering steps led to a final set of 41901 proteins.</ns0:p><ns0:p>HUP and TPP protein families. A similar protocol was followed for the HUP and TPP protein families.</ns0:p><ns0:p>Instead of using an initial seed alignment made of sequences with known 3D structures, the CATH protein domain HMM <ns0:ref type='bibr' target='#b38'>Orengo et al. (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Sillitoe et al. (2018)</ns0:ref> was used to search for homologous sequences in the UniRef90 database. <ns0:ref type='bibr'>CATH models 3.40.50.620 and 3.40.50</ns0:ref>.970 correspond to the HUP and TPP protein families, respectively. 
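As an illustration of the gap-based filtering described above (columns with more than 90% gap characters removed first, then sequences with more than 75% gaps), a minimal Python sketch is given below. It assumes the alignment is held as a list of equal-length strings and omits the merging of multiple hits per protein, so it is not the original implementation.

```python
import numpy as np

def filter_msa(sequences, max_col_gap=0.90, max_seq_gap=0.75):
    """Drop gap-rich columns, then gap-rich sequences, from an aligned set of equal-length strings."""
    aln = np.array([list(s) for s in sequences])              # shape: (n_sequences, n_columns)
    aln = aln[:, (aln == "-").mean(axis=0) <= max_col_gap]    # keep columns with at most 90% gaps
    aln = aln[(aln == "-").mean(axis=1) <= max_seq_gap]       # keep sequences with at most 75% gaps
    return ["".join(row) for row in aln]
```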
A sequence filtering pipeline identical to the one used for the sulfatase family was applied to each of the resulting MSAs.</ns0:p><ns0:p>The final numbers of proteins in each dataset were: 25041 for the HUP family (32590 proteins before filtering) and 33693 for the TPP family (133701 before filtering).</ns0:p></ns0:div> <ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Deep learning model</ns0:head><ns0:p>Generative Adversarial Network. A complete description of Generative Adversarial Network can be found in <ns0:ref type='bibr' target='#b21'>Goodfellow et al. (2014)</ns0:ref>. To summarize, the GAN framework corresponds to a min-max adversarial game between two neural networks: a generator (G) and a discriminator (D). The discriminator computes the probability that an input x corresponds to a real point in the data space rather than coming from a sampling of the generator. Concurrently, the generator maps samples z from prior p(z) to the data space with the objective to confuse the discriminator. This game between the generator and discriminator can be expressed as:</ns0:p><ns0:formula xml:id='formula_0'>min G max D E x&#8764;p data [log D(x)] + E z&#8764;p(z) [log(1 &#8722; D(G(z))]</ns0:formula><ns0:p>(1)</ns0:p><ns0:p>Adversarial auto-encoder. Adversarial autoencoders (AAEs) were introduced by <ns0:ref type='bibr' target='#b34'>Makhzani et al. (2015)</ns0:ref>. The proposed model was constructed using an encoder, a decoder networks, and a GAN network to match the posterior distribution of the encoded vector with an arbitrary prior distribution. Thus, the decoder of the AAE learns from the full space of the prior distribution. A Gaussian prior distribution was used in this study to compute the aggregated posterior q(z|x) (the encoding distribution). The mean and variance of this distribution was learned by the encoder network:</ns0:p><ns0:formula xml:id='formula_1'>z i &#8764; N(&#181; i (x), &#963; i (x)).</ns0:formula><ns0:p>The re-parameterization trick introduced by Kingma and Welling (2014) was used for back-propagation through the encoder network.</ns0:p><ns0:p>Three different architectures were evaluated. The general architecture was as follows (see Table <ns0:ref type='table'>S1</ns0:ref> and Fig. <ns0:ref type='figure'>S1</ns0:ref> for a representation of architecture number 3). The encoder was made of one or two 1D convolutional layers with 32 filters of size 7 and a stride of length 2, and one or two densely connected layers of 256 or 512 units. The output of the last layer was passed through two stacked densely connected layers of hidden size units to evaluate &#181; and &#963; of the re-parameterization trick <ns0:ref type='bibr' target='#b31'>Kingma and Welling (2014)</ns0:ref>.</ns0:p><ns0:p>The decoder was made of two or three densely connected layers of the length of the sequence family time alphabet units for the last layers and of 256 or 512 units for the first or the two first layers. The final output of the decoder was reshaped to match the input shape. A softmax activation function was applied, corresponding to the amino acid probabilities at each position. To convert the probability matrix of the decoder into a sequence, a random sampling according to the probability output was performed at each position. 
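For illustration, the network just described could be sketched as follows with tf.keras. The framework choice, the exact layer counts and sizes (one convolutional layer with 32 filters of size 7 and stride 2, one dense layer of 256 units), the alignment length, and the latent dimension are assumptions consistent with the ranges given above; this is a sketch, not the published implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, ALPHABET, LATENT_DIM = 538, 21, 100     # 20 amino acids + gap symbol; sizes are illustrative

class Sampling(layers.Layer):
    """Re-parameterization trick: z = mu + sigma * eps with eps ~ N(0, 1)."""
    def call(self, inputs):
        z_mu, z_logvar = inputs
        eps = tf.random.normal(tf.shape(z_mu))
        return z_mu + tf.exp(0.5 * z_logvar) * eps

# Encoder: Conv1D and dense layers, then mu and log-variance of the encoding distribution q(z|x)
x_in = layers.Input(shape=(SEQ_LEN, ALPHABET))
h = layers.Conv1D(32, 7, strides=2, activation="relu")(x_in)
h = layers.Flatten()(h)
h = layers.Dense(256, activation="relu")(h)
z_mu, z_logvar = layers.Dense(LATENT_DIM)(h), layers.Dense(LATENT_DIM)(h)
encoder = Model(x_in, Sampling()([z_mu, z_logvar]), name="encoder")

# Decoder: dense layers ending in a (length x alphabet) softmax, i.e. amino acid probabilities per position
z_in = layers.Input(shape=(LATENT_DIM,))
d = layers.Dense(256, activation="relu")(z_in)
d = layers.Dense(SEQ_LEN * ALPHABET)(d)
x_out = layers.Softmax(axis=-1)(layers.Reshape((SEQ_LEN, ALPHABET))(d))
decoder = Model(z_in, x_out, name="decoder")

# Discriminator: classifies a latent vector as coming from the prior or from the encoder
d_in = layers.Input(shape=(LATENT_DIM,))
p = layers.Dense(256, activation="relu")(d_in)
discriminator = Model(d_in, layers.Dense(1, activation="sigmoid")(p), name="discriminator")

# Converting the decoder output into a sequence: sample one symbol per position from the probabilities
AA = "ACDEFGHIKLMNPQRSTVWY-"

def sample_sequence(prob_matrix, rng=np.random.default_rng()):
    """Draw an amino acid (or gap) at each position according to the decoder probability matrix."""
    return "".join(AA[rng.choice(len(AA), p=row / row.sum())] for row in prob_matrix)
```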
The selected amino acid at a given position was therefore not necessarily the amino acid with the highest probability but reflect the biological distributions. The discriminator network was made of two or three densely connected layers. The last layer had only one unit and corresponds to the discriminator classification decision using a sigmoid activation function.</ns0:p></ns0:div> <ns0:div><ns0:head>Model training</ns0:head><ns0:p>The network was trained for each protein family independently. Amino acids and gap symbol of sequence input data were transformed using one-hot-encoding. A batch size of 32 was used to train the network. The autoencoder was trained using a categorical cross-entropy loss function between the input data and the predicted sequences by the autoencoder. The discriminator was trained using binary cross-entropy loss function between the input data encoded and the samples from the prior distribution.</ns0:p></ns0:div> <ns0:div><ns0:head>Generated sequences and structures analyses</ns0:head><ns0:p>Dimensionality reduction. The AAE model can be used to reduce the dimensionality of the sequence space by setting a small latent size. Two dimensionality reductions were tested with latent size of 2 and 100. Latent size of 2 can be easily visualized and a larger latent size of 100 should represent the input data more efficiently as more information can be stored.</ns0:p><ns0:p>Clustering. <ns0:ref type='bibr'>HDBSCAN Campello et al. (2013)</ns0:ref>; McInnes and Healy (2017) was used to cluster the sequences in the latent space due to its capacity to handle clusters of different sizes and densities and its performances in high dimensional space. The Euclidean distance metric was used to compute distances between points of the latent space. A minimal cluster size of 60 was set to consider a group as a cluster as the number of protein sequences is rather large. The minimal number of samples in a neighborhood to consider a point as a core point was set to 15 to maintain relatively conservative clusters. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the UniProt-GOA mapping <ns0:ref type='bibr' target='#b28'>Huntley et al. (2014)</ns0:ref>. Proteins without annotation were discarded.</ns0:p><ns0:p>The annotation homogeneity was computed for each cluster. Considering a cluster, the number of different EC ids and taxonomic ids were retrieved. The percentage of each EC id (taxonomic id) was computed by cluster. An EC id (taxonomic id) of a cluster with a value of 90% indicates that 90% of the cluster members have this EC id (taxonomic id). A cluster with a high homogeneity value corresponds to functionally or evolutionary related sequences.</ns0:p><ns0:p>Homogeneous clusters will point out the ability of the AAE model to capture and distinguish protein sequences with functionally or evolutionary relevant features without supervision.</ns0:p><ns0:p>Latent space interpolation. Twenty pairs of protein sequences were randomly chosen between all combinations of sulfatases sub-families with at least 100 labeled members but with less than 1000 members (to avoid pronounced imbalance between classes): S1-0 (308 proteins), S1-2 (462 proteins), S1-3 (186 proteins), S1-7 (741 proteins), S1-8 (290 proteins) and S1-11 (669 proteins). The coordinates of the selected sequences in the encoded latent space with 100 dimensions were retrieved. Spherical interpolations using 50 steps were performed between the pairs. 
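A minimal sketch of the spherical interpolation used to generate these intermediate data points is shown below; `z_query` and `z_target` stand in for encoder outputs, and the fall-back to linear interpolation for nearly collinear vectors is an implementation choice, not taken from the original code.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors for t in [0, 1] (White, 2016)."""
    z0, z1 = np.asarray(z0, dtype=float), np.asarray(z1, dtype=float)
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):                        # nearly collinear: plain linear interpolation
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_query, z_target = rng.normal(size=100), rng.normal(size=100)   # placeholders for encoded sequences
path = [slerp(z_query, z_target, t) for t in np.linspace(0.0, 1.0, 50)]   # 50 interpolation steps
# intermediate_sequences = [sample_sequence(p) for p in decoder.predict(np.stack(path))]
```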
Spherical interpolation has previously been reported to provide better interpolation for the generation of images <ns0:ref type='bibr' target='#b60'>White (2016)</ns0:ref>. The interpolated points were given to the decoder to generate new sequences. Statistical analyses were carried out on the sequence transition from one family to an other. A model able to learn a generalized latent space should generate new sequences with smooth transitions between families. Analyses at the amino acid level were also performed on the interpolated sequences of two Sulfatase sub-families encoded far from one-another in the latent space.</ns0:p><ns0:p>Shannon entropy computation. Shannon entropy is computed to measure the degree of variability at each position (column) of the <ns0:ref type='bibr'>MSA Jost (2006)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>H(X) = &#8722; n &#8721; i=1 P i log P i (2)</ns0:formula><ns0:p>with P i the frequency of symbol i and n the number of characters (20 amino acids and a gap symbol). The mean entropy per amino acid is computed for MSAs of biological sequences and generated sequences.</ns0:p><ns0:p>Low entropy indicates that the analyzed sequences have low amino acid variability between each other.</ns0:p><ns0:p>High entropy indicates high amino acid variability.</ns0:p><ns0:p>Latent space arithmetic. It is possible to transfer features between data such as images by using subtraction or addition between projected data into a latent space <ns0:ref type='bibr' target='#b41'>Radford et al. (2015)</ns0:ref>. This latent space property was tested on seven Sulfatase sub-families (S1-0, S1-1, S1-2, S1-3, S1-7, S1-8 and S1-11) selected on the basis of their number of protein sequences. Different arithmetic strategies (Fig. <ns0:ref type='figure' target='#fig_5'>S2</ns0:ref>)</ns0:p><ns0:p>were tested between latent spaces. The sub-family whose features are transferred is named the source sub-family. The sub-family receiving the transferred feature is named the query sub-family.</ns0:p><ns0:p>A first strategy consists in the addition of the mean latent space of the source sub-family to the encoded sequences of the the query sub-family. The second strategy differs from the first one by subtracting from the mean background latent space of all sub-families the latent space of the query sub-family. The third strategy differs from the second by the mean background strategy being computed using all sub-families except the source and query sub-families. Finally, in the fourth strategy, the subtraction is performed using a local KD-tree to only remove features shared by the closest members of a given query and the addition is performed by randomly selecting a member of the source family and its closest 10 members.</ns0:p><ns0:p>For each strategy, new sequences were generated using the latent spaces of all query proteins in the sub-families. The generated sequences by latent space arithmetic are compared to the initial query and source sub-families in terms of sequence and structural properties.</ns0:p><ns0:p>The protein sequence similarities were computed between the generated sequences by latent space arithmetic and the biological sequences of the two initial sub-families using a Blosum 62 substitution matrix.</ns0:p><ns0:p>The sequence similarities were also computed inside a sub-family, between sub-families, and between generated sequences. 
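The similarity score referred to above (sum of BLOSUM62 weights over a pair of aligned sequences) could be computed along the following lines; skipping gapped positions is an assumption, as the text does not state how gaps were scored.

```python
from Bio.Align import substitution_matrices    # Biopython >= 1.75

BLOSUM62 = substitution_matrices.load("BLOSUM62")

def blosum_similarity(seq_a, seq_b):
    """Sum of BLOSUM62 weights over two aligned, equal-length sequences, skipping gapped positions."""
    return sum(BLOSUM62[a, b] for a, b in zip(seq_a, seq_b) if a != "-" and b != "-")
```

The means and variances of these scores, computed within and between groups of sequences, give the distributions analyzed below.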
The distributions of sequence similarities allow to explore the abilities of the latent space arithmetic operations and of the decoder to produce meaningful intermediate protein sequences from data points not corresponding to biological sequences. These data points correspond to an uncharted sequence space.</ns0:p></ns0:div> <ns0:div><ns0:head>5/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Protein structural models were computed using the structures of the initial sub-families as templates for MODELLER <ns0:ref type='bibr' target='#b59'>Webb and Sali (2014)</ns0:ref> and evaluated using the DOPE score <ns0:ref type='bibr' target='#b49'>Shen and Sali (2006)</ns0:ref>. Models were computed using the generated sequences by latent space arithmetic on template structures from their source and query sub-families. The DOPE energies of the modeled structures were compared to structural models computed as references. The first structural model references were computed using the sequences and template structures belonging to the same sub-families, which should provide the best DOPE energies.</ns0:p><ns0:p>The second structural model references were computed using the sequences and template structure be- </ns0:p></ns0:div> <ns0:div><ns0:head>Glossary</ns0:head><ns0:p>The following glossary defines the different terms and techniques used in this manuscript.</ns0:p><ns0:p>Adversarial Auto Encoder. A neural network architecture used for generative tasks. The architecture combine an auto-encoder and a generative adversarial network.</ns0:p><ns0:p>Encoder. Part of the AAE used to project the input data to a latent space.</ns0:p><ns0:p>Latent space. Input data point representation in a lower dimension.</ns0:p><ns0:p>Decoder. Part of the AAE used to reconstruct the input data from the latent space. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Latent space projection AAE can be used as a dimensional reduction and visualization techniques by fixing the dimension of the latent space to two or three for plotting purpose. In this section, AAE network ability to create meaningful projection is tested on Sulfatase, HUP and TPP families by clustering and analysing protein sequences in terms of enzymatic activity and phylogenetic diversity.</ns0:p><ns0:p>Starting from the final MSA of the Sulfatase family, an AAE network was trained to project the sequences in a latent space with two dimensions. A PCA of the MSA was computed for comparison purpose with the AAE projection using the PCA first two principal components.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Projections of the MSA sequences of the SulfAtlas family. Left: projection in the encoding learned using an AAE (number of latent dimensions: 2). Right: projection using a PCA (two first components). Gray data points correspond to protein sequences not part of the curated 12 sub-families. This analysis is also performed for the HUP and TPP families. Results can be found in Fig. <ns0:ref type='figure' target='#fig_7'>S3</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>HUP and TPP families. The AAE projections can be visualized on Fig. <ns0:ref type='figure' target='#fig_7'>S3</ns0:ref>. 
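The cluster homogeneity analysis presented next relies on the HDBSCAN clustering of the 100-dimensional latent vectors described in the Methods. A minimal sketch is given here; `latent`, `ids`, and `ec_of` are illustrative placeholders for the encoded sequences, their identifiers, and the UniProt-GOA EC mapping, not the original data handling.

```python
import numpy as np
import hdbscan
from collections import Counter

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 100))                      # placeholder for encoder outputs
ids = [f"P{i:05d}" for i in range(len(latent))]           # placeholder protein identifiers
ec_of = {pid: "3.1.6.8" for pid in ids}                   # placeholder EC annotation mapping

clusterer = hdbscan.HDBSCAN(min_cluster_size=60, min_samples=15, metric="euclidean")
labels = clusterer.fit_predict(latent)

def homogeneity(members, annotation):
    """Fraction of annotated cluster members sharing the most frequent annotation (EC id or taxon)."""
    counts = Counter(annotation[m] for m in members if m in annotation)
    return counts.most_common(1)[0][1] / sum(counts.values()) if counts else float("nan")

for cluster_id in sorted(set(labels) - {-1}):             # label -1 marks HDBSCAN noise points
    members = [ids[i] for i in np.where(labels == cluster_id)[0]]
    print(cluster_id, f"EC homogeneity: {homogeneity(members, ec_of):.0%}")
```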
There are fewer functional annotations for these two families than for the sulfatase family. A strong separation can however clearly be observed between the major functions of the two families .</ns0:p><ns0:p>Latent spaces were evaluated for each protein family based on enzyme classification (EC) and taxonomic homogeneity. Given a set of protein sequences, the encoded sequences in a latent space of a 100 dimensions were clustered using HDBSCAN.</ns0:p><ns0:p>For the sulfatase family, 27 clusters were found, for which taxonomic and EC annotations could be extracted (Fig. <ns0:ref type='figure' target='#fig_9'>S4</ns0:ref> and Table <ns0:ref type='table'>S3</ns0:ref>). All these clusters displayed either strong taxonomic or EC homogeneity.</ns0:p><ns0:p>Enzymatic homogeneity was higher than taxonomic homogeneity for 16 clusters, found equal in one cluster and lower for 10 clusters.</ns0:p><ns0:p>In the HUP family, all clusters had very high EC homogeneity (Table <ns0:ref type='table'>S4</ns0:ref>). Only two clusters out of 47 could be found with higher taxonomic homogeneity than EC homogeneity. For these two clusters enzymatic homogeneity values were high and only marginally different (cluster 5, taxonomic homogeneity of 100% an EC homogeneity of 99% and cluster 31, taxonomic homogeneity of 99 % and EC homogeneity of 97%). Five clusters were found with equal taxonomic and EC homogeneity.</ns0:p><ns0:p>In the TPP family all clusters had also very high EC homogeneity (Table <ns0:ref type='table'>S5</ns0:ref>). Five clusters out of 51 could be found with higher taxonomic homogeneity than EC homogeneity. For these 5 clusters the differences between taxonomic homogeneity and EC homogeneity were higher than the differences observed for the HUP clusters. Six clusters were found with equal taxonomic and EC homogeneity.</ns0:p><ns0:p>The differences between the AAE and the PCA projections together with the general cluster enzymatic homogeneity highlight the ability of the encoding space to capture amino acid functional properties.</ns0:p></ns0:div> <ns0:div><ns0:head>Protein latent space interpolation</ns0:head><ns0:p>Interpolation between encoded sequences can be used to 'navigate' between proteins of two sub-families.</ns0:p><ns0:p>After the selection of a query sub-family, the sequences of this sub-family are projected to the latent space and used as the starting points of the interpolation. The end points of the interpolation correspond to sequences of a target sub-family, different from the query sub-family, and projected into the latent space. Twenty pairs of protein sequences were randomly selected between all combinations of protein sub-families to test the capacity of the encoded space and 50 intermediates, i.e. interpolated, data points were generated between each query / target pair. The sequence similarities were computed between the generated protein sequences from the interpolated latent space and the biological query and target protein sequences of the sub-families. It is thus possible to measure the amino acid sequence drift from one protein to another one.</ns0:p><ns0:p>The observed amino acid transitions from the query sub-family to the target sub-family are very smooth for all combinations of sub-families. The sequence similarity distributions display a logistic function shape as shown in Fig. <ns0:ref type='figure' target='#fig_5'>2-A</ns0:ref>. 
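The entropy comparison discussed below uses the per-column Shannon entropy of Equation (2); a minimal sketch for an alignment held as equal-length strings is given here (natural logarithm; the base only rescales the values).

```python
import numpy as np

def column_entropy(column):
    """Shannon entropy of one alignment column (20 amino acids + gap symbol)."""
    _, counts = np.unique(list(column), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def mean_entropy(msa):
    """Mean per-column entropy of a set of aligned, equal-length sequences."""
    return float(np.mean([column_entropy(col) for col in zip(*msa)]))

print(mean_entropy(["ACD-", "ACE-", "GCD-"]))             # toy example
```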
The smooth transition between points demonstrates the ability of AAE network to encode the sequences into a smooth latent space and thus to correctly 'fill' the gap between projected protein sequence sub-families.</ns0:p><ns0:p>The Shannon entropy was computed for each group of sequences: interpolated sequences between query and target sub-families, sequences of the query sub-families, and sequences of the target subfamilies. Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>-B shows the Shannon entropy distribution for the S1-0 and S1-11 sequences and their interpolated sequences. Interestingly, the figure shows lower entropy for interpolated sequences than for original sequences. Lower entropy indicates fewer amino acid variation at each position of the interpolated sequences than in biological sequences. Fewer amino acid variation at each position for the interpolated sequences could corresponds to restricted paths to travel between sub-families. This trend is true for all interpolated sequences between all sub-families as reported in the Table <ns0:ref type='table'>S6</ns0:ref>. This is in agreement with molecular evolution theory and experiments that describe protein families as basins in fitness landscape Bornberg-Bauer and Chan (1999); <ns0:ref type='bibr' target='#b50'>Sikosek et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Boucher et al. (2014)</ns0:ref>.</ns0:p><ns0:p>A closer inspection of interpolations between sub-families S1-0 and S1-4 (respectively blue and red data points in Fig. <ns0:ref type='figure'>1</ns0:ref>) was also performed to study changes at the amino acid level. The two sub-families are in 'opposite' spaces in the two-dimensional projection. It can be observed in Fig. <ns0:ref type='figure'>S5</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Protein latent space arithmetic</ns0:head><ns0:p>Latent space arithmetic is able to transfer learned features between different classes <ns0:ref type='bibr' target='#b41'>Radford et al. (2015)</ns0:ref>.</ns0:p><ns0:p>If applied to protein sequence latent space, this technique could permit to transfer features such as enzymatic activity or part of structure between protein families. To test this technique different arithmetic strategies (see Methods and Fig. <ns0:ref type='figure' target='#fig_5'>S2</ns0:ref>) were tested between latent spaces of two Sulfatase sub-families.After performing the arithmetic operation between latent space coordinates, the protein sequences corresponding to the new coordinates were generated by the decoder. Protein structures of the generated sequences were computed using homology modeling The structure templates correspond to both sub-families. The protein structures of the generated sequences were compared to computed models using sequences and structures of the same sub-families and using sequences from one sub-family and structure templates from the other one. The Sulfatases sub-families S1-0, S1-2, S1-3, S1-7, S1-8 and S1-11 were chosen to test this technique.</ns0:p><ns0:p>In the following section, the terminology _ Seq. S1-XmY will correspond to a generated sequence using a combination of the mean latent space of the sub-family S1-Y added to the latent space of the sub-family S1-X. The X and Y sub-families will be referred to as the query and source sub-families.</ns0:p><ns0:p>First, two Prosite motifs of the Sulfatase family are analyzed from generated and original sequences. 
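A minimal sketch of the first arithmetic strategy behind the 'S1-XmY' notation is given below (the mean latent vector of the source sub-family S1-Y added to each encoded sequence of the query sub-family S1-X). The random arrays stand in for real encoded sub-families, and `decoder` and `sample_sequence` refer to the illustrative sketches above; this is not the original implementation.

```python
import numpy as np

def add_source_mean(query_latents, source_latents):
    """Strategy 1: shift each encoded query sequence by the mean latent vector of the source sub-family."""
    return query_latents + source_latents.mean(axis=0)

# Strategies 2 and 3 additionally subtract a mean 'background' latent vector, computed over all
# sub-families (strategy 2) or over all sub-families except the query and source (strategy 3).

rng = np.random.default_rng(0)
latents_S1_0 = rng.normal(size=(308, 100))                # placeholder for encoded S1-0 sequences
latents_S1_2 = rng.normal(size=(462, 100))                # placeholder for encoded S1-2 sequences
S1_0m2 = add_source_mean(latents_S1_0, latents_S1_2)      # 'S1-0m2' latent points
# generated = [sample_sequence(p) for p in decoder.predict(S1_0m2)]
```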
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>signature patterns for all the sulfatases in the Prosite database.</ns0:p><ns0:p>Different amino acid patterns can be observed between the sequence groups that can be classified as 'competition', 'taking over', or 'balanced' pattern. A competition pattern of amino acids corresponds to equivalent frequency of two different amino acids in the generated sequences. A taking over pattern corresponds to an amino acid of one of the original sequences being the most frequent in the generated sequences. A balanced pattern corresponds to a maintained equilibrium between amino acids in the generated sequences. Some other positions are displaying much more complex patterns and cannot be summarized as a frequency competition between source and query sub-families. These behaviors can be observed several times through the logo plots but are still position-specific, meaning that the bits scores pattern observed in the source sub-families (Panels A and D of Fig. <ns0:ref type='figure'>S6</ns0:ref>) do not necessary allow to predict the amino acids bits scores in the generated sequences (Panels B and C of Fig. <ns0:ref type='figure'>S6</ns0:ref>).</ns0:p><ns0:p>Protein sequence similarities were computed to evaluate the diversity of the generated sequences and compare their diversity with the original sub-families. Protein sequence similarities were computed between : the generated sequences, the sequences of a sulfatase sub-family used to generate protein sequences, the generated sequences and their query sulfatase sub-family, the generated sequences and their source sulfatase sub-family, the query and source sequences of sulfatase sub-families. Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> shows the mean and variance distribution of computed protein sequence similarities between these different groups for generated sequences computed using the first strategy. The first, second, and third strategies display a similar pattern and their corresponding figures are available in the Supplementary Information (Fig. <ns0:ref type='figure'>S10</ns0:ref>, <ns0:ref type='bibr'>S12 and S14)</ns0:ref>.</ns0:p><ns0:p>Protein sequence similarities between different sub-families (red upper triangles) have lower similarity scores and lower variances than the other distributions. Protein sequence similarities between sequences of a sub-family (blue circles) have the highest mean and variance values observed. However, since only 6 sub-families were kept for analysis (sub-families 0, 2, 3, 7, 8, and 11), trends must therefore be taken with precaution. Generated protein sequences compared to themselves (magenta lower triangles) have mean and variance protein sequence similarities higher than when compared to their query or sub-families. The last two (generated sequences compared to query sequences, orange squares and generated sequences compared to target sequences, green crosses) have mean and variance values spread between the blue and red distributions.</ns0:p><ns0:p>These distributions indicate that generated protein sequences by latent space arithmetic have an intrinsic diversity similar to the biological sub-families. Moreover, the generated sequences are less similar to the sequences from their query and source sub-families than to themselves. The generated sequences are also globally as similar to the sequences of their query sub-family as to the sequence of their source sub-family. 
The generation process is therefore able to capture the features of the selected query and source sub-families and generate a protein sequence diversity similar to the original sub-families.</ns0:p><ns0:p>Finally, protein structure modeling was performed to assess and compare the properties of the generated sequences by latent space arithmetic and the protein sequences of the natural sub-families.</ns0:p><ns0:p>For each sub-family, 100 original sequences were randomly selected along the corresponding generated sequences. All the generated sequences were aligned to protein structures of their corresponding source and query sub-families, and the alignments were used to create 3D structures models by comparative modeling. The quality of models was then evaluated with the DOPE function of MODELLER.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>S7</ns0:ref> shows an example of the energy distribution computed from models using the second strategy with query sub-family S1-0 and source sub-family S1-2. The lowest energies (best models)</ns0:p><ns0:p>were found on modelled structures using the original protein sequences of a sub-family to the structural templates of the same sub-family (Struct. 0 Seq. 0 and Struct. 2 Seq. 2). Conversely, the highest energies are found on modelled structures using the original protein sequences of a sub-family to the structural templates of another sub-family (Struct. 0 Seq. 2 and Struct. 2 Seq. 0). Interestingly, generated sequences using additions and subtractions of latent spaces have intermediate energy distributions. This can be clearly observed in Fig. <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>, where generated sequences are mostly situated between the two dotted lines.</ns0:p><ns0:p>Dots on the right side of the vertical line at 0 correspond to modeled structures using sequences of the latent space with lower energy than the modeled structures using sequences from their original sub-family.</ns0:p><ns0:p>Dots on the left side of the vertical line at 0 are modeled structures using sequences of the latent space with higher energy than the modeled structures using sequences from their original sub-family. The Manuscript to be reviewed</ns0:p><ns0:p>Computer Science diagonal line on the top-left corner corresponds to the difference in energy between modeled structures using sequences from their original sub-family and modeled structures using sequences from biological sequences of another sub-family. The energy of generated sequences modeled using their query sub-family templates (ex: Struct. 0 Seq.S1-0m2 and Struct. 2 Seq. S1-2m0 on Fig. <ns0:ref type='figure'>S7</ns0:ref> and M QS /Q on Fig. <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>) is slightly lower than the energy of models using their source sub-family templates (ex: Struct. 0 Seq. S1-2m0 and Struct. 2 Seq. S1-0m2 on Fig. <ns0:ref type='figure'>S7</ns0:ref> and M SQ /Q on Fig. <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>). This trend is true for all query / source pairs of sub-families and all strategies except for generated sequences using the fourth strategy (local background subtraction of query latent space using a KD-tree and the addition of source latent space), see Fig. 
<ns0:ref type='figure'>S8</ns0:ref>, S9, S11, S13 and Methods.</ns0:p><ns0:p>In this strategy, the modeled structures using generated sequences do not display energy distributions in-between the energy distributions of the original sequences modeled on structures of the query or of the <ns0:ref type='bibr'>(2021)</ns0:ref>. The reported techniques in this study can be applied to any latent space projection and it would be interesting to combine them with representation of the protein sequence universe to navigate and perform feature transfer between protein families. These techniques could perhaps lead to the rediscovery of evolutionary sequence paths leading to the current protein families <ns0:ref type='bibr' target='#b1'>Alva et al. (2010)</ns0:ref>, improving our understanding of the protein sequence universe <ns0:ref type='bibr' target='#b17'>Dryden et al. (2008)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This study shows that AAE models are able to finely capture the protein functional space of three different The results of this study show that AAE, in particular, and deep learning generative models in general, can provide original and promising avenues for protein design and functional exploration.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021) Manuscript to be reviewed Computer Science collection of curated sulfatases centered on the classification of their substrate specificity. The majority of Sulfatases (30,726 over 35,090 Version 1.1 September 2017) is found in family S1 and is sub-divided into 73 sub-families corresponding to different substrate specificities. Sub-families S1-0 to S1-12 possess proteins with experimentally characterized EC identifiers.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Functional</ns0:head><ns0:label /><ns0:figDesc>and taxonomic analyses. Enzyme functional annotation (EC ids) and NCBI taxonomic identifiers were extracted when available from the Gene Ontology Annotation portal (January 2019) using 4/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>longing to different sub-families (ex: sequences from source sub-family and template structures from the query sub-family or inversely, sequences from query sub-family and template structures from the source sub-family), which should provide the worst DOPE energy. If the generated sequences by latent space arithmetic correspond to intermediate proteins with properties from two sub-families, they should have intermediate DOPE energies when compared to the others evaluated models.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Query sub-family. Starting sub-family in interpolation experiments. Sub-family whose individual sequence latent spaces have been used in combination with the mean latent space of sequences from a source sub-family in latent space arithmetic strategies.Source sub-family. Sub-family whose mean latent space has been used in combination with individual sequence latent space of a query sub-family in latent space arithmetic strategies.Target sub-family. sub-family used as end point in interpolation experiments.RESULTSA structurally constrained MSA was computed using Expresso from T-Coffee webserver<ns0:ref type='bibr' target='#b3'>Armougom et al. 
(2006)</ns0:ref>; Di<ns0:ref type='bibr' target='#b15'>Tommaso et al. (2011)</ns0:ref> between sequences of S1 sulfatases structures. This MSA was processed into a Hidden Markov Model and hmmsearch was used to retrieve aligned sequence matches against the UniRef90 sequence database. A total of 76,427 protein sequence hits were found to match the sulfatase HMM in UniRef90. The sequences were filtered to remove columns and hits with more than 90% and 75%, respectively, of gap characters. The final MSA comprised 41,901 sequences. The sulfatases protein dataset was separated into a training, a validation, and a test sets with a split ratio of: 0.8, 0.1, and 0.1.The three different AAE architectures (see Method section) were trained on the training set and evaluated on the validation set. The test set was only used on the final selected architecture. Models were evaluated by computing top k-accuracy, corresponding to the generation of the correct amino acid in the first k amino acids. TableS2shows the top k accuracy metric for k=1 and k=3 computed for the different AAEs. The accuracy scores scaled down with the number of parameters, but without any large difference.The architecture with the fewest number of parameters (architecture 3) was therefore selected to avoid over-fitting the data. The final accuracy scores on the test set were computed and were similar to the values observed during the model training: 62.5% and 80.2% (k=1 and k=3). The selected architecture was separately trained using the protein sequences of the HUP and TPP families with identical train, validation, and test set splits.6/16PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure1shows the protein sequences encoded by the AAE and the PCA projection. Each dot corresponds to a protein sequence. The dots are colored according to their sub-family. Gray dots correspond to protein sequences not belonging to any of the 12 curated sulfatases sub-families. The AAE displays in this figure a better disentanglement of the S1 family sequence and functional spaces than the PCA. Well-separated gray spikes can also be observed in the AAE projection.These spikes may correspond to groups of enzymes sharing common substrate specificity but not yet experimentally characterized.In some cases, sub-families with identical functions are projected closely on the encoded space. For instance, sub-families S1-6 (light magenta) and S1-11 (yellow) have both the EC 3.1.6.14 activity (Nacetylglucosamine-6-sulfatase) and are closely located in the encoded space. Moreover, some sub-family projections appear entangled such as the S1-1 sub-family (light blue, Cerebroside sulfatase activity, EC 3.1.6.8), the S1-2 (orange) and the S1-3 (green) sub-families (Steryl-sulfatase activity, EC 3.1.6.2), the S1-5 (pink) sub-family (N-acetylgalactosamine-6-sulfatase activity, EC 3.1.6.4), and the S1-10 (gray) sub-family (Glucosinolates sulfatase activity EC 3.1.6.-). The five families correspond to four different functions but are made of Eukaryotic protein sequences only and their entanglement may be due to their shared common evolutionary history. This separation based on the sequence kingdoms can clearly be visualized in the PCA projections with Eukaryotic sequences on the right side on sub-families with a majority of Bacteria sequences on the left side. The PCA projections failled to finely separate protein sub-families based on their functions. 
The example of protein B6QLZ0 PENMQ is also interesting. The protein is projected (yellow dot corresponding to the S1-11 sub-family) at coordinates (0.733, -1.289), inside the space of the S1-4 family (red). This may look like an error but a closer inspection shows that this protein is part of both the S1-4 and S1-11 sub-families of the SulfAtlas database.Projections of sequences into latent spaces using AAE with two dimensions were also tested on the</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. Interpolation analyses between sub-families S1-0 and S1-11. A) Sequence similarity distributions (sum of blosum weights, higher the score higher the similarity) between interpolated sequences and query proteins (blue) or target proteins (orange). B) Distribution of amino acid Shannon entropy for interpolated sequences (orange, R 2 = 0.64) between sub-families S1-0 (blue, R 2 = 0.72) and S1-11 (green, R 2 = 0.76) over the amino acid mean Shannon entropy of query and target sub-families.</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.58,153.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure S6 displays logo plots of two regions corresponding to Prosite motifs PS00523 and PS00149 to illustrate the amino acid content of the generated protein sequences by latent space arithmetic. These regions correspond to the most conserved regions of the sulfatase family and have been proposed as</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distributions of protein sequence similarities. Blue dots: protein sequence similarity computed between sequences of the same protein sub-family. Orange squares: similarity computed between generated sequences and the sequences of their query sub-family (ex: S1-0m2 generated sequences and S1-0 sub-family sequences). Green x: similarity computed between generated sequences and the sequences of their target sub-family (ex: S1-0m2 generated sequences and S1-2 sub-family sequences). Red upper triangles: similarity computed between sequences of two different sub-families (ex: S1-0 sequences and S1-2 sequences). Magenta lower triangles: similarity computed between sequences of the same generated sequence group. The variance and the mean of each distribution are displayed on the horizontal and vertical axes.</ns0:figDesc><ns0:graphic coords='13,183.09,63.78,330.85,330.85' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>source sub-families (dotted lines). The energy distribution of generated sequences modeled on structures belonging to the sub-family of their query latent space sub-family (ex: Struct. 0 Seq.S1-0m2, blue dots M QS /Q) with the fourth strategy is closer to the energy distribution of the modeled structures using a sequence and a structure template from the same sub-families. The energy distribution of generated sequences modeled on structures corresponding to the sub-family of their source latent space (ex: Struct.11/16PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021)Manuscript to be reviewed Computer Science 2 Seq.S1-0m2, orange dots M SQ /Q) with the fourth strategy is closer to the energy distribution of the modeled structures using a sequence and a structure template from different sub-families. 
This indicates that the fourth strategy is less robust to latent space arithmetic than the other three strategies. No clear differences could be observed between the first, second, and third strategy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Difference between mean DOPE distributions. Mean value for each distribution, such as the distributions presented in Fig. S7, were computed. The y axis represents the difference between the mean values computed for query sequences modeled on structures of the same sub-family and mean values computed for source sequences modeled on structures of the query sub-family (ex: differences between mean of Struct. 0 Seq. 0 and mean of Struct. 0 Seq. 2 distributions in Fig.S7). The x axis corresponds to the difference between the mean values computed for query sequences modeled on structures of the same sub-family and mean values computed for query sequences to which latent spaces of the source sub-family sequences have been added and modeled on structures of the query sub-family (M QS /Q), or source sequences to which latent spaces of the query sub-family sequences have been added and modeled on structures of the source sub-family (M SQ /Q) (ex: differences between mean of Struct. 0 Seq. S1-0m2 and mean of Struct. 0 Seq. 0 distributions in Fig.S7).Points in the red area correspond to mean distribution values from generated sequences whose modeled structures have a higher energy than models created using pairs of sequences/structures from different sub-families. Points in the blue area correspond to mean distribution values from generated sequences whose modeled structures have a lower energy than models created using pairs of sequences/structures from the same sub-family.</ns0:figDesc><ns0:graphic coords='14,183.09,123.25,330.87,171.43' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>protein families with known sub-functions. The presented experiments carried out on the sulfatase family provide new insight on the effectiveness of generative model and protein sequence embedding to study and model protein function and evolution. The proposed methods are robust to artifacts and generate consistent sequences and structures.</ns0:figDesc></ns0:figure> <ns0:note place='foot' n='16'>/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59415:1:1:NEW 23 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Objet: Manuscript resubmission To: Editorial Board of Peer J. Computer Science July 6th, 2021 Dear Members of the Editorial Board, I thank the reviewers and the members of the editorial board for considering the manuscript for publication with minor revisions. All the comments of the reviewers have been answered. Detailed replies addressed to the reviewers can be found bellow. The text modifications corresponding to the comments are colored in orange in the revised manuscript. Moreover, an extensive edition of the manuscript has been performed to correct the English grammar and syntax of the manuscript. These modifications are colored in blue in the revised manuscript. I think the comments and the manuscript editing greatly improved the clarity of the new manuscript for the readership. I hope this new revision will met the standards of Peer J. Computer Science for publication. Thank you again for your time and consideration. Yours sincerely, Tristan Bitard-Feildel Reply to reviewers Editor I thank the reviewers for their time and careful reading of the manuscript. I think their comments greatly helped me to improve the quality of this manuscript. You will find below a point by point reply to your remarks/concerns/questions. I took all of them into consideration. The red colored text in the manuscript correspond to modifications taking into account your comments. Reviewer 1 Basic reporting A proofread for language clarity and minor errors is advisable (especially in the abstract) Intro and background cover the majority of necessary topics The structure is clear and figures are close to ready with minor tweaks Raw data and code all seem to be openly available. I'd like the naming system for the data files to be made clearer in a metadata file or in the supplementary PDF though. => Thank you for addressing these issues. The manuscript has been extensively corrected to improve its clarity (corrections in blue). Raw data and code naming system has been improved in the github. Experimental design I very much like the application of AAEs to this topic. The exploratory and descriptive aspects of the work are pretty well defined. However the conditions for success in the design aspects are less clearly articulated (see below). The inclusion of the necessary code greatly aids replicability. => I tried to make the data and code self-contained. I am glad it helps replicability. Interrogations concerning the conditions for success in the design are addressed below. Validity of the findings The author will need to be more cautious in their ever-interpretation of several results. Specifically, the protein designs were not experimentally validates, so there is no guarantee that they have the predicted function. Even seemingly minor differences to native sequences can have large effects on activities. In the absence of a collaboration to biochemically characterise the designs (likely beyond the scope of this work), care should be taken to be clear that the method has generated potentially designs (suggested to be promising by several in silico test). It is noted in L485-488, but needs to be more consistently framed throughout e.g. clarify 'successfully transfer' versus 'predicted to successfully transfer' or 'likely to transfer'. => I share the reviewer opinion regarding the interpretation of the results. I rewrote several sections to clarify the results. 
Experimental validations are indeed necessary and I am currently discussing with experimental biologists working on sulfatases to test and improve the current method. Comments for the author Abstract & intro L14 - 'Protein sequence / function space' be cautious to be clear whether you're referring to sequence space, function space, or the space of mappings from sequence-to-function (which can be significantly different in both concept and structure). => changed to « The protein sequence space mappings to specific functions » L16 - The paper doesn't really end up differentiating between 'relationships between protein positions and functions' versus 'sequence patterns associated with functions'. Might be better to combine these into a single concept for the abstract. => This sentence was also pointed out by Reviewer 2 and was edited as follow: “Applied to proteins, they can capture the sequence patterns associated with functions and point out important relationships between sequence positions. By learning these dependencies between sequences and functions, they can ultimately be used to generate new sequences and navigate through uncharted area of molecular evolution.” The first part of the sentence refers to consecutive pattern of amino acids and the second part to specific amino acid positions. L21 - State the three model families here! => The following change has been done: “AAEs are tested on three protein families known for their multiple functions: the sulfatase, the HUP and the TPP families.” L23-25 - 'The study also reports and analyzes for the first time two sampling strategies based on latent space interpolation and latent space arithmetic to generate intermediate protein sequences sharing sequential and functional properties of original sequences issued from different families and functions.' Be very careful with this statement. Yes, the strategies generate intermediate protein sequences. No, they don't necessarily share functional properties (would need to be experimentally verified via recombinantly expressed protein)! Handled better at L351's 'correspond to plausible sequences'. => Thanks for pointing out my over enthusiastic mistake. The sentence has been changed to: “[…] to generate intermediate protein sequences sharing sequential properties of original sequences linked to known functional properties functional properties issued from different families and functions.” L30 - Again 'successfully transfer functional properties between sub-families'. Not experimentally confirmed. These protein designs could turn out to be non-functional (e.g. see success rates for Hilvert's or Baker's designs e.g 'Robust design and optimization of retroaldol enzymes' doi.org/10.1002/pro.2059) => Changed to “[…] to successfully transfer protein sequence properties linked to function between different sub-families” Methods L148 - For the initial deep learning model, were amino acids each treated as different characters, or biochemical similarities taken into account? Note: there are multiple ways to encode protein sequence information for (e.g. 'Evolution of Sequence-Diverse Disordered Regions in a Protein Family: Order within the Chaos' doi.org/10.1093/molbev/msaa096, 'A method to predict functional residues in proteins' doi.org/10.1038/nsb0295-171, 'Principal components analysis of protein sequence clusters' doi.org/10.1007/s10969-014-9173-2). 
=> A « Model Training » paragraph was added with the following sentence: « Amino acids and gap symbol of sequence input data were transformed using one-hot-encoding. A batch size of 32 was used to train the network.  L176 - What was the dimensionality before dimensionality reduction? => The dimensionality depends of the family and was for a single entry a matrix of dimensionality: [sequence length, 21] with 21 the number of amino acids + 1 (gap symbol). For the sulfatase the full dimensionality during training with a batch size of 32 was [32, 538, 21]. Results L252 - It would be worth noting why gappy columns were removed in this instance. I.e. In what way do they otherwise skew results? => Columns with many gaps have low information content. It is still possible to include these columns but the network will have difficulty to generalize due to the limited sample size at these positions. L252 - How were remaining gaps handled? Were they simply treated as another character as with other amino acids, or encoded differently? => They were treated as another character and encoded using one-hot-encoding. The information has been added to the training model paragraph in the Method section. L272 - some of the colours of Fig 1 are extremely similar, making the in-image legend hard to follow. I recommend in at least panel A, overlaying the labels 1 to 11 over / next to their respective clusters => The clusters have been annotated with their respective sub-family. L272 - It's worth noting Fig S3 equivalent for HUP and TPP in the legend of Fig 1A. => The legend has been modified accordingly: “This analysis is also performed for the HUP and TPP families. Results can be found in Fig. S3” L338 - some text on Fig 2 is unreadably small. I recommend enlarging the text on the right of panel B, or possibly making the panels vertically arranged so that each is full page width => The text has been horizontally arranged with a augmented font-size for readability. L386 - '(Panels A and D)', I assume of Fig S6? => Indeed, thank you. The text has been corrected. Discussion & conclusion L482 - The comparison to image generation (or other AAN applications) might be interesting to note in more detail. How do these tasks compare in practice (e.g. accuracy, separation of clusters)? => It is hard to compare both. In recent studies of image synthesis (generation), models are usually compared using Frechet Inception Distance (FID), Negative Log Likelihood (NLL) or NELBO (Negative Evidence Lower Bound) corresponding to the upper bound of a NLL distribution. However, it would be extremely interesting to test new generative models such as LSGM (Latent Score-based Generative Model) for sequences. Score-based generative models have shown very promising results. A sentence in this discussion has been added on this topic: “New models from image synthesis could also provide interesting approaches for the generation of protein sequences Vahdat et al. 2021”. L469-487 - How do the observations here compare to VAEs or similar methods? E.g. is the smooth transition during interpolation unique to this method or a common feature ? => To my knowledge transition between generated sequences of different sub-families have not been studied thoroughly with VAE. L508 - 'Promising avenues' rather than 'solutions' probably more accurate (assuming solutions in the sense of solution to a problem, rather than solution to an equation) => Thank you for pointing this mistake. 
Minor language notes A proofread for clarity and minor errors is advisable (especially in the abstract) L16 - Missing comma between 'functions, capture' L17 - Probably best to include an oxford comma between 'functions, or' to be safe L17 - probably plural areas Some sentences also get quite long. I recommend a quick read of the Structure of Prose and Stress Position sections of www.americanscientist.org/blog/the-long-view/the-science-of-scientific-writing. => Thank you for reference. Sections of the manuscript have been rewritten in blue to improve the manuscript readability. Reviewer 2 Basic reporting 1. The most important issue is, that the difference between source, query and target families is not clear. In the methods section, source and query sub-families are nicely explained (l 213 - 214). However target sub-families are first mentioned in the results section (l 323). It seems like the source - query pair is used for latent space arithmetic and query - target pair for interpolation, but this is never clarified. For the reader to understand it better either use the same word pair throughout or clearly explain the different word pairs and why they should be distinguished. => Thank you for pointing out this mistake. The method section has been updated accordingly. Indeed, as the reviewer pointed out, target sub-families are used for the interpolation as the sequence interpolation is computed from a query sequence to a target sequence represented as points in the latent space. The query / source sub-families in contrary correspond to a way of mixing sequences of different sub-families as reported in l213- 214. The following sentences have been written: “After the selection of a query sub-family, the sequences of this sub-family are projected to the latent space and used as the starting points of the interpolation. The end points of the interpolation correspond to sequences of a target sub-family, different from the query sub-family, and projected into the latent space.” 2. Another important point is the improvement of language. For example, a few sentences that should be inspected are in the following lines: l 16-20, “Applied to protein sequences, they can point out relationships between protein positions and functions capture the sequence patterns associated with functions or navigate through uncharted area of molecular evolution . Generative models of protein sequences can capture the patterns associated with functions and point out important relationships between sequence positions. By learning these dependencies between sequences and functions, they can ultimately be used to generate new sequences and navigate through uncharted area of molecular evolution. Results : In this study, an unsupervised generative approach based on adversarial auto-encoder (AAE) is proposed to generate and explore new sequences with respect to their functions thanks to the prior distribution allowing a continuous exploration of the latent space.» This study presents an Adversarial Auto-Encoder (AAE) approached, an unsupervised generative model, to generate new protein sequences. l 28-31, “Finally, 3D structure models generated by comparative modelling between different combinations of structures of different sub-families and of generated sequences from latent space or sub-family sequences point out to the ability of the latent space arithmetic to successfully transfer sequential properties linked to function between sub-families. 
» Finally, 3D structure models computed by comparative modelling using generated sequences and templates of different sub-families point out to the ability of the latent space arithmetic to successfully transfer protein sequence properties linked to function between different sub-families. l 56-58, “.Understanding the relationships between amino acid positions of a sequence groups responsible for a particular molecular function and how to cross the sequence space from one group to an other have a lot of implications in molecular engineering, functional annotation, and evolutionary biology.» Perfect modeling of the sequence space could have applications in molecular engineering, functional annotation, or evolutionary biology. It may for example be possible to completely understand the relationships between amino acid positions of a family responsible of a molecular function or to navigate the sequence space between families of different functions. l 465-467, “After checking the ability of AAE networks to disentangle correctly protein functional space, this study proposes to explore the capacity of AAE networks with two sampling tasks: interpolate protein sequences of different sub-families and generate new protein sequences mixing properties of two sub-families.» Similarly to previous works using VAE architectures, this study analyzed the capacity of the AAE architecture to correctly disentangle protein functional space of different families. The generative capacity of the models were looked into with two original tasks: protein sequences interpolation between different sub-families and protein sequence arithmetics to mix properties of two sub-families. l 477-479 “The generated sequences have amino acid Shannon entropy lower than sequences from sub-families which indicates a, amino acid diversity at each position lower than the biological sequences. » The generated sequences have Shannon entropy values per amino acid position lower than biological sequences which indicates a lower amino acid diversity at each position Other small grammatical errors like missing words or conjugation of verbs should be corrected in the following lines: l 41, The sequence space is difficult to explore due to its huge size and thus the constrains between sequence positions are hard to automatically understand l 50, The sequence space area between these families is however mostly uncharted Das et al. 2015 in spite of very remote evolutionary relationship between families Alva et al 2010 l 56, These models could have applications in molecular engineering, functional annotation, or evolutionary biology such as understanding the relationships between amino acid positions of a family responsible of a molecular function or crossing the sequence space between families of different functions. l 110, The VAE architecture has previously been used to cluster protein sequences and interpret the resulting clusters regarding their function or evolutionary history Sinai et al. 2017, Hopf et al. 2017, Riesselman et al. 2018, Ding et al. 2019, Greener et al. 2018). These experiments have not studied the quality of its generative ability for protein sequences. Particularly, the performances of the architecture is not known for the tasks of navigating the sequence space and transfering features between clusters. l 188, Proteins without annotation were discarded. 
l 195, Homogeneous clusters will point out the ability of the AAE model to capture and distinguish protein sequences with functionally or evolutionary relevant features without supervision. l 381, A taking over pattern corresponds to an amino acid of one of the original sequences being the most frequent in the generated sequences. l 438, The energy of generated sequences modeled using their query sub-family templates is slightly lower than the energy of models using their source sub-family templates. l 461-462, sentence removed and first sentence rewritten as “In this study, an Adversarial Autoencoder (AAE) architecture is proposed to analyze and explore the protein sequence space regarding functionality and evolution.” l 492, l 493 Currently the model input is a filtered MSA. An improved model could make use of full protein sequences of different sizes without filtering. Unfiltered protein sequences may benefit the generative model by capturing during training important protein specific motifs for family sub-functions Das et al 2015 not reaching the filtering thresholds. l 495 using self-supervised approaches such as the Transformer models 3. Another important issue that needs clarification concerns figure 3 including corresponding caption and flow text (l 389 - 409). The caption of the figure is clear on its own, but more difficult to understand when looking at the flow text. There seem to be two paragraphs describing the same figure in different words. Lines 401 - 409 correspond to the figure captions and the sub-families given in the caption are very helpful. However, the bullet points (l 392 - 396) are difficult to relate to the description of the symbols in figure 3. As mentioned under issue 1, there is a confusion between the source - query and query - target pairs. L392 – 396 Protein sequence similarities were computed to evaluate the diversity of the generated sequences and compare their diversity with the original sub-families. Protein sequence similarities were computed between: • sequences of a group of generated sequences (magenta lower triangles), • sequences of a sulfatase sub-family used to generate protein sequences (blue circles), • generated sequences and sequences of their query sulfatase sub-family (orange squares), • generated sequences and sequences their target sulfatase sub-family (green crosses), • query and target sequences of sulfatase sub-families (red upper triangles). L401 – 409: Protein sequence similarities between different sub-families (red upper triangles) have lower similarity scores and lower variances than the other distributions. Protein sequence similarities between sequences of a sub-family (blue circles) have the highest mean and variance values observed. However, since only 6 sub-families were kept for analysis (sub-families 0, 2, 3, 7, 8, and 11), trends must therefore be taken with precaution. Generated protein sequences compared to themselves (magenta lower triangles) have mean and variance protein sequence similarities higher than when compared to their query or sub-families. The last two have (generated sequences compared to query sequences, orange squares and generated sequences compared to target sequences, green crosses) mean and variance values spread between the blue and red distributions. The distribution of similarity scores computed between proteins of different sub-families (red upper triangles) has the lowest mean and variance when compared to the other four distributions. 
The distribution of similarity scores computed between sequences of the same sub-family (blue-circles) has the highest mean and variance value observed. The similarity score distribution of protein of different sub-families also has the lowest variance compared to the other distributions. 4. One less important point is a missing reference for latent space arithmetic (l 209 - 210, l 482 - 483). One refernence is mentioned in the results section (l 358), but to make this clearer you should also refer to it in the methods and discussion. => The reference has been added 5. Another point concerns the Shannon entropy. It nicely visualises the variability in biological protein space compared to the generated sequences. But the Shannon entropy is first mentioned in the results (l 329). It is not entirely clear how and why it is used. To clarify this, you could describe it to the methods and add a reference (see experimental design). => The following text was added to the Method section: Shannon entropy is computed to measure the degree of variability at each position (column) of the MSA (Jost 2006). with Pi the frequency of amino acid i and n the number of characters (20 amino acids and a gap symbol). The mean entropy per amino acid is computed for each sequence of MSAs of biological sequences and generated sequences. Low the entropy indicates that the analyzed sequences have low amino acid variability between each other. High entropy indicates high amino acid variability. Experimental design 1. The methods are generally well referenced and described, except for the Shannon entropy calculation, as mentioned above. Please add an explanation and reference (e.g. Jost, 2006, Oikos, 113) to the methods. => Thank you for the reference. It was added to the explanatory text. 2. The names of the scripts are very descriptive and show their purpose. However, you could make the scripts more useful for other researchers by including a README to the scripts folder. => A README was added in the script folder. Validity of the findings 1. Statistical measures would make the findings more valid, i.e.R2 for fitted curves or lines in figure 2. You could also add if there is a statistically significant difference between the groups in figure 3. => R2 and pearson r correlation coefficient were given in the supplementary Information. The values were added to the data in figure 3. Comments for the author 1. A glossary could be of help, especially for the issue addressed above (1.Basic reporting). This would avoid confusion, for example between the source, query and target sub-families. => A glossary has been added to the end of the Method section covering these differents terms: AAE, encoder, latent space, decoder, query sub-family, target sub-family. The glossary has been written to take into account the first reviewer’s comment. 2. I had a look at the older version of the paper (”Exploring protein se-quence and functional spaces using adversarial autoencoder”, 2020) containing more figures in the main text. I would support the decision to move some figures from the supplementary in the current version back to the main text. Especially supplementary figure 2 could be helpful in the main manuscript. => I understand the reviewer’s comment. I however prefer to keep “pipeline” figures into the supplementary to help the reader to focus on the result figures. 3. You could elaborate more on how much of the sequence space is known. 
Useful references are for example Taverna and Goldstein (2002, Proteins, 46), Goldstein and Pollock (2016, Protein Science, 25), Marchi et al. (2019, PLOS Computational Biology, 15) or Alva et al. (2010, Protein Science, 19). => Thank you for the suggestion and references. They have been incorporated to the manuscript. "
Here is a paper. Please give your review comments after reading it.
218
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Monocular 3D object detection has recently become prevalent in autonomous driving and navigation applications due to its cost-efficiency and easy-to-embed to existent vehicles.</ns0:p><ns0:p>The most challenging task in monocular vision is to estimate a reliable object's location cause of the lack of depth information in RGB images. Many methods tackle this ill-posed problem by directly regressing the object's depth or take the depth map as a supplement input to enhance the model's results. However, the performance relies heavily on the estimated depth map quality, which is bias to the training data. In this work, we propose depth-adaptive convolution to replace the traditional 2D convolution to deal with the divergent context of the image's features. This lead to significant improvement in both training convergence and testing accuracy. Second, we propose a ground plane model that utilizes geometric constraints in the pose estimation process. With the new method, named GAC3D, we achieve better detection results. We demonstrate our approach on the KITTI 3D Object Detection benchmark, which outperforms existing monocular methods.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In recent years, with the evolution of the deep neural network in computer vision <ns0:ref type='bibr' target='#b16'>(Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b14'>He et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b3'>Brock et al., 2021)</ns0:ref>, we have seen various methods being proposed to resolve 2D object detection task <ns0:ref type='bibr' target='#b37'>(Ren et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>He et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b36'>Redmon and Farhadi, 2018;</ns0:ref><ns0:ref type='bibr' target='#b51'>Zhou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b44'>Tan et al., 2020)</ns0:ref> and achieve remarkable performances, which almost approach human visual perception. Even so, in particular fields such as autonomous driving or infrastructure-less robot navigation, the demand for scene understanding, including the detailed 3D poses, identities, and scene context, is still high. Researchers pay attention to 3D object detection, especially in autonomous navigation applications. To obtain an accurate depth map of the environment, people adopt LiDAR sensors widely due to their reliable 3D point cloud acquired using laser technology. Such LiDAR-based systems <ns0:ref type='bibr' target='#b40'>(Shi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Shi et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>He et al., 2020)</ns0:ref> achieve promising results, they also come with visible limitations, including high-cost sensors, hard to mount on vehicles, sparse and unstructured data. Therefore, an alternative solution using a singular RGB camera is required. It is far more affordable, versatile, and almost available on every modern vehicle. The main difficulty of image-based 3D object detection is the missing depth information, which results in a significant gap performance compared to LiDAR-based methods. While stereo systems are available, we have to calibrate the cameras with relatively high accuracy. 
Commercial stereo cameras such as the Bumblebee are compact and lower-cost; however, the obtained depth map quality is nowhere near the standard required for autonomous driving systems, especially outdoors. Estimating depth from a single camera is an attractive choice since humans also perceive depth from 2D images. However, the accuracy of monocular depth estimation is not as good as that of time-of-flight sensors such as LiDAR. This makes the 3D object detection task on a single camera more challenging and has attracted much attention from the community. In this work, we propose a 3D object detection system based on a single camera view. We also demonstrate that our proposed framework can bridge the gap between LiDAR-based and image-based detectors.

There are two main approaches in monocular 3D object detection: the representation transformation and the 2D convolutional neural network (CNN). In the representation approach, the general idea is to imitate the 3D point clouds of LiDAR by estimating depth information from images. This depth map is then projected into 3D space to generate pseudo-point clouds. With the pseudo-point clouds, one can employ algorithms that use LiDAR data to detect objects. The raw point clouds are sparse due to the laser range sensor's physical principle, whereas the pseudo-point clouds are considerably denser and depend heavily on the estimated depth map quality. Thus, applying object pose detection on these pseudo-point clouds decreases performance significantly. Besides, the pseudo-LiDAR methods consist of several separate steps, which usually require training the estimation model separately, leading to training that is not end-to-end optimal and to a time-consuming inference phase.

On the other hand, the 2D CNN approaches extend the 2D object detector's architecture to adapt to the 3D output representation and add several techniques to address the ill-posed problem. In M3D-RPN (Brazil and Liu, 2019) and D4LCN (Ding et al., 2020), the authors extend the YOLO anchor (Redmon and Farhadi, 2018) to a 3D anchor by adding dimension, orientation, and depth information. Liu et al. (2019) and Naiden et al. (2019) follow the pipeline of two-stage detectors like Faster R-CNN (Ren et al., 2015) to detect the 2D bounding box of the object and then regress the 3D box. In the proposal stage, they localize the 2D bounding boxes with additional regressed 3D attributes. In the second stage, the 3D bounding box can be reconstructed via an optimization process that leverages the geometric constraints between the projected 3D box and the corresponding 2D box. These methods rely heavily on an accurate 2D detector: even a small error in the 2D bounding box can cause a poor 3D prediction.

Inspired by the anchor-free architecture of the 2D detector CenterNet (Zhou et al., 2019), SMOKE (Liu et al., 2020) and RTM3D (Li et al., 2020) add several regression heads in parallel to the primary 2D center detection head to regress 3D properties.
These anchor-free approaches are more lightweight and flexible than anchor-based approaches since they do not have to pre-define 3D anchor boxes, which are more complicated than those used in 2D detectors. Unlike regular 2D object detection datasets such as COCO (Lin et al., 2014) and Pascal VOC (Everingham et al., 2009), 3D datasets like KITTI (Geiger et al., 2012) usually contain occluded objects due to the driving scenarios of the data collection process. As illustrated in Fig. 1, in a dense scene, the 2D center of an occluded car falls on another instance's appearance, which potentially causes errors in the pose estimation process. Moreover, the standard convolution operation is content-agnostic (Su et al., 2019), which means that, once trained, the kernel weights remain unchanged despite the variance of the input scenario. Thus, the center misalignment phenomenon prevents a center-based detector with traditional convolution filtering from identifying object locations accurately. To overcome this issue, we introduce a novel convolution operation called the depth adaptive convolution layer, which leverages external guidance from a pre-trained depth estimator to enhance feature selection for regression and detection tasks. Our convolution filtering applies a set of secondary weights to the original convolution kernel based on the depth value variance at each pixel. As a consequence, this operator improves the precision and robustness of center-based object prediction.

In autonomous navigation and robotics applications, most moving obstacles stand on a ground plane. Thus, the height difference between the mounted camera and the obstacles' bottoms is almost constant for each vehicle and is equal to the camera's height. Moreover, assuming that the ground plane is parallel to the optical axis of the camera, we can reproject the 2D location in the image onto the ground plane to get the z-coordinate of the object's bottom. This assumption holds for most real driving scenarios. The reprojection can significantly compensate for the lack of depth information in the monocular image. Such geometric information obtained from these perspective priors can help mitigate the ill-posed problem in monocular 3D object detection.

In this work, we propose a single-stage monocular 3D object detector employing the ideas discussed above. We name this method GAC3D (Geometric ground-guide and Adaptive Convolution for 3D object detection). Figure 2 provides an overview of our proposed detection framework. Our work consists of two main contributions:

• We employ a novel depth adaptive convolution that acts as secondary weights to adapt to the depth variance at every pixel.

• We introduce a ground-guide module to infer the 3D bounding box information from 2D regression results. For this approach, we introduce the concept of a pseudo-position that serves as an initial value for the estimation process.

The details of our proposed framework are presented in the next section. We demonstrate that our method provides significant improvements in the results compared to current methods.
We also point out some issues with the data, which may potentially change the outcome of learning-based methods.

RELATED WORK

There are many approaches in this research area. To analyze them, we categorize these approaches into three main groups: LiDAR-based 3D object detection, monocular 3D object detection based on representation transformation, and monocular 3D object detection based on a 2D detector.

LiDAR-based 3D object detection

With point-based architectures such as PointNet (Qi et al., 2017a,b), deep learning models can efficiently learn features from unstructured point cloud data. In Qi et al. (2018), the authors created 2D proposal bounding boxes using a 2D detector. They used a PointNet-based network to learn the features from cropped point cloud regions for 3D box estimation. In Shi et al. (2019), the method directly generated high-quality 3D proposals from point clouds and leveraged ROI-pooling to extract features for box refinement in the second stage.

In contrast with point-based methods, the voxel-based 3D detectors divide raw point clouds into regular grids for more efficient computation. By transforming the LiDAR point clouds into a bird's-eye-view representation and applying a single-stage detector, Yang et al. (2018) achieved good performance in terms of accuracy and real-time efficiency. In Shi et al. (2020), the authors introduced a voxel set abstraction module to integrate semantic features from 3D voxels into sampled keypoints. By taking advantage of the multi-scale features obtained from the voxel-based operation and the location information from PointNet-based set abstraction, they accomplished impressive results for 3D object detection.

Monocular 3D object detection based on representation transformation

In this category, the methods indirectly perform 3D detection by transforming images into alternative representations before applying detection algorithms. In Wang et al. (2019), the authors generated 3D point clouds from pixel-wise depth estimations. The depth image obtained from the monocular image, known as Pseudo-LiDAR, was used to mimic the original point clouds from the LiDAR scan. Then, they passed the Pseudo-LiDAR through existing LiDAR-based 3D object detectors to obtain final results. This approach heavily depends on the quality of the depth image. When a single image is used to estimate depth, the accuracy of the depth map is questionable; in most cases it depends on the training data, and it is nontrivial to remove the bias in the data.

In You et al. (2020), the authors took advantage of the previous work and utilized cheaper LiDAR sensors to de-bias the depth estimation and correct the point cloud representation. Wang et al. (2020) is another remarkable work on Pseudo-LiDAR. They realized that the foreground and background have different depth distributions. Therefore, they estimated the foreground and background depth using separate optimization objectives and decoders. This approach leads to some improvement in the pseudo point clouds. However, the above depth-based methods cannot leverage image information.
To enhance Pseudo-LiDAR's discriminative capability, the authors of Ma et al. (2019) proposed a multi-modal feature fusion module to embed the complementary RGB cue into the generated point cloud representation.

In another work, Srivastava et al. (2019) converted the perspective image to a bird's-eye-view (BEV) image. They used a Generative Adversarial Network (GAN) to perform the BEV transformation as an image-to-image translation task. This work generated BEV grids directly from a single RGB image by designing a high-fidelity GAN architecture and carefully curating a training mechanism, including selecting minimally noisy data for training. In another work, Roddick et al. (2019) used a different grid-based method. They mapped the 2D feature maps to bird's-eye-view by orthographic feature transformation. They accumulated the extracted features over the projected voxel area. The voxel features were then collapsed along the vertical dimension to yield the final orthographic ground plane features. These approaches show that it is possible to imitate human perception when performing object detection on a single image without special depth sensors.

Monocular 3D object detection based on 2D detector

In Brazil and Liu (2019), the authors simultaneously estimated 2D and 3D boxes by introducing a 3D anchor box consisting of both 2D and 3D features. They also proposed a row-wise convolution and a 2D-3D optimization process to improve the orientation estimation. Following the 3D anchor approach, the authors of Ding et al. (2020) integrated the estimated depth map into the network backbone and designed a depth-guided convolution with dynamic local filters. With the new guidance, the estimation improved significantly. Liu et al. (2021) leveraged ground plane priors through a ground-aware convolution and an anchor-filtering pre-processing step.

Another approach in this direction is Liu et al. (2020), where the authors extended the CenterNet-based detector by adding depth and orientation estimation branches. They proposed a multi-step disentangling loss to handle different kinds of loss functions in 3D detection tasks. Some works, such as Jörgensen et al., estimated projected corners instead of the spatial information (including depth and allocentric orientation), and then used these hints to recover the 3D pose. Some other approaches rely on an off-the-shelf 2D region proposal network to generate 2D candidates, which can significantly reduce the 3D search space. In Ku et al. (2019), the authors utilized MS-CNN (Cai et al., 2016) to extract 2D bounding boxes and then generated 3D proposals and local point clouds based on the cropped features. Another work in Li et al. (2019) proposed a guidance algorithm with a 3D subnet to refine 3D bounding boxes from 2D proposals.

METHODOLOGY

Our proposed framework aims to improve the accuracy of the 3D object detection task from the monocular image.
By introducing the depth adaptive convolution layer, we improve the prediction results of the detection heads. We also present a new module that utilizes the object's inferred pseudo-position to enhance the 3D bounding box regression results. To demonstrate this idea, we describe the details of our geometric ground-guide module (GGGM), which infers the final location, orientation, and 3D bounding box. This module utilizes the intermediate output of the detection network and the pseudo-position value to recover the object's 3D pose via a 2D-3D geometric transformation.

Center-based Monocular 3D Object Detection Network

Figure 2 illustrates the overview of the proposed framework. Our detection network follows the idea of the CenterNet (Zhou et al., 2019) architecture, which consists of a backbone for feature extraction followed by multiple detection heads. We employ the modified version of DLA-34 (Yu et al., 2018) proposed in Zhou et al. (2019) as the backbone of our network. Each detection head follows the design shown in Fig. 3, which includes a 3 × 3 depth adaptive convolution layer followed by a Rectified Linear Unit (ReLU) activation and a 1 × 1 standard convolution layer. Let $I \in \mathbb{R}^{H \times W \times 3}$ be the input image with width W and height H. Our detection heads include:

• Center head: The center head produces a heatmap $M \in [0, 1]^{(H/4) \times (W/4) \times c}$ of 2D bounding box centers, where c is the number of object classes. Each value of channel $c_i$ represents the confidence score for an object of category $C_i$. The output heatmap is inversely transformed to the resolution $H \times W \times c$ via an affine transformation to retrieve the locations of the 2D bounding box centers.

• Keypoints head: Inspired by Li (2020), we estimate the locations of 9 ordered 3D bounding box keypoints projected on the 2D image plane. Those points are the corners and the center of the 3D bounding box. We consider each keypoint location as horizontal and vertical offsets from the corresponding 2D center. The keypoints head takes the feature maps and produces $P_{kp} \in \mathbb{R}^{(H/4) \times (W/4) \times 18}$ of coordinate offsets.

• Pseudo-contact point head: This head estimates the projected 2D location of the pseudo-contact point, which is described thoroughly in the Object's Pseudo-position section. It follows the same approach as the keypoints head and produces $P_{co} \in \mathbb{R}^{(H/4) \times (W/4) \times 2}$ of coordinate offsets.

• Orientation head: Due to the perspective transform from 3D coordinates to the 2D image plane, it is impossible to regress the global yaw rotation θ from a single image (see the Appendix for more details). Hence we choose to regress the observation angle α by following the angle decomposition proposed in Brazil et al. (2020), which formulates the orientation into three components: heading, axis, and offset. We encode the offset angle α as $[\sin(\alpha), \cos(\alpha)]^{\top}$. The output of the orientation head is $O \in \mathbb{R}^{(H/4) \times (W/4) \times 6}$.

• Dimension head: The dimension head directly regresses the absolute value $D \in \mathbb{R}^{(H/4) \times (W/4) \times 3}$ of the object dimension. Instead of applying the regression strategy proposed in Liu et al. (2020) and Li (2020), which predicts the values of height, width, and length in a specific order, we estimate the object dimension in a more flexible way. The visual appearance of the object has a strong impact on the object's metrics, as illustrated in Fig. 4. Therefore, we attempt to dynamically decode the width and length of the object as follows:

$$h = D_0 \qquad (1)$$

$$w = \begin{cases} D_1, & \text{if } |\sin(\alpha)| > |\cos(\alpha)| \\ D_2, & \text{otherwise} \end{cases} \qquad (2)$$

$$l = \begin{cases} D_2, & \text{if } |\sin(\alpha)| > |\cos(\alpha)| \\ D_1, & \text{otherwise} \end{cases} \qquad (3)$$

where h, w, l, and α are the object's height, width, length, and observation angle, respectively. The term $D_i$ is the $i$-th channel of the dimension head's output.
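For concreteness, the decoding rule of Eqs. (1)-(3) can be written in a few lines of PyTorch. The following is an illustrative sketch, not the authors' released code; the tensor shapes and channel layout are assumptions consistent with the equations above.

```python
import torch

def decode_dimensions(D: torch.Tensor, alpha: torch.Tensor):
    """Decode (h, w, l) from the dimension head output following Eqs. (1)-(3).

    D:     (N, 3) per-object values gathered from the dimension head (assumed layout).
    alpha: (N,)   per-object observation angle in radians.
    """
    h = D[:, 0]                                   # Eq. (1)
    # Eqs. (2)-(3): swap the roles of channels 1 and 2 depending on alpha.
    swap = alpha.sin().abs() > alpha.cos().abs()
    w = torch.where(swap, D[:, 1], D[:, 2])
    l = torch.where(swap, D[:, 2], D[:, 1])
    return h, w, l
```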
Depth Adaptive Convolution Layer

In the context of traffic scenes, vehicles usually occlude each other. We observe that the detection result of a specific object should not be affected by the features of others. The irrelevant features could belong to adjacent objects or background objects. We aim to enhance the detection by selecting valuable features. Features at pixels inside the object should contribute more to 3D detection. Instead of using instance segmentation as a guide for the detection network, we use the 3D surface of the object taken from the depth map. We can always identify the discontinuity between objects simply by probing the depth map. As a consequence, we design the depth adaptive convolution layer to handle the irrelevant local structures of traditional 2D convolution. Our proposed layer injects the information from depth predictions by applying the pixel-adaptive convolution of Su et al. (2019). The formulation of pixel-adaptive convolution is defined as follows:

$$v'_i = \sum_{j \in \Omega(i)} K(\mathbf{f}_i, \mathbf{f}_j)\, \mathbf{W}[\mathbf{p}_i - \mathbf{p}_j]\, v_j + b \qquad (4)$$

where $v'_i$ is the convolution filtering output at pixel i, $\mathbf{f} \in \mathbb{R}^D$ are the pixel features that guide the pixel-adaptive convolution, $\Omega(i)$ denotes a convolution window, $\mathbf{p}_i = (x_i, y_i)^{\top}$ are pixel coordinates, $\mathbf{W}$ are the filter weights of the convolution, v are the input features, and b is the convolution bias. K is a fixed kernel function.

Inspired by the work in Su et al. (2019), we use depth maps as the external guiding features for the depth adaptive convolution layer to explicitly encourage the model to extract features from pixels of the corresponding objects:

$$v'_i = \sum_{j \in \Omega(i)} K(d_i, d_j)\, \mathbf{W}[\mathbf{p}_i - \mathbf{p}_j]\, v_j + b \qquad (5)$$

where $d \in \mathbb{R}^{H \times W \times 1}$ is the depth estimation of the current image. K is a fixed Gaussian kernel function used to calculate the correlation of the guiding features:

$$K(d_i, d_j) = e^{-\frac{1}{2}(d_i - d_j)^2} \qquad (6)$$

Since the operator locally adapts the filter weights using the depth information, we name this operator 'depth adaptive convolution'. For a feature map $v \in \mathbb{R}^{h \times w \times c}$, our proposed convolution operator generates $h \times w$ adaptive filters, one per pixel, by performing the Hadamard product of $\mathbf{W}$ and the local kernel K.
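To make the operator concrete, below is a minimal unfold-based PyTorch sketch of Eqs. (5)-(6). It is written for clarity rather than efficiency and is an illustration under assumed hyper-parameters and tensor layouts, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthAdaptiveConv2d(nn.Module):
    """Sketch of the depth adaptive convolution of Eqs. (5)-(6): the spatial filter W
    is reweighted at every pixel by a Gaussian kernel on the difference between the
    centre depth and its neighbours' depths."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, v, depth):
        # v: (B, C, H, W) input features; depth: (B, 1, H, W) external depth guidance.
        B, C, H, W = v.shape
        pad = self.k // 2
        v_patches = F.unfold(v, self.k, padding=pad).view(B, C, self.k * self.k, H * W)
        d_patches = F.unfold(depth, self.k, padding=pad).view(B, 1, self.k * self.k, H * W)
        d_center = depth.view(B, 1, 1, H * W)
        # Gaussian kernel K(d_i, d_j) = exp(-0.5 * (d_i - d_j)^2), Eq. (6).
        K = torch.exp(-0.5 * (d_center - d_patches) ** 2)           # (B, 1, k*k, H*W)
        # Hadamard product of W with K, then the usual weighted sum of Eq. (5).
        w = self.weight.view(1, -1, C, self.k * self.k, 1)          # (1, out_ch, C, k*k, 1)
        out = (w * (K.unsqueeze(1) * v_patches.unsqueeze(1))).sum(dim=(2, 3))
        return out.view(B, -1, H, W) + self.bias.view(1, -1, 1, 1)
```

In a detection head, such a layer would take the backbone feature map together with the estimated depth map (downsampled to the same resolution) in place of a plain 3 × 3 convolution.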
Losses

Our total loss comprises a heatmap loss for the 2D center, regression losses for the keypoint and contact point offsets, a composition loss for the local orientation, a regression loss for the object's dimension, and a geometric position loss. We define this loss as follows:

$$L = \sum_i \lambda_i \cdot L_i \qquad (7)$$

where $L_i$ indicates $L_{heat}$, $L_{kps}$, $L_{cps}$, $L_{dim}$, $L_{ori}$, $L_{pos}$, which are the 2D center heatmap loss, keypoint offset loss, contact point offset loss, dimension regression loss, local orientation loss, and position loss, respectively. The parameters $\lambda_i$ are $[\lambda_{heat}, \lambda_{kps}, \lambda_{cps}, \lambda_{dim}, \lambda_{ori}]$, which are set to [1, 1, 1, 2, 0.2] accordingly. The weight $\lambda_{pos}$ is determined by the ramp-up function proposed in Laine and Aila (2017), which is computed as follows:

$$\lambda_{pos} = \begin{cases} 0, & \text{if } n_e < T_{min} \\ e^{-5\left(1 - \frac{n_e}{T_{max}}\right)^2}, & \text{if } T_{min} < n_e \le T_{max} \\ 1, & \text{if } n_e > T_{max} \end{cases} \qquad (8)$$

where $n_e$ indicates the current epoch number. The ramp-up period parameters $[T_{min}, T_{max}]$ are set to [40, 100]. In this work, we apply the focal loss of Zhou et al. (2019) to the 2D center heatmap:

$$L^c_{heat} = -\frac{1}{N_c} \sum_{x=1}^{H/4} \sum_{y=1}^{W/4} \sum_{c} \begin{cases} (1 - \hat{M}_{xyc})^{\alpha} \log(\hat{M}_{xyc}), & \text{if } M_{xyc} = 1 \\ (1 - M_{xyc})^{\beta} (\hat{M}_{xyc})^{\alpha} \log(1 - \hat{M}_{xyc}), & \text{otherwise} \end{cases} \qquad (9)$$

For the keypoint offsets, pseudo-contact point offsets, and object dimension regression, we use the L1 loss. For the observation angle regression, we employ the orientation loss function of Brazil et al. (2020).

Based on Li (2020), we can recover an object's position by solving an over-determined system of equations using singular value decomposition, which is differentiable. Since we deduce the position's equation system from the nine keypoint offsets, the orientation, and the dimension, the position loss can be backpropagated through all the heads of our network. In some cases, the distance between the predicted 3D center and the ground truth is relatively small, but the predicted bounding box is not accurate. Figure 5 illustrates this phenomenon: a small shift in angle and center can lead to a significant error in 3D pose estimation. Therefore, instead of computing the position loss on the 3D center coordinate, we combine the estimated position, orientation, and dimension into a 3D bounding box and define the position loss as an L2 loss over the eight corners of the predicted and ground-truth boxes:

$$L_{pos} = \frac{1}{8} \sum_{i=0}^{7} \left\lVert Cor_i - \widehat{Cor}_i \right\rVert_2 \qquad (10)$$

where $Cor_i$ is the $i$-th corner of the 3D bounding box.
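A minimal sketch of this corner-based position loss is given below. It is an illustration only, not the authors' implementation; the corner ordering is an assumption consistent with the relative-offset matrix Coor introduced in the 2D-3D transformation section.

```python
import torch

def corners_from_pose(pos, dim, yaw):
    """Build the 8 corners of a 3D box from its center position, dimensions (h, w, l),
    and yaw angle. pos: (N, 3), dim: (N, 3), yaw: (N,)."""
    h, w, l = dim[:, 0], dim[:, 1], dim[:, 2]
    x = torch.stack([ l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2], dim=1)
    y = torch.stack([ h/2,  h/2,  h/2,  h/2, -h/2, -h/2, -h/2, -h/2], dim=1)
    z = torch.stack([ w/2, -w/2,  w/2, -w/2,  w/2, -w/2,  w/2, -w/2], dim=1)
    c, s = yaw.cos().unsqueeze(1), yaw.sin().unsqueeze(1)
    # Rotation around the y-axis: x' = c*x + s*z, z' = -s*x + c*z
    xr, zr = c * x + s * z, -s * x + c * z
    return torch.stack([xr, y, zr], dim=2) + pos.unsqueeze(1)    # (N, 8, 3)

def corner_position_loss(pred, gt):
    """Eq. (10): mean L2 distance over the 8 corners (averaged over the batch).
    pred and gt are (pos, dim, yaw) tuples of tensors."""
    return (corners_from_pose(*pred) - corners_from_pose(*gt)).norm(dim=2).mean()
```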
Geometric Ground-Guide Module

Inspired by the Geometry Reasoning Module (GRM) of Li (2020), we introduce a 2D-3D transformation pipeline to reconstruct the object's 3D pose, named the Geometric Ground-Guide Module (GGGM). First, this module estimates a pseudo-position for every detected object. Then, it takes the outputs of the detection network, comprising the 2D center, dimension, orientation, 9 predicted keypoints, and the pseudo-position, to generate a system of geometric equations whose solution is the position of the 3D bounding box center.

Object's Pseudo-position

We propose a lightweight approach to approximate the 3D bounding box center's y-coordinate and z-coordinate for each detected object. Our method incorporates estimated values from the pseudo-contact point head and the camera calibration to calculate the pseudo-position.

In most driving scenarios, the ground around the car is flat. This assumption is not too strong since it holds in most cases. Even if the ego car is on a hill, the relative position between our car and other cars should lie on the same locally flat plane, and the ground-guide module can still be applied to estimate the pseudo-position. Thus, we do not consider non-flat ground planes, which are not available in most datasets. We assume that the principal optical axis of the camera is parallel to the ground plane. Let G be a point on the ground plane (ground point) with the corresponding 3D location (x, y, z) and pixel coordinate (u, v) on the image plane. According to the pin-hole camera model, we calculate the depth value z for each ground pixel as:

$$z = \frac{f_y \cdot h_{cam} + T_y}{v - c_y} \qquad (11)$$

where $f_y$, $h_{cam}$, $c_y$, and $T_y$ are the focal length, camera height, principal point y-coordinate, and relative translation, respectively. The $h_{cam}$ value depends on the dataset's camera settings, in particular 1.65 m for the KITTI dataset (Geiger et al., 2012). Equation (11) describes the 'ground plane model'. As shown in Fig. 6, we can obtain every ground point's depth value by knowing its vertical coordinate on the image plane. We can calibrate this extrinsic information for the camera before using it in the GGGM.

Figure 7 illustrates the terms in the object's pseudo-position inference process. For every object with 3D bounding box center P at (x, y, z), we define the projection of P onto the ground plane, $P_g$ at location $(x, h_{cam}, z)$, called the pseudo-contact point. The coordinate (u, v) of $P_g$ projected onto the 2D image plane is regressed directly by the pseudo-contact point head of the detection network. The approximation of z is then inferred from v and the ground plane model in Equation (11), and is referred to as the pseudo z-coordinate. Finally, we define the pseudo y-coordinate as:

$$y = h_{cam} - \frac{h_{object}}{2} \qquad (12)$$
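A short sketch of this pseudo-position computation is given below (illustrative only; the calibration parameter names are assumptions, taken from the camera projection matrix):

```python
def pseudo_position(v_contact, h_object, fy, cy, ty, h_cam=1.65):
    """Approximate an object's pseudo (y, z) coordinates from the image row of its
    pseudo-contact point, following Eqs. (11)-(12).

    fy, cy, ty: focal length, principal point y-coordinate and relative translation
                from the camera calibration; h_cam: camera height (1.65 m on KITTI).
    v_contact must lie below the horizon (v_contact > cy) to yield a positive depth.
    """
    z_pseudo = (fy * h_cam + ty) / (v_contact - cy)   # Eq. (11), ground plane model
    y_pseudo = h_cam - h_object / 2.0                 # Eq. (12)
    return y_pseudo, z_pseudo
```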
2D-3D Transformation

With the ground model and the pseudo-position, we now derive the 3D pose reconstruction from 2D-3D geometric constraints. We enhance the 2D-3D transformation process of Li (2020) with our proposed pseudo-position. For a pixel at (u, v) with depth z, the corresponding 3D point is

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} z \times \frac{u - c_x}{f_x} \\ z \times \frac{v - c_y}{f_y} \\ z \end{bmatrix} \qquad (13)$$

We normalize the 2D coordinates and denote $\widehat{N}^{(2d)} = [\hat{u}, \hat{v}]^{\top} = \left[\frac{u - c_x}{f_x}, \frac{v - c_y}{f_y}\right]^{\top}$. Thus, the 3D coordinates of the nine keypoints of the object can be calculated as:

$$Kp^{(3d)}_i = z_i \times \begin{bmatrix} \widehat{Kp}^{(2d)}_i \\ 1 \end{bmatrix}_{3 \times 1}, \quad i = 0, \dots, 8. \qquad (14)$$

On the other hand, these coordinates can be inferred from the object's attributes, including the dimension $\hat{D} = [\hat{l}, \hat{h}, \hat{w}]^{\top}$, the orientation $\hat{O} = [\theta_x, \theta_y, \theta_z]^{\top}$, and the location P:

$$Kp^{(3d)}_i = R_y(\hat{O}) \times \mathrm{Diag}(\hat{D}) \times Coor_i + P, \quad i = 0, \dots, 8 \qquad (15)$$

where $R_y(\hat{O})$ is the rotation matrix around the y-axis. The angle $\theta_y$ can be calculated from the projected 3D center $Kp_8$ (obtained from the detection heads) and the observation angle α (see the Appendix for more details).
</ns0:p></ns0:div> <ns0:div><ns0:p>Coor is the matrix in which each column contains the relative coordinates of the nine keypoints with respect to the object's center:</ns0:p><ns0:formula xml:id='formula_18'>Coor = \begin{bmatrix} 1/2 & 1/2 & -1/2 & -1/2 & 1/2 & 1/2 & -1/2 & -1/2 & 0 \\ 1/2 & 1/2 & 1/2 & 1/2 & -1/2 & -1/2 & -1/2 & -1/2 & 0 \\ 1/2 & -1/2 & 1/2 & -1/2 & 1/2 & -1/2 & 1/2 & -1/2 & 0 \end{bmatrix}_{3 \times 9}</ns0:formula><ns0:p>and P = [p_x, p_y, p_z]^{\top} is the position of the 3D bounding box's center.</ns0:p><ns0:p>From ( <ns0:ref type='formula' target='#formula_14'>14</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_16'>15</ns0:ref>), we deduce the following equations:</ns0:p><ns0:formula xml:id='formula_19'>z_i \begin{bmatrix} Kp^{(2d)}_i \\ 1 \end{bmatrix}_{3 \times 1} = R_y(\hat{O})\,\mathrm{Diag}(\hat{D})\,Coor_i + P, \quad i = 0, \dots, 8<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>where Diag(\hat{D}) is the diagonal matrix of the three dimensions length, height, and width. Using elementary row operations, we transform these equations into an over-determined system of 18 linear equations in the unknown P. Writing M_i = R_y(\hat{O})\,\mathrm{Diag}(\hat{D})\,Coor_i, each keypoint contributes the row pair</ns0:p><ns0:formula xml:id='formula_20'>\begin{bmatrix} -1 & 0 & Kp^{(2d)}_{x,i} \\ 0 & -1 & Kp^{(2d)}_{y,i} \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} = \begin{bmatrix} (M_i)_x - Kp^{(2d)}_{x,i}\,(M_i)_z \\ (M_i)_y - Kp^{(2d)}_{y,i}\,(M_i)_z \end{bmatrix}, \quad i = 0, \dots, 8<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>so that stacking the nine pairs yields the 18 x 3 coefficient matrix A and the 18 x 1 right-hand side b; for the ninth keypoint (the center, Coor_8 = 0) the right-hand side reduces to zero. Each keypoint is therefore associated with two geometric equations. The system of equations ( <ns0:ref type='formula'>17</ns0:ref>) is the baseline transformation in <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref>. The over-determined system of linear equations AP = b is solved with the ordinary least squares method, i.e., by minimizing the least-squares cost function</ns0:p><ns0:formula xml:id='formula_21'>e(P) = \lVert b - AP \rVert^2<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>whose approximate solution is</ns0:p><ns0:formula xml:id='formula_22'>\hat{P} = \arg\min_{P} \lVert b - AP \rVert^2 = (A^{\top}A)^{-1}A^{\top}b.</ns0:formula><ns0:p>In practice, objects at different locations have different regression errors, which mostly come from inaccurately regressed keypoint offsets. Although distant and near objects do not differ in their 3D dimensions, near objects have larger 2D sizes in the image than far-away objects.</ns0:p><ns0:p>Consequently, a small shift in the keypoint offsets of a distant object can lead to a significant 3D pose error, whereas the same shift causes only a small change in the 3D pose of a near object. To accommodate this phenomenon, we introduce an L2 regularization term inferred from the pseudo-position and add it to the cost function ( <ns0:ref type='formula'>18</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_23'>e(P) = \lVert b - AP \rVert^2 + \Lambda \lVert P - P_{pseudo} \rVert^2<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>where \Lambda and P_{pseudo} are a pre-defined scale factor and an initial value for the position P (y_{ps} and z_{ps} are the pseudo y- and z-coordinates), respectively. The L2 regularization term encourages the solution P not only to depend on the 2D regressed output but also to satisfy the ground plane model (illustrated in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>).</ns0:p><ns0:p>The initial position P_{pseudo} follows the ground plane model that we proposed before. The final position P is computed as:</ns0:p><ns0:formula xml:id='formula_24'>P = (A^{\top}A + \Lambda)^{-1}(A^{\top}b + \Lambda P_{pseudo})<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>We design a soft scaling factor \Lambda in equation ( <ns0:ref type='formula'>19</ns0:ref>) that adapts to the location of objects.</ns0:p>
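To make the closed-form pose recovery of equations (17)-(20) concrete, the following is a minimal NumPy sketch of the regularized least-squares solve. It is an illustration under our own assumptions: function and variable names such as recover_position and p_pseudo are not from the released code, and the 2D keypoints are assumed to be expressed in the same camera-normalized coordinates as equation (16).

```python
import numpy as np

def rot_y(ry):
    """Rotation matrix around the camera y-axis (yaw angle ry)."""
    c, s = np.cos(ry), np.sin(ry)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Relative coordinates of the 9 keypoints (8 box corners + center), shape (3, 9).
COOR = 0.5 * np.array([
    [ 1,  1, -1, -1,  1,  1, -1, -1, 0],
    [ 1,  1,  1,  1, -1, -1, -1, -1, 0],
    [ 1, -1,  1, -1,  1, -1,  1, -1, 0],
], dtype=np.float64)

def recover_position(kp_2d, ry, dims, p_pseudo, lam):
    """Solve eq. (20): P = (A^T A + Lambda)^-1 (A^T b + Lambda P_pseudo).

    kp_2d:    (9, 2) regressed keypoints
    ry:       estimated yaw (orientation) angle
    dims:     (l, h, w) estimated 3D dimensions
    p_pseudo: (3,) pseudo-position from the ground-plane model
    lam:      (3,) soft scaling factors [lambda_x, lambda_y, lambda_z], eq. (21)
    """
    M = rot_y(ry) @ np.diag(dims) @ COOR          # (3, 9), rotated and scaled corners
    rows, rhs = [], []
    for i in range(9):
        u, v = kp_2d[i]
        mx, my, mz = M[:, i]
        # Two geometric equations per keypoint, eq. (17).
        rows.append([-1.0, 0.0, u]); rhs.append(mx - u * mz)
        rows.append([0.0, -1.0, v]); rhs.append(my - v * mz)
    A, b = np.asarray(rows), np.asarray(rhs)      # (18, 3) and (18,)
    Lam = np.diag(lam)                            # regularization weights, eq. (19)
    return np.linalg.solve(A.T @ A + Lam, A.T @ b + Lam @ np.asarray(p_pseudo))
```

Setting lam to zeros recovers the unregularized solution of equation (18), which makes the two variants easy to compare.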
<ns0:p>In particular, a distant object has a larger \Lambda, meaning that its refined 3D pose depends less on the accuracy of the regressed keypoint offsets, so the error can be reduced. The distance of an object is roughly estimated from the y-position of its 2D bounding box: the smaller the y-position, the farther away the object is. Nevertheless, the two components of P_{pseudo} do not share a similar scale. While the y-coordinate only varies within a few meters, the z-coordinate ranges from 0 up to hundreds of meters. Therefore, we choose the scaling factor \Lambda so that neither the y-component term nor the z-component term dominates the other. The scale factor \Lambda = [\lambda_x, \lambda_y, \lambda_z]^{\top} is defined as:</ns0:p><ns0:formula xml:id='formula_25'>\begin{cases} \lambda_x = 0 \\ \lambda_y = 0.5\, e^{-\frac{y_{2d} - y_{min}}{y_{max} - y_{min}}} \\ \lambda_z = 0.0025\,\lambda_y \end{cases}<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>where y_{2d} is the y-position of the 2D bounding box, and y_{min} and y_{max} are set to 170 and 384, respectively. A small numerical sketch of this scale factor is given at the end of this section.</ns0:p><ns0:p>The refined position depends on both the pseudo-position and the 2D-3D constraints. Thus, even if two cars are not on the same plane, the error of the obtained pseudo-position does not affect the refined position much. If the assumption holds, the pseudo-position can enhance the final pose estimation, as we show in the experiments. Otherwise, a good pose obtained from the 2D regression network can still produce an acceptable result, since the pseudo-position has little influence in this case. We designed this regularization term as a soft regularization scheme with the scaling factor based on the y-position of the object on the 2D image plane. With this adaptation, our approach can improve the pose estimation result in various conditions.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS Dataset and Setting</ns0:head><ns0:p>The object detection task of the KITTI dataset <ns0:ref type='bibr' target='#b11'>(Geiger et al., 2012)</ns0:ref> contains a total of 7,481 training images and 7,518 test images. Because the labels of the test set are not available, we follow Chen et al. (2015) to split the training set into 3,712 samples for training and 3,769 samples for evaluation. We train our model on the trainval set and evaluate on the test set to compare with the other methods, while training on the train set and testing on the val set for the ablation study.</ns0:p></ns0:div> <ns0:div><ns0:head>Implementation Details</ns0:head><ns0:p>We implemented our model using PyTorch 1.7, CUDA 10.2, and CuDNN 7.5.0 on Ubuntu 18.04, on a machine with a single RTX 2080. The input image is normalized to the resolution 1280 × 384. We also perform data augmentation, including color jittering, horizontal flipping, random scaling, and random shifting, using the default setting of <ns0:ref type='bibr' target='#b51'>Zhou et al. (2019)</ns0:ref> with the chances of 100%, 50%, and 70%, respectively. Because scaling and shifting are 3D-coordinate inconsistent, we formulate these translations as a single affine transformation. Therefore, the model's output can be converted back into the original coordinates of the input image, making the data augmentation independent of the coordinate system. We use our depth adaptive convolution layer for every detection head. <ns0:ref type='bibr' target='#b0'>Bhat et al. (2020)</ns0:ref> is the pre-trained monocular depth estimation network used for depth guidance. The depth maps are scaled down by a factor of 1/4 with nearest-neighbor interpolation before passing through the depth adaptive convolution layer.</ns0:p><ns0:p>We employ the Adam optimizer with a base learning rate of 10^{-4} for training. This learning rate is scheduled to decrease by a factor of 10 at epochs 40 and 90. For efficiency, we select a moderate batch size.</ns0:p>
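Returning to the soft scaling factor of equation (21), the short sketch below shows how \Lambda could be computed from the 2D box position. Clipping y_2d to [y_min, y_max] is our own assumption to keep the exponent within [0, 1]; it is not stated in the text.

```python
import numpy as np

def soft_scale_factor(y_2d, y_min=170.0, y_max=384.0):
    """Compute Lambda = [lambda_x, lambda_y, lambda_z] following eq. (21).

    y_2d: y-position of the 2D bounding box. A smaller value means the object
    is farther away, so it receives a larger lambda_y and its refined pose
    leans more on the pseudo-position. Clipping to [y_min, y_max] is assumed.
    """
    y = np.clip(y_2d, y_min, y_max)
    lam_y = 0.5 * np.exp(-(y - y_min) / (y_max - y_min))
    lam_z = 0.0025 * lam_y
    return np.array([0.0, lam_y, lam_z])

# A far object (y_2d near y_min) gets lambda_y close to 0.5,
# while a near object (y_2d near y_max) gets roughly 0.5 / e.
```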
<ns0:p>During the pose estimation process, we apply a 3 × 3 max-pooling operator on the head's output and pick the top 40 objects based on the 2D confidence scores. We set the threshold for this score to 0.3, and there is no need to apply Non-Maximum Suppression (NMS) in the testing phase. A short sketch of this decoding step is given below.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparative Results</ns0:head><ns0:p>The KITTI benchmark for the 3D object detection task consists of two principal metrics: average precision for 3D intersection-over-union (AP|_{3D}) and average precision for bird's-eye-view (AP|_{BEV}), which are separated into three difficulty levels, Easy, Moderate, and Hard, according to the bounding box's height, occlusion, and truncation level <ns0:ref type='bibr' target='#b11'>(Geiger et al., 2012)</ns0:ref>. From October 2019, following Simonelli et al. (2020), the evaluation metrics were changed from the 11-point Interpolated Average Precision metric AP|_{R11} to the 40 recall positions-based metric AP|_{R40}. By default, the KITTI benchmark requires 3D bounding boxes with an Intersection over Union (IoU) of 70% for the Car category. We report our results together with 11 other recent methods, ordered by the AP|_{3D R40} of the Moderate difficulty level of the Car category. As observed from Table 1, our method achieves remarkable improvement in comparison to contemporary monocular 3D object detection frameworks. It is notable that our proposed approach outperforms all existing approaches in the Easy and Moderate difficulty levels on the test set. Comparing with the second-best competitor, we achieve 17.75 (↑ 1.02) for Easy and 12.00 (↑ 0.28) for Moderate. That said, our estimation results on the Hard test set seem better than the benchmark numbers suggest. There are some abnormal detection cases in the KITTI dataset that clearly show the robustness of our proposed method. Figure <ns0:ref type='figure' target='#fig_10'>8</ns0:ref> shows some particular cases where our method produces better detection results. We show more experiments on these abnormal cases in the Appendix.</ns0:p><ns0:p>Let us take a closer look at the particular sample illustrated in Fig. <ns0:ref type='figure' target='#fig_10'>8</ns0:ref>. In the figure, we mark with numbers the points where the estimations differ. At point #1, we can see an occluded car in the original image. This car is completely unlabeled in the groundtruth, while both KM3D <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref> and our method can detect it. The size of the car at that position is a bit different due to the ground assumption we posed earlier; in case the bottom of the object is occluded, our assumption can lead to a better estimation. At point #2, there are actually two cars parked next to each other, and the white one is heavily occluded by the black one. In this case, the KITTI groundtruth only marks one object in the middle of the two cars (slightly biased toward the black car). The result of KM3D <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref> is similar to the groundtruth, while our detector can detect both cars. After visualizing the result, we verified that the detected locations of the two cars did not overlap and were consistent with the context of the input image.</ns0:p><ns0:p>Position #3 demonstrates another case where the KITTI groundtruth ignores the object while both KM3D <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref> and our method can detect it. Likewise, at position #4, the visible car is completely unlabeled in the groundtruth, while both KM3D <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref> and ours can detect it. If we observe carefully, our detection box fits the whole car (and is very close to the groundtruth at position #5 too). This observation explains why the score of our method is not as good as that of other methods on the Hard test set: the reliability of the labels in the groundtruth plays an important role in the evaluation result.</ns0:p>
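The decoding step mentioned at the beginning of this subsection (3 × 3 max-pooling as a peak filter, top-40 selection by 2D confidence, a 0.3 threshold, and no NMS) could look roughly like the following PyTorch sketch; the tensor names and shapes are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def decode_centers(heatmap, top_k=40, score_thresh=0.3):
    """Pick object-center candidates from the 2D confidence heatmap.

    heatmap: (1, num_classes, H, W) sigmoid scores from the center head.
    The 3x3 max-pool keeps only local maxima, so no NMS is needed afterwards.
    """
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()          # suppress non-peak pixels
    scores, idx = torch.topk(peaks.flatten(), top_k)       # top-40 by 2D confidence
    keep = scores > score_thresh
    scores, idx = scores[keep], idx[keep]
    h, w = heatmap.shape[2], heatmap.shape[3]
    cls = idx // (h * w)                                    # class channel
    ys = (idx % (h * w)) // w                               # row on the feature map
    xs = idx % w                                            # column on the feature map
    return cls, ys, xs, scores
```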
<ns0:p>We show qualitative results in Fig. <ns0:ref type='figure' target='#fig_11'>9</ns0:ref>. The results in the left column are inferred from the val set to compare our predictions with the groundtruth labels, while the right column images are taken from the official test set.</ns0:p></ns0:div> <ns0:div><ns0:head>ABLATION STUDY</ns0:head></ns0:div> <ns0:div><ns0:head>Accumulated impact of our proposed methods</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the experimental results that we conducted to measure the impact of each component of our proposed designs. In these experiments, we show the contribution of each proposed component to the overall performance of the monocular 3D object detection task. We follow the default setup of <ns0:ref type='bibr' target='#b22'>Li (2020)</ns0:ref> as the baseline. In terms of the supervised loss of the keypoints head, our approach significantly improves the stability of the training loss.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation on Geometric Ground-Guide Module</ns0:head><ns0:p>To demonstrate the impact of the Geometric Ground-Guide Module, we evaluate four alternatives: the model with Depth Adaptive Convolution (+DA.) and without pseudo-position refinement, the +DA. model with the pseudo y-coordinate, the +DA. model with the pseudo z-coordinate, and the +DA. model with the full pseudo-position. The results in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> show that using only the pseudo-position y-coordinate or z-coordinate already gives a significant improvement for all metrics. When combining the y- and z-components into our proposed pseudo-position, the detection result can be improved by more than 23%.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation on the impact of depth estimation quality on Adaptive Convolution</ns0:head><ns0:p>In this experiment, we analyze the performance of the depth adaptive convolution with different pre-trained depth models. To study the impact of depth estimation quality on our 3D detection results, we generate different depth maps from three recent supervised monocular depth estimation methods: DORN <ns0:ref type='bibr' target='#b10'>(Fu et al., 2018)</ns0:ref>, BTS <ns0:ref type='bibr' target='#b20'>(Lee et al., 2020)</ns0:ref>, and AdaBins <ns0:ref type='bibr' target='#b0'>(Bhat et al., 2020)</ns0:ref>. Then, we apply the model using the depth adaptive convolution with pseudo-position refinement and evaluate the results on the KITTI val set. As shown in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>, the model with depth maps estimated by <ns0:ref type='bibr' target='#b0'>Bhat et al. (2020)</ns0:ref>, the current state of the art among monocular depth estimation methods, obtains the highest performance. Besides, as depth guidance only provides a better geometric structure for dense traffic scenes, the accuracy on the Easy level of the 3D object detection task does not greatly depend on the depth estimation quality. This is because the Easy subset does not contain small or occluded objects.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this work, we propose a novel framework for monocular 3D object detection.
Consolidating the strengths of the convolutional neural network and geometric constraints, our proposed approach aims to compensate for the lack of depth information in the monocular image. We introduce a novel convolution layer, named Depth Adaptive Convolution, to improve the accuracy and stability of the detection network. We achieve this by leveraging the guidance from a pre-trained monocular depth estimator. We then propose the Geometric Ground-Guide Module, which takes advantage of 2D-3D geometric information and constraints in driving scenarios, to accurately recover the object's 3D pose from the 2D regression results.</ns0:p><ns0:p>As demonstrated in the experiments, our proposed approach yields better performance and outperforms many current state-of-the-art methods. We also analyze the Hard cases, where we point out that the groundtruth is sometimes not entirely correct, which leads to some differences in the qualitative results. Therefore, the evaluation numbers do not always fully reflect the performance of the estimation methods.</ns0:p><ns0:p>In future work, we will conduct experiments on other datasets such as Waymo and Cityscapes, as well as on data from traffic scenarios in our country, to further demonstrate the performance of our proposed method.</ns0:p></ns0:div> <ns0:div><ns0:head>APPENDIX Egocentric and allocentric orientation in 3D coordinate</ns0:head><ns0:p>In Fig. <ns0:ref type='figure' target='#fig_13'>11</ns0:ref>, we illustrate the difference between the egocentric and allocentric angles in the bird's-eye-view.</ns0:p><ns0:p>The egocentric angles of the two cars change with respect to the camera's viewpoint, while their allocentric angles remain the same. The allocentric angle (\theta) can be deduced from the egocentric angle (\alpha) and the ray angle (ray), the angle between the z-axis and the ray passing through the object's 3D center:</ns0:p><ns0:formula xml:id='formula_26'>\theta = \alpha + ray<ns0:label>(22)</ns0:label></ns0:formula><ns0:p>If C = (u, v) is the projected 3D center of the object on the image plane, equation ( <ns0:ref type='formula' target='#formula_26'>22</ns0:ref>) can be rewritten as:</ns0:p><ns0:formula xml:id='formula_27'>\theta = \alpha + \arctan\frac{u - c_x}{f_x}<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>where f_x and c_x are the focal length and the principal point x-coordinate of the camera.</ns0:p></ns0:div>
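As a concrete illustration of equations (22) and (23), the small sketch below converts between the two angle conventions; the wrap to [-pi, pi) and the use of atan2 are our own choices, not taken from the paper.

```python
import math

def egocentric_to_allocentric(alpha, u, fx, cx):
    """Eq. (23): theta = alpha + arctan((u - c_x) / f_x).

    alpha: egocentric angle, u: x-coordinate of the projected 3D center,
    fx, cx: focal length and principal point x-coordinate of the camera.
    """
    theta = alpha + math.atan2(u - cx, fx)
    return (theta + math.pi) % (2.0 * math.pi) - math.pi   # wrap to [-pi, pi)

def allocentric_to_egocentric(theta, u, fx, cx):
    """Inverse mapping, useful when the network regresses the allocentric angle."""
    alpha = theta - math.atan2(u - cx, fx)
    return (alpha + math.pi) % (2.0 * math.pi) - math.pi
```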
<ns0:div><ns0:head>Visualization of the impact of pseudo-position</ns0:head><ns0:p>We give one example here to show the impact of the pseudo-position on the detection result. Figure <ns0:ref type='figure' target='#fig_15'>12</ns0:ref> illustrates a common case on the road. We show the bird's-eye-view of the detection result with and without the help of the pseudo-position. One can easily see that the detection result matches the groundtruth when we employ the pseudo-position.</ns0:p></ns0:div> <ns0:div><ns0:head>Abnormal detection cases of the KITTI dataset</ns0:head><ns0:p>In Fig. <ns0:ref type='figure' target='#fig_16'>13</ns0:ref>, we plot the groundtruth labels (red boxes) and our predictions (green boxes) of additional abnormal cases where the vehicles are not well labeled. We can observe phenomena similar to those in Fig. <ns0:ref type='figure' target='#fig_10'>8</ns0:ref>. While investigating the dataset, we found that it contains many data samples like these, leading to some differences in the benchmark. While the KITTI benchmark is a good standard for measuring the performance of detection methods, finding such abnormal detections raises a concern about the robustness of the reported results. Perhaps some methods overfit the dataset or have been fine-tuned to match such cases.</ns0:p><ns0:p>Occlusion is something we encounter a lot in practice. Ignoring occlusion cases, such as those seen in these experiments, would be dangerous, especially in autonomous driving scenarios.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head>Figure 1.</ns0:head><ns0:figDesc>An example of an occluded object in the driving scenario: the red dot representing the 2D center of the car lies on the visual appearance of the other car.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2.</ns0:head><ns0:figDesc>The overview of our proposed architecture for the monocular 3D object detection framework. The backbone network extracts features from the RGB image. Then, we apply the depth adaptive convolution detection head to predict the object center, pseudo-contact point offsets, keypoint offsets, observation angle, and dimension. The Geometric Ground-Guide Module takes the intermediate results from the detection heads and the pseudo-position value to recover the object's 3D pose via a 2D-3D geometric transformation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:figDesc>3D object detection methods rely heavily on the quality of the LiDAR sensor. We characterize those LiDAR-based methods into two main approaches: point-format and voxel. With the advance of PointNet</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3.</ns0:head><ns0:figDesc>Depth Adaptive Convolution Detection Head. The depth adaptive 2D convolution processes image features with the external guidance of depth maps generated by a pre-trained monocular depth estimation network. The output goes through a ReLU activation followed by a standard 2D convolution layer.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4.</ns0:head><ns0:figDesc>The visual appearance of the object's dimensions under different observation angles greatly affects the estimated size of the bounding box.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5.</ns0:head><ns0:figDesc>Illustration of the groundtruth (red) and predicted (blue) 3D bounding boxes in bird's-eye-view.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6.</ns0:head><ns0:figDesc>The depth of the ground plane generated from the extrinsic information of the camera.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7.</ns0:head><ns0:figDesc>Object's pseudo-position P and related terms in the inference process using the camera model and the ground plane model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8.</ns0:head><ns0:figDesc>Illustration of unlabeled cases in the KITTI val set. First row: monocular image. Second row: KITTI's groundtruth. Third row: results from Li (2020). Last row: our prediction results.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 9.</ns0:head><ns0:figDesc>Qualitative illustration of our monocular 3D detection results (left: val set, right: test set). Green: our predictions, red: groundtruth, dot: projected 3D center, diagonal cross: heading of the object.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10.</ns0:head><ns0:figDesc>Trajectories of the optimization process for each detection head with the standard convolution and the depth adaptive convolution operation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 11.</ns0:head><ns0:figDesc>Egocentric (green) and allocentric (orange) angles in the bird's-eye-view. The red arrow indicates the heading of the car, while the blue arrow is the ray between the origin and the car's center.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 12.</ns0:head><ns0:figDesc>Visualization of the impact of the pseudo-position for refining object positions (3D detection result with pseudo-position). Red is the groundtruth z-position, green is the predicted z-position.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 13.</ns0:head><ns0:figDesc>Abnormal detection cases from the KITTI val set. Left: groundtruth labels, right: our predictions.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1.</ns0:head><ns0:figDesc>Comparative results on the KITTI 3D object detection test set of the Car category. Columns report AP|_{3D R40} and AP|_{BEV R40} at IoU = 0.7 for the Easy / Moderate / Hard levels.</ns0:figDesc><ns0:table>
Method | Backbone | AP|3D R40 Easy / Mod / Hard | AP|BEV R40 Easy / Mod / Hard
ROI-10D (Manhardt et al., 2019) | ResNet-34 | 4.32 / 2.02 / 1.46 | 9.78 / 4.91 / 3.74
GS3D (Li et al., 2019) | VGG-16 | 4.47 / 2.90 / 2.47 | 8.41 / 6.08 / 4.94
MonoPSR (Ku et al., 2019) | ResNet-101 | 10.76 / 7.25 / 5.85 | 18.33 / 12.58 / 9.91
M3D-RPN (Brazil and Liu, 2019) | DenseNet-121 | 14.76 / 9.71 / 7.42 | 21.02 / 13.67 / 10.23
SMOKE (Liu et al., 2020) | DLA-34 | 14.03 / 9.76 / 7.84 | 20.83 / 14.49 / 12.75
MonoPair (Chen et al., 2020) | DLA-34 | 13.04 / 9.99 / 8.65 | 19.28 / 14.83 / 12.89
RTM3D (Li et al., 2020) | DLA-34 | 14.41 / 10.34 / 8.77 | 19.17 / 14.20 / 11.99
AM3D (Ma et al., 2019) | ResNet-34 | 16.50 / 10.74 / 9.52 | 25.03 / 17.32 / 14.91
PatchNet (Ma et al., 2020) | PointNet-18 | 15.68 / 11.12 / 10.17 | 22.97 / 16.86 / 14.97
KM3D (Li, 2020) | DLA-34 | 16.73 / 11.45 / 9.92 | 23.44 / 16.20 / 14.47
D4LCN (Ding et al., 2020) | ResNet-50 | 16.65 / 11.72 / 9.51 | 22.51 / 16.02 / 12.55
Ours | DLA-34 | 17.75 / 12.00 / 9.15 | 25.80 / 16.93 / 12.50
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2.</ns0:head><ns0:figDesc>Evaluation of the accumulated improvement of our proposed methods on the KITTI val set. 'P.' denotes Pseudo-position, 'DA.' denotes Depth Adaptive Convolution.</ns0:figDesc><ns0:table>
Method | AP|3D R40 (IoU = 0.7) Easy / Mod / Hard | AP|BEV R40 (IoU = 0.7) Easy / Mod / Hard
Baseline | 11.56 / 10.31 / 8.94 | 19.86 / 16.33 / 14.56
+DA. | 15.12 / 12.02 / 10.84 | 23.13 / 18.24 / 15.84
+P. | 16.15 / 13.17 / 11.48 | 25.17 / 19.91 / 17.63
+P. +DA. | 17.59 / 14.79 / 13.10 | 25.01 / 20.56 / 18.20
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3.</ns0:head><ns0:figDesc>Impact of the Geometric Ground-Guide Module for 3D object detection on the KITTI val set. 'DA.' denotes the model using Depth Adaptive Convolution.</ns0:figDesc><ns0:table>
Method | AP|3D R40 (IoU = 0.7) Easy / Mod / Hard | AP|BEV R40 (IoU = 0.7) Easy / Mod / Hard
+DA | 15.12 / 12.02 / 10.84 | 23.13 / 18.24 / 15.84
+DA + pseudo y-coordinate | 16.22 / 12.51 / 11.24 | 24.76 / 18.81 / 16.33
+DA + pseudo z-coordinate | 16.92 / 13.28 / 11.38 | 24.97 / 18.96 / 16.40
+DA + pseudo y and z-coordinate | 17.59 / 14.79 / 13.10 | 25.01 / 20.56 / 18.20
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4.</ns0:head><ns0:figDesc>Comparison of different depth estimation quality for 3D object detection on the KITTI val set. (*) indicates using the standard convolution operation for our detection network.</ns0:figDesc><ns0:table>
Depth Estimator | AP|3D R40 (IoU = 0.7) Easy / Mod / Hard | AP|BEV R40 (IoU = 0.7) Easy / Mod / Hard
None* | 16.15 / 13.17 / 11.48 | 25.17 / 19.91 / 17.63
DORN | 17.44 / 13.85 / 12.52 | 25.86 / 20.37 / 17.95
BTS | 17.57 / 14.21 / 13.06 | 25.00 / 20.54 / 18.19
AdaBins | 17.59 / 14.79 / 13.10 | 25.01 / 20.56 / 18.20
</ns0:table></ns0:figure> </ns0:body> "
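Since every table above reports the AP|R40 metric, the following is a brief, hedged sketch of the 40-recall-point interpolated average precision computation introduced for the KITTI benchmark by Simonelli et al. (2020); the precision/recall operating points are assumed to be already computed for one class, IoU threshold, and difficulty level.

```python
import numpy as np

def ap_r40(recalls, precisions):
    """40 recall-point interpolated average precision (AP|R40).

    recalls, precisions: matched arrays of precision/recall operating points
    for one class, IoU threshold, and difficulty level (assumed precomputed).
    """
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(1.0 / 40.0, 1.0, 40):   # 40 recall positions, skipping 0
        mask = recalls >= r
        p_interp = precisions[mask].max() if mask.any() else 0.0
        ap += p_interp / 40.0
    return ap
```

The older AP|R11 variant mentioned above instead averages the interpolated precision over the 11 recall positions 0, 0.1, ..., 1.0.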
"Computer Science and Engineering Faculty Hochiminh City University Of Technology 268 Ly Thuong Kiet Street, District 10, Ho Chi Minh City Tel: +84 8 38 647 256 Fax: +84 8 38 653 823 hcmut.edu.vn nddung@hcmut.edu.vn July 24, 2021 Dear Editors, We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular, all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories. In this revision, we perform the following modifications: • Update the abstract so that the reader can easily follow the flow of the manuscript. • Update experiments, explanations, and the appendix to address reviewer’s concern on the ground­plane constrain. • Update equations, notations, and descriptions as reviewers suggested. • Update our explanation on depth­adaptive convolution and also explain its contribution in the experiment section. • Update the explanation on 2D­3D transformation. • Update information about the experiments in Tables 1–4. • Fix typos. We believe that the manuscript is now suitable for publication in PeerJ. Sincerely, Dr. Duc Dung Nguyen Computer Science and Engineering Faculty On behave of all authors. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 Reviewer 1 Basic reporting I commend the authors for their work. The manuscript is written in unambiguous language with clearly labeled tables and figures. The authors did a good job of summarizing the literature. The bottleneck in the existing KITTI benchmark discussed in the article is an intriguing direction for the research community. Response: We greatly appreciate the reviewer for the encouraging remarks. Experimental design Monocular 3D Object detection has very significant applications in the context of autonomous nav­ igation, and, thus it aligns within the scope of the journal. The manuscript describes the proposed approach in a detailed manner with the mathematical formulation backing it up. The provided implementation details and supplementary material should be enough to replicate the results. Response: We greatly appreciate the reviewer for the encouraging remarks. Validity of the findings The authors provide a comprehensive comparison of the proposed method with the literature. The ablation study also highlights the significance of the different parts of the method. Response: We greatly appreciate the reviewer for the encouraging remarks. Reviewer 2 Basic reporting This article put forward an approach to improve existing 3D object detection from monocular im­ ages by using depth adaptive convolution (DAP) layer and ground­guide model (GGM). The au­ thors proposed a type of convolutional neural network incorporating the DAP and GGM. The proposed DAP is an algorithm that leverage on the method using pixel­adaptive convolution by Su et al (2019), and then work it into the context of depth related variables for each pixel. The authors stated that the DAP objective is introduced to handle irrelevant local structures of traditional 2D convolution. However, I would like to ask the authors to elaborate more on the irrelevant local structure, in which how this DAP overcomes the existing problem and how significant it is. The proposed GGM make use of estimating the “pseudo­position”, assume that the ground plane is flat. 
And then this pseudo­position is put together with output from the detection network (2D centre, orientation, keypoints, etc.) to form the matrix for regression. My comments are: (1) It may be okay to make the assumes of flat ground at this moment, however, the authors should explain a bit on could this approach be improved in the future to handle e.g. planar surfaces that are not necessary flat, from flat to uphill etc. Response: Thank you for your comment. We agree that the assumption “ground is flat” is too strong. Our original idea is that nearby cars should be on the flat ground with ours. Even if our car is on the hill, the relative position between our car and other cars should be on the same flat plane. In that case, we can apply the ground­guide module to estimate the pseudo­position. The proposed ground­guide module produces the pseudo­position, which acts as a regularization term in our cost function. The refined position depends on both the pseudo­position and 2D­3D constraints. Thus, even if two cars are not on the same plane, the error of the obtained pseudo­ position does not affect much on the refined position. If the assumption holds, the pseudo­position can enhance the final pose estimation, as we showed in the experiments. Otherwise, a good pose obtained from the 2D regression network can produce an acceptable result since the pseudo­ COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 position won’t affect much in this case. We designed this regularization term as a soft regularization scheme with the scaling factor based on the y­position of the object on the 2D image plane. With this adaptation, our approach can improve the pose estimation result in various conditions. To demonstrate the above arguments, we discuss here three examples. Each example comes from the validation set, which describes different scenarios. In each one, we illustrate the detection results with and without the use of pseudo­position. • Example 1: (Image 231) In this scenario, our vehicle and objects stay on the same plane. This is the common case for most driving scenarios. With nearby objects, the model can regress 2D properties accurately, including keypoint offset. Thus, the 3D pose estimation is accurate even without the pseudo­position. That said, when the objects are far from our car, it is harder for the model to regress keypoints and the estimation error is also high. For these objects, the pseudo­position helps to correct the object’s position. Input image (231). (a) Without pseudo­position (b) With pseudo­position Top view of the detection result on image 231. • Example 2: (Image 66) Vehicle and object are not in the same plane. For this case, our vehicle is at the beginning of the hill, while others are on the hill. The slope, in this case, is small. As observed in the result, with the pseudo­position, our estimation is still better without it. In many other experiments, we find out that with the help of the pseudo­position, the final pose estimation is better regardless small variation of the ground surface. Input image (66): 2 objects are on the hill, our vehicle is at the begin of the hill. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 (b) With pseudo­position (a) Without pseudo­position Top view of the detection result on image 66. 
• Example 3: (Image 1502) Vehicle and object are not in the same plane, and the slope is large. This case is rare, but it happens sometimes. As soon as the car enters the hill, the phenomenon disappears. For this case, both methods fail to detect the correct object position, regardless of the use of pseudo­position. Input image (1502): object is on a big slope. (a) Without pseudo­position (b) With pseudo­position Top view of the detection result on image 1502. We revised our explanation in the manuscript. We also update the experiment section to address the reviewer concerns. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 (2) On the scaling factor of equation (18), please describe the actual sets of values that you have used and with regards to those that yield better results. Response: We have added a paragraph to explain the design purpose of the scaling factor in page 12 and its actual values are represented by equation (21) in the revision. The experiment is carried using the KITTI dataset, which is commonly used in the community for algorithm testing and evaluation. The authors reported that their method yield better results in most situations as compared to existing approaches. Others: (1) “Cori” in equation (9) not explained. Response: “Cori” (Cori ) is the position in 3D coordinate (xyz) of the ith corner of the 3D bounding box of an object. We already added the note for Cori in the revision on page 8. (2) The second and third contribution as stated in page 3­4 seem to be fairly inter­related, therefore it would be better if you could combine them i.e. mention the GGM together with “pseudo­position” estimation. Response: Thank you for your suggestion. We agree that the idea of GGM and the pseudo­ position are interrelated. Therefore, in this revision, we revised our statements on pages 3 and 8–9 to address this concern. (3) Suggest to reword and rephrase last sentence in pg20: “Let’s put aside non­flat...” to “Thus, we do not consider non­flat...”. Response: Thank you for your comments. We revised this sentence as suggested on page 9. I would recommend this article to be accepted upon make the minor corrections to address all the questions that are described. Response: We greatly appreciate the reviewer for the encouraging remarks. Experimental design The experimental design is valid and sufficient comparing with existing practice. However, it would be better that in future work, the research work should capture their own data and focus more on the challenging scenarios. Response: We greatly appreciate the reviewer for the encouraging remarks. We agree that this work needs some improvements, especially in terms of data. We rely on the KITTI dataset since it provides accurate groundtruth and people used it for benchmarking detection methods. Adding more scenarios is possible using simulation tools such as CARLA. We also need to use some other advanced models to transform this synthetic data to be more realistic. Collecting real­world data is possible, however, very costly. Labeling data and advanced pieces of equipment like LiDAR are other bottlenecks that we hope to overcome in the future. Validity of the findings The findings are valid with respect to its concept formulation and testing approach. Response: We greatly appreciate the reviewer for the encouraging remarks. 
Comments for the author I would suggest that in the title and abstract to mention depth adaptive convolution first before ground­guided model, so that they follow the description flow of the two methods, e.g. for the title: “... 3D object detection with depth adaptive convolution and ground­ guide model”. Response: Thank you for your suggestion. When we think about the title of this manuscript, we did not consider the order of contributions. Each one enhances the detection result in its way, as we demonstrated in the ablation study. Changing the title at this moment is also fine because it doesn’t matter much to the manuscript. However, it may confuse other reviewers. Therefore, we would like to keep the current title until we finalize the manuscript and receive instructions from the editor. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 In this revision we updated our abstract to address the reviewer’s concern. We hope this helps the readers follow the flow of our work better. Reviewer 3 Comments for the author The paper claims the contributions as follows: 1. novel depth adaptive convolution 2. pseudo­position and ground­guide module However, there are some critical issues with the proposed method: ­ The proposed depth adaptive convolution seems to be directly adapted from Su et al. However, in Su’s paper, fi , f j represents the feature vector at a certain pixel. However, in this paper, the feature is changed to depth map with only one single value at a certain pixel location. Then, Eq. (5) should not be simply copied from the original paper, where fi here should be a number instead of a vector. Response: Thank you for your note. We adapt the depth­adaptive­convolution from the work of Su et al. As the reviewer pointed out, the feature is the depth at a pixel location. Therefore, fi should be a scalar instead of a vector. In this revision, we refer to the work of Su et al. with equation (4) and update our adaption using equation (5), where the depth map guidance is recognized as scalars di , d j . ­ The intuition behind this depth adaptive convolution is not clearly stated. Taking the depth in­ formation into the convolution operation (using Eq (5)), how can that be a better way to handle “irrelevant local structure”? Response: In the context of traffic scenes, vehicles usually occlude others. We mentioned the “irrelevant local structure” since the detection result of a specific object should not be affected by the features of others. The irrelevant features could belong to adjacent objects or background objects. We aim to enhance the detection by selecting valuable features. Features at pixels inside the object should contribute more to 3D detection. Instead of using instance segmentation as a guide for the detection network, we use the 3D surface of the object taken from the depth map. We can always figure out the discontinuity between objects just by probing the depth map. That is the intuition behind the depth adaptive convolution approach. In this revision, we update the explanation and address this idea (page 7). The below figure shows the correlation between the depth map and the instance segmentation of objects. If we look closer at the discontinuity in the depth map, we can figure the boundary of each object. 
COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 The relative correlation between the depth surface and the objects in the image. (Top) the depth estimation image, (Bottom) The instance segmentation of the image. ­ I doubt the innovation of the pseudo­position and ground­guide module. ­ First, the assumption of “ground is flat” is too strong. Even in KITTI, you cannot assume the ground plane is relatively static to the camera, which means the camera poses should be always changing while driving. Therefore, the mapping in Fig. 6 should not be a unique one for all frames. Response: Another reviewer also shared the same concern on this issue. We agree that this assumption is strong. We cannot get a perfect ground plane like that in practice. That said, we assume that our car and some others in front stay on the flat ground in common cases. With this assumption, we can estimate the pose of other cars better when they are in the closed range. We quote our detailed response to address this concern below: We agree that the assumption “ground is flat” is too strong. Our original idea is that nearby cars should be on the flat ground with ours. Even if our car is on the hill, the relative position between our car and other cars should be on the same flat plane. In that case, we can apply the ground­guide module to estimate the pseudo­position. The proposed ground­guide module produces the pseudo­position, which acts as a regularization term in our cost function. The refined position depends on both the pseudo­position and 2D­3D constraints. Thus, even if two cars are not on the same plane, the error of the obtained pseudo­position does not affect much on the refined position. If the assumption holds, the pseudo­position can enhance the final pose esti­ mation, as we showed in the experiments. Otherwise, a good pose obtained from the 2D regression network can produce an acceptable result since the pseudo­position won’t affect much in this case. We designed this regularization term as a soft regu­ larization scheme with the scaling factor based on the y­position of the object on the 2D image plane. With this adaptation, our approach can improve the pose estimation result in various conditions. To demonstrate the above arguments, we discuss here three examples. Each example comes from the validation set, which describes different scenarios. In each one, we illustrate the detection results with and without the use of pseudo­position. • Example 1: (Image 231) In this scenario, our vehicle and objects stay on the same plane. This is the common case for most driving scenarios. With nearby objects, the model can regress 2D properties accurately, including keypoint offset. Thus, the 3D pose estimation is accurate even without the pseudo­position. That said, when the objects are far from our car, it is harder for the model to regress keypoints and the estimation error is also high. For these objects, the pseudo­position helps COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 to correct the object’s position. Input image (231). (a) Without pseudo­position (b) With pseudo­position Top view of the detection result on image 231. • Example 2: (Image 66) Vehicle and object are not in the same plane. For this case, our vehicle is at the beginning of the hill, while others are on the hill. The slope, in this case, is small. 
As observed in the result, with the pseudo­position, our estimation is still better without it. In many other experiments, we find out that with the help of the pseudo­position, the final pose estimation is better regardless small variation of the ground surface. Input image (66): 2 objects are on the hill, our vehicle is at the begin of the hill. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 (b) With pseudo­position (a) Without pseudo­position Top view of the detection result on image 66. • Example 3: (Image 1502) Vehicle and object are not in the same plane, and the slope is large. This case is rare, but it happens sometimes. As soon as the car enters the hill, the phenomenon disappears. For this case, both methods fail to detect the correct object position, regardless of the use of pseudo­position. Input image (1502): object is on a big slope. (a) Without pseudo­position (b) With pseudo­position Top view of the detection result on image 1502. We revised our explanation in the manuscript. We also update the experiment section to address the reviewer concerns. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 With the current results, we see some potential improvements in the future. The calibration process is complicated, but if we perform the calibration correctly, we can calculate the pseudo­position better. In addition, the estimation of the road surface is not always correct. In fact, the depth estimation only valid for short range. Therefore, with a good calibration result and advanced depth estimation, we can extend current ground­plane model. It is, however, is not in the scope of our current work. The current assumption works well on the KITTI dataset as we have shown in the experiments. ­ Second, the 2D­3D transformation is just borrowed from Li, so that should not be clearly stated with proper citation. Response: In this revision, we update our explanation on pages 9–10 to address the reviewer’s concern. Our formulation enhances the 2D­3D transformation process of Li (2020) by employing the pseudo­position in the calculation. We carefully state the work of Li on page 11 and express our improvement in the description of equations on pages 11–12. ­ The experimental results are not well­organized. For example, 1) the author should mention the IOU criterion of the results in Table 1. Other papers usually show both IOU>0.7 and IOU>0.5. Response: The KITTI benchmark by default uses the IoU of 0.7. We can only obtain the result with IoU 0.5 offline on the validation set. In Table 1, we show our quantitative results on the Test split of the KITTI benchmark. Hence, we could only provide the detection results with IOU 0.7 calculated by the KITTI server. We also corrected the column name of the metric AP in Tables 1, 2, 3 to AP|R40 (IoU = 0.7). 2) The baseline of Table 2 is using the default setup in Li’s paper but the results are much lower than Li’s paper. Why? Response: In Li’s paper, the experimental result on the validation set used the 11 recall positions AP. However, from November 2019, with the suggestion of Simonelli et al. (2020), the official KITTI benchmark has changed to the 40 recall positions AP for a more fair comparison. In this paper, the authors found that the AP of 11 recall position is unreliable. 
While training our model, we also found that the AP R11 is unstable for every training epoch. Therefore, all the experimental results in our Ablation Study section use the AP|R40 for evaluation. In this revision, we mentioned the AP|R40 metric on pages 14–16 to clarify the issue. 3) The baseline in Table 2 and Table 3 should not be the same for a fair comparison. The baseline in Table 3 should include DA. Response: We made a notation mistake in the “Evaluation on Geometric Ground­Guide Module” experiment. The pseudo refinements were conducted on the detection models using the depth­ adaptive convolution. In this revision, we corrected the content of the “Method” column in Table 3 and specified the experiment conditions in the “Evaluation on Geometric Ground­Guide Module” on page 15. We also updated the result in Table 3. 4) Table 4 is not clear. What is the purpose to show three others depth estimation results and compare them with standard convolution? I think the proper way to show the depth estimation results should be by comparing your depth estimation results with other SOTA methods. Response: Thank you for your suggestion. We aim to investigate which depth estimation baseline perform best with the adpative­convolution. In this revision, we modified the name of the subsec­ tion on page 16 to ”Evaluation on the impact of depth estimation quality on Adaptive Convolution” and update our explanation to clarify the impact of the baseline on the depth­adaptive convolu­ tion. Table 4 shows how the quality of pre­trained depth estimators affects the model using the depth­adaptive convolution. Thus, we illustrate the 3D detection results of the model using stan­ dard convolution and the models using depth adaptive convolution with different depth models, respectively. COMPUTER SCIENCE AND ENGINEERING FACULTY, HOCHIMINH CITY UNIVERSITY OF TECHNOLOGY, VIETNAM T (84.8) 38.647.256 ­ ext: 5847 | F (84­8) 38 653 823 "
Here is a paper. Please give your review comments after reading it.
219
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Rumor detection is a popular research topic in natural language processing and data mining. Since the outbreak of COVID-19, related rumors have been widely posted and spread on online social media, which have seriously affected people's daily lives, national economy, social stability, etc. It is both theoretically and practically essential to detect and refute COVID-19 rumors fast and effectively. As COVID-19 was an emergent event that was outbreaking drastically, the related rumor instances were so scarce and distinct at its early stage. This makes the detection task a typical few-shot learning problem. However, traditional rumor detection techniques focused on detecting existed events with enough training instances, so that they fail to detect emergent events such as COVID-19. Therefore, developing a new few-shot rumor detection framework has become critical and emergent to prevent outbreaking rumors at early stages.</ns0:p><ns0:p>Methods. This article focuses on few-shot rumor detection, especially for detecting COVID-19 rumors from Sina Weibo with only a minimal number of labeled instances. We contribute a Sina Weibo COVID-19 rumor dataset for few-shot rumor detection and propose a few-shot learning-based multi-modality fusion model for few-shot rumor detection. A full microblog consists of the source post and corresponding comments, which are considered as two modalities and fused with the meta-learning methods.</ns0:p><ns0:p>Results. Experiments of few-shot rumor detection on the collected Weibo dataset and the PHEME public dataset have shown significant improvement and generality of the proposed model.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>From the early social psychology literature, a rumor refers to a story or a statement whose truth value is unverified or deliberately false <ns0:ref type='bibr' target='#b0'>(Allport et al., 1947)</ns0:ref>. More recently, DiFonzo and associates defined rumor as unverified and instrumentally relevant information statements in circulation that arise in contexts of ambiguity and that function primarily to help people make sense and manage threat <ns0:ref type='bibr' target='#b7'>(DiFonzo et al., 2011)</ns0:ref>. With the fast development of the Internet, the widespread of rumors online has become a major social problem nowadays. Especially on popular online social media such as Sina Weibo and Twitter, users or machines post millions of unverified messages every day. Since the breakout of COVID-19, rumors about COVID-19 have been continuously posted and spread, causing the panic of the public and placing considerable losses on the economy and other aspects of society. Thus, the study of discovering and dispelling rumors fast and accurately has become both theoretically and practically valuable. So, rumor detection on social media has become one of the recently popular research areas. Online social media are naturally suitable for stimulating mass discussions and spreading information. Users usually initialize conversations over spotlighted events/topics and thus generate a series of related posts over the same events/topics. Each conversation/discussion consists of a source post, corresponding replies and reposts. Therefore, most existing works detect rumors on social media at a macro level. 
They aim to determine whether the public discussions relating to a certain event/topic belongs to rumor or not <ns0:ref type='bibr' target='#b37'>(Wu et al., 2015)</ns0:ref>. Existing works under this setting contain both traditional machine learning models with hand-crafted features <ns0:ref type='bibr' target='#b3'>(Castillo et al., 2011;</ns0:ref><ns0:ref type='bibr'>Yang et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b12'>Kwon et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Jin et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Wu et al., 2015)</ns0:ref>, and deep learning-based models <ns0:ref type='bibr'>(Ma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Yu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ma et al., 2019;</ns0:ref><ns0:ref type='bibr'>Bian et al., 2020)</ns0:ref>. One of the other research lines aims to detect rumors at a micro level, which means to detect whether a single post belongs to a rumor. It has practical value for those who care more about the credibility of single posts. Pioneer works have been conducted on the Twitter rumor detection task <ns0:ref type='bibr'>(Sicilia et al., 2017;</ns0:ref><ns0:ref type='bibr'>Sicilia et al., 2018 (a)</ns0:ref>). Most existing rumor detection models assume that each event has plenty of training instances and regard the task of rumor detection as the classification problem based on supervised learning. Therefore a coherent challenge of existing rumor detection methods is identifying rumors relating to some suddenly happened events such that very few instances were available at the early stages of the events. For the macro-level models, it is a possible solution to set time windows for learning good features <ns0:ref type='bibr' target='#b13'>(Kwon et al., 2017)</ns0:ref> at the early stage. It is still based on supervised learning and the discussed events appear in both the training set and test set. However, the COVID-19 is an emergent event, which has never occurred in the past. This means there is only rarely labeled data for this kind of emergent event at the early stage. In this scenario, all the previous supervised learning-based methods are not applicable, because the training data and the test data belong to distinct events. Previous works on cross-topic rumor detection have discussed this problem, which added knowledge of the test topic in the training set <ns0:ref type='bibr' target='#b28'>(Sicilia et al., 2018 (b)</ns0:ref>). According to the conclusion in <ns0:ref type='bibr' target='#b28'>(Sicilia et al., 2018 (b)</ns0:ref>), to obtain good results in cross-topic detection, at least 80% of the test topic knowledge should be included in the training set. Therefore, existing works have huge difficulty in rumor detection for emergent events like COVID-9 with very little labeled data, the main challenges include: (1) The rumors about the target emergent event to be detected has never occurred before, so that the history data of other events could hardly contribute to building prediction models. (2) The number of labeled instances for the target emergent event is extremely scarce, e.g. only 1 or 3 or 5, which makes the popular 'pretraining and finetuning' paradigm fail under this situation. Motivated by the necessity of COVID-19 rumor detection under these real challenges, we formulate it as a few-shot learning task. Few-shot learning is able to learn an adaptable model with only a few labeled data. 
It can predict rumors about emergent events, which have never occurred in the training set. Considering collecting information like the user profile is both timeconsuming and privacy-sensitive, we aim to detect rumors only based on the text contents from online social media. We regard a full microblog consists of two modalities, the source post and the limited number of corresponding comments, and aim to detect whether a full microblog belongs to a rumor. Both modalities are used for building fusion models. To the best of our knowledge, this is the first work tackling the challenge of detecting rumors with very few instances over emergent events and considering the rumor detection task as a few-shot learning task. The main contributions are as follows:</ns0:p><ns0:p>&#61623; We collect and contribute a publicly available rumor dataset that is suitable for few-shot learning from Sina Weibo, the largest and most popular online social media in China. This dataset contains 11 COVID-19 irrelevant events and 3 COVID-19 relevant events, which sums to a total of 3,840 instances, of which 1,975 are rumors and 1,865 are non-rumors. &#61623; We propose the novel problem of few-shot rumor detection on online social media. It aims to detect rumors of emergent events, which have never happened, with only a very small number of labeled instances. The definition of instances considers the characteristics of online social media by containing both source posts and corresponding comments. &#61623; We introduce a few-shot learning-based multi-modality fusion model named COMFUSE for COVID-19 rumor detection, including text embeddings modules with pre-trained BERT model, feature extraction module with multilayer Bi-GRUs, multi-modality feature fusion module with a fusion layer, and meta-learning based few-shot learning paradigm for rumor detection. We perform extensive evaluations on benchmark datasets to show that our model is superior to the state-of-the-art baselines in the few-shot situation, which can detect rumors of emergent events with only a small number of labeled instances.</ns0:p></ns0:div> <ns0:div><ns0:head>literature review</ns0:head><ns0:p>This paper focuses on the few-shot rumor detection task on social media for the emergent event like COVID-19, related literature reviews include rumor detection, rumor detection at an early stage, and few-shot learning.</ns0:p></ns0:div> <ns0:div><ns0:head>Rumor detection</ns0:head><ns0:p>Most early works on rumor detect extracted hand-crafted features and built classifiers under supervised learning. For example, Castillo and associates constructed features from the message, user profiles and topics to study the credibility of tweets by applying SVM and Naive Bayes <ns0:ref type='bibr' target='#b3'>(Castillo et al., 2011)</ns0:ref>. Kwon and associates comprehensively explored the user, structural, linguistic and temporal features in rumor detection tasks <ns0:ref type='bibr' target='#b13'>(Kwon et al., 2017)</ns0:ref>. Sicilia and associates applied new features such as the likelihood a tweet is retweeted, and the fraction of tweets with URLs to detect health-related rumors <ns0:ref type='bibr'>(Sicilia et al., 2018 (a)</ns0:ref>). 
Besides, hand-crafted features such as location-based features <ns0:ref type='bibr'>(Yang et al., 2012)</ns0:ref>, temporal features <ns0:ref type='bibr' target='#b12'>(Kwon et al., 2013)</ns0:ref>, topical space features <ns0:ref type='bibr' target='#b11'>(Jin et al., 2016)</ns0:ref> and sentimental features <ns0:ref type='bibr' target='#b14'>(Liu et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Mohammad et al., 2017)</ns0:ref> are also applied. In this stage, traditional machine learning algorithms such as support vector machines <ns0:ref type='bibr'>(Yang et al., 2012)</ns0:ref> and decision trees <ns0:ref type='bibr' target='#b3'>(Castillo et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b41'>Zhao et al., 2015)</ns0:ref> were the common choices. However, hand-crafted feature engineering is timeconsuming and with high labor costs. Benefit from the development of deep learning, deeplearning based features have been widely applied to rumor detection recently. These features are extracted automatically in the form of embeddings by training deep neural networks. Representative models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are widely used to extract essential features of given texts for rumor detection <ns0:ref type='bibr'>(Yu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ma et al., 2019;</ns0:ref><ns0:ref type='bibr'>Bian et al., 2020)</ns0:ref>. There are also some works considering the characteristics of social media. For the propagation structure in social media, Liu proposed to use the propagation information to help detect rumors <ns0:ref type='bibr'>(Liu et al., 2016)</ns0:ref>. For the response and reply operations in Twitter, retweets <ns0:ref type='bibr' target='#b39'>(Yuan et al., 2019)</ns0:ref> and replies <ns0:ref type='bibr' target='#b39'>(Ma et al., 2019)</ns0:ref> along with source tweets were utilized.</ns0:p></ns0:div> <ns0:div><ns0:head>Early-stage Rumor Detection</ns0:head><ns0:p>Detect rumors at the early stage is both necessary and challenging. A comprehensive study was conducted to explore rumor detection performance over varying time windows with four kinds of hand-crafted features include user, structural, linguistic and temporal-based features <ns0:ref type='bibr' target='#b13'>(Kwon et al., 2017)</ns0:ref>. It reveals that user and linguistic features are suitable for building early detection models and proposed a practical algorithm that does not require full snapshots nor complete historical records. Similar strategies were applied to deep learning-based models such as GAN-GRU <ns0:ref type='bibr' target='#b39'>(Ma et al., 2019)</ns0:ref> and Bi-GCN <ns0:ref type='bibr'>(Bian et al., 2020)</ns0:ref>, which set a detection delay time and evaluated with tweets posted no later than the delay. These introduced works detect early rumors at the macro level and the detected events of the discussions online have appeared in both the training set and test set. Another pioneer work of early rumor detection focused on cross-topic rumor detection <ns0:ref type='bibr' target='#b28'>(Sicilia et al., 2018 (b)</ns0:ref>), which aims to detect rumors about an unseen topic that has never used and existed in the training set. This paper detected rumors at the micro-level and implies that under this practical setting, it requires at least 80% of the test topic samples to be included in the training set, in order to achieve good results. 
The cross-topic task was also discussed in a recent proposed work about rumor detection with imbalanced learning <ns0:ref type='bibr'>(Fard et al., 2020)</ns0:ref>. Few-shot learning Few-shot learning assumes that very few labeled instances are available, which is a challenging task in machine learning <ns0:ref type='bibr' target='#b34'>(Vinyals et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b9'>Finn et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Snell et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Sung et al., 2018)</ns0:ref>. Meta-learning is one of the popular strategies in few-shot learning, developing machine learning models to predict unseen categories with few labeled data. The core idea of metalearning is to learn transferable knowledge on training data that can adapt to new tasks efficiently with just a few examples of the new tasks. Optimization-based meta-learning approaches such as MAML <ns0:ref type='bibr' target='#b9'>(Finn et al., 2017)</ns0:ref> aim to search for optimal initial parameters of models which can quickly adapt to new tasks with just a few gradient steps. Meta-transfer learning (MTL) <ns0:ref type='bibr' target='#b32'>(Sun et al., 2019)</ns0:ref> proposed to avoid the overfitting problem during training a small amount of data from the unseen category. Metric-based meta-learning approaches such as MatchingNet <ns0:ref type='bibr' target='#b34'>(Vinyals et al., 2016)</ns0:ref> and PrototypicalNet <ns0:ref type='bibr' target='#b30'>(Snell et al., 2017)</ns0:ref> aim to learn a better feature space to reflect the distance between instances. Although few-shot learning has achieved success in image classification tasks, very few research attempts have been made to study how to detect rumors with few instances.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Problem setting</ns0:head><ns0:p>This paper models the COVID-19 rumor detection problem as a few-shot binary classification task, denoted as -way -event -shot -query.</ns0:p><ns0:p>refers to the distinct number of few-shot &#119873; &#119872; &#119870; &#119876; &#119873; learning labels, which we have in this paper as we consider an instance as rumor or non-&#119873; = 2 rumor.</ns0:p><ns0:p>represents the number of sampled events among . Let denote a set of &#119872; &#119864; &#119864; = &#119864; &#119901; &#8746; &#119864; &#119904;</ns0:p><ns0:p>given events, where refers to those events that happened in the past and have enough labeled &#119864; &#119901; instances for training, refers to those events that happened suddenly and should be predicted &#119864; &#119904; with only a small number of labeled instances. represents the number of sampled instances in &#119870; the support set (training set) for each label, and represents the number of sampled instances in &#119876; the query set (test set) for each label.</ns0:p><ns0:p>Each event is composed of a set of related instances. Given an instance , , where</ns0:p><ns0:formula xml:id='formula_0'>(&#119909; &#119894; ,&#119910; &#119894; ) &#119909; &#119894; = [&#119898; &#119894; ,&#119888; &#119894; ]</ns0:formula><ns0:p>is a full microblog, refers to the text content (post) of the -th microblog, and</ns0:p><ns0:formula xml:id='formula_1'>&#119909; &#119894; &#119898; &#119894; &#119894; &#119888; &#119894; = [&#119888; &#119894;1 ,&#119888; &#119894;2 ,&#8230;,&#119888; &#119894;&#119897;</ns0:formula><ns0:p>consists of the comments of the -th microblog. 
We regard and as two modalities. is the</ns0:p><ns0:formula xml:id='formula_2'>] &#119897; &#119894; &#119898; &#119894; &#119888; &#119894; &#119910; &#119894;</ns0:formula><ns0:p>label of the -th instance, which indicates whether the -th instance belongs to rumor or not.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119894; &#119894;</ns0:head><ns0:p>The few-shot learning target is to train a classifier to predict whether an instance in belongs &#8450; &#119864; &#119904; to a rumor with only a few numbers of labeled data. Models trained on instances of are used &#119864; &#119901; for task adaptation. Data Sina Weibo is a popular Chinese online social media platform, where users can post or repost, and leave comments with each other. Fig. <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> is a rumor example from Sina Weibo. It mainly contains the post (similar to the source post on Twitter) and corresponding comments (similar to the replies on Twitter). If it is judged as a rumor by the official platform, there is a reminder display on the top of the page. We construct and share a novel dataset based on Weibo for the research of few-shot rumor detection 1 . The publicly available dataset is written in Chinese and each instance contains a source post along with corresponding comments, the posted date and its label are also included. Our collected dataset contains 11 independent and distinct COVID-19 irrelevant events that happened in the past and 3 COVID-19 relevant events that happened since the breakout of COVID-19. For each event, we crawl related microblogs consisting of source posts (modality 1) and corresponding comments (modality 2) from Sina Weibo, in which both rumors and nonrumors are covered. The event names are used as searching keywords. We provide the corresponding descriptions of all events are as follows (the original names are in Chinese, here we have translated them to English). &#61623; MH370: This event is about the crash of Malaysia Airlines MH370 discussed online. &#61623; College entrance exams: This event is about the annual Chinese college entrance exams. &#61623; Olympics: This event is the discussion about the news of Olympics games on Sina Weibo. &#61623; Urban managers: Urban manager is an occupation in China, who helps keep the city clean and safe. This event is the discussion about how urban managements perform their official duties. &#61623; Cola: This event is about Coke Cola from the perspectives of food additives. <ns0:ref type='table' target='#tab_3'>2021:03:59459:1:1:NEW 13 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61623; Child trafficking: This event is about child trafficking and asking for help reported on Sina Weibo. &#61623; Waste oil: This event is about the news of waste oil used for cooking from the perspectives of food safety. &#61623; Accident: This event is about accidents that happened and reported on Sina Weibo, such as traffic accidents. &#61623; Earthquake: This event is about the earthquake discussed and reported on Sina Weibo. &#61623; Typhoon: This event is about the typhoon discussed and reported on Sina Weibo. &#61623; Rabies: This event is the discussion about serious death caused by rabies on Sina Weibo. &#61623; Lockdown the city: This event is the discussions about the lock-down-city policy online. &#61623; Zhong Nanshan: This event is about the Chinese anti-epidemic expert Dr. Zhong Nanshan. 
&#61623; Wuhan: This event is about discussions on the COVID-19 in Wuhan. The official Sina Weibo community management center 2 displays all the fake posts judged and labeled by professional human moderators, which is commonly used as the source of collecting Weibo rumors <ns0:ref type='bibr'>(Ma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b39'>Yuan et al., 2019)</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> illustrated the workflow of the judgement for the rumor displayed in Fig. <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>, similar to the process of the court ruling. The final judgement by the official platform (on the top of Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>) comes from both reported reasons from other users (on the bottom left) and explanations from the posted user (on the bottom right). Once the post is labeled as a rumor, a 'Fake post' (the original one is in Chinese) sign would appear on the posted page, as Fig. <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> shows. We implement a web crawler to collect all the reported posts from the official Sina Weibo community management center, posted date starts from May 2012 to December 2020. Keywords of distinct events (original formats are in Chinese, translated to English in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>) are then applied to filter event-related instances as rumors. To collect non-rumors, we choose the same keywords used for collecting rumors of selected events. The web crawler is designed to search and crawl the posts with given keywords. For those crawled posts which are not marked as 'Fake post' by the official platform, we take them as non-rumors. All the corresponding comments are crawled together. Due to the repost operation in Sina Weibo, which is similar to the re-tweet feature in Twitter, there exist duplications in the originally collected data. We exploit Hamming distance to filter similar or repetitive texts. Specifically, we treat two source posts with hamming distance less than a threshold (e.g., 6) as duplicates and just retain one of them in the dataset. After this deduplication operation, the statistics of the Weibo dataset are as Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> shows.</ns0:p></ns0:div> <ns0:div><ns0:head>Few-shot Rumor detection</ns0:head><ns0:p>The general overflow of COMFUSE is as Fig. <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> shows. The input microblogs consist of source posts along with corresponding comments. Firstly, the pre-trained Bidirectional Encoder Representations from Transformers model (BERT) is applied to achieve the word embeddings of the input microblogs. Then two bidirectional GRUs are used to learn features of source posts and comments separately. A fusion layer is applied to fuse the features of both modalities, which are source posts and comments. Finally, meta-learning is applied to detect rumors related to new events with task adaptation. Pretrained word embeddings. Recently, transformer-based NLP models <ns0:ref type='bibr' target='#b33'>(Vaswani et al., 2017)</ns0:ref> have shown that attention-based embedding mechanisms have great superiority over simple structured embedding models <ns0:ref type='bibr' target='#b32'>(Sun et al., 2019)</ns0:ref>, such as word2vec <ns0:ref type='bibr' target='#b22'>(Mikolov et al., 2013)</ns0:ref> and GloVe <ns0:ref type='bibr'>(Pennington et al., 2014)</ns0:ref>. 
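As a brief aside on the Hamming-distance deduplication described in the Data subsection above: the paper treats two source posts with a Hamming distance below a threshold (e.g., 6) as duplicates, but it does not state how variable-length posts are turned into comparable fixed-length codes. The minimal sketch below therefore assumes a SimHash-style 64-bit fingerprint over character n-grams; only the threshold semantics and the keep-one-per-duplicate-group behaviour come from the text, and the helper names are ours.

```python
import hashlib

def simhash(text, ngram=3, bits=64):
    """SimHash-style fingerprint over character n-grams (the fingerprinting choice is an assumption)."""
    votes = [0] * bits
    for i in range(max(len(text) - ngram + 1, 1)):
        h = int.from_bytes(hashlib.md5(text[i:i + ngram].encode("utf-8")).digest()[:8], "big")
        for b in range(bits):
            votes[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if votes[b] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

def deduplicate(posts, threshold=6):
    """Keep one post per near-duplicate group: distance below `threshold` means duplicate."""
    kept, fingerprints = [], []
    for post in posts:
        fp = simhash(post)
        if all(hamming(fp, other) >= threshold for other in fingerprints):
            kept.append(post)
            fingerprints.append(fp)
    return kept
```

The pairwise check is quadratic in the number of posts, which is unproblematic at the reported dataset size of 3,840 instances.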
In this paper, we utilize BERT models pre-trained on largescale such as Wikipedia with Transformers to embed the inputs. Given an instance &#119909; &#119894; = [&#119898; &#119894; ,&#119888; &#119894;1 , both the source posts and comments are in the format of sequences. Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> Bi-GRUs feature extractions. Recently, it is the mainstream to extract features from texts with deep neural networks. Representative RNNs-based models such as LSTMs and GRUs have shown effectiveness in the rumor detection task <ns0:ref type='bibr' target='#b15'>(Liu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wang et al., 2020)</ns0:ref>. In this paper, we apply bidirectional GRUs (Bi-GRUs) to extract features of source posts and corresponding comments. We take the post as an example. After applying pretrained BERT &#119898; embeddings, the input post turns to the embeddings matrix , where is</ns0:p><ns0:formula xml:id='formula_3'>&#119898; &#119861; &#119898; = [&#119887; &#119898; 1 ,&#119887; &#119898; 2 ,...,&#119887; &#119898; &#119899; ] &#119899;</ns0:formula><ns0:p>the same definition of token numbers. The BiGRUs are applied upon the embedding matrix to further decode post to textual hidden features denoted as . The general &#119898; &#119867; &#119898; = [&#8462; &#119898; 1 ,&#8462; &#119898; 2 ,...,&#8462; &#119898; &#119899; ] structure of Bi-GRUs is as Fig. <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> shows. For the -th input embeddings , the decoded features of Bi-GRUs' outcome in one &#119895; &#119887; &#119898; &#119895; &#8462; &#119898; &#119895; direction can be denoted as . In Equations ( <ns0:ref type='formula'>1</ns0:ref>) -( <ns0:ref type='formula'>4</ns0:ref>) we show its complete &#8462; &#119898; &#119895; = &#119866;&#119877;&#119880;&#119904;(&#119887; &#119898; &#119895; ,&#8462; &#119898; &#119895; -1 ) form in detail:</ns0:p><ns0:p>. ( <ns0:ref type='formula'>1</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_4'>&#119903; &#119898; &#119895; = &#120590;(&#119882; &#119903; &#119887; &#119898; &#119895; + &#120573; &#119903; + &#119882; &#8462;&#119903; &#8462; &#119898; &#119895; -1 + &#120573; &#8462;&#119903; ) . (2) &#119911; &#119898; &#119895; = &#120590;(&#119882; &#119911; &#119887; &#119898; &#119895; + &#120573; &#119911; + &#119882; &#8462;&#119911; &#8462; &#119898; &#119895; -1 + &#120573; &#8462;&#119911; ) . (3) &#8462; &#119898; &#119895; ' = tanh (&#119882; &#8462; &#119887; &#119898; &#119895; + &#120573; &#8462; + &#119903; &#119898; &#119895; * (&#119882; &#8462;&#8462; &#8462; &#119898; &#119895; -1 + &#120573; &#8462;&#8462; )) . (4) &#8462; &#119898; &#119895; = (1 -&#119911; &#119898; &#119895; ) * &#8462; &#119898; &#119895; ' + &#119911; &#119898; &#119895; * &#8462; &#119898; &#119895; -1</ns0:formula><ns0:p>The hidden states of the forward input sequence with tokens can be represented as</ns0:p><ns0:formula xml:id='formula_5'>&#119899; &#119867; &#119898; = &#119866;&#119877;&#119880;&#119904;</ns0:formula><ns0:p>, where is the initial hidden state. Similarly, the hidden states of the backward input</ns0:p><ns0:formula xml:id='formula_6'>( &#119861; &#119898; , &#8462; 0 ) &#8462; 0</ns0:formula><ns0:p>sequence are represented as . 
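To make the encoding pipeline of this and the following paragraph concrete, the sketch below wires pre-trained BERT embeddings into two Bi-GRUs (one per modality), sums the comment features and concatenates them with the post features, as in Equations (5) and (6) described next. The hidden size, the checkpoint name, the choice to keep BERT frozen, and all class and variable names are illustrative assumptions rather than the authors' exact implementation; the pad sizes in the usage lines follow the values reported later in the experimental settings (100 for posts, 32 for comments, l = 3 comments).

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ComfuseEncoder(nn.Module):
    """Sketch of the two-modality encoder: BERT embeddings -> Bi-GRUs -> fused features H_i."""
    def __init__(self, bert_name="bert-base-chinese", hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)      # pre-trained embedding provider
        emb_dim = self.bert.config.hidden_size                # 768 for BERT-base
        # two individual Bi-GRUs, one for the post modality and one for the comment modality
        self.post_gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.comment_gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def _embed(self, enc):
        with torch.no_grad():                                 # BERT used purely as a frozen embedder (assumption)
            return self.bert(**enc).last_hidden_state         # (batch, seq_len, emb_dim)

    def _encode(self, gru, embeddings):
        _, h_n = gru(embeddings)                              # h_n: (2, batch, hidden), one final state per direction
        return torch.cat([h_n[0], h_n[1]], dim=-1)            # concatenate forward/backward states (Eq. (5))

    def forward(self, post, comments):
        h_post = self._encode(self.post_gru, self._embed(post))
        h_com = sum(self._encode(self.comment_gru, self._embed(c)) for c in comments)  # Eq. (6): equal-weight sum
        return torch.cat([h_post, h_com], dim=-1)             # H_i = [H_m ; H_c]

tok = BertTokenizer.from_pretrained("bert-base-chinese")
post = tok(["an example source post"], padding="max_length", truncation=True,
           max_length=100, return_tensors="pt")
comments = [tok(["an example comment"], padding="max_length", truncation=True,
                max_length=32, return_tensors="pt") for _ in range(3)]
features = ComfuseEncoder()(post, comments)                   # shape: (1, 4 * hidden)
```

Swapping in an English checkpoint gives the PHEME variant; whether BERT is fine-tuned or kept frozen is not stated in the paper, so the frozen choice above is only for brevity.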
The final hidden states of both directions are &#119867; &#119898; = &#119866;&#119877;&#119880;&#119904; ( &#119861; &#119898; , &#8462; 0 ) calculated as Equation ( <ns0:ref type='formula'>5</ns0:ref>) shows, which are also regarded as the features for further rumor detection.</ns0:p><ns0:p>PeerJ may have more than one comment, the fusion layer contains two main steps. The first step is to fuse the features of all comments, denoted as . The second step is to fuse the</ns0:p><ns0:formula xml:id='formula_7'>[ &#119867; &#119888; &#119894; 1 ,...,&#119867; &#119888; &#119894; &#119897; ] &#119867; &#119888; &#119894;</ns0:formula><ns0:p>features of the source post and comments.</ns0:p><ns0:p>In the first fusion step, the features of all the comments in the same instance are extracted via &#119897; the same Bi-GRUs. As the comments of each microblog are embedded into the same feature space, it is natural to fuse them with the weighted sum of their features. We regard the contribution of each comment to be equal for the rumor detection task, which is defined in Equation ( <ns0:ref type='formula'>6</ns0:ref>).</ns0:p><ns0:p>. ( <ns0:ref type='formula'>6</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_8'>&#119867; &#119888; &#119894; = &#8721; &#119897; &#119895; = 1 &#119867; &#119888; &#119894;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#119895;</ns0:head><ns0:p>In the second fusion step, the features of the source post and comments in the same instance are extracted via two individual Bi-GRUs. Following the common practice, we fuse these multimodal features together with concatenation to build the features of the microblogs. For the -th &#119894; instance, the fused feature is defined as .</ns0:p><ns0:formula xml:id='formula_9'>&#119867; &#119894; = [&#119867; &#119898; &#119894; ; &#119867; &#119888; &#119894; ]</ns0:formula><ns0:p>Few-shot learning. Usually, rumors from online social media are usually produced according to certain events. Rumors of emergent events could be very distinct from events collected in the past, so that rumor detection models could barely generalize on new events. However, breaking events like COVID-19 are unprecedented so that very rare instances are available. This may result in the failure or overfitting to directly build a rumor detection model based on supervised learning with the lack of labeled training data for emergent events. To tackle this challenge, we propose a few-shot learning paradigm by learning a generic model with labeled data from past observed events and adapting to unseen events with only a few labeled instances. We propose a meta-learning based strategy in learning the few-shot rumor detection tasks. The core idea is to sample a large number of task combinations in training instances so that the model can learn the transferable knowledge for unseen categories. State-ofthe-art methods are optimization-based with the idea of training a good initialized model which could adapt to unseen categories with only a few gradient steps <ns0:ref type='bibr' target='#b9'>(Finn et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b32'>Sun et al., 2019)</ns0:ref>. The learning target of few-shot rumor detection with meta-learning methods is to minimize the adaptation loss on unseen tasks during training. Given a batch of few-shot tasks , &#8492; = {&#119879; 1 ,&#8230;,&#119879; |&#8492;| } the total loss is calculated as Equation ( <ns0:ref type='formula'>7</ns0:ref> events. 
This optimization problem can be solved iteratively with the steps as shown in Fig. <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, in order to train the models to adapt to sampled new tasks well. We will demonstrate each of the meta-learning steps in detail.</ns0:p><ns0:p>The COVID-19 rumor detection is defined as the -way -event -shot -query few-shot &#119925; &#119924; &#119922; &#119928; learning task.</ns0:p><ns0:p>Step 1. Sampling: This step aims to sample a few-shot task from (events happened in the &#119879; &#119864; &#119901; past). Each event has both rumor and non-rumor instances, which means the sampling times from equal to . For an -way -event -shot -query few-shot learning task ,</ns0:p><ns0:formula xml:id='formula_10'>&#119864; &#119901; &#119873; &#215; &#119872; &#119873; &#119872; &#119870; &#119876; &#119879; &#119870;</ns0:formula><ns0:p>rumor instances and non-rumor instances are sampled from events respectively to compose &#119870; &#119872; of a support set, denoted as . rumor and non-rumor instances are also sampled from the &#119879; (&#119904;) &#119876; same events respectively to compose of a query set, denoted as . This step would sample &#119872; &#119879; (&#119902;) instances for task . &#119873; &#215; &#119872; &#215; (&#119870; + &#119876;) &#119879;</ns0:p><ns0:p>Step 2. Adaptation: This step aims to learn latent semantics in unseen categories by adapting the current model to the sampled task in step 1. This step updates the model parameters and &#119879; &#119908; &#119898; &#119908; &#119888; with the few-shot labeled data in by performing Stochastic Gradient Descent (SGD), as &#119879; (&#119904;) Equations ( <ns0:ref type='formula'>8</ns0:ref>) and ( <ns0:ref type='formula'>9</ns0:ref> &#119879; (&#119902;) Step 3. Optimization: This step aims to evaluate and with more samples in the query set &#119908; ' &#119898; &#119908; ' &#119888; . The empirical loss functions are as Equations ( <ns0:ref type='formula'>10</ns0:ref>) and (11) show. &#119879; (&#119902;) , ( <ns0:ref type='formula'>10</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_11'>&#119871; &#119879; (&#119908; &#119898; ) = &#119871; &#119879; (&#119902;) (&#119908; ' &#119898; ) = &#119871; &#119879; (&#119902;) (&#119908; &#119898; -&#945;&#8711; &#119908; &#119898; &#119871; &#119879; (&#119904;) (&#119908; &#119898; ) )</ns0:formula><ns0:p>. 
( <ns0:ref type='formula'>11</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_12'>&#119871; &#119879; (&#119908; &#119888; ) = &#119871; &#119879; (&#119902;) (&#119908; ' &#119888; ) = &#119871; &#119879; (&#119902;) (&#119908; &#119888; -&#945;&#8711; &#119908; &#119888; &#119871; &#119879; (&#119904;) (&#119908; &#119888; ) )</ns0:formula><ns0:p>To search for the optimal and defined in Equation ( <ns0:ref type='formula'>7</ns0:ref>), we need to compute the Hessian.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119908; &#119898; &#119908; &#119888;</ns0:head><ns0:p>However, considering the tradeoff between the computational costs and performance, we solve this problem with just one gradient descent to approximate the parameter updates <ns0:ref type='bibr' target='#b9'>(Finn et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Sung et al., 2018)</ns0:ref>.</ns0:p><ns0:p>, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To detect rumors about suddenly happened events , we can apply the parameters of the &#119864; &#119904; adapted models and to calculate the probability of the instances with the Sigmoid &#119908; ' &#119898; &#119908; ' &#119888; function.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments and Results</ns0:head></ns0:div> <ns0:div><ns0:head>Datasets for experiments</ns0:head><ns0:p>We carry on extensive empirical studies on two real-world datasets with user comments that have been classified as rumors or non-rumors. The first dataset is collected from Sina Weibo, which is used for detecting rumors about COVID-19. The other dataset we use is PHEME <ns0:ref type='bibr' target='#b42'>(Zubiaga et al., 2016)</ns0:ref>, which is publicly available and widely used in most rumor detection researches. Details of both datasets are as follows. Weibo Dataset: We collect microblogs that are written in Chinese from Sina Weibo -the largest online social media in China. In this dataset, there are 14 events with 3,840 instances in total. For each event, both rumors and non-rumors are included. Each event is a hot topic such as MH370, COVID-19 expert Zhong Nanshan, etc., which are widely discussed online. Each instance is recorded with a source post along with its comments. To evaluate the performance of few-shot learning on COVID-19 rumor detection, 11 COVID-19 irrelevant events are selected as the training and validation set, and 3 COVID-19 relevant events are used for testing (listed in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>). PHEME Dataset: This is a publicly available dataset 3 with tweets from Twitter in English, which is widely used for the evaluation of rumor detection tasks <ns0:ref type='bibr'>(Zubiaga et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ma et al. 2019)</ns0:ref>. It is collected according to 5 breaking events discussed on Twitter <ns0:ref type='bibr' target='#b42'>(Zubiaga et al., 2016)</ns0:ref>. Each instance is recorded with a source tweet along with its reply reactions. To evaluate the performance under the settings of few-shot learning, 3 breaking events that happened earlier are selected as the training and validation set (#Ferguson unrest, #Ottawa shooting, #Sydney siege), and the rest 2 events that happened most recently are used for testing (#Charlie Hebdo shooting, #Germanwings plane crash). The pre-processing of the PHEME dataset follows the practice in previous work <ns0:ref type='bibr' target='#b39'>(Ma et al., 2019)</ns0:ref>. 
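Returning to the meta-learning procedure summarized in Steps 1–3 above (Equations (7)–(11)), a short first-order sketch makes the loop concrete. Two simplifications are ours rather than the paper's: each instance is represented by its pre-computed fused feature vector H_i (see the encoder sketch earlier), and `model` stands generically for the trainable network, whereas in the paper the adapted parameters are the two Bi-GRUs w_m and w_c; the learning rates, the number of tasks per batch, and all names are illustrative.

```python
import copy
import random
import torch
import torch.nn.functional as F

def sample_episode(events, M=3, K=5, Q=9):
    """Step 1 (Sampling): a 2-way M-event K-shot Q-query episode.
    `events` maps an event name to {"rumor": [...], "non_rumor": [...]} lists of
    pre-computed fused feature vectors H_i."""
    support, query = [], []
    for name in random.sample(sorted(events), M):
        for label, key in enumerate(("non_rumor", "rumor")):
            picked = random.sample(events[name][key], K + Q)
            support += [(x, float(label)) for x in picked[:K]]
            query += [(x, float(label)) for x in picked[K:]]
    return support, query                                     # N * M * (K + Q) instances in total

def batch_loss(model, batch):
    feats = torch.stack([x for x, _ in batch])
    labels = torch.tensor([y for _, y in batch])
    return F.binary_cross_entropy_with_logits(model(feats).squeeze(-1), labels)

def meta_step(model, events, inner_lr=1e-2, outer_lr=1e-3, tasks_per_batch=4):
    """One meta-iteration with the single-gradient-step, first-order approximation."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(tasks_per_batch):
        support, query = sample_episode(events)
        adapted = copy.deepcopy(model)
        # Step 2 (Adaptation), Eqs. (8)-(9): w' = w - alpha * grad_w L_support(w)
        grads = torch.autograd.grad(batch_loss(adapted, support), list(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), grads):
                p -= inner_lr * g
        # Step 3 (Optimization), Eqs. (10)-(11): evaluate w' on the query set; the first-order
        # approximation applies the query gradients directly to the original parameters w
        q_grads = torch.autograd.grad(batch_loss(adapted, query), list(adapted.parameters()))
        for acc, g in zip(meta_grads, q_grads):
            acc += g
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p -= outer_lr * g / tasks_per_batch
```

A plain `torch.nn.Linear(feature_dim, 1)` is enough to exercise the loop as `model`; applying the sigmoid to the adapted model's output then gives the rumor probability for the emergent COVID-19 events, matching the inference step described above.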
For the Weibo dataset, we crawl the comments of microblogs directly as they are readily available on the same webpage with the source posts. For the PHEME dataset, we regard the replies in the given dataset as comments. For the sake of generality, we randomly divide the dataset into training and validation sets according to distinct events, and repeat three times to form three different splits for robust cross-validation. We choose the number of splits as three for the following reasons. In few-shot learning, the number of splits depends on the number of new events in the test set. We take the 2-way 3-event 5-shot 9-query Weibo dataset for example. It has 3 COVID-19 relevant events to be detected with only a few labeled data. The number of the event in the definition is determined by the number of new events in the test dataset, so it is 3-event. The number of ways indicates the number of labels, which are rumor and non-rumor. With this definition, during the few-shot learning training process, every training epoch will sample 3 different events in the training set, for each event, 5 rumor instances and 5 non-rumor instances will be sampled for training. According to the few-shot learning setting, we guarantee that all events in the training sets should NOT appeared in the testing sets, and vice versa, to avoid the leakage of event information and guarantee that we are testing on complete novel events. We also assume that the number of events in the training set should be no less than the number of events in the test set to ensure the model capacity for adapting to new events. According to our assumption and task settings, we split our Weibo dataset to 3 events (COVID-19 relevant) for testing, and 11 events (COVID-19 irrelevant) for training. We fix the 3 events (COVID-19 relevant) for testing, and construct 3 folds for 'cross-validation' over 11 training events (COVID-19 irrelevant) to guarantee that each fold has more than 3 events in the Weibo dataset. Table <ns0:ref type='table'>2</ns0:ref> displays the statistics of the data for experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Baselines for comparisons</ns0:head><ns0:p>Five baselines are selected to compare the performance of few-shot rumor detection, including traditional methods, deep learning methods, and few-shot learning methods. 1. DT-EMB: This baseline model uses the decision tree as the basic classifier, which was applied in traditional rumor detection tasks <ns0:ref type='bibr' target='#b41'>(Zhao et al., 2015)</ns0:ref>. The feature of each instance is represented by the embeddings encoded by the same pre-trained BERT model. 2. SEQ-CNNs: This deep learning-based baseline trains classification model with features extracted by CNNs, which is a common choice for rumor detection in recent researches <ns0:ref type='bibr'>(Yu et al., 2017)</ns0:ref>, the input sequence is encoded by the same BERT pretrained model for fair comparisons. 3. SEQ-Bi-GRUs: This is also a deep learning-based baseline for rumor detection. Bi-GRUs are applied to extract features for training and prediction <ns0:ref type='bibr'>(Ma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chen et al., 2018)</ns0:ref>, the input sequence is encoded by the same BERT pretrained model for fair comparisons. 4. GAN-GRU-early: The basic model of this baseline is a popular model named GAN-GRU <ns0:ref type='bibr' target='#b39'>(Ma et al. 2019)</ns0:ref>. 
According to the early detection setting in this paper, for each source post, only the latest 3 comments are used for evaluation, which is as same as modality 2 in COMFUSE. 5. BiGCN-early: The basic model of this baseline is a state-of-the-art model named BiGCN <ns0:ref type='bibr'>(Bian et al. 2020)</ns0:ref>. According to the early detection setting in this paper, for each source post, only the latest 3 comments are used for evaluation, which is as same as modality 2 in COMFUSE. 6. COMFUSE-post-only: This is a simplified model of COMFUSE for ablation study. Only the source post of each microblog is used for training and prediction in the few-shot rumor detection task. 7. COMFUSE-com-only: This is another simplified model of COMFUSE for ablation study.</ns0:p><ns0:p>Only the comments of each microblog are used for training and prediction in the few-shot rumor detection task. The problem setting of this paper is few-shot rumor detection, which assumes that the events in the test set have not occurred in the training set and only a small number of labeled instances is available. DT-EMB, SEQ-CNNs, SEQ-Bi-GRUs, GAN-GRU-early and BiGCN-early are five baselines for common rumor detections, which require the training set and test set to share the same events. To have fair comparisons, new paradigms are designed for training and testing these baselines. For the traditional machine learning-based model DT-EMB, a small number of labeled data sampled from the new events are put into the training set for training. For SEQ-CNNs and SEQ-Bi-GRUs, we train the rumor detection model with the training data firstly and finetune the model with a small number of labeled data sampled from the new events. Because the original GAN-GRU and BiGCN are not designed as the few-shot learning models, we use the same instances of new events, which are also used for task adaption in COMFUSE for training. For all models, the same random seed is set for sampling and these sampled data do not appear in the test set.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental settings</ns0:head><ns0:p>According to the problem setting and considering the number of events in the Weibo dataset and PHEME dataset, we define the few-shot rumor detection for the Weibo dataset as 2-way 3-event 5-shot 9-query, for PHEME dataset as 2-way 2-event 5-shot 9-query respectively. We implement COMFUSE with Pytorch 1.8.1 and utilize the pre-trained BERT model from HuggingFace 4 to encode the inputs. We use the uncased Chinese model and uncased English model for the Weibo dataset and PHEME dataset respectively. The source code will be publicly available. To determine the pad size of the input posts and comments, the statistics of the length per text are performed. The histograms of the Weibo and PHEME datasets are as Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_11'>8</ns0:ref> show. Considering the trade-off between performance and speed, we set the pad size of posts/source tweets as 100 (for Weibo) and 48 (for PHEME) respectively. We set the pad size of comments/replies as 32 for both datasets. Further experiments are conducted to show the influence of different pad size choices. The experimental results of the Weibo dataset are as Fig. <ns0:ref type='figure' target='#fig_12'>9</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_13'>10</ns0:ref> show. Fig. 
<ns0:ref type='figure' target='#fig_12'>9</ns0:ref> displays the results for different pad sizes of source posts with the pad size of comments fixed at 32 on the Weibo dataset. Fig. <ns0:ref type='figure' target='#fig_13'>10</ns0:ref> displays the results for different pad sizes of comments with the pad size of source posts fixed at 100. In both figures the x-axis refers to the pad size and the y-axis refers to the accuracy. We can observe that the rumor detection results of COMFUSE vary only slightly with different pad sizes of posts and comments. For the Weibo dataset, the experimental results reveal that it is relatively better to set the pad size to 100 for posts and 32 for comments, which is consistent with our decision based on the length statistics in Fig. <ns0:ref type='figure' target='#fig_11'>7 and 8</ns0:ref>.</ns0:p><ns0:p>In this paper, we define an instance as x_i = [m_i, c_i1, ..., c_il], which contains l comments. As we consider few-shot learning scenarios, we assume that very few useful comments are available at the early stage of an event. Thus, we set the number of relevant comments l to 3 in all experiments, in order to simulate emerging situations and examine whether our approach can successfully detect rumors from very few labeled instances and informative comments.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance of few-shot rumor detection</ns0:head><ns0:p>In this paper, we treat the rumor detection task as a binary classification problem and use classification accuracy as the evaluation metric for comparisons. We conduct experiments on all three splits and report their averaged performance, as shown in Table <ns0:ref type='table'>3 and Table 4</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> displays the performance of COVID-19 rumor detection under the few-shot learning setting. A higher classification accuracy indicates better performance. It can be observed that the traditional machine learning-based method DT-EMB performs poorly in few-shot rumor detection: it achieves only 56.93% accuracy on average, which is barely better than random guessing in a binary classification task. Two state-of-the-art deep learning-based methods, SEQ-CNNs and SEQ-Bi-GRUs, achieve similar performance of around 68%. They significantly improve detection over the traditional DT-EMB. One reason is the superior ability of deep neural networks to extract important features from contexts, which contributes to the training of the models. Furthermore, the paradigm of pretraining first and then finetuning can optimize the model to fit the data of new events to some extent. However, the number of labeled instances of the unseen events available for finetuning is quite small in the few-shot rumor detection task, which may result in underfitting of the fine-tuned model. GAN-GRU-early and BiGCN-early are two other SOTA baselines that have reported strong performance on the traditional rumor detection task, in which the events (topics) appear in both the training set and the test set. When applied to the emergent rumor detection scenario, where the events in the test set have never appeared in the training set, the supervised GAN-GRU-early and BiGCN-early models show their limitations and are not suitable for the few-shot rumor detection task. 
One possible reason that GAN-GRU-early underperforms significantly may be that only scarce instances related to the emergent events in the test set are fed to the generators during training. This makes the features extracted in the inference process hardly reflect the instances related to the emergent events. COMFUSE is our proposed multi-modality fusion model for few-shot rumor detection based on the meta-learning approach, with COMFUSE-post-only and COMFUSE-com-only as two simplified versions. COMFUSE-post-only uses only the source posts (source tweets on Twitter) as inputs, the same as DT-EMB, SEQ-CNNs, and SEQ-Bi-GRUs, which are commonly used in existing rumor detection models. Compared with SEQ-CNNs and SEQ-Bi-GRUs, COMFUSE-post-only further improves the few-shot COVID-19 rumor detection accuracy by around 6%. This shows the effectiveness of applying meta-learning methods with only a small number of labeled data to detect rumors of unseen events. COMFUSE takes advantage of both the source posts and the corresponding comments of the full microblogs to support the detection of rumors from online social media. Intuitively, comments or replies reflect the positive or negative attitudes of the public towards the source posts and thus should provide additional hints for judging the credibility of the source posts. Table <ns0:ref type='table'>4</ns0:ref> reports the experimental results on the public and commonly used rumor dataset PHEME, to show the generality of COMFUSE. The proposed COMFUSE model also achieves the best performance among all the baselines. As the Weibo dataset has more events and instances available during meta-training, the model can be trained with more diverse event combinations and is thus more capable of adapting to novel events by learning to capture distinct hints for rumor detection. In contrast, we can use only 3 events for PHEME training and thus may fail to capture the most distinguishing hints in rumors of the PHEME dataset. This explains why the improvement on the PHEME dataset is not as significant as that on the Weibo dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper focuses on few-shot rumor detection for unexpected and emergent events that have never or rarely happened before, such as COVID-19. Different from rumor detection on the daily events studied in previous work, emergent events break out suddenly, so that very few labeled instances can be used for training rumor detection models. As existing rumor detection works assume the events to be predicted are the same as those used for training, they are greatly limited in rumor detection for emergent events. This paper formulates rumor detection for emergent events as a few-shot learning task and proposes a few-shot learning-based multi-modality fusion model named COMFUSE to detect COVID-19 rumors on Sina Weibo. It exploits the meta-training methodology to empower the model to adapt to new events with few instances, and fully utilizes two modalities, source posts and comments from online social media, to support the detection of rumors. Experiments on our self-collected Weibo dataset and the publicly available PHEME dataset have shown significant improvement on the COVID-19 few-shot rumor detection task and the generalization capacity of the proposed model. 
Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>1 https://github.com/jncsnlp/Sina-Weibo-Rumors-for-few-shot-learning-research PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021) Take the -th instance from Sina Weibo for example, we consider both &#119894; source post and corresponding comments as two modalities. Because each instance &#119898; &#119894; [&#119888; &#119894;1 , &#8230;,&#119888; &#119894;&#119897; ]</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>) shows. &#8466; PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021) &#119898; ,&#119908; &#119888; ),&#119904;.&#119905;.&#8466;(&#119908; &#119898; ,&#119908; &#119888; ) = 1 |&#8492;| &#8721; &#119879; &#8712; &#8492; &#119871; &#119879; (&#119908; &#119898; ,&#119908; &#119888; ) where refers to the parameters of the defined Bi-GRUs for dealing with the modality of &#119908; &#119898; source posts and refers to the parameters of the defined Bi-GRUs for the modality of &#119908; &#119888; comments. is the loss of task and is the optimized model which can fast adapt to unseen &#119871; &#119879; &#119879; &#119908; *</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021) Manuscript to be reviewed Computer Science COMFUSE-com-only is also an ablation model, which only uses the comments of instances for rumor detection. It can be observed that the proposed multi-modality fusion model COMFUSE performs much better than two ablation models COMFUSE-post-only and COMFUSE-comonly, with accuracy improvement by 5%. This shows the necessity of fusing both source posts and comments for rumor detection. Comparing the proposed COMFUSE model with traditional machine learning-based and deep learning-based rumor detection models, it achieves the improvements by 21% and 10% respectively, which shows the superiority of the meta-learning based fusion model for few-shot COVID-19 rumor detection.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure. 1 Example of the Sina Weibo page, which contains a rumor microblog</ns0:figDesc><ns0:graphic coords='28,42.52,178.87,525.00,315.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure. 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure. 
2 Workflow of the rumor judgement by the official Sina Weibo community management center</ns0:figDesc><ns0:graphic coords='29,42.52,199.12,525.00,228.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure. 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure. 3 Workflow of COMFUSE</ns0:figDesc><ns0:graphic coords='30,42.52,178.87,525.00,363.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure. 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure. 4 Illustrations of word embeddings with BERT</ns0:figDesc><ns0:graphic coords='31,42.52,178.87,525.00,519.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure. 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure. 5 Structure of Bi-GRUs</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure. 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure. 6 Workflow of one meta-learning iteration</ns0:figDesc><ns0:graphic coords='34,42.52,178.87,525.00,383.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure. 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure. 7 Statistics of length per text of the Weibo dataset</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,426.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure. 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure. 8 Statistics of length per text of the PHEME dataset</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,438.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure. 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure. 9 Experimental results of different pad sizes of source posts with a fixed pad size of comments as 32 on the Weibo dataset</ns0:figDesc><ns0:graphic coords='37,42.52,199.12,525.00,309.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure. 10</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure. 10 Experimental results of different pad sizes of comments with a fixed pad size of source posts as 100 on the Weibo dataset</ns0:figDesc><ns0:graphic coords='38,42.52,199.12,525.00,311.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>&#119905; 2 , &#8230;, &#119905; &#119899; ] &#119899; is the number of tokens. An embedding layer is then applied to achieve initialized embeddings &#119890; &#119896; &#119890; 2 , &#8230;, &#119890; &#119899; ] &#119861; = [&#119887; 1 , &#119887; 2 , &#8230;, &#119887; &#119899; ] the output with transformer models. For the given input , the outputs of this &#119909; &#119894; = [&#119898; &#119894; ,&#119888; &#119894;1 , &#8230;,&#119888; &#119894;&#119897; ]</ns0:figDesc><ns0:table><ns0:row><ns0:cell>, &#8230;,&#119888; &#119894;&#119897; ]</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>demonstrates</ns0:cell></ns0:row><ns0:row><ns0:cell>embedding inputs with pre-trained BERT in detail. For an input (a post</ns0:cell><ns0:cell>&#119898; &#119894;</ns0:cell><ns0:cell cols='2'>or a comment ), the &#119888; &#119894;&#119897;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>first step is tokenization based on the predefined vocabulary and achieve [&#119905; 1 , for every token and achieve . Then, the embeddings &#119905; &#119896; [&#119890; 1 , procedure are corresponding pre-trained BERT embeddings, denoted as [&#119861; &#119898; &#119894; ,&#119861;</ns0:cell><ns0:cell>, where become . 
&#119888; &#119894;1 , &#8230;,&#119861; &#119888; &#119894;&#119897; ]</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 Statistics of Events for the COVID-19 Rumor Dataset After Removing Duplicates</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>2</ns0:cell></ns0:row></ns0:table><ns0:note>1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021)</ns0:note> <ns0:note place='foot' n='2'>https://service.account.weibo.com/ PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021)</ns0:note> <ns0:note place='foot' n='3'>https://figshare.com/articles/PHEME_dataset_of_rumours_and_non-rumours/4010619 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='4'>https://huggingface.co/ PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021)</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59459:1:1:NEW 13 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"School of Artificial Intelligence and Computer Science Jiangnan University 1800 Lihu Avenue Wuxi, Jiangsu, China July 7th, 2021 Dear Editors and Reviewers, We sincerely thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We accept all the suggestions and have responded all the questions. We have carefully revised this manuscript and followings are point-by-point responses to all the comments. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Dr. Heng-yang Lu Department of Computer Science and Technology On behalf of all authors. Reviewer 1 (Anonymous) Basic reporting This paper tends to tackle a very important niche in the topic of rumour detection. Although from the methodological perspective, the dataset and the techniques are novel, there are issues regarding the context that need to be clearly addressed: 1- Several places throughout the paper require citation. It is necessary to refer to the manuscripts that support the claimed statements (e.g., line 65,66,71, etc.) Agreed, thank you for your suggestions. The statement about the definition of rumor comes from (Allport et al., 1947), we have referred to the manuscript to support the claimed statements in revisions. 2- Some of the statements are very strong. For instance, despite what is strongly claimed in line 84-85 as early works, there are plenty of high-quality studies focusing on that approach [3][6]. Such a claim could be agreed upon only if it was supported by a comprehensive literature review or set of references. Agreed. We have re-written the introduction and added the literature review section. In the introduction section, we introduce previous works from the perspectives of macro-level (Castillo et al., 2011; Yang et al., 2012; Kwon et al. 2013; Jin et al., 2014; Wu et al., 2015) and micro-level (Sicilia et al., 2017; Sicilia et al., 2018 (a)), more high-quality studies were introduced. In the literature review section, we also introduced these hand-crafted feature-based early works in detail (Castillo et al., 2011; Yang et al., 2012; Kwon et al. 2013; Wu et al., 2015; Liu et al., 2015; Zhao et al., 2015; Jin et al., 2016; Mohammad et al., 2017; Kwon et al., 2017). 3- There is an assumption in this paper that rumour is either unverified or false (The first line of the introduction). Rumours are unverified in some context, and they are not accompanied by substantial evidence for at least some group of people [1] thus if we know that a message is false, then it should not be called rumour anymore. Thank you for your comments. The first line of the introduction is the definition of rumor from early social psychology literature (Allport et al., 1947). We have introduced a more recent definition from DiFonzo [1], which defined rumor as unverified and instrumentally relevant information statements in circulation that arise in contexts of ambiguity and that function primarily to help people make sense and manage threats in revisions. 4- The second paragraph of the introduction is not accurate. What is explained as SDQC support classification is in fact the stance detection which along with rumour detection, rumour tracking and veracity detection constitute the rumour resolution system [2]. Agreed, the SDQC support classification and veracity prediction belong to the SemEval-2017 Task 8: RumourEval, we have deleted this paragraph. 5- The second and third paragraphs are semantically disconnected and difficult to follow. 
The earlier paragraph is about the conceptual phases of the rumour resolution system and the latter one is about the methodological approach (with an emphasis on feature extraction) toward rumour detection. Agreed. We have heavily modified the introduction section and added the literature review section before describing the proposed models to alleviate the disconnection between the introduction and methodological approach. In the introduction section, we first introduce the existing works of rumor detection, then we introduce the motivation of our work, which aims to detect rumors about the emergent event like COVID-19. This problem is different from traditional rumor detection tasks, such that the events in the test set have never occurred before and do not appear in the training set. In other words, the events in training sets and in testing sets are non-overlapping with each other. Only very few labeled data (e.g. 1/3/5) of these events are available to adapt tasks, which previous supervised learning-based methods by setting time windows are not suitable in this situation. The cross-topic methods have discussed this problem, and concluded that to obtain good results in cross-topic detection, at least 80% of the test topic knowledge should be included in the training set. Therefore, existing works have huge difficulty in rumor detection for emergent events like COVID-9 with very few labeled data. For these reasons, we introduce the reasonableness and necessity of utilizing few-shot learning methods in this paper. In the literature review section, we comprehensively introduced related works in rumor detection, rumor detection at an early stage, and few-shot learning. 6- One of the major missing pieces is the gap analysis and the problem formulation. What is expected here is a thorough review of the rumour detection literature in a way that directs readers toward the gap. Here the literature review is unrelated to the gap. For instance, there are several studies on early rumour detection [3] which I expected to see here because this paper aims to flag upcoming rumours as soon as possible without spending time to collect data on the same rumour and that is exactly what early rumour detection system does. Another topic that is expected to be investigated in the literature review is cross-domain rumour detection [4][5]. One of the other missing approaches is rumour detection using context-independent features. Such features are independent of a particular incident and could be used across different domains [5]. Agreed. Thank you for your constructive suggestions. We have added the literature review section before the methodological approach section. In this section, we firstly introduce related works about rumor detection. Then, we introduced related works in early-stage rumor detection. We review this part with related works from the perspectives of applying time windows on the same rumor (Kwon et al., 2017; Ma et al., 2019; Bian et al., 2020) and detecting rumors under crosstopic situations (Sicilia et al., 2018 (b); Fard et al., 2020). Finally, we introduce related works about few-shot learning, which was related to the techniques proposed in this paper. 7- Another shortcoming is the lack of transparency about the data collection process. Questions such as when did you collect the data? how do the readers retrieve the data and get access to the data points? what keywords did you use to build the queries and collect data? are unanswered. 
Besides, the events (rumours and non-rumours) are expected to be fully described. The current explanation of the events is quite broad and uninformative. Agreed. We have added more details and illustrations in the Data collection part. (1) the official Sina Weibo community management center displays all the posts which are labeled as rumors, we design a web crawl to crawl all these rumors, dating from May 2012 to December 2020 and then collect selected events from these data. (2) we have shared this dataset and given an access link in the revision manuscript, the detailed posted date of each instance can also be checked in the publicly available repository (https://service.account.weibo.com/). (3) the event names displayed in Table 1 are used as searching keywords (the original names we used are in Chinese, here we have translated them to English in the manuscript). We have added this illustration in revisions. (4) all the descriptions of selected events have been fully explained in the revision. 8- When the dataset is introduced (line 293) the term “instance” is used. Does this term refer to a single message (similar to a tweet on Twitter)? Because not all the readers are Weibo users, it would be helpful to show an example of a post on Weibo visually. Thank you for your suggestions. In the problem setting, we have given the definition of instance used in this paper. An instance (𝑥𝑖 , 𝑦𝑖 ), 𝑥𝑖 = [𝑚𝑖 , 𝑐𝑖 ], where 𝑥𝑖 is a full microblog, 𝑚𝑖 refers to the text content (post) of the 𝑖-th full microblog, and 𝑐𝑖 = [𝑐𝑖1 , 𝑐𝑖2 , … , 𝑐𝑖𝑙 ] consists of the 𝑙 comments of the 𝑖-th microblog. We regard 𝑚𝑖 and 𝑐𝑖 as two modalities. 𝑦𝑖 is the label of the 𝑖-th instance, which indicates whether the 𝑖-th instance belongs to rumor or not. All these data are collected during the data collection process, and represented as one line in the dataset. We have also added an example of the Weibo page for visualization in Fig. 1 and illustrates each part in the revision manuscript. 9- Based on my understanding the equivalent terms for Twitter’s reply and retweet in Weibo are comment and repost. If that is correct, then why did you decide to regard the retweets in the PHEME dataset as comments and not repost (line 310-311)? Yes, the reply and retweet in Twitter are similar to the comment and repost in Sina Weibo. In the pheme dataset, we use the reply as comments, this was a mistake in the writing. 10- How does the annotation by the Sina Weibo community management centre work (line 159)? What kind of labels a post/datapoint may receive? Thank you for your question. The workflow of the rumor judgement by the official Sina Weibo community management center is similar to the process of court ruling. The final judgement by the official platform (on the top of Fig. 2) comes from both reported reasons from other users (on the bottom left) and explanations from the posted user (on the bottom right). Once the post is labeled as a rumor, a ‘Fake post” sign would appear on the posted page, as Fig. 1 shows. We have added explanations and Fig.1, 2 in the revision manuscript. 11- There are some typos and grammatical issues (e.g,. the first column of Table 1) Thank you for pointing out this issue, we have corrected on typo in the first column in Table 1, events about MH370, College entrance exams, etc. are COVID-19 irrelevant, and events about Zhong Nanshan, Wuhan, etc. are COVID-19 relevant. Besides, we have carefully revised our manuscript and polished the language. 
The proposed method is novel and the experiments are well explained; however, there are two issues regarding the robustness of this study: 1- Based on the experimentation setting (line 311-313), you used 3-fold cross-validation. How come you chose three splits here? Why not 5 or 10? You need to justify your decision. Thank you for your question. In supervised learning, the n-fold cross-validation means to divide the dataset into n folds according to the number of instances in the dataset, use one fold for test and the rest for training. Because the number of instances is more than n, so we can see 5-fold/10fold cross-validations in many supervised learning experiments. However, this paper conduct rumor detection with few-shot learning, the cross-validation strategy is different from that used in supervised learning. The divide of the training set in few-shot learning is based on distinct events. The number of n-fold cross-validation depends on the number of events in the test set. We take the Weibo dataset for example. We define the few-shot rumor detection for the Weibo dataset as 2way 3-event 5-shot 9-query. In the Weibo dataset, we have 3 COVID-19 relevant events to be detected with only a few labeled data. The number of the event in the definition is determined by the number of new events in the test dataset, so it is 3-event. The number of ways indicates the number of labels, which are rumor and non-rumor. With this definition, during the few-shot learning training process, every training epoch will sample 3 different events in the training set, for each event, 5 rumor instances and 5 non-rumor instances will be sampled for training. According to the few-shot learning setting, we guarantee that all events in the training sets should NOT appeared in the testing sets, and vice versa, to avoid the leakage of event information and guarantee that we are testing on complete novel events. We also assume that the number of events in the training set should be no less than the number of events in the test set to ensure the model capacity for adapting to new events. According to our assumption and task settings, we split our Weibo dataset to 3 events (COVID-19 relevant) for testing, and 11 events (COVID-19 irrelevant) for training. We fix the 3 events (COVID-19 relevant) for testing, and construct 3 folds for “crossvalidation” over 11 training events (COVID-19 irrelevant) to guarantee that each fold has more than 3 events. We have also added these explanations in revisions. 2- For the PHEME dataset, You decided to use #Ferguson unrest, #Ottawa shooting, #Sydney siege as the training and validation set and #Charlie Hebdo shooting, #Germanwings plane crash for the testing (line 303-307). Like the previous point, this decision is not justified as well. One quick fix for both is to do sensitivity analysis by running new experiments. For the k-fold crossvalidation issue, this means to run the new experiments when k=3,5, 10 and show to what degree the results change by increasing the number of splits. For the second issue, it means to use different datasets for training-validation and test and show how much the results are dependent on the choice of train-validation-test sets. Additionally and as I explained before, a coherent chain of related work, research gap, and research questions are absent in this paper. 
Hence although the experiments and few-shot learning based approach toward rumour detection is very well explained, they are not based on a crystal clear motivation (which comes from an in-depth literature review and subsequent gap identification) Thank you for your comments and questions. The followings are point-to-point responses. (1) This paper aims to propose a few-shot learning rumor detection model for detecting rumors about new events based on historical data. We use the publicly available PHEME dataset to show the generality of our work. However, PHEME is not designed for few-shot learning, it only contains 5 events. We choose #Charlie Hebdo shooting, #Germanwings plane crash as the events in the test set because these 2 events have happened later than the previous 3 events (#Ferguson unrest, #Ottawa shooting, #Sydney siege). This is to guarantee the problem setting of few-shot learning rumor detection. (2) The reason why we choose 3-folds has been explained above, please check it. This is mainly because the cross-validation in few-shot learning is different from that in supervised learning. (3) Actually, we have chosen different datasets. We take the PHEME dataset for example. In few-shot learning, the test set contains all the instances that belong to new events, which are all the instances about #Charlie Hebdo shooting and #Germanwings plane crash. We construct 3-fold datasets to train different few-shot learning models to show the stability of our proposed models. The split 0, split 1 and split 2 contains instances about (#Ferguson unrest, #Ottawa shooting), (#Ottawa shooting, #Sydney siege), (#Ferguson unrest, #Sydney siege) respectively. (4) Agreed, we indeed missed the literature review in the previous manuscript, we have responded in the previous comments and added the literature review section in revisions. Thank you for your suggestion. Validity of the findings The methodological aspect of this study is quite novel and tends to address a very important challenge in automatic rumour detection systems. Thank you. Reviewer 2 (Tian Bian) Basic reporting See General Comments for the Author. Experimental design See General Comments for the Author. Validity of the findings See General Comments for the Author. Comments for the Author The spread of rumors will cause the panic of the public and place considerable losses on the economy and other aspects of society. To solve the rumor detection problem on social media, the authors proposed a few-shot learning-based multi-modality fusion model named COMFUSE, including text embeddings modules with pre-trained BERT model, feature extraction module with multilayer Bi-GRUs, multi-modality feature fusion module with a fusion layer, and meta-learning based few-shot learning paradigm for rumor detection. Although the writing is unambiguous, this paper lacks sufficient experiments to verify its contribution. Some concerns are listed as follows: 1. The authors should illustrate the innovation of the proposed model. The modules used in this paper are all based on existing models such as BERT, Bi-GRUs, without any innovative technologies proposed in this paper. Thank you for your comments. We have completely rewritten the introduction section to show the motivation and innovation of this work. The motivation of our work is to detect rumors about the emergent event like COVID-19. This problem is different from traditional rumor detection tasks, in which the event in the test set has never occurred before and does not appear in the training set. 
Only an extreme few labeled data (e.g. 1/3/5) of these events are available to task adaptation, this task is different from previous supervised learning-based early-stage rumor detection task. For example, popular models such as GAN-GRU (Ma et al., 2019) and BiGCN (Bian et al., 2020), they can also support early-stage rumor detection. They use early posts whose posted time before a predefined delay to train and predict, and all the events can be found in both the training set and test set. This paper focuses on the cross-topic rumor detection task (Sicilia et al., 2018 (b)), previous work shows that at least 80% of the test topic knowledge should be included in the training set to obtain a good result. Which is a challenging task when the number of labeled new events is scarce. For these reasons, we introduce the reasonableness and necessity of utilizing fewshot learning methods in this paper. We propose a rumor detection model with few-shot learning. The BERT and Bi-GRUs are parts of the model, they offer the features for task adaptation in the few-shot learning procedure. 2. The latest baseline for comparison in this paper was proposed in 2018, the authors need to compare the proposed method with more recent baselines. Agreed, although these models achieved state-of-the-art performance in supervised-learning settings (Ma et al., 2019; Bian et al., 2020), we found that they could only provide below-average performance in the cross-topic/few-shot rumor detection setting. Specifically, these SOTA models indeed achieved great performance when the events (topics) appear in both the training set and test set, and the performance has a decline when the events discussed in the test set have not appeared in the training set. We have reported the performance of GAN-GRU-early (Ma et al., 2019), BiGCN (Bian et al., 2020) on both datasets in Table 3 and 4. Overall, we found that they underperform our approach due to the scarce instances related to the emergent events in the test set being fed to the training process. These supervised-based baselines have shown their limitations and are not suitable for the few-shot rumor detection task. (Because we used 3 latest comments as the second modality in COMFUSE, we use the early-stage strategy introduced in these two papers for evaluation.) 3. In many related works, such as Bian et al. 2020, Ma et al. 2019, and Liu et al. 2019 cited in the paper, have a rumor early detection experiment. They use very few tweets posted before the early detection deadline as the training set, the models proposed in these papers are tested on the test set, and good detection effects are also obtained. I think the method proposed in this paper should compare with these methods. Agreed. Thank you for your suggestion. Two models including BiGCN (Bian et al. 2020) and GAN-GRU (Ma et al., 2019) are used for further experiments with their official release codes. However, we found that the source code of Liu’s work is not publicly available. In the experimental settings, we used the latest 3 comments of each source post for early-stage evaluation, which is consistent with the comments we used as the second modality in COMFUSE for fair comparisons. Experimental results of two SOTA models GAN-GRU (Ma et al., 2019) and BiGCN (Bian et al., 2020) are reported in Tables 3 and 4 and analyzed in the revisions. 4. This paper uses a pre-training model to improve the accuracy of rumor detection. 
I wonder if this BERT model can be applied to other methods based on textual content of tweets, and can these methods also be significantly improved, even more than the Bi-GRUs based model proposed in this paper? Thank you for your suggestions. For fair comparisons, we have already used the pretrained BERT model to encode input sequences, and then use DNNs-based methods (SEQ-CNNs and SEQ-BiGRUs) to extract features from textual contents for rumor detection. So the results displayed in Tables 3 and 4 of SEQ-CNNs and SEQ-Bi-GRUs are already the experimental results with BERT pretrained model mentioned in your comments. In this way, we guarantee the baseline performance of SEQ-CNNs and SEQ-Bi-GRUs by using the same pretrained BERT embeddings with our approach, and make completely fair comparison to emphasize the superiority of our proposed method with few-shot learning. 5. The author should use experimental results to show that rumor detection results are insensitive to different pad sizes of posts and comments. Agreed. We have conducted additional experiments to show the rumor detection results with the change of different pad sizes of posts and comments, we take the Weibo dataset as an example. Fig. 9 displays the results of different pad sizes of source posts with a fixed pad size of comments as 32 on the Weibo dataset. Fig. 10 displays the results of different pad sizes of comments with a fixed pad size of source posts as 100. Both 𝑥-axis refer to the pad size and 𝑦axis refers to the accuracy performance. We can observe that the rumor detection results of COMFUSE with different pad sizes of posts and comments vary slightly. For the Weibo dataset, the experimental results reveal that it is relatively better to set the pad size as 100 for posts and 32 for comments, which is consistent with our decision based on the statistics of the length in Fig. 7 and 8. 6. Do not use the same notation for different definitions in the paper, such as b and T. Thank you for your suggestions. We do have this issue when introducing feature extraction with Bi-GRUs. We have carefully revised this part and revised corresponding Figures 1-3 to make sure the same notations only have their own definitions. 7. In Table 1, why are MH370, College entrance exams, …, Rabies COVID-19 relevant, and Zhong Nanshan, Wuhan are irrelevant? Thank you for pointing out this issue, the first column in Table 1 was written upside down, events about MH370, College entrance exams, etc. are COVID-19 irrelevant, and events about Zhong Nanshan, Wuhan, etc. are COVID-19 relevant. We have corrected this typo in the revisions. In short, the writing is clear but the model lacks innovation. And this paper lacks sufficient experiments to verify its contribution. I suggest that the paper should be greatly modified to make it more acceptable. Thank you. We have carefully revised our manuscript with all your comments. Allport, G. W, and Postman, L. The psychology of rumor. 1947. Bian T, Xiao X, Xu T, et al. Rumor Detection on Social Media with Bi-Directional Graph Convolutional Networks[C]. Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(01): 549-556. Castillo C, Mendoza M, Poblete B. Information credibility on twitter[C]. Proceedings of the 20th international conference on World wide web. 2011: 675-684. DiFonzo, N and Bordia, P. Rumors influence: Toward a dynamic social impact theory of rumor. Psychology Press. 2011: 271-295. Fard AE, Mohammadi M and van de Walle B. 
Detecting Rumours in Disasters: An Imbalanced Learning Approach. In International Conference on Computational Science, 2020: 639-652. Jin Z, Cao J, Jiang YG and Zhang Y. News credibility evaluation on microblog with a hierarchical propagation model. In 2014 IEEE International Conference on Data Mining, 2014: 230-239. Jin Z, Cao J, Zhang Y, et al. News verification by exploiting conflicting social viewpoints in microblogs. Proceedings of the AAAI Conference on Artificial Intelligence, 2016, 30(1). Mohammad SM, Sobhani P and Kiritchenko S. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 2017, 17(3): 1-23. Kwon S, Cha M, Jung K, Chen W and Wang Y. Prominent features of rumor propagation in online social media. In 2013 IEEE 13th International Conference on Data Mining, 2013: 1103-1108. Kwon S, Cha M and Jung K. Rumor detection over varying time windows. PLoS ONE, 2017, 12(1): e0168344. Liu X, Nourbakhsh A, Li Q, Fang R and Shah S. Real-time rumor debunking on Twitter. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, 2015: 1867-1870. Ma J, Gao W and Wong KF. Detect rumors on Twitter by promoting information campaigns with generative adversarial learning. The World Wide Web Conference, 2019: 3049-3055. Sicilia R, Giudice SL, Pei Y, Pechenizkiy M and Soda P. Health-related rumour detection on Twitter. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2017: 1599-1606. Sicilia R (a), Giudice SL, Pei Y, Pechenizkiy M and Soda P. Twitter rumour detection in the health domain. Expert Systems with Applications, 2018, 110: 33-40. Sicilia R (b), Merone M, Valenti R, Cordelli E, D'Antoni F, De Ruvo V and Soda P. Cross-topic rumour detection in the health domain. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2018: 2056-2063. Wu K, Yang S and Zhu KQ. False rumors detection on Sina Weibo by propagation structures. In 2015 IEEE 31st International Conference on Data Engineering, 2015: 651-662. Yang F, Liu Y, Yu X and Yang M. Automatic detection of rumor on Sina Weibo. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, 2012: 1-7. Zhao Z, Resnick P and Mei Q. Enquiring minds: Early detection of rumors in social media from enquiry posts. In Proceedings of the 24th International Conference on World Wide Web, 2015: 1395-1405. "
Here is a paper. Please give your review comments after reading it.
220
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>As an important part of prognostics and health management, remaining useful life (RUL) prediction can provide users and managers with system life information and improve the reliability of maintenance systems. Data-driven methods are powerful tools for RUL prediction because of their great modeling abilities. However, most current data-driven studies require large amounts of labeled training data and assume that the training data and test data follow similar distributions. In fact, the collected data are often variable due to different equipment operating conditions, fault modes, and noise distributions. As a result, the assumption that the training data and the test data obey the same distribution may not be valid . In response to the above problems, this paper proposes a data-driven framework with domain adaptability using a bidirectional gated recurrent unit (BGRU). The framework uses a domain-adversarial neural network (DANN) to implement transfer learning (TL) from the source domain to the target domain, which contains only sensor information. To verify the effectiveness of the proposed method, we analyze the IEEE PHM 2012 Challenge datasets and use them for verification. The experimental results show that the generalization ability of the model is effectively improved through the domain adaptation approach.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Prognostics aims to provide reliable remaining useful life (RUL) predictions for critical components and systems via a degradation process. Based on reliable forecast results, managers can determine the best periods for equipment maintenance and formulate corresponding management plans; this is expected to improve reliability during operation and reduce risks and costs. Typically, prognostic methods are classified into model-based methods and data-driven methods <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Model-based methods describe the degradation process of engineering systems by establishing mathematical models based on the failure mechanism or the first principle of damage <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. However, the physical parameters of the model should vary with different operating because it does not consider the temporal dependency problem <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. As a result, the existing RUL methods based on transfer learning (TL) can hardly adapt to common RUL prediction problems. In this article, we propose the use of bidirectional GRUs (BGRUs) to solve the problem of sequential data processing. We use labeled source domain data and unlabeled target domain data for training. This can be viewed as a process of unsupervised learning based on feature transfer. At the same time, we use a domain-adversarial neural network (DANN) to learn features with domain invariance. To verify the method proposed in this article, we use the IEEE PHM 2012 Challenge datasets for verification. The experimental results prove the effectiveness of the method proposed in this article. 
The main contributions of our work are as follows:</ns0:p><ns0:p>(1) We propose a new RUL prediction structure that can better adapt to data distribution shifts under different working environments and fault modes.</ns0:p><ns0:p>(2) The framework not only uses a single sensor but also integrates information from multiple sensors.</ns0:p><ns0:p>(3) Compared with the nonadaptive method and the traditional nondeep adaptive method, our proposed structure obtains better prediction results. The rest of this article is organized as follows: Section 2 briefly introduces the theoretical background of TL and deep learning. Then, the experimental procedure is introduced in Section 3. In Section 4, the BGRU, DANN, domain-adaptative BGRU and BGRU-DANN structures proposed in this article are introduced. On this basis, RUL prediction for a bearing dataset is studied. The comparative results and conclusions are given in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review Deep learning and PHM</ns0:head><ns0:p>Within the framework of deep learning, an RNN is a very representative structure. It can not only process sequence data but also extract features well. Furthermore, RNNs have been used in the field of RUL prediction <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>. However, such networks cannot deal with the weight explosion and gradient disappearance problems caused by recursion. This limits their application in long-term sequence processing. To solve this problem, many RNN variants have begun to appear, for example, LSTM and GRUs. These networks can process series with long-term correlations and extract features from them. As a variant of the RNN proposed earlier, LSTM has already performed well in RUL prediction. Shi <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> also showed similar results; real-time, high-precision RUL prediction was achieved by training a dual-LSTM network. Chen <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> tried to add the attention mechanism commonly used in the image field to an LSTM network and proposed an attention-based LSTM method, which also achieved good results. Ma <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref> proposed integrating deep convolution into the LSTM network. This approach applies a convolution structure to output-to-state and state-to-state information and uses time and time-frequency information simultaneously. As another type of RNN variant, GRUs have also begun to be applied in RUL prediction.</ns0:p><ns0:p>Compared with LSTM, a GRU has a simpler structure and fewer parameters, but the effect is comparable to that of LSTM. Deng <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref> combined a GRU with a particle filter (PF) and proposed an MC-GRU-based fusion prediction method, which achieved good performance in a prognostic study of ball screws. Lu <ns0:ref type='bibr' target='#b23'>[22]</ns0:ref> proposed a GRU network based on an autoencoder. It uses an autoencoder to obtain features and a GRU network to extract sequence information. Compared with standard unidirectional LSTM and GRU, the bidirectional structure can extract better feature information <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>&#12290;Huang proposed to combine multi-sensor data with operation data to make RUL prediction based on bidirectional LSTM (BLSTM) <ns0:ref type='bibr' target='#b24'>[23]</ns0:ref>. 
Huang <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> proposed a fusion prediction model based on BLSTM. It not only proves the advantages of LSTM in automatic feature acquisition and fusion, but also demonstrates the excellent performance of BLSTM in RUL prediction. Yu proposed a Bidirectional Recurring Neural Network based on autoencoder for C-MAPSS RUL estimation <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>. She attempted to use BGRU for RUL prediction and validated its effectiveness with Bearing data <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning</ns0:head><ns0:p>In most classification or regression tasks, it is assumed that sufficient training data with label information can be obtained. At the same time, it is assumed that the training data and the test data come from the same distribution and feature space. However, in real life, data offset is common. The training data and test data may come from different marginal distributions. As a way to find the similarity between the source domain and the target domain, TL has achieved good results in domain adaptation. The basic TL methods can be divided into the following categories:</ns0:p><ns0:p>(1) Instance-based TL (2) Feature-based TL (3) Model-based TL (4) Relation-based TL Detailed information about these methods can be found in the literature <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. In this article, they are divided into two categories according to their development process. One contains nondeep learning methods, and the other is based on deep learning methods. The most representative nondeep learning approaches are a series of methods based on maximum mean discrepancy (MMD). For example, Pan proposed transfer component analysis (TCA) <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>, which is the most representative TL method. Long tried to combine marginal distributions and conditional distributions and proposed joint distribution adaptation (JDA) <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. Wang believed that marginal distributions and conditional distributions should have different weights. As a result, he proposed balanced distribution adaptation (BDA) <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>. This technique minimizes the distance between the source domain and the target domain through feature mapping so that the data distributions of the two domains can be as similar as possible. There are also some other nondeep learning methods. For example, Tan proposed structural correspondence learning (SCL) <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref> based on feature selection. Sun and Gong proposed correlation alignment (CORAL) <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref> and the geodesic flow kernel (GFK) method <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref> based on subspace learning. With the continuous development of deep learning methods, an increasing number of people are beginning to use deep neural networks for TL. Compared with traditional nondeep TL methods, deep TL has achieved the best results at this stage. The simplest method for conducting deep TL to finetune the deep network, which realizes transfer by finetuning the trained network <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. At the same time, by adding an adaptive layer to deep learning, deep network adaptation has also begun to appear consistently. 
For example, Tzeng proposed deep domain confusion (DDC) <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>, Ghifary proposed a domain adaptive neural network <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref>, Long proposed a joint adaptation network (JAN) <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref>, etc. Recently, as the latest research result in the field of artificial intelligence, generative adversarial networks (GANs) have also begun to be used in transfer learning. Ganin first proposed the DANN <ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>. Yu extended a dynamic distribution to an adversarial network and proposed dynamic adversarial adaptation networks (DAANs) <ns0:ref type='bibr' target='#b41'>[40]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning and PHM</ns0:head><ns0:p>As a way of thinking and a mode of learning, transfer learning has a core problem: finding the similarity between the new problem and the original problem. TL mainly solves the following four contradictions <ns0:ref type='bibr' target='#b41'>[40]</ns0:ref>:</ns0:p><ns0:p>(1) The contradiction between big data and less labeling.</ns0:p><ns0:p>(2) The contradiction between big data and weak computing.</ns0:p><ns0:p>(3) The contradiction between a universal model and personalized demand.</ns0:p><ns0:p>(4) The needs of specific applications. The above four contradictions also exist in PHM. For example, with the development of advanced sensor technology, an increasing amount of data have been collected. However, the amount available data with run-to-failure label information is still small. Second, because the operating state of equipment is affected by many different conditions, the data collected are often not representative due to the differences between various operating conditions and environments. Thus, it is difficult to construct a predictive model with strong universality. Finally, for a PHM system, because of the complexity of the object's use environment, we also need an RUL prediction model with specific applications. However, because there are no data with sufficient label information, it is impossible to use a data-driven approach to build an accurate predictive model. As an effective means, TL can help solve the existing problems of PHM. However, in the field of PHM, TL is mainly used in classification tasks <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. Shao proposed a convolutional neural network (CNN) based on TL <ns0:ref type='bibr' target='#b42'>[41]</ns0:ref>, which is used to diagnose bearing faults under different working conditions. Xing proposed a distribution-invariant deep belief network (DIDBN) <ns0:ref type='bibr' target='#b43'>[42]</ns0:ref>, which can adapt well to new working conditions. Feng pointed out that it is necessary to conduct fault diagnosis research with zero samples <ns0:ref type='bibr' target='#b44'>[43]</ns0:ref>. He introduced the idea of zero-shot learning into industrial fields and proposed a zero-sample fault diagnosis method based on the attribute transfer method. RUL prediction studies based on TL are still relatively few in number, as far as the author knows <ns0:ref type='bibr' target='#b45'>[44]</ns0:ref>[45] <ns0:ref type='bibr' target='#b47'>[46]</ns0:ref> <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials Experimental analysis</ns0:head><ns0:p>In this section, we first describe the experimental data and platform in detail. 
Then, we analyze the data processing and feature extraction methods and introduce the relevant performance metrics. Finally, the effectiveness of our proposed method is verified via a comparison with other methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental data description</ns0:head><ns0:p>The IEEE PHM Challenge 2012 bearing dataset is used to test the effectiveness of the proposed method. This dataset is collected from the PRONOSTIA test platform and contains run-to-failure datasets acquired under different working conditions. PRONOSTIA is composed of three main parts: a rotating part, a degradation generation part and a measurement part. Vibration and temperature signals are gathered during all experiments. The frequency of vibration signal acquisition is 25.6 kHz. A sample is recorded every 0.1 s, and the recording interval is 10 s. The frequency of temperature signal acquisition is 10 Hz. 600 samples are recorded each minute. To ensure the safety of the laboratory equipment and personnel, the tests are stopped when the amplitude of the vibration signal exceeds 20 g. The basic information of the tested bearing is shown in Table <ns0:ref type='table'>1</ns0:ref>. Table <ns0:ref type='table'>2</ns0:ref> gives a detailed description of the datasets. From the table, we can see that the operating conditions of the three datasets are different, and from the literature <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>, we can obtain that the failure modes are also different. This is very suitable for experimenting with the method proposed in this article. To verify the effectiveness of the method proposed in this paper, we divide the data into a source domain and target domain according to the different operating conditions. The basic information is shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature extraction</ns0:head><ns0:p>From the raw vibration data, we extract 13 basic time-domain features. They are the maximum, minimum, mean, root mean square error (RMSE), mean absolute value, skewness, kurtosis, shape factor, impulse factor, standard deviation, clearance factor, crest factor, and variance. At the same time, through 4-layer wavelet packet decomposition, we extract the energy of 16 frequency bands as time-frequency domain features. In the literature <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>, the frequency resolution of the vibration signal was too low. Therefore, we do not extract the frequency domain features but rather use the features of three trigonometric functions. They are the standard deviation of the inverse hyperbolic cosine (SD of the IHC), standard deviation of the inverse hyperbolic sine (SD of the IHS), and standard deviation of the inverse tangent (SD of the IT). For trigonometric features, trigonometric functions transform the input signal into different scales so that the features have better trends <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>, and the feature types are shown in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>. Through feature extraction, we can obtain 64 features from the feature dataset, which can better represent the degradation process of the system. Because of space constraints, we only show features along the X-axis of the bearing 1-1 data in Fig. 
1.

Data processing
By processing the original signals, we extract a set of feature vectors. To obtain a better experimental result, the experimental data need to be normalized. In this article, the maximum and minimum values are used for normalization, and the basic calculation formula is as follows:

\tilde{x} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad (1)

The sliding time window (TW) processing technique used to build the input sequences for the network is illustrated in Fig. 2.

We use three indicators to evaluate the performance of the proposed method. The mean absolute error (MAE), mean squared error (MSE) and R2_score provide estimations regarding how well the model is performing on the target prediction task. The formulas for their calculation are as follows:

MAE = \frac{1}{L}\sum_{i=1}^{L}\lvert y_i - \hat{y}_i \rvert \quad (2)

MSE = \frac{1}{L}\sum_{i=1}^{L}\left( y_i - \hat{y}_i \right)^{2} \quad (3)

R2\_score = 1 - \frac{\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^{2}}{\sum_{i=1}^{n}\left( y_i - \bar{y} \right)^{2}} \quad (4)

Here, L is the length of the test data, y_i is the ith true value, \hat{y}_i is the corresponding predicted value, and \bar{y} is the average of the true values.

Methods
Problem definition
We use T_S to denote the training task and T_T to denote the target task. The training and testing data are represented as the source domain dataset D_S and the target domain dataset D_T, where each sample x is a series of features belonging to the feature space and y represents the corresponding RUL label. We assume that the marginal probability distributions of D_S and D_T are not the same. We use the source and target domain data to learn a prediction function F; the goal of the training process is to enable F to estimate the corresponding RUL of the target domain samples during testing. During training, we use labeled data from the source domain and unlabeled data from the target domain, so the procedure is an unsupervised TL method.

A GRU is a variant of the LSTM structure. Compared with LSTM, its structure is simpler and it has fewer parameters: the forget gate and input gate of LSTM are combined into a single update gate, and the cell state and hidden state are merged. A GRU therefore contains two gate structures, a reset gate and an update gate. The reset gate determines whether the new input is combined with the output from the previous moment; the smaller the value of the reset gate, the less output information from the previous moment is retained. The update gate determines the degree of influence of the output from the previous moment on the current moment; the larger the value of the update gate, the greater that influence. The GRU-based memory cell is shown in Fig. 3.

In our proposed structure, a BGRU is used to obtain time series features from a TW. Here, x_t is the input at time t and h_t represents the output of the GRU at time t; r_t is the reset gate and z_t is the update gate, and these two parts determine how h_t is obtained from h_{t-1}. The hidden layer of the GRU is defined as follows when running at time t.

Forward propagation:

\overrightarrow{h}_t = f\left(x_t, \overrightarrow{h}_{t-1}; \overrightarrow{\theta}_{BGRU}\right) \quad (5)

\overrightarrow{r}_t = \sigma\!\left(W_r[\overrightarrow{h}_{t-1}, x_t] + b_r\right),\;
\overrightarrow{z}_t = \sigma\!\left(W_z[\overrightarrow{h}_{t-1}, x_t] + b_z\right),\;
\tilde{\overrightarrow{h}}_t = \tanh\!\left(W_h[\overrightarrow{r}_t \odot \overrightarrow{h}_{t-1}, x_t] + b_h\right),\;
\overrightarrow{h}_t = (1-\overrightarrow{z}_t)\odot\overrightarrow{h}_{t-1} + \overrightarrow{z}_t\odot\tilde{\overrightarrow{h}}_t \quad (6)

Backward propagation applies the same gate equations to the sequence in the reverse direction:

\overleftarrow{h}_t = f\left(x_t, \overleftarrow{h}_{t+1}; \overleftarrow{\theta}_{BGRU}\right) \quad (7)

\overleftarrow{r}_t = \sigma\!\left(W_r[\overleftarrow{h}_{t+1}, x_t] + b_r\right),\;
\overleftarrow{z}_t = \sigma\!\left(W_z[\overleftarrow{h}_{t+1}, x_t] + b_z\right),\;
\tilde{\overleftarrow{h}}_t = \tanh\!\left(W_h[\overleftarrow{r}_t \odot \overleftarrow{h}_{t+1}, x_t] + b_h\right),\;
\overleftarrow{h}_t = (1-\overleftarrow{z}_t)\odot\overleftarrow{h}_{t+1} + \overleftarrow{z}_t\odot\tilde{\overleftarrow{h}}_t \quad (8)

The forward and backward hidden states are concatenated to form the output of the BGRU:

H = \left[\overrightarrow{h}_t, \overleftarrow{h}_t\right] = f\left(X^{TW}; \overrightarrow{\theta}_{GRU}, \overleftarrow{\theta}_{GRU}\right) \quad (9)

Here, f(\cdot) represents the hidden layer function of the BGRU, as defined by Eqn. (6) and Eqn. (8); H is the output feature, and \theta represents the parameters of the BGRU.
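A minimal sketch of such a bidirectional GRU feature extractor is given below, assuming PyTorch (the paper does not name its framework). The input size of 64 and hidden size of 256 follow the settings reported later; the window length of 5 and the use of the final time step as the summary feature are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BGRUFeatureExtractor(nn.Module):
    """Bidirectional GRU that maps a sliding-window sequence of
    hand-crafted features to a fixed-size latent representation (G_f)."""
    def __init__(self, n_features=64, hidden_size=256, n_layers=3):
        super().__init__()
        self.bgru = nn.GRU(input_size=n_features, hidden_size=hidden_size,
                           num_layers=n_layers, batch_first=True,
                           bidirectional=True)
        # FC layer that embeds the concatenated forward/backward states
        self.fc = nn.Linear(2 * hidden_size, 256)

    def forward(self, x):
        # x: (batch, window_length, n_features)
        out, _ = self.bgru(x)      # (batch, window_length, 2 * hidden_size)
        h_last = out[:, -1, :]     # forward and backward states at the last step
        return torch.relu(self.fc(h_last))

# Example: a batch of 8 windows, each covering 5 time steps of the 64 features
features = BGRUFeatureExtractor()(torch.randn(8, 5, 64))  # -> shape (8, 256)
```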
Inspired by the GAN, Yaroslav Ganin first proposed domain-adversarial training for neural networks [39]; the process is shown in Fig. 4. A DANN combines domain adaptation with feature learning during the training process to better obtain distinctive and domain-invariant features. At the same time, the learned weights can also be directly used in the target field. The network structure of a DANN is mainly composed of three parts: a feature extractor G_f, a category predictor G_y, and a domain classifier G_d. G_f is used to extract the features with the greatest domain invariance, G_y is used to classify the source domain data, and G_d is used to distinguish between the characteristic data of the source domain and the target domain. The training objectives are mainly twofold: the first is to accurately classify the source domain dataset to minimize the category prediction error; the second is to confuse the source domain dataset with the target domain dataset to maximize the domain classification error. The loss function of the DANN can be expressed by the following formula:

E(\theta_f, \theta_y, \theta_d) = \sum_{i=1}^{N} L_y\!\left(G_y(G_f(x_i;\theta_f);\theta_y), y_i\right) - \alpha \sum_{i=1}^{N} L_d\!\left(G_d(G_f(x_i;\theta_f);\theta_d), d_i\right)

Here, L_y is the error of the category predictor, L_d is the error of domain classification, and \alpha weights the domain classification term. \theta_f is the parameter of the feature acquisition layer, \theta_y is the parameter of the category predictor, and \theta_d is the parameter of the domain classifier. During the training process, to find the features with the best domain invariance, on the one hand it is necessary to find \theta_f and \theta_y that minimize the category prediction error; on the other hand, it is also necessary to search for \theta_d that maximizes the error of domain classification:

(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d) \quad (12)

\hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d) \quad (13)

Judging from the above two optimization formulas, this is a minimax problem. To solve it, a gradient reversal layer (GRL) is introduced into the DANN. During forward propagation, the GRL acts as an identity transformation; during back propagation, it automatically inverts the gradient. The optimization function selected by the DANN is stochastic gradient descent (SGD). The GRL is generally placed between the feature extraction layer and the domain classifier layer.

The original DANN was the first proposed TL method based on adversarial networks. It is not only a method but also a general framework; based on these foundations, many people have proposed different architectures [48][49][50].
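The GRL itself is only a few lines in an automatic-differentiation framework. The following sketch, again assuming PyTorch, behaves as the identity in the forward pass and multiplies the incoming gradient by -\alpha in the backward pass, which is what turns the minimax of Eqns. (12) and (13) into ordinary gradient descent.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) the gradient
    in the backward pass, as placed between G_f and G_d."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign so that G_f is updated to maximize the
        # domain-classification loss while G_d minimizes it.
        return -ctx.alpha * grad_output, None

def grl(x, alpha=1.0):
    return GradientReversal.apply(x, alpha)
```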
BGRU-based deep domain adaptation
To process the time series data, we construct the BGRU-DANN model, the process of which is shown in Fig. 5. Source domain data and target domain data with only domain information are used to train the network. Similar to the DANN network, the BGRU-DANN network can also be divided into three parts. The first part is a feature extraction network: we use a BGRU to map the input data to a hidden state, and the output of the BGRU is then embedded in the feature space, that is, f = G_f(X; \theta_f). The second part maps the new features to the label data (source domain) through the fully connected (FC) layer, that is, \hat{y} = G_y(f; \theta_y). In the third part, f is mapped to the domain label through the FC layer, i.e., \hat{d} = G_d(f; \theta_d).

G_f consists of a three-layer BGRU and an FC layer; a nonlinear high-dimensional feature representation of the original data is learned through the BGRU and FC layers. G_y is composed of FC layers, batch normalization (BN) layers, and rectified linear unit (ReLU) layers; it provides the regression value of the source domain data. The network form of G_y is FC1 + BN1 + ReLU1 + Dropout1 + FC2 + BN2 + ReLU2 + FC3. During the adversarial training process, G_d is used to distinguish whether the observed feature comes from the source domain or the target domain; it consists of a gradient reversal layer and three FC layers. Here, G_f is trained to extract features such that the domain classification error is maximized. The labels of the source domain and the target domain are set to 1 and 0, respectively. The loss function of the training process is as follows:

L(\theta_f, \theta_y, \theta_d) = \frac{1}{n_s}\sum_{i=1}^{n_s} L_y^i(\theta_f, \theta_y) - \alpha\left(\frac{1}{n_s}\sum_{i=1}^{n_s} L_d^i(\theta_f, \theta_d) + \frac{1}{n_t}\sum_{i=1}^{n_t} L_d^i(\theta_f, \theta_d)\right) \quad (14)

Here, the loss functions L_y^i and L_d^i are defined as:

L_y^i(\theta_f, \theta_y) = \lvert y_t^i - \hat{y}_t^i \rvert^{p} \quad (15)

L_d^i(\theta_f, \theta_d) = d_i \log\frac{1}{\hat{d}_i} + (1 - d_i)\log\frac{1}{1 - \hat{d}_i} \quad (16)

In the formulas, \hat{y}_t^i is the predicted value of the RUL at time t, i.e., \hat{y}_t^i = G_y(f; \theta_y), and \hat{d}_i is the domain prediction, \hat{d}_i = G_d(f; \theta_d). L_y^i(\theta_f, \theta_y) is the regression error; when the value of p is different, different calculation methods can be used. L_d^i(\theta_f, \theta_d) is the binary cross-entropy between the predicted and true domain labels. The optimization process is shown in Eqn. (12) and Eqn. (13). The weight update process is as follows:

\theta_f \leftarrow \theta_f - \lambda\left(\frac{\partial L_y^i}{\partial \theta_f} - \alpha\frac{\partial L_d^i}{\partial \theta_f}\right) \quad (17)

\theta_y \leftarrow \theta_y - \lambda\frac{\partial L_y^i}{\partial \theta_y} \quad (18)

\theta_d \leftarrow \theta_d - \lambda\alpha\frac{\partial L_d^i}{\partial \theta_d} \quad (19)

Similar to a DANN, the GRL mechanism is also introduced here to realize the optimization process. SGD is used to apply the updates of Eqns. (17), (18) and (19).

BGRU-DANN structure
The structure of BGRU-DANN is shown in Fig. 6. Its basic composition can be divided into two parts. One part uses the training data from the source domain to minimize the loss of source domain regression. The other part uses the sensor data of the source domain and the target domain to maximize the error of domain classification. The BGRU and FC layers are shared by both parts. To facilitate the parameter setting process, we set the learning rates of the two sections to the same value. At the same time, we use dropout and BN layers for feature acquisition, domain classification and source domain regression. In the source domain regression task, the purpose of the training process is to minimize the regression loss function. In the domain classification task, a GRL is placed between the feature extraction and domain classification layers. During the process of back propagation, the GRL inverts the corresponding gradient to realize the optimization process of the model.
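To make the adversarial update concrete, here is a hedged sketch of one training step on a labeled source batch and an unlabeled target batch. It reuses the BGRUFeatureExtractor and grl sketches given earlier; the head sizes, dropout rate and the choice p = 1 for the regression loss in Eqn. (15) are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

# BGRUFeatureExtractor (G_f) and grl come from the earlier sketches.
extractor = BGRUFeatureExtractor()
# G_y: source-domain RUL regressor; G_d: domain classifier (illustrative sizes)
regressor = nn.Sequential(nn.Linear(256, 64), nn.BatchNorm1d(64), nn.ReLU(),
                          nn.Dropout(0.5), nn.Linear(64, 1))
domain_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

params = (list(extractor.parameters()) + list(regressor.parameters())
          + list(domain_clf.parameters()))
optimizer = torch.optim.SGD(params, lr=0.01)   # learning rate as reported later
mae, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt, alpha=1.0):
    optimizer.zero_grad()
    f_src, f_tgt = extractor(x_src), extractor(x_tgt)
    # Source-domain regression loss L_y (p = 1 in Eqn. (15))
    loss_y = mae(regressor(f_src).squeeze(-1), y_src)
    # Domain-classification loss L_d; source labeled 1, target labeled 0
    feats = torch.cat([grl(f_src, alpha), grl(f_tgt, alpha)])
    dom_labels = torch.cat([torch.ones(len(x_src)), torch.zeros(len(x_tgt))])
    loss_d = bce(domain_clf(feats).squeeze(-1), dom_labels)
    # The GRL turns "minimize loss_y + loss_d" into the minimax of Eqns. (12)-(13)
    (loss_y + loss_d).backward()
    optimizer.step()
    return loss_y.item(), loss_d.item()
```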
When the output of the system does not improve significantly, the training process is stopped. For the corresponding FC layer, we use the ReLU activation function.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Transfer prediction</ns0:head><ns0:p>To realize the prediction of RUL, we need to establish the BGRU-DANN structure and set the corresponding hyperparameters. For different transfer tasks, the optimal parameters of the model may vary. The model in this paper has no specific optimization process for parameter setting during use, and the parameters used are the same for different transfer tasks. The input size of the BGRU network is set to 64. The size of each hidden layer is set to 256. The number of network layers is set to 3. The DANN classifier is set to a 3-layer FC structure, and the domain classifier is a 3-layer FC structure. The network learning rate is set to 0.01. The number of training iterations is set to 5000. Some of the remaining hyperparameter settings are provided in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>.</ns0:p><ns0:p>After setting the relevant parameters, we can predict the RUL. First, we use the BGRU structure to extract the features of the input sequence data. Then, the DANN network is used to implement adversarial training to extract features with domain invariance. The experimental results are shown in Fig. <ns0:ref type='figure' target='#fig_14'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_15'>8</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_14'>7</ns0:ref> reflects the predicted results of bearing 2-1, bearing 2-4 and bearing 2-6. The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 2-1, bearing 2-4, and bearing 2-6. Fig. <ns0:ref type='figure' target='#fig_15'>8</ns0:ref> reflects the prediction results for bearing 3-1, bearing 3-2, and bearing 3-3.</ns0:p><ns0:p>The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 3-1, bearing 3-2, and bearing 3-3. From (A), (C), and (E) in Fig. <ns0:ref type='figure' target='#fig_14'>7</ns0:ref> and (A), (C), and (E) in Fig. <ns0:ref type='figure' target='#fig_15'>8</ns0:ref>, we can conclude that the predicted RUL results exhibit a good downward trend performance and are very close to the real RUL values; this effectively illustrates the effectiveness of the proposed data-driven prediction framework based on TL.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison of experimental results</ns0:head><ns0:p>To demonstrate the advantages of data-driven prediction methods based on domain adaptation, three methods are used for comparison purposes, namely, a BGRU without transfer learning, TCA-NN, and FC-DANN. We can see in Fig. <ns0:ref type='figure' target='#fig_14'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_15'>8</ns0:ref> that the RUL prediction results of BGRU-DANN are significantly better than those of the other three methods, and the declining trend can best reflect the real RUL value. However, the other three methods cannot reflect the degradation trend of the RUL effectively. Fig. <ns0:ref type='figure'>9</ns0:ref> shows the RUL errors of BGRU-DANN, the BGRU, TCA-NN and FC-DANN. It can be clearly seen from Fig. <ns0:ref type='figure'>9</ns0:ref> that the RUL error generated by the BGRU-DANN model is the smallest, especially for bearings 2-4, 3-1 and 3-3. At the same time, bearing 2-1 and bearing 2-6 in Fig. 
<ns0:ref type='figure'>9</ns0:ref> clearly reflect that the RUL error generated by BGRU-DANN is smaller than that of the other three methods in most cases. Bearing 3-2 in Fig. <ns0:ref type='figure'>9</ns0:ref> may not clearly indicate the superiority of BGRU-DANN due to the large amount of data involved. However, through the comparison of the three evaluation indicators in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>, it can still be seen that BGRU-DANN achieves the best effect.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref> shows that BGRU-DANN achieves the best results in terms of the three evaluations, the MAE, MSE, and R2_score, which further proves the effectiveness of the method proposed in this paper. Regarding the MSE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.0283, 0.0193, 0.0217, 0.0298, 0.0503, and 0.0472, respectively, which are far less than the calculated error results of the other three methods. For the MAE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.1157, 0.0928, 0.0875, 0.1215, 0.1569, and 0.1238, respectively, which are still better than the calculated error results of the other three models. In terms of the R2-score calculation results, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.6576, 0.7664, 0.7367, 0.6379, 0.3935, and 0.4252, respectively; this indicates that the model has certain explanatory ability regarding the relationship between the independent variable and the dependent variable in the regression analysis and is superior to the three compared methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this article, a domain-adaptative prediction method based on deep learning with a BGRU and a DANN is proposed. The validity of the proposed method is demonstrated by an experiment on the 2012 IEEE PHM dataset. The objective of this study is to propose a domain-adaptive RUL prediction method. When the input bearing is transferred from the source domain with label information to a target domain with only sensor information, a more accurate estimate of the RUL can be obtained. From the results of the experiment, we can draw the following conclusions:</ns0:p><ns0:p>(1) Compared with the BGRU without TL, the proposed method has a better effect in terms of RUL prediction. This indicates that the model obtained by adversarial training has better generalization ability and can adapt to data with different distributions.</ns0:p><ns0:p>(2) The comparison with TCA-NN proves that the deep, domain-adaptive BGRU-DANN method has better performance. This indicates that the transfer method based on deep learning has a stronger feature extraction ability than the traditional nondeep transfer method, and it can extract better features with domain invariance.</ns0:p><ns0:p>(3) Using FC layers for feature extraction, this paper constructs an FC-DANN network. A comparison of the results fully shows that the BGRU has a better effect in terms of feature extraction. 
Compared with the features extracted by the FC method, the features extracted by the BGRU for sequence data processing are more representative.</ns0:p><ns0:p>(4) By means of domain adaptation, the generalization ability of the data-driven RUL prediction model can be effectively improved, and it can adapt to RUL prediction tasks under different working conditions to a certain extent.</ns0:p><ns0:p>In future work, we will take a closer look at the problem of time series transfer. Remaining life prediction problems with respect to bearings, aero engines, etc. can actually be regarded as time series transfer problems. However, research on time series transfer is still in its infancy. There are merely a few studies on such issues. Only Yu proposed two different time series transfer methods in references <ns0:ref type='bibr' target='#b52'>[51]</ns0:ref> and <ns0:ref type='bibr' target='#b53'>[52]</ns0:ref>, one based on an extreme learning machine and the other based on a CNN. However, most of the data monitored by sensors are time series data, and this is a very common data type in RUL forecasting research. Therefore, the authors intend to conduct related research in the future, hoping to obtain a better model and research results with more practical application value. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>this article, the maximum and minimum values are normalized, and the basic calculation formula is as follows:(1)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>1 {x&#61501;target 1 {</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>is the average of the true values. y Methods Problem definition We use to denote the training task and to denote the target task. The training and testing S T T T data are represented as the source domain dataset and the target domain dataset , is a series of features belonging to the feature space, PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:1:1:NEW 17 Jun 2021) Manuscript to be reviewed Computer Science its length is , and its characteristic number is . represents the RUL label corresponding to RUL information. We assume that the marginal probability distributions of and are not S D T D the same; that is, . Here, we use source and target domain data to learn a ( of the training process is to enable to estimate the F F corresponding RUL of the target domain samples during testing. During training, we use the corresponding datasets: from the source domain and from the is an unsupervised TL method. The process of training can be expressed as .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>G</ns0:head><ns0:label /><ns0:figDesc>. A DANN combines domain adaptation with feature learning during the training process to better obtain distinctive and domain-invariant features. 
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>A DANN combines domain adaptation with feature learning during the training process to obtain features that are both discriminative and domain invariant. At the same time, the learned weights can also be used directly in the target domain. The network structure of a DANN is mainly composed of three parts: a feature extractor $G_f$, a category predictor $G_y$, and a domain classifier $G_d$. $G_f$ is used to extract the features with the greatest domain invariance. $G_y$ is used to classify (here, regress) the source domain data. $G_d$ is used to distinguish between the feature representations of the source domain and the target domain. The training objectives are mainly twofold: the first is to predict the source domain dataset accurately, minimizing the category prediction error; the second is to confuse the source domain dataset with the target domain dataset, maximizing the domain classification error. The loss function of the DANN can be expressed as $E(\theta_f, \theta_y, \theta_d) = L_y(\theta_f, \theta_y) - \lambda L_d(\theta_f, \theta_d)$.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>$\theta_f$ is the parameter of the feature acquisition layer, $\theta_y$ is the parameter of the category predictor, and $\theta_d$ is the parameter of the domain classifier. During the training process, to find the features with the best domain invariance, on the one hand it is necessary to find $\theta_f$ and $\theta_y$ that minimize the category prediction error; on the other hand, it is also necessary to search for $\theta_d$ that maximizes the error of domain classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>In the second part, the extracted feature is mapped to the RUL value (of the source domain) through the fully connected (FC) layers, that is, $\hat{y} = G_y(f)$. In the third part, the feature is mapped to the domain label through the FC layers, i.e., $\hat{d} = G_d(f)$. $G_f$ consists of a three-layer BGRU and an FC layer; a nonlinear high-dimensional feature representation of the original data is learned through the BGRU and FC layers. $G_y$ is composed of FC layers, batch normalization (BN) layers, and rectified linear unit (ReLU) layers; $G_y$ provides the regression value of the source domain data. The network form of $G_y$ is FC1 + BN1 + ReLU1 + Dropout1 + FC2 + BN2 + ReLU2 + FC3. During the adversarial training process, $G_d$ is used to distinguish whether the observed feature comes from the source domain or the target domain. $G_d$ consists of a gradient reversal layer and three FC layers. Here, $G_f$ is trained so that the domain classification error between the source domain and the target domain is maximized. The labels of the source domain and the target domain are set to 1 and 0, respectively, and the domain classification loss of the training process is computed from these labels.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Descriptions of the experimental datasets</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Features for Bearing 1-1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Sliding TW processing technique</ns0:figDesc><ns0:graphic coords='33,42.52,178.87,525.00,305.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. GRU memory cell</ns0:figDesc><ns0:graphic coords='34,42.52,178.87,525.00,171.75' type='bitmap' /></ns0:figure>
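To make the $G_f$/$G_y$/$G_d$ decomposition and the gradient reversal mechanism concrete, a minimal PyTorch sketch is given below. It is an editorial illustration rather than the authors' implementation (which is not provided); the class names are invented, and the layer sizes only loosely follow the hyperparameters reported in Table 5 (64 input features, hidden size 256, three GRU layers, dropout 0.5).

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the incoming gradient by -lambda
    # in the backward pass, which turns the domain loss into an adversarial signal.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class BGRUDANN(nn.Module):
    def __init__(self, n_features=64, hidden=256, lam=1.0):
        super().__init__()
        self.lam = lam
        # G_f: three-layer bidirectional GRU followed by an FC layer
        self.gru = nn.GRU(n_features, hidden, num_layers=3,
                          batch_first=True, bidirectional=True, dropout=0.5)
        self.fc_f = nn.Linear(2 * hidden, 256)
        # G_y: FC + BN + ReLU stack regressing the RUL of source-domain windows
        self.g_y = nn.Sequential(
            nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 32), nn.BatchNorm1d(32), nn.ReLU(), nn.Linear(32, 1))
        # G_d: domain classifier fed through the gradient reversal layer
        self.g_d = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        out, _ = self.gru(x)                     # x: (batch, window, n_features)
        f = torch.relu(self.fc_f(out[:, -1]))    # shared feature f from the last step
        rul = self.g_y(f)                        # source-domain RUL estimate
        dom = self.g_d(GradReverse.apply(f, self.lam))  # domain logit (source=1, target=0)
        return rul, dom

During training, the regression loss on labeled source windows and the domain-classification loss on source and target windows are simply added; the reversal layer flips the sign of the domain gradient reaching $G_f$, so no explicit min-max alternation is needed.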
<ns0:figure xml:id='fig_11'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The flowchart of the DANN</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,276.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The flowchart of BGRU-DANN</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,303.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. BGRU-DANN structure</ns0:figDesc><ns0:graphic coords='37,42.52,178.87,525.00,215.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 7.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Prediction results for dataset 2: (A) Bearing 2-1 using BGRU-DANN; (B) Bearing 2-1 using the comparison methods; (C) Bearing 2-4 using BGRU-DANN; (D) Bearing 2-4 using the comparison methods; (E) Bearing 2-6 using BGRU-DANN; (F) Bearing 2-6 using the comparison methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 8.</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Prediction results for dataset 3: (A) Bearing 3-1 using BGRU-DANN; (B) Bearing 3-1 using the comparison methods; (C) Bearing 3-2 using BGRU-DANN; (D) Bearing 3-2 using the comparison methods; (E) Bearing 3-3 using BGRU-DANN; (F) Bearing 3-3 using the comparison methods.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,525.00,280.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Data processing</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>In our proposed structure, a BGRU is used to obtain time series features from a TW. The hidden layer of the GRU is defined as follows when running at time $t$ (forward propagation): $z_t = \sigma(W_z x_t + U_z h_{t-1})$, $r_t = \sigma(W_r x_t + U_r h_{t-1})$, $\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}))$, $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ (5).</ns0:figDesc><ns0:table /><ns0:note>Here, $x_t$ is the input at time $t$, and $h_t$ represents the output of the GRU at time $t$; $h_t$ is obtained from $h_{t-1}$ through two gates, where $z_t$ is the update gate and $r_t$
is the reset gate,</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Feature set</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell cols='2'>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F1: Maximum</ns0:cell><ns0:cell>F8: Shape Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F2: Minimum</ns0:cell><ns0:cell>F9: Impulse Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F3: Mean</ns0:cell><ns0:cell>F10: Standard Deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>Time-domain features</ns0:cell><ns0:cell>F4: RMSE</ns0:cell><ns0:cell>F11: Clearance Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F5: Mean Absolute Value</ns0:cell><ns0:cell>F12: Crest Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F6: Skewness</ns0:cell><ns0:cell>F13: Variance</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F7: Kurtosis</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Time-frequency domain features</ns0:cell><ns0:cell cols='2'>F14-F29: Energies of sixteen bands</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F30: SD of the IHC</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trigonometric features</ns0:cell><ns0:cell>F31: SD of the IHS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>F32: SD of the IT</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Hyperparameter settings</ns0:figDesc><ns0:table><ns0:row><ns0:cell>BGRU Layers, (Units), [Dropout]</ns0:cell><ns0:cell>F (Units)</ns0:cell><ns0:cell>Source Regression [Dropout] Layers, (Units),</ns0:cell><ns0:cell>Domain Classification [Dropout] Layers, (Units),</ns0:cell><ns0:cell>&#120630;</ns0:cell><ns0:cell>&#120640;</ns0:cell></ns0:row><ns0:row><ns0:cell>3, (64, 256), [0.5]</ns0:cell><ns0:cell>(256)</ns0:cell><ns0:cell>3, (256, 128, 32), [0.5]</ns0:cell><ns0:cell>3, (256, 128, 32), [0.5]</ns0:cell><ns0:cell cols='2'>0.5 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Performance metrics for the datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Performance metric The proposed method</ns0:cell><ns0:cell>BGRU</ns0:cell><ns0:cell cols='2'>TCA-NN FC-DANN</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.0283</ns0:cell><ns0:cell>1.4652</ns0:cell><ns0:cell>0.5205</ns0:cell><ns0:cell>0.2865</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.0193</ns0:cell><ns0:cell>1.5442</ns0:cell><ns0:cell>1.8181</ns0:cell><ns0:cell>1.0214</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>MSE</ns0:cell><ns0:cell>0.0217 0.0298</ns0:cell><ns0:cell>1.0589 2.1957</ns0:cell><ns0:cell>1.4436 4.4414</ns0:cell><ns0:cell>0.9164 1.3557</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.0503</ns0:cell><ns0:cell>0.0883</ns0:cell><ns0:cell>0.0796</ns0:cell><ns0:cell>0.0606</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell 
/><ns0:cell>0.0472</ns0:cell><ns0:cell>8.9927</ns0:cell><ns0:cell>8.8676</ns0:cell><ns0:cell>2.9564</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.1157</ns0:cell><ns0:cell>0.9440</ns0:cell><ns0:cell>0.6316</ns0:cell><ns0:cell>0.4218</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.0928</ns0:cell><ns0:cell>1.0940</ns0:cell><ns0:cell>1.2917</ns0:cell><ns0:cell>0.9429</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>MAE</ns0:cell><ns0:cell>0.0875 0.1215</ns0:cell><ns0:cell>0.8491 1.3532</ns0:cell><ns0:cell>1.0226 1.9420</ns0:cell><ns0:cell>0.8063 1.0884</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.1569</ns0:cell><ns0:cell>0.2070</ns0:cell><ns0:cell>0.2377</ns0:cell><ns0:cell>0.2035</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell /><ns0:cell>0.1238</ns0:cell><ns0:cell>2.5813</ns0:cell><ns0:cell>2.6589</ns0:cell><ns0:cell>1.5394</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.6576</ns0:cell><ns0:cell>-16.6992</ns0:cell><ns0:cell>-5.2325</ns0:cell><ns0:cell>-2.4311</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.7664</ns0:cell><ns0:cell>-17.6799</ns0:cell><ns0:cell>-20.7599</ns0:cell><ns0:cell>-11.2249</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>R2_score</ns0:cell><ns0:cell>0.7367 0.6379</ns0:cell><ns0:cell>-11.8176 -25.6589</ns0:cell><ns0:cell>-16.2749 -52.0910</ns0:cell><ns0:cell>-9.9658 -15.2054</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.3935</ns0:cell><ns0:cell>-0.06367</ns0:cell><ns0:cell>0.0451</ns0:cell><ns0:cell>0.2733</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell /><ns0:cell>0.4252</ns0:cell><ns0:cell cols='2'>-108.42439 -104.9228</ns0:cell><ns0:cell>-34.3136</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:1:1:NEW 17 Jun 2021)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:1:1:NEW 17 Jun 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
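For reference, the three indicators reported in Table 6 can be computed as in the short sketch below. This is an editorial illustration assuming scikit-learn is available; evaluate_rul, y_true, and y_pred are illustrative names standing for the normalized true and predicted RUL series of one test bearing.

from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def evaluate_rul(y_true, y_pred):
    # The three indicators reported per test bearing in Table 6.
    return {"MSE": mean_squared_error(y_true, y_pred),
            "MAE": mean_absolute_error(y_true, y_pred),
            "R2_score": r2_score(y_true, y_pred)}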
"Original Manuscript ID: CS-2021: 04:60537:1:0: NEW Original Article Title: “Data-Driven Remaining Useful Life Prediction Based on Domain Adaptation” To: Peer J Computer Science Editor Re: Response to reviewers Thank you very much for your valuable comments. We have revised the manuscript according to your comments. Your comments help us improve the quality of the paper a lot. Thank you again! All changes are marked by yellow highlight in the revised manuscript. Below, the original comments are in green, and our responses are in blue. The point-by-point explanations and responses to the reviewers’ comments are included as follows. Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). Best regards, Basic reporting: Reviewer#1, Concern # 1: The paper has some grammatical errors and proofreading of the paper needs to be done. Author response: Thanks to the experts for their comments. There are still some unsuitable places in our paper. We have revised it. Author action: For the questions raised by experts, the revised version is polished by AJE. There are indeed some grammatical errors and unprofessional expression in our original paper. Hopefully, the modification would clear most of the misunderstandings caused by language problems. The editing certificate is provided as follows: Reviewer#1, Concern # 2: Relevant references in line number 47 are not cited for the RNN techniques used popularly. Author response: Thank you for your question. We apologize for our mistakes. In response to the questions raised by the experts, we have added relevant references. For example: [1] A. Malhi, R. Yan, and R. X. Gao, “Prognosis of Defect Propagation Based on Recurrent Neural Networks,” IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 3, pp. 703-711, 2011. [2] F. O. Heimes, Recurrent Neural Networks for Remaining Useful Life Estimation, 2008. [3] N. Gugulothu, T. Vishnu, P. Malhotra, L. Vig, P. Agarwal, and G. M. J. A. Shroff, “Predicting Remaining Useful Life Using Time Series Embeddings Based on Recurrent Neural Networks,” vol. abs/1709.01073, 2017. Author action: In line 47 of the original text (modified to line 57), we have added references related to RNN. At the same time, we also added references about LSTM and GRU: Some deep neural networks that can support sequence data inputas inputs are also widely used in RUL prediction. For example, recurrent neural networks (RNNs) RNN),, long short-term memory networks (LSTMs) Error: Reference source not found, and gated recurrent units (GRUs) Error: Reference source not foundLSTM), GRU. are common approaches. Reviewer#1, Concern # 3: In the introduction section, the authors discuss the merits of using deep learning techniques under data-driven models for RUL predictions and the demerits of Machine learning techniques (line 49). But no discussion is provided on the popular machine learning techniques such as SVM, Naive Bayes etc. Author response: Thank you for your questions. In this paper, we mainly discussed the data-driven approach based on deep learning and its advantages, and proposed some disadvantages of using the approach based on machine learning. 
In fact, the machine learning presented in the first draft is based on neural networks. Other commonly used machine learning methods, such as SVM, RVM, RF, etc., have some limitations. For this reason, we did not discuss it in the original text. Author action: We add the discussion about SVM, RVM and other machine learning methods, and modify the structure and expression of the article to make the expression more accurate: It has Data-driven methods have become the focus of research due to itstheir powerful modeling capabilities. Specifically, neural network Among them, machine learning, as a very common data-driven method, has been widely used in the field of RUL prediction. For example, Theodoros H proposed an E-support vector machine (SVM) method to predict the remaining life of a rolling bearing Error: Reference source not found. To solve the limitations of SVMs, Wang proposed an RUL prediction method based on a relevance vector machine (RVM) Error: Reference source not found. Selina proposed a naive Bayes-based RUL prediction model for lithium-ion batteries Error: Reference source not found. Wu used a random forest (RF) to predict tool wear Error: Reference source not found. However, machine learning methods require manual extraction or signal processing and statistical projection to obtain health factors. On the other hand, feature extraction is separate from parameter training. Recently, deep neural networks have been widely used in the field of RUL prediction due to itstheir powerful feature extraction capabilities and regression analysis capabilities Error: Reference source not foundError: Reference source not foundError: Reference source not foundError: Reference source not found[4] [5] [6] [7]. Moreover, some. Deep learning not only combines feature extraction with the parameter training process but can also automatically learn relevant features instead of manually designing them. This greatly compensates for the shortcomings of machine learning. At the same time, most of the signals collected by the associated sensors are time series. Some deep neural networks that can support sequence data inputas inputs are also widely used in RUL prediction. For example, recurrent neural networks (RNNs) RNN),, long short-term memory networks (LSTMs) Error: Reference source not found, and gated recurrent units (GRUs) Error: Reference source not foundLSTM), GRU. are common approaches. Although the data-driven life prediction methodmethods based on deep learning hashave achieved good results inon RUL prediction. However, for typical machine learning tasks, in such methods, the modelnetwork needs to be trained with a large number of labeled datasets in order to getobtain a sufficiently accurate model. ButHowever, for complex systems, it is often difficult to collect enoughsufficient data with run-to-failure information. MeanwhileFurthermore, the current machinemethods based on deep learning requiresrequire the training data and the test dataset to follow the similar distributiondistributions, which means that the dataset needneeds to come from the same feature space. However, in the actual application process, differences in data distribution are widespread, due to the changing environment in which equipment operates, differences in data distribution are widespread, which leads to thea decline of thein RUL prediction effectaccuracy in actual applications. 
In other words, the RUL prediction model obtained through the training dataset doesmay not have good generalization ability, and the performance on the test data setdataset may be poor. Reviewer#1, Concern # 4: The authors can also discuss the use of other RNN architectures such as BRNN or Bayesian RNN for RUL prediction Author response: Thank you for your suggestion, we have added the relevant discussion in the paper. In the literature review section 'Deep Learning and PHM', we mainly discuss RUL prediction techniques based on RNN. Firstly, we briefly introduce the traditional RNN structure and its application in RUL field. However, the traditional RNN has some defects in the processing of long sequence data. Therefore, in the second and third paragraphs, we respectively introduce RUL prediction methods based on LSTM and GRU. At the same time, because LSTM and GRU only use unidirectional information, it has been proved in related studies that bidirectional RNN structure is superior to unidirectional RNN structure in sequence data processing [1]. In this paper, we also discuss the research on BLSTM and BGRU. Compared with BLSTM, BGRU has fewer parameters when the amount of training data is the same as BLSTM. Therefore, we chose the BGRU method to carry out domain adaptation research. The authors did not find any studies on the prediction of RUL by traditional BRNN structures. More studies have been conducted based on LSTM, GRU, BLSTM or BGRU. Yu [1] proposed a RUL prediction framework based on BRNN structure, but in the actual use process, it is also based on BLSTM and BGRU structure to carry out practical application. As for the Bayesian RNN structure, we did not find relevant references. It has not been applied in the RUL research field. However, Chen [2] has proposed a prediction model based on Bayesian Recurrent Neural Network, which has been successfully applied to the prediction of time series data. Mirikitani [3] has also conducted relevant research on Bayesian Recurrent Neural Networks and proposed a time series model based on Bayesian Recurrent Neural Networks. Ma [4] proposed a link prediction method based on Bayesian Recurrent Neural Network. The BRNN-based RUL prediction will be studied in the following work and will not be discussed in this paper. Author action: In the literature review section of the paper, we have added some literatures about BLSTM and BGRU. As a variant of the RNN proposed earlier, LSTM has already performed well in RUL prediction. Shi Error: Reference source not found also showed similar results; real-time, high-precision RUL prediction was achieved by training a dual-LSTM network. Chen Error: Reference source not found tried to add the attention mechanism commonly used in the image field to an LSTM network and proposed an attention-based LSTM method, which also achieved good results. Ma Error: Reference source not found proposed integrating deep convolution into the LSTM network. This approach applies a convolution structure to output-to-state and state-to-state information and uses time and time-frequency information simultaneously. As another type of RNN variant, GRUs have also begun to be applied in RUL prediction. Compared with LSTM, a GRU has a simpler structure and fewer parameters, but the effect is comparable to that of LSTM. Deng Error: Reference source not found combined a GRU with a particle filter (PF) and proposed an MC-GRU-based fusion prediction method, which achieved good performance in a prognostic study of ball screws. 
Lu Error: Reference source not found proposed a GRU network based on an autoencoder. It uses an autoencoder to obtain features and a GRU network to extract sequence information. Compared with standard unidirectional LSTM and GRU, the bidirectional structure can extract better feature information Error: Reference source not found。Huang proposed to combine multi-sensor data with operation data to make RUL prediction based on bidirectional LSTM (BLSTM) Error: Reference source not found[11]. Huang Error: Reference source not found proposed a fusion prediction model based on BLSTM. It not only proves the advantages of LSTM in automatic feature acquisition and fusion, but also demonstrates the excellent performance of BLSTM in RUL prediction. Yu proposed a Bidirectional Recurring Neural Network based on autoencoder for C-MAPSS RUL estimation Error: Reference source not found. She attempted to use BGRU for RUL prediction and validated its effectiveness with Bearing data Error: Reference source not foundbidirectional LSTM in RUL prediction. Shi [12] also showed similar results. Real-time high-precision RUL prediction is achieved by training a dual-LSTM network. Chen [13] tried to add the attention mechanism commonly used in the image field to the LSTM network, and proposed an attention-based LSTM method, which also achieved good results. Ma [14] proposed to integrate deep-convolution into the LSTM network. It applies the convolution structure to output-to-state and state-to-state, and uses time and time-frequency information at the same time. . As another RNN variant, GRU has also begun to be applied in RUL prediction. Compared with LSTM, it has a simpler structure and fewer parameters, but the effect is comparable to that of LSTM. Deng [15] combined GRU with particle filter (PF) and proposed a MC-GRU-based fusion prediction method, which achieved good performance in prognostic study of ball screws. Lu [16] proposed a GRU network based on autoencoder. It uses an autoencoder to obtain features and a GRU network to extract sequence information. Reviewer#1, Concern # 5: The authors highlight four categories of transfer learning- Instance-based Transfer Learning, Feature-based Transfer Learning, Model-based Transfer Learning and Relation based Transfer Learning. However, which category they have adopted in their paper is not mentioned. Author response: Thank you for your question. The method we use is the transfer based on domain adaptation. In essence, it is still a feature-based transfer method. Features with good domain invariance are extracted through deep learning and adversarial learning. In this way, models can be transferred in different domains. Author action: In line 84 of the revised article, we added the relevant explanation: This can be viewed as a process of unsupervised learning based on feature transfer. Reviewer#1, Concern # 6: In Literature Review section, techniques with references are only cited but detailed explanation of only a few research work is given. Authors should add more detailed literature. Author response: Thank you for your advice, which is very useful for improving the completeness of the article. We did not give much consideration to this part in our first draft because many of the articles in it are foundational articles or related articles on transfer learning. This may be easy to understand for those doing the research. But we don't take into account that the reader may not be a researcher in the field. Therefore, we have revised this part. 
Author action: Some references in the literature review section are supplemented: In line 149 of the revised article: Pan proposed transfer component analysis (TCA) Error: Reference source not foundTransfer Component Analysis (TCA) [19], Joint Distribution Adaptation , which is the most representative TL method. Long tried to combine marginal distributions and conditional distributions and proposed joint distribution adaptation (JDA) Error: Reference source not found[20], Balanced Distribution Adaptation. Wang believed that marginal distributions and conditional distributions should have different weights. As a result, he proposed balanced distribution adaptation (BDA) Error: Reference source not found[21], etc. It. This technique minimizes the distance between the source domain and the target domain through feature mapping. So as to use so that the data distribution distributions of the two domains tocan be as similar as possible. There are also some other non-deepnondeep learning methods. For example, Structural Correspondence LearningTan proposed structural correspondence learning (SCL) Error: Reference source not found[22] based on feature selection. CORrelation ALignmentSun and Gong proposed correlation alignment (CORAL) Error: Reference source not found[23] and Geodesic Flow Kernelthe geodesic flow kernel (GFK) [24]method Error: Reference source not found based on subspace learning, etc.. In line 165 of the revised article: For example, Deep Domain ConfusionTzeng proposed deep domain confusion (DDC) Error: Reference source not found, Ghifary proposed a domain adaptive neural network Error: Reference source not found[26], Domain Adaptive Neural Network (DaNN) [27], Joint Adaptation Network, Long proposed a joint adaptation network (JAN) Error: Reference source not found[28],, etc. Recently, as the latest research result in the field of artificial intelligence, Generative Adversarial Networks (GANgenerative adversarial networks (GANs) have also begun to be used in transfer learning, such as Domain-Adversarial Neural Network (. Ganin first proposed the DANN Error: Reference source not found. Yu extended a dynamic distribution to an adversarial network and proposed dynamic adversarial adaptation networks (DAANs) Error: Reference source not found) [29], Dynamic Adversarial Adaptation Networks (DAAN) [30], and so on.. Experimental design: Reviewer#1, Concern # 1: An extensive diagram depicting the entire system methodology need to be added. Author response: We really appreciate your kind suggestions, which are very meaningful for improving the readability of the article. Author action: We have added the relevant flow chart and made the explanation: BGRU-DANN structure The structure of BGRU-DANN is shown in Fig. 6. Its basic composition can be divided into two parts. One part uses the training data from the source domain to minimize the loss of source domain regression. The other part uses the sensor data of the source domain and the target domain to maximize the error of domain classification. The BGRU and FC layers are shared by both parts. To facilitate the parameter setting process, we set the learning rates of the two sections to the same value. At the same time, we use dropout and BN layers for feature acquisition, domain classification and source domain regression. In the source domain regression task, the purpose of the training process is to minimize the regression loss function. 
In the domain classification task, a GRL is placed between the feature extraction and domain classification layers. During the process of back propagation, the GRL inverts the corresponding gradient to realize the optimization process of the model. When the output of the system does not improve significantly, the training process is stopped. For the corresponding FC layer, we use the ReLU activation function. Figure 6. BGRU-DANN structure Reviewer#1, Concern # 2: Explanation for using sliding window is not provided Author response: Thank you for your suggestion. In the original draft, we have a part of the description of the sliding time window, which is located at the data processing section. But only a small amount of content, may not be obvious. Author action: In this paper, we have added a section on the description of sliding window: Sliding time window processing After the extracted features are normalized, thea sliding time window (TW) is used to generate the time series input .. The size of the input time window is . The process of its generation is shown in the Fig.1 2: Figure 2. Sliding TW processing technique Reviewer#1, Concern # 3: Author states that feature vector is formed using extracted feature. Detail explanation is need on the same like the vector size formed etc. Author response: Thank you for your comments. The content about feature acquisition is less mentioned in the article. Your comments are very meaningful. Author action: In this paper, the network structure table of the proposed method is added, and the extracted information is supplemented: InTo realize the RUL prediction of RUL, we need to establish the BGRU-DANN structure and set the corresponding hyperparameters. For different transfer tasks, the optimal parameters of the model may vary. The model in this paper has no specific optimization process, the for parameter setting during use, and the parameters used are the same for different transfer tasks. The input size of the BGRU network is set to 64. The size of theeach hidden layer is set to 256. The number of network layers is set to 3. The DANN classifier is set to a 3-layer fully connected layerFC structure, and the domain classifier is a 3-layer fully connected layer. The rate ofFC structure. The network learning rate is set to 0.01. The number of training iterations is set to 5000. Using theseSome of the remaining hyperparameter settings are provided in Table 5. Table 5. Hyperparameter settings BGRU Layers, (Units), [Dropout] F (Units) Source Regression Layers, (Units), [Dropout] Domain Classification Layers, (Units), [Dropout] 3, (64, 256), [0.5] (256) 3, (256, 128, 32), [0.5] 3, (256, 128, 32), [0.5] 0.5 0.01 Reviewer#1, Concern # 4: Also explanation regarding the input of features to the model is missing. Author response: Thank you for your comments. The features of the model input are described in Feature extraction. We obtained 32 different features in the X direction and Y direction through feature extraction, respectively in the time domain, energy feature and trigonometric function feature. Author action: In view of the problems raised, we add the relevant feature map and explain it: Through feature extraction, we can obtain 64 features from the feature dataset, which can better represent the degradation process of the system. Because of space constraints, we only show features along the X-axis of the bearing 1-1 data in Fig. 1. Figure 1: Features Reviewer#1, Concern # 5: The figure quality of figure 2,5 and 6 needs to be improved. 
The text in some of the figures are not visible. Author response: Thank you for your comments. The quality of the related pictures is really problematic. Author action: We have modified the relevant pictures, as shown in the figure: Figure 3- GRU memory cell Figure 7. Prediction results for dataset 2: (A)Prediction results of Bearing 2-1 using BGRU-DANN; (B)Prediction results of Bearing 2-1 using the comparison method; (C)Prediction results of Bearing 2-4 using BGRU-DANN; (D)Prediction results of Bearing 2-4 using the comparison method; (E)Prediction results of Bearing 2-6 using BGRU-DANN; (F)Prediction results of Bearing 2-6 using the comparison method. Figure 8. Prediction results for dataset 3: (A)Prediction results of Bearing 3-1 using BGRU-DANN; (B)Prediction results of Bearing 3-1 using the comparison method; (C)Prediction results of Bearing 3-2 using BGRU-DANN; (D)Prediction results of Bearing 3-2 using the comparison method; (E)Prediction results of Bearing 3-3 using BGRU-DANN; (F)Prediction results of Bearing 3-3 using the comparison method. Validity of the findings: Reviewer#1, Concern # 1: Result section explanation need to be improved. Result needs to be explained in terms of accuracy or error. Author response: Thank you for your comments on our manuscript. We apologize for some ambiguousness in our original text. The description of the results is rewritten to give a more detailed description. Author action: In this paper, we add a RUL error graph, which shows the RUL errors generated by different methods. We rewrote the results section, discussed the obtained values in more details, and added an analysis of RUL errors: InTo realize the RUL prediction of RUL, we need to establish the BGRU-DANN structure and set the corresponding hyperparameters. For different transfer tasks, the optimal parameters of the model may vary. The model in this paper has no specific optimization process, the for parameter setting during use, and the parameters used are the same for different transfer tasks. The input size of the BGRU network is set to 64. The size of theeach hidden layer is set to 256. The number of network layers is set to 3. The DANN classifier is set to a 3-layer fully connected layerFC structure, and the domain classifier is a 3-layer fully connected layer. The rate ofFC structure. The network learning rate is set to 0.01. The number of training iterations is set to 5000. Using theseSome of the remaining hyperparameter settings are provided in Table 5. After setting the relevant parameters, we can predict the RUL. First, we use the BGRU structure to extract the features of the input sequence data. Then, the DANN network is used to implement adversarial training to extract features with domain invariance. The experimental results are shown in Fig. 7 and Fig. 8. Fig. 7 reflects the predicted results of bearing 2-1, bearing 2-4 and bearing 2-6. The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 2-1, bearing 2-4, and bearing 2-6. Fig. 8 reflects the prediction results of baering2for bearing 3-1, baering2-4, baering2-6, baering3-1, baering3bearing 3-2, and baering3-3bearing 3-3. The source domain data are shown in the Fig.5bearing 1-3-bearing 1-7, and Fig.6the target data are bearing 3-1, bearing 3-2, and bearing 3-3. From the figure(A), (C), and (E) in Fig. 7 and (A), (C), and (E) in Fig. 
8, we can conclude that the predicted RUL has results exhibit a good performance of the downward trend performance and isare very close to the real RUL value, whichvalues; this effectively illustrates the effectiveness of the proposed data-driven prediction framework based on transfer learningTL. Figure 7. Prediction results for dataset 2: (A)Prediction results of Bearing 2-1 using BGRU-DANN; (B)Prediction results of Bearing 2-1 using the comparison method; (C)Prediction results of Bearing 2-4 using BGRU-DANN; (D)Prediction results of Bearing 2-4 using the comparison method; (E)Prediction results of Bearing 2-6 using BGRU-DANN; (F)Prediction results of Bearing 2-6 using the comparison method. Figure 8. Prediction results for dataset 3: (A)Prediction results of Bearing 3-1 using BGRU-DANN; (B)Prediction results of Bearing 3-1 using the comparison method; (C)Prediction results of Bearing 3-2 using BGRU-DANN; (D)Prediction results of Bearing 3-2 using the comparison method; (E)Prediction results of Bearing 3-3 using BGRU-DANN; (F)Prediction results of Bearing 3-3 using the comparison method. Comparison of experimental results In order toTo demonstrate the advantages of data-driven prediction methods based on domain adaptativeadaptation, three methods wereare used for comparison purposes, namely, a BGRU without transfer learning, TCA-NN, and FC-DANN. Compared with BGRU without transfer, the proposed method has a We can see in Fig. 7 and Fig. 8 that the RUL prediction results of BGRU-DANN are significantly better effect. As a representative traditional non-deep transfer learning method, TCA uses MMD to describe than those of the difference between the source domain and other three methods, and the declining trend can best reflect the target domain.real RUL value. However, the comparison with TCAother three methods cannot reflect the degradation trend of the RUL effectively. Fig. 9 shows the RUL errors of BGRU-DANN, the BGRU, TCA-NN proves that the BGRU-DANN deep domain adaptation method has better performance. Usingand FC for feature extraction, this paper constructs an FC-DANN network. The comparison of the results fully shows that BGRU has a better effect in feature extraction.-DANN. It can be drawn from the Table V that BGRU-DANN has achievedclearly seen from Fig. 9 that the RUL error generated by the BGRU-DANN model is the smallest, especially for bearings 2-4, 3-1 and 3-3. At the same time, bearing 2-1 and bearing 2-6 in Fig. 9 clearly reflect that the RUL error generated by BGRU-DANN is smaller than that of the other three methods in most cases. Bearing 3-2 in Fig. 9 may not clearly indicate the superiority of BGRU-DANN due to the large amount of data involved. However, through the comparison of the three evaluation indicators in Table 6, it can still be seen that BGRU-DANN achieves the best effect. Table 6 shows that BGRU-DANN achieves the best results in terms of the three evaluations of, the MAE, MSE, and R2_score, which further proves the effectiveness of the method proposed in this paper in the RUL prediction method. Regarding the MSE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.0283, 0.0193, 0.0217, 0.0298, 0.0503, and 0.0472, respectively, which are far less than the calculated error results of the other three methods. 
For the MAE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.1157, 0.0928, 0.0875, 0.1215, 0.1569, and 0.1238, respectively, which are still better than the calculated error results of the other three models. In terms of the R2-score calculation results, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.6576, 0.7664, 0.7367, 0.6379, 0.3935, and 0.4252, respectively; this indicates that the model has certain explanatory ability regarding the relationship between the independent variable and the dependent variable in the regression analysis and is superior to the three compared methods. Figure 9. RUL errors Reviewer#1, Concern # 2: Similarly the conclusion section need to be explained point wise based on model performance and comparison with other models used. Author response: Thank you for your question. We apologize for not explaining conclusion detail in the article. In order to solve this problem, we modify the relevant conclusion part and give a specific conclusion. Author action: We have modified the relevant conclusion section and given a specific description: In this article, a domain adaptation-adaptative prediction method based on deep learning ofwith a BGRU and a DANN is proposed. The validity of the proposed method is demonstrated by an experiment on the 2012 IEEE PHM dataset. The objective of this study is to propose a domain-adaptive RUL prediction method. When the input bearing is transfertransferred from the source domain with label information to thea target domain with only sensor information only, a more accurate estimate of the RUL can be given. By comparing with BGRU without migration,obtained. From the validityresults of the domain adaptation model is illustrated. Compared with TCA-NN and FC-DANN, our proposed deep domain adaptation method has better effect.experiment, we can draw the following conclusions: (1) Compared with the BGRU without TL, the proposed method has a better effect in terms of RUL prediction. This indicates that the model obtained by adversarial training has better generalization ability and can adapt to data with different distributions. (2) The comparison with TCA-NN proves that the deep, domain-adaptive BGRU-DANN method has better performance. This indicates that the transfer method based on deep learning has a stronger feature extraction ability than the traditional nondeep transfer method, and it can extract better features with domain invariance. (3) Using FC layers for feature extraction, this paper constructs an FC-DANN network. A comparison of the results fully shows that the BGRU has a better effect in terms of feature extraction. Compared with the features extracted by the FC method, the features extracted by the BGRU for sequence data processing are more representative. (4) By means of domain adaptation, the generalization ability of the data-driven RUL prediction model can be effectively improved, and it can adapt to RUL prediction tasks under different working conditions to a certain extent. In future work, we will take a closer look at the problem of time series transfer. Because, the remainingRemaining life prediction of problems with respect to bearings, aero engines, etc. can actually be regarded as a time series transfer problemproblems. However, the research on time series transfer is still in its infancy. There are onlymerely a few studies on such issues. 
Only Yu proposed two different time series transfer methods in references Error: Reference source not found, and Error: Reference source not found., one based on an extreme learning machine and the other based on a CNN. However, most of the data monitored by sensors are time series data, whichand this is a very common data type in RUL forecasting research. Therefore, the author intendsauthors intend to conduct related research in the future, hoping to obtain a better model and research results with more practical application value. References: [1] W. Yu, I. I. Y. Kim, and C. Mechefske, “Remaining useful life estimation using a bidirectional recurrent neural network based autoencoder scheme,” Mechanical Systems and Signal Processing, vol. 129, pp. 764-780, 2019. [2] C. Tang, J. Chen, M. Tomizuka, and Ieee, 'Adaptive Probabilistic Vehicle Trajectory Prediction Through Physically Feasible Bayesian Recurrent Neural Network,' 2019 International Conference on Robotics and Automation, IEEE International Conference on Robotics and Automation ICRA A. Howard, K. Althoefer, F. Arai, F. Arrichiello, B. Caputo, J. Castellanos, K. Hauser, V. Isler, J. Kim, H. Liu, P. Oh, V. Santos, D. Scaramuzza, A. Ude, R. Voyles, K. Yamane and A. Okamura, eds., pp. 3846-3852, 2019. [3] D. T. Mirikitani, and N. Nikolaev, “Recursive Bayesian Recurrent Neural Networks for Time-Series Modeling,” Ieee Transactions on Neural Networks, vol. 21, no. 2, pp. 262-274, Feb, 2010. [4] Y. Ma, and J. Shu, “Opportunistic Networks Link Prediction Method Based on Bayesian Recurrent Neural Network,” IEEE Access, vol. 7, pp. 185786-185795, 2019. Reviewer#2, Concern # 1: Explanation of the RUL expression should be given in the Abstract section, which is the first section it is used. “remaining useful life (RUL)” Author response: Thank you for your comments and I'm sorry that we made such a silly mistake. Author action: Relevant explanations are added in the abstract part of the paper: As an important part of prognostics and health management, remaining useful life (RUL) prediction can provide users and managers with system life information and improve the reliability of maintenance systems. Reviewer#2, Concern # 2: The Abstract section should be better worded. 'In response to existing problems, this paper proposes a data-driven framework with domain adaptability using bidirectional gated recurrent unit (BGRU).' In this panel 'existing problems' should be specified. Author response: Thank you for pointing out the shortcoming. What we have said in the Abstract may give you the wrong idea. The existing problems are described in the abstract. For example: ’Data-driven methods are powerful tools for RUL prediction because of their great modeling abilities. However, most current data-driven studies require large amounts of labeled training data and assume that the training data and test data follow similar distributions. In fact, the collected data are often variable due to different equipment operating conditions, fault modes, and noise distributions. As a result, the assumption that the training data and the test data obey the same distribution may not be valid’ Author action: In order to avoid readers' misunderstanding, we will modify the ‘existing problem’ to the ‘above problem’. Data-driven methods are powerful tools for RUL prediction Becausebecause of it istheir great modeling skillsabilities. 
However, most current data-driven research requires astudies require large amountamounts of labeled training data for training, and assumesassume that the training data and test data obey afollow similar distribution.distributions. In fact, the data collected isdata are often variable due to different equipment operating conditions, fault modes, and noise distributiondistributions. As a result, the assumption that the training data and the test data obey the same distribution may not be valid. In response to existingthe above problems, Reviewer#2, Concern # 3: The resolution of the figures can be increased. Author response: Thank you for your comments. We have modified the pictures in the paper. The detailed pictures are given in Question 4 Reviewer#2, Concern # 4: Labels are not read in Figures 5 and 6. Typefaces can be enlarged. Author response: Thank you for your advice. We have modified the pictures in the article. Author action: We have modified the relevant pictures, as shown in the figure: Figure 3. GRU memory cell Figure 7. Prediction results for dataset 2: (A)Prediction results of Bearing 2-1 using BGRU-DANN; (B)Prediction results of Bearing 2-1 using the comparison method; (C)Prediction results of Bearing 2-4 using BGRU-DANN; (D)Prediction results of Bearing 2-4 using the comparison method; (E)Prediction results of Bearing 2-6 using BGRU-DANN; (F)Prediction results of Bearing 2-6 using the comparison method. Figure 8. Prediction results for dataset 3: (A)Prediction results of Bearing 3-1 using BGRU-DANN; (B)Prediction results of Bearing 3-1 using the comparison method; (C)Prediction results of Bearing 3-2 using BGRU-DANN; (D)Prediction results of Bearing 3-2 using the comparison method; (E)Prediction results of Bearing 3-3 using BGRU-DANN; (F)Prediction results of Bearing 3-3 using the comparison method. Reviewer#2, Concern # 5: -In the Results section; The results obtained should be compared with the literature and the obtained values should be discussed in more detail. Author response: Author response: Thank you for your comments on our manuscript. We apologize for some ambiguousness in our original text. The description of the results is rewritten to give a more detailed description. Author action: In this paper, we add a RUL error graph, which shows the RUL errors generated by different methods. We rewrote the results section, discussed the obtained values in more details, and added an analysis of RUL errors: InTo realize the RUL prediction of RUL, we need to establish the BGRU-DANN structure and set the corresponding hyperparameters. For different transfer tasks, the optimal parameters of the model may vary. The model in this paper has no specific optimization process, the for parameter setting during use, and the parameters used are the same for different transfer tasks. The input size of the BGRU network is set to 64. The size of theeach hidden layer is set to 256. The number of network layers is set to 3. The DANN classifier is set to a 3-layer fully connected layerFC structure, and the domain classifier is a 3-layer fully connected layer. The rate ofFC structure. The network learning rate is set to 0.01. The number of training iterations is set to 5000. Using theseSome of the remaining hyperparameter settings are provided in Table 5. After setting the relevant parameters, we can predict the RUL. First, we use the BGRU structure to extract the features of the input sequence data. 
Then, the DANN network is used to implement adversarial training to extract features with domain invariance. The experimental results are shown in Fig. 7 and Fig. 8. Fig. 7 reflects the predicted results of bearing 2-1, bearing 2-4 and bearing 2-6. The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 2-1, bearing 2-4, and bearing 2-6. Fig. 8 reflects the prediction results of baering2for bearing 3-1, baering2-4, baering2-6, baering3-1, baering3bearing 3-2, and baering3-3bearing 3-3. The source domain data are shown in the Fig.5bearing 1-3-bearing 1-7, and Fig.6the target data are bearing 3-1, bearing 3-2, and bearing 3-3. From the figure(A), (C), and (E) in Fig. 7 and (A), (C), and (E) in Fig. 8, we can conclude that the predicted RUL has results exhibit a good performance of the downward trend performance and isare very close to the real RUL value, whichvalues; this effectively illustrates the effectiveness of the proposed data-driven prediction framework based on transfer learningTL. Figure 7. Prediction results for dataset 2: (A)Prediction results of Bearing 2-1 using BGRU-DANN; (B)Prediction results of Bearing 2-1 using the comparison method; (C)Prediction results of Bearing 2-4 using BGRU-DANN; (D)Prediction results of Bearing 2-4 using the comparison method; (E)Prediction results of Bearing 2-6 using BGRU-DANN; (F)Prediction results of Bearing 2-6 using the comparison method. Figure 8. Prediction results for dataset 3: (A)Prediction results of Bearing 3-1 using BGRU-DANN; (B)Prediction results of Bearing 3-1 using the comparison method; (C)Prediction results of Bearing 3-2 using BGRU-DANN; (D)Prediction results of Bearing 3-2 using the comparison method; (E)Prediction results of Bearing 3-3 using BGRU-DANN; (F)Prediction results of Bearing 3-3 using the comparison method. Comparison of experimental results In order toTo demonstrate the advantages of data-driven prediction methods based on domain adaptativeadaptation, three methods wereare used for comparison purposes, namely, a BGRU without transfer learning, TCA-NN, and FC-DANN. Compared with BGRU without transfer, the proposed method has a We can see in Fig. 7 and Fig. 8 that the RUL prediction results of BGRU-DANN are significantly better effect. As a representative traditional non-deep transfer learning method, TCA uses MMD to describe than those of the difference between the source domain and other three methods, and the declining trend can best reflect the target domain.real RUL value. However, the comparison with TCAother three methods cannot reflect the degradation trend of the RUL effectively. Fig. 9 shows the RUL errors of BGRU-DANN, the BGRU, TCA-NN proves that the BGRU-DANN deep domain adaptation method has better performance. Usingand FC for feature extraction, this paper constructs an FC-DANN network. The comparison of the results fully shows that BGRU has a better effect in feature extraction.-DANN. It can be drawn from the Table V that BGRU-DANN has achievedclearly seen from Fig. 9 that the RUL error generated by the BGRU-DANN model is the smallest, especially for bearings 2-4, 3-1 and 3-3. At the same time, bearing 2-1 and bearing 2-6 in Fig. 9 clearly reflect that the RUL error generated by BGRU-DANN is smaller than that of the other three methods in most cases. Bearing 3-2 in Fig. 9 may not clearly indicate the superiority of BGRU-DANN due to the large amount of data involved. 
However, through the comparison of the three evaluation indicators in Table 6, it can still be seen that BGRU-DANN achieves the best effect. Table 6 shows that BGRU-DANN achieves the best results in terms of the three evaluations of, the MAE, MSE, and R2_score, which further proves the effectiveness of the method proposed in this paper in the RUL prediction method. Regarding the MSE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.0283, 0.0193, 0.0217, 0.0298, 0.0503, and 0.0472, respectively, which are far less than the calculated error results of the other three methods. For the MAE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.1157, 0.0928, 0.0875, 0.1215, 0.1569, and 0.1238, respectively, which are still better than the calculated error results of the other three models. In terms of the R2-score calculation results, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.6576, 0.7664, 0.7367, 0.6379, 0.3935, and 0.4252, respectively; this indicates that the model has certain explanatory ability regarding the relationship between the independent variable and the dependent variable in the regression analysis and is superior to the three compared methods. Figure 9. RUL errors Reviewer#2, Concern # 6: Conclusion section '[41], [42]' it refers instead should be made in a few sentences cneri. The results obtained in the conclusion section should be given as items. If there are numerical results, they should be given here briefly. Author response: Thank you for your question. We apologize for not explaining conclusion detail in the article. In order to solve this problem, we modify the relevant conclusion part and give a specific conclusion. Author action: Supplementary explanations are made in the relevant literature: Only Yu proposed two different time series transfer methods in references Error: Reference source not found, and Error: Reference source not found., one based on an extreme learning machine and the other based on a CNN. We have modified the relevant conclusion section and given a specific description: In this article, a domain adaptation-adaptative prediction method based on deep learning ofwith a BGRU and a DANN is proposed. The validity of the proposed method is demonstrated by an experiment on the 2012 IEEE PHM dataset. The objective of this study is to propose a domain-adaptive RUL prediction method. When the input bearing is transfertransferred from the source domain with label information to thea target domain with only sensor information only, a more accurate estimate of the RUL can be given. By comparing with BGRU without migration,obtained. From the validityresults of the domain adaptation model is illustrated. Compared with TCA-NN and FC-DANN, our proposed deep domain adaptation method has better effect.experiment, we can draw the following conclusions: (1) Compared with the BGRU without TL, the proposed method has a better effect in terms of RUL prediction. This indicates that the model obtained by adversarial training has better generalization ability and can adapt to data with different distributions. (2) The comparison with TCA-NN proves that the deep, domain-adaptive BGRU-DANN method has better performance. 
This indicates that the transfer method based on deep learning has a stronger feature extraction ability than the traditional nondeep transfer method, and it can extract better features with domain invariance. (3) Using FC layers for feature extraction, this paper constructs an FC-DANN network. A comparison of the results fully shows that the BGRU has a better effect in terms of feature extraction. Compared with the features extracted by the FC method, the features extracted by the BGRU for sequence data processing are more representative. (4) By means of domain adaptation, the generalization ability of the data-driven RUL prediction model can be effectively improved, and it can adapt to RUL prediction tasks under different working conditions to a certain extent. In future work, we will take a closer look at the problem of time series transfer. Because, the remainingRemaining life prediction of problems with respect to bearings, aero engines, etc. can actually be regarded as a time series transfer problemproblems. However, the research on time series transfer is still in its infancy. There are onlymerely a few studies on such issues. Only Yu proposed two different time series transfer methods in references Error: Reference source not found, and Error: Reference source not found., one based on an extreme learning machine and the other based on a CNN. However, most of the data monitored by sensors are time series data, whichand this is a very common data type in RUL forecasting research. Therefore, the author intendsauthors intend to conduct related research in the future, hoping to obtain a better model and research results with more practical application value. "
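To complement the BGRU-DANN structure and hyperparameters quoted above, the following sketch outlines one adversarial training step combining source-domain regression with domain classification. It is an editorial illustration, not the authors' code; it assumes the PyTorch BGRUDANN module sketched earlier in this document, train_step, opt, src_x, src_rul and tgt_x are illustrative names, opt is any torch optimizer, and the relative weighting of the two losses (the α and λ values in Table 5) is omitted.

import torch
import torch.nn as nn

def train_step(model, opt, src_x, src_rul, tgt_x):
    # One update: regression loss on labeled source windows plus domain-classification
    # loss on source (label 1) and target (label 0) windows; the gradient reversal layer
    # inside the model makes the domain loss adversarial for the shared BGRU features.
    mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
    rul_s, dom_s = model(src_x)
    _, dom_t = model(tgt_x)
    loss_y = mse(rul_s.squeeze(-1), src_rul)
    loss_d = bce(dom_s, torch.ones_like(dom_s)) + bce(dom_t, torch.zeros_like(dom_t))
    loss = loss_y + loss_d
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_y.item(), loss_d.item()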
Here is a paper. Please give your review comments after reading it.
221
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>As an important part of prognostics and health management, remaining useful life (RUL) prediction can provide users and managers with system life information and improve the reliability of maintenance systems. Data-driven methods are powerful tools for RUL prediction because of their great modeling abilities. However, most current data-driven studies require large amounts of labeled training data and assume that the training data and test data follow similar distributions. In fact, the collected data are often variable due to different equipment operating conditions, fault modes, and noise distributions. As a result, the assumption that the training data and the test data obey the same distribution may not be valid . In response to the above problems, this paper proposes a data-driven framework with domain adaptability using a bidirectional gated recurrent unit (BGRU). The framework uses a domain-adversarial neural network (DANN) to implement transfer learning (TL) from the source domain to the target domain, which contains only sensor information. To verify the effectiveness of the proposed method, we analyze the IEEE PHM 2012 Challenge datasets and use them for verification. The experimental results show that the generalization ability of the model is effectively improved through the domain adaptation approach.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Prognostics aims to provide reliable remaining useful life (RUL) predictions for critical components and systems via a degradation process. Based on reliable forecast results, managers can determine the best periods for equipment maintenance and formulate corresponding management plans; this is expected to improve reliability during operation and reduce risks and costs. Typically, prognostic methods are classified into model-based methods and data-driven methods <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Model-based methods describe the degradation process of engineering systems by establishing mathematical models based on the failure mechanism or the first principle of damage <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. However, the physical parameters of the model should vary with different operating because it does not consider the temporal dependency problem <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. As a result, the existing RUL methods based on transfer learning (TL) can hardly adapt to common RUL prediction problems. In this article, we propose the use of bidirectional GRUs (BGRUs) to solve the problem of sequential data processing. We use labeled source domain data and unlabeled target domain data for training. This can be viewed as a process of unsupervised learning based on feature transfer. At the same time, we use a domain-adversarial neural network (DANN) to learn features with domain invariance. To verify the method proposed in this article, we use the IEEE PHM 2012 Challenge datasets for verification. The experimental results prove the effectiveness of the method proposed in this article. 
The main contributions of our work are as follows:</ns0:p><ns0:p>(1) We propose a new RUL prediction structure that can better adapt to data distribution shifts under different working environments and fault modes.</ns0:p><ns0:p>(2) The framework not only uses a single sensor but also integrates information from multiple sensors.</ns0:p><ns0:p>(3) Compared with the nonadaptive method and the traditional nondeep adaptive method, our proposed structure obtains better prediction results. The rest of this article is organized as follows: Section 2 briefly introduces the theoretical background of TL and deep learning. Then, the experimental procedure is introduced in Section 3. In Section 4, the BGRU, DANN, domain-adaptative BGRU and BGRU-DANN structures proposed in this article are introduced. On this basis, RUL prediction for a bearing dataset is studied. The comparative results and conclusions are given in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review Deep learning and PHM</ns0:head><ns0:p>Within the framework of deep learning, RNN is a very representative structure. It can not only process sequence data but also extract features well. Furthermore, RNNs have been used in the field of RUL prediction <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>. However, such networks cannot deal with the weight explosion and gradient disappearance problems caused by recursion. This limits their application in long-term sequence processing. To solve this problem, many RNN variants have begun to appear, for example, LSTM and GRUs. These networks can process series with long-term correlations and extract features from them. As a variant of the RNN proposed earlier, LSTM has already performed well in RUL prediction. Shi <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> also showed similar results; real-time, high-precision RUL prediction was achieved by training a dual-LSTM network. Chen <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> tried to add the attention mechanism commonly used in the image field to an LSTM network and proposed an attention-based LSTM method, which also achieved good results. Ma <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> proposed integrating deep convolution into the LSTM network. This approach applies a convolution structure to output-to-state and state-to-state information and uses time and time-frequency information simultaneously. As another type of RNN variant, GRUs have also begun to be applied in RUL prediction.</ns0:p><ns0:p>Compared with LSTM, a GRU has a simpler structure and fewer parameters, but the effect is comparable to that of LSTM. Deng <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> combined a GRU with a particle filter (PF) and proposed an MC-GRU-based fusion prediction method, which achieved good performance in a prognostic study of ball screws. Lu <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> proposed a GRU network based on an autoencoder. It uses an autoencoder to obtain features and a GRU network to extract sequence information. Compared with standard unidirectional LSTM and GRU, the bidirectional structure can extract better feature information <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>&#12290;Huang proposed to combine multi-sensor data with operation data to make RUL prediction based on bidirectional LSTM (BLSTM) <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>. Huang <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> proposed a fusion prediction model based on BLSTM. 
It not only proves the advantages of LSTM in automatic feature acquisition and fusion, but also demonstrates the excellent performance of BLSTM in RUL prediction. Yu proposed a Bidirectional Recurring Neural Network based on autoencoder for C-MAPSS RUL estimation <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. She attempted to use BGRU for RUL prediction and validated its effectiveness with Bearing data <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. There are other deep networks, such as CNN, that are also widely used in the PHM space <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning</ns0:head><ns0:p>In most classification or regression tasks, it is assumed that sufficient training data with label information can be obtained. At the same time, it is assumed that the training data and the test data come from the same distribution and feature space. However, in real life, data offset is common. The training data and test data may come from different marginal distributions. As a way to find the similarity between the source domain and the target domain, TL has achieved good results in domain adaptation. The basic TL methods can be divided into the following categories:</ns0:p><ns0:p>(1) Instance-based TL (2) Feature-based TL (3) Model-based TL (4) Relation-based TL Detailed information about these methods can be found in the literature <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>. In this article, they are divided into two categories according to their development process. One contains nondeep learning methods, and the other is based on deep learning methods. The most representative nondeep learning approaches are a series of methods based on maximum mean discrepancy (MMD). For example, Pan proposed transfer component analysis (TCA) <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>, which is the most representative TL method. Long tried to combine marginal distributions and conditional distributions and proposed joint distribution adaptation (JDA) <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>. Wang believed that marginal distributions and conditional distributions should have different weights. As a result, he proposed balanced distribution adaptation (BDA) <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. This technique minimizes the distance between the source domain and the target domain through feature mapping so that the data distributions of the two domains can be as similar as possible. There are also some other nondeep learning methods. For example, Tan proposed structural correspondence learning (SCL) <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref> based on feature selection. Sun and Gong proposed correlation alignment (CORAL) <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref> and the geodesic flow kernel (GFK) method <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref> based on subspace learning. With the continuous development of deep learning methods, an increasing number of people are beginning to use deep neural networks for TL. Compared with traditional nondeep TL methods, deep TL has achieved the best results at this stage. The simplest method for conducting deep TL to finetune the deep network, which realizes transfer by finetuning the trained network <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>. At the same time, by adding an adaptive layer to deep learning, deep network adaptation has also begun to appear consistently. 
For example, Tzeng proposed deep domain confusion (DDC) <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref>, Ghifary proposed a domain adaptive neural network <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref>, Long proposed a joint adaptation network (JAN) <ns0:ref type='bibr' target='#b39'>[39]</ns0:ref>, etc. Recently, as the latest research result in the field of artificial intelligence, generative adversarial networks (GANs) have also begun to be used in transfer learning. Ganin first proposed the DANN <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref>. Yu extended a dynamic distribution to an adversarial network and proposed dynamic adversarial adaptation networks (DAANs) <ns0:ref type='bibr' target='#b41'>[41]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning and PHM</ns0:head><ns0:p>As a way of thinking and a mode of learning, transfer learning has a core problem: finding the similarity between the new problem and the original problem. TL mainly solves the following four contradictions <ns0:ref type='bibr' target='#b41'>[41]</ns0:ref>:</ns0:p><ns0:p>(1) The contradiction between big data and less labeling.</ns0:p><ns0:p>(2) The contradiction between big data and weak computing.</ns0:p><ns0:p>(3) The contradiction between a universal model and personalized demand.</ns0:p><ns0:p>(4) The needs of specific applications. The above four contradictions also exist in PHM. For example, with the development of advanced sensor technology, an increasing amount of data have been collected. However, the amount available data with run-to-failure label information is still small. Second, because the operating state of equipment is affected by many different conditions, the data collected are often not representative due to the differences between various operating conditions and environments. Thus, it is difficult to construct a predictive model with strong universality. Finally, for a PHM system, because of the complexity of the object's use environment, we also need an RUL prediction model with specific applications. However, because there are no data with sufficient label information, it is impossible to use a data-driven approach to build an accurate predictive model. As an effective means, TL can help solve the existing problems of PHM. However, in the field of PHM, TL is mainly used in classification tasks <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. Shao proposed a convolutional neural network (CNN) based on TL <ns0:ref type='bibr' target='#b42'>[42]</ns0:ref>, which is used to diagnose bearing faults under different working conditions. Xing proposed a distribution-invariant deep belief network (DIDBN) <ns0:ref type='bibr' target='#b43'>[43]</ns0:ref>, which can adapt well to new working conditions. Feng pointed out that it is necessary to conduct fault diagnosis research with zero samples <ns0:ref type='bibr' target='#b44'>[44]</ns0:ref>. He introduced the idea of zero-shot learning into industrial fields and proposed a zero-sample fault diagnosis method based on the attribute transfer method. RUL prediction studies based on TL are still relatively few in number, as far as the author knows <ns0:ref type='bibr'>[45][46]</ns0:ref>[47] <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials Experimental analysis</ns0:head><ns0:p>In this section, we first describe the experimental data and platform in detail. Then, we analyze the data processing and feature extraction methods and introduce the relevant performance metrics. 
Finally, the effectiveness of our proposed method is verified via a comparison with other methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental data description</ns0:head><ns0:p>The IEEE PHM Challenge 2012 bearing dataset is used to test the effectiveness of the proposed method. This dataset is collected from the PRONOSTIA test platform and contains run-to-failure datasets acquired under different working conditions. PRONOSTIA is composed of three main parts: a rotating part, a degradation generation part and a measurement part. Vibration and temperature signals are gathered during all experiments. The frequency of vibration signal acquisition is 25.6 kHz. A sample is recorded every 0.1 s, and the recording interval is 10 s. The frequency of temperature signal acquisition is 10 Hz. 600 samples are recorded each minute. To ensure the safety of the laboratory equipment and personnel, the tests are stopped when the amplitude of the vibration signal exceeds 20 g. The basic information of the tested bearing is shown in Table <ns0:ref type='table'>1</ns0:ref>. Table <ns0:ref type='table'>2</ns0:ref> gives a detailed description of the datasets. From the table, we can see that the operating conditions of the three datasets are different, and from the literature <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref>, we can obtain that the failure modes are also different. This is very suitable for experimenting with the method proposed in this article. To verify the effectiveness of the method proposed in this paper, we divide the data into a source domain and target domain according to the different operating conditions. The basic information is shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature extraction</ns0:head><ns0:p>The original signal extracted by the sensor cannot reflect the degradation trend of the system well. At the same time, using original data for network training will increase the cost of network training and affect the final output result. It is necessary to extract the degradation information of the system by corresponding methods, which is called feature extraction. From the raw vibration data, we extract 13 basic time-domain features. They are the maximum, minimum, mean, root mean square error (RMSE), mean absolute value, skewness, kurtosis, shape factor, impulse factor, standard deviation, clearance factor, crest factor, and variance. At the same time, through 4-layer wavelet packet decomposition, we extract the energy of 16 frequency bands as time-frequency domain features. In the literature <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref>, the frequency resolution of the vibration signal was too low. Therefore, we do not extract the frequency domain features but rather use the features of three trigonometric functions. They are the standard deviation of the inverse hyperbolic cosine (SD of the IHC), standard deviation of the inverse hyperbolic sine (SD of the IHS), and standard deviation of the inverse tangent (SD of the IT). For trigonometric features, trigonometric functions transform the input signal into different scales so that the features have better trends <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref>, and the feature types are shown in Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>. Through feature extraction, we can obtain 64 features from the feature dataset, which can better represent the degradation process of the system. 
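To make this feature-extraction step concrete, the sketch below is our own illustration (not the authors' code): it computes several of the listed time-domain and trigonometric features from one vibration snapshot with NumPy/SciPy. The function and dictionary key names are assumptions, the arccosh transform is shifted so that its argument stays in the valid domain (the exact transform used by the authors may differ), and the wavelet-packet band energies of F14-F29 would be obtained analogously, e.g. with the PyWavelets package.

import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(x):
    """Basic time-domain statistics of one vibration snapshot (1-D array)."""
    rms = np.sqrt(np.mean(x ** 2))
    abs_mean = np.mean(np.abs(x))
    feats = {
        "max": np.max(x),
        "min": np.min(x),
        "mean": np.mean(x),
        "rms": rms,
        "mean_abs": abs_mean,
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "shape_factor": rms / abs_mean,
        "impulse_factor": np.max(np.abs(x)) / abs_mean,
        "std": np.std(x),
        "clearance_factor": np.max(np.abs(x)) / np.mean(np.sqrt(np.abs(x))) ** 2,
        "crest_factor": np.max(np.abs(x)) / rms,
        "variance": np.var(x),
    }
    # Trigonometric features: standard deviation after a nonlinear rescaling
    # (shift inside arccosh keeps the argument >= 1; an assumption on our part).
    feats["sd_ihc"] = np.std(np.arccosh(1.0 + np.abs(x)))
    feats["sd_ihs"] = np.std(np.arcsinh(x))
    feats["sd_it"] = np.std(np.arctan(x))
    return feats

# Example: one 0.1 s sample recorded at 25.6 kHz corresponds to 2560 points.
sample = np.random.randn(2560)
print(time_domain_features(sample))

Applying such a function to every recorded snapshot of a bearing yields one feature vector per time step, which is the input representation used in the rest of the pipeline.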
Because of space constraints, we only show features along the X-axis of the bearing 1-1 data in Fig. 1.

Data processing

By processing the original data, we extract a set of feature vectors, which are expressed as $X = (x_1, x_2, \ldots, x_N)$. To obtain a better experimental result, the experimental data need to be normalized. In this article, min-max normalization is applied:

$\tilde{x} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$ (1)

The normalized feature vectors are then grouped into sliding time windows (TWs), $T_{\omega} = \{x_t\}_{t=i}^{i+\omega}$, of consecutive samples that form the input sequences of the network. The process of TW generation is shown in Fig. 2.

Performance metrics

We use three indicators to evaluate the performance of the proposed method. The mean absolute error (MAE), mean squared error (MSE) and R2_score provide estimations regarding how well the model is performing on the target prediction task. The formulas for their calculation are as follows:

$\mathrm{MAE} = \dfrac{1}{L}\sum_{i=1}^{L} |y_i - \hat{y}_i|$ (2)

$\mathrm{MSE} = \dfrac{1}{L}\sum_{i=1}^{L} (y_i - \hat{y}_i)^2$ (3)

$\mathrm{R2\_score} = 1 - \dfrac{\sum_{i=1}^{L} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{L} (y_i - \bar{y})^2}$ (4)

Here, $L$ is the length of the test data, $y_i$ is the ith true value, $\hat{y}_i$ is the corresponding predicted value, and $\bar{y}$ is the average of the true values.

Problem definition

We use $T_S$ to denote the training task and $T_T$ to denote the target task. The training and testing data are represented as the source domain dataset $\{(x_i^S, y_i^S)\}_{i=1}^{N_S}$ and the target domain dataset $\{x_i^T\}_{i=1}^{N_T}$, where $x_i$ is a series of features belonging to the feature space; only the source domain carries RUL labels, and no label information is available in the target domain. This is an unsupervised TL method. The process of training can be expressed as $y_S = F(x_S, x_T)$.

Bidirectional gated recurrent unit

A GRU is a variant of the LSTM structure. Compared with LSTM, its structure is simpler, and there are fewer parameters. It combines the forget gate and input gate of LSTM into a single update gate; at the same time, the cell state and hidden state are merged. A GRU therefore contains two gate structures, a reset gate and an update gate. The reset gate determines whether the new input is combined with the output from the previous moment; that is, the smaller the value of the reset gate is, the less the output information from the previous moment is retained. The update gate determines the degree of influence of the output information from the previous moment on the current moment. The larger the value of the update gate is, the greater the influence of the output from the previous moment on the current output. The GRU-based structure is shown in Fig. 3.

In our proposed structure, a BGRU is used to obtain time series features from a TW $T_{\omega}$. Here, $x_t$ is the input at time $t$ and $h_t$ represents the output of the GRU at time $t$; $r_t$ is the reset gate and $z_t$ is the update gate, which together determine how $h_t$ is obtained from $h_{t-1}$. The hidden layer of the GRU is defined as follows when running at time $t$:

Forward propagation:

$\vec{h}_t = f_{\mathrm{BGRU}}(x_t, \vec{h}_{t-1}; \vec{\theta})$ (5)

$\begin{cases} r_t = \sigma(W_r[h_{t-1}, x_t] + b_r) \\ z_t = \sigma(W_z[h_{t-1}, x_t] + b_z) \\ \tilde{h}_t = \tanh(W_h[r_t \odot h_{t-1}, x_t] + b_h) \\ h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \end{cases}$ (6)

Backward propagation:

$\overleftarrow{h}_t = f_{\mathrm{BGRU}}(x_t, \overleftarrow{h}_{t+1}; \overleftarrow{\theta})$ (7)

with the same gate equations applied to the reversed sequence (8). The BGRU output at time $t$ concatenates the forward and backward hidden states.

Inspired by the GAN, Yaroslav Ganin first proposed domain-adversarial training for neural networks [40], the process for which is shown in Fig. 4. A DANN is mainly composed of three parts: a feature extractor $G_f$, a category (label) predictor $G_y$ and a domain classifier $G_d$. $G_f$ is trained to extract the features with the greatest domain invariance, $G_y$ predicts the labels of the source domain data, and $G_d$ distinguishes the source domain from the target domain. The loss function of the DANN can be expressed by the following formula:

$L(\theta_f, \theta_y, \theta_d) = \dfrac{1}{N}\sum_{i=1}^{N} L_y\big(G_y(G_f(x_i;\theta_f);\theta_y), y_i\big) - \alpha\,\dfrac{1}{N}\sum_{i=1}^{N} L_d\big(G_d(G_f(x_i;\theta_f);\theta_d), d_i\big)$ (11)

Here, $L_y$ is the error of the category predictor and $L_d$ is the error of domain classification; $\theta_f$, $\theta_y$ and $\theta_d$ are the parameters of the feature extractor, the category predictor and the domain classifier, respectively. During training, $\theta_f$ and $\theta_y$ are sought to minimize the category prediction error, while $\theta_d$ is sought to maximize the domain classification error:

$(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} L(\theta_f, \theta_y, \hat{\theta}_d)$ (12)

$\hat{\theta}_d = \arg\max_{\theta_d} L(\hat{\theta}_f, \hat{\theta}_y, \theta_d)$ (13)

Judging from the above two optimization formulas, this is a minimax problem. To solve this problem, a gradient reversal layer (GRL) is introduced into the DANN. During the process of forward propagation, the GRL acts as an identity transformation. However, during the back propagation process, the GRL automatically inverts the gradient. The optimization function selected by the DANN is a stochastic gradient descent (SGD) function. The GRL layer is generally placed between the feature extraction layer and the domain classifier layer.

The original DANN was the first proposed TL method based on adversarial networks. It is not only a method but also a general framework. Based on these foundations, many people have proposed different architectures [49][50][51].

BGRU-based deep domain adaptation

To process the time series data, we construct the BGRU-DANN model, the process of which is shown in Fig. 5. Source domain data and target domain data with only domain information are used to train the network.
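As an aside, the BGRU feature extractor $G_f$ described by Eqns. (5)-(8) can be sketched in a few lines of PyTorch. This is our illustration rather than the authors' implementation: the class name, the choice of taking the last window step and the 256-dimensional projection are assumptions, while the layer sizes (input 64, hidden 256, 3 layers, dropout 0.5) follow the hyperparameters reported later in Table 5; the window length of 10 is illustrative only.

import torch
import torch.nn as nn

class BGRUFeatureExtractor(nn.Module):
    """G_f: maps a window of feature vectors to a shared feature embedding."""
    def __init__(self, n_features=64, hidden_size=256, num_layers=3, dropout=0.5):
        super().__init__()
        self.bgru = nn.GRU(n_features, hidden_size, num_layers,
                           batch_first=True, bidirectional=True, dropout=dropout)
        # Concatenated forward/backward states are projected to a 256-d feature.
        self.fc = nn.Linear(2 * hidden_size, 256)

    def forward(self, x):                  # x: (batch, window_length, n_features)
        out, _ = self.bgru(x)              # out: (batch, window_length, 2*hidden)
        feat = out[:, -1, :]               # hidden state at the last window step
        return torch.relu(self.fc(feat))   # (batch, 256)

# Example: a batch of 8 sliding windows, each containing 10 feature vectors.
features = BGRUFeatureExtractor()(torch.randn(8, 10, 64))
print(features.shape)  # torch.Size([8, 256])

The same embedding is fed both to the source-domain regressor and to the domain classifier described next.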
Similar to the DANN network, the BGRU-DANN network can also be divided into three parts. The first part is a feature extraction network: we use a BGRU to map the input data to a hidden state, and the output of the BGRU is embedded in the feature space, that is, $f = G_f(X_k) = \mathrm{BGRU}(X_k; \theta_f)$. The second part maps the new features to the label data of the source domain through fully connected (FC) layers, i.e., $\hat{y} = G_y(f; \theta_y)$; the third part maps the same features to the domain label, i.e., $\hat{d} = G_d(f; \theta_d)$. The overall loss combines the source-domain regression loss with the domain classification loss computed on both domains:

$L(\theta_f, \theta_y, \theta_d) = \dfrac{1}{n_s}\sum_{i=1}^{n_s} L_y^i(\theta_f, \theta_y) - \alpha\left(\dfrac{1}{n_s}\sum_{i=1}^{n_s} L_d^i(\theta_f, \theta_d) + \dfrac{1}{n_t}\sum_{i=1}^{n_t} L_d^i(\theta_f, \theta_d)\right)$ (14)

Here, the loss functions $L_y^i$ and $L_d^i$ are defined as the regression error on the source domain and the classification error between the domain labels:

$L_y^i(\theta_f, \theta_y) = |y_i - \hat{y}_i|$ (15)

$L_d^i(\theta_f, \theta_d) = -\big[d_i \log \hat{d}_i + (1 - d_i)\log(1 - \hat{d}_i)\big]$ (16)

The optimization process is shown in Eqn. (12) and Eqn. (13). The weight update process is as follows:

$\theta_d \leftarrow \theta_d - \lambda\,\alpha\,\dfrac{\partial L_d^i}{\partial \theta_d}$ (17)

$\theta_f \leftarrow \theta_f - \lambda\left(\dfrac{\partial L_y^i}{\partial \theta_f} - \alpha\,\dfrac{\partial L_d^i}{\partial \theta_f}\right)$ (18)

$\theta_y \leftarrow \theta_y - \lambda\,\dfrac{\partial L_y^i}{\partial \theta_y}$ (19)

where $\lambda$ is the learning rate and $\alpha$ is the trade-off weight between the two losses. Similar to a DANN, the GRL mechanism is also introduced here to realize the optimization process. SGD is used to update Eqns. (17), (18) and (19).

BGRU-DANN structure

The structure of BGRU-DANN is shown in Fig. 6. Its basic composition can be divided into two parts. One part uses the training data from the source domain to minimize the loss of source domain regression. The other part uses the sensor data of the source domain and the target domain to maximize the error of domain classification. The BGRU and FC layers are shared by both parts. To facilitate the parameter setting process, we set the learning rates of the two sections to the same value. At the same time, we use dropout and BN layers for feature acquisition, domain classification and source domain regression. In the source domain regression task, the purpose of the training process is to minimize the regression loss function. In the domain classification task, a GRL is placed between the feature extraction and domain classification layers. During the process of back propagation, the GRL inverts the corresponding gradient to realize the optimization process of the model. When the output of the system does not improve significantly, the training process is stopped.
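The gradient reversal trick and the adversarial loss of Eqns. (14)-(19) can be written compactly in an autograd framework. The PyTorch sketch below is our own illustration, not the authors' code: the head sizes follow Table 5 (256-128-32), the L1 source regression loss and binary cross-entropy domain loss mirror our reconstruction of Eqns. (15)-(16), and all names are hypothetical.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -alpha backwards."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=0.5):
    return GradReverse.apply(x, alpha)

# Illustrative heads; sizes follow Table 5 (256 -> 128 -> 32 -> 1).
regressor = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                          nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))
domain_clf = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                           nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

def bgru_dann_loss(src_feat, src_rul, tgt_feat, alpha=0.5):
    """Source regression loss plus domain loss passed through the GRL."""
    reg_loss = nn.functional.l1_loss(regressor(src_feat).squeeze(-1), src_rul)
    feats = torch.cat([src_feat, tgt_feat], dim=0)
    domains = torch.cat([torch.ones(len(src_feat)), torch.zeros(len(tgt_feat))])
    dom_logits = domain_clf(grad_reverse(feats, alpha)).squeeze(-1)
    dom_loss = nn.functional.binary_cross_entropy_with_logits(dom_logits, domains)
    # Because the GRL flips the gradient reaching the feature extractor, a single
    # backward pass trains the classifier to separate domains while pushing the
    # features toward domain invariance, as in Eqns. (17)-(19).
    return reg_loss + dom_loss

Minimizing this combined loss with SGD on the outputs of the BGRU feature extractor reproduces the adversarial update scheme described above.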
For the corresponding FC layer, we use the ReLU activation function.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Transfer prediction</ns0:head><ns0:p>To realize the prediction of RUL, we need to establish the BGRU-DANN structure and set the corresponding hyperparameters. For different transfer tasks, the optimal parameters of the model may vary. The model in this paper has no specific optimization process for parameter setting during use, and the parameters used are the same for different transfer tasks. The input size of the BGRU network is set to 64. The size of each hidden layer is set to 256. The number of network layers is set to 3. The DANN classifier is set to a 3-layer FC structure, and the domain classifier is a 3-layer FC structure. The network learning rate is set to 0.01. The number of training iterations is set to 5000. Some of the remaining hyperparameter settings are provided in Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>.</ns0:p><ns0:p>After setting the relevant parameters, we can predict the RUL. First, we use the BGRU structure to extract the features of the input sequence data. Then, the DANN network is used to implement adversarial training to extract features with domain invariance. The experimental results are shown in Fig. <ns0:ref type='figure' target='#fig_16'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_17'>8</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_16'>7</ns0:ref> reflects the predicted results of bearing 2-1, bearing 2-4 and bearing 2-6. The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 2-1, bearing 2-4, and bearing 2-6. Fig. <ns0:ref type='figure' target='#fig_17'>8</ns0:ref> reflects the prediction results for bearing 3-1, bearing 3-2, and bearing 3-3.</ns0:p><ns0:p>The source domain data are bearing 1-3-bearing 1-7, and the target data are bearing 3-1, bearing 3-2, and bearing 3-3. From (A), (C), and (E) in Fig. <ns0:ref type='figure' target='#fig_16'>7</ns0:ref> and (A), (C), and (E) in Fig. <ns0:ref type='figure' target='#fig_17'>8</ns0:ref>, we can conclude that the predicted RUL results exhibit a good downward trend performance and are very close to the real RUL values; this effectively illustrates the effectiveness of the proposed data-driven prediction framework based on TL. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Comparison of experimental results</ns0:head><ns0:p>To demonstrate the advantages of data-driven prediction methods based on domain adaptation, three methods are used for comparison purposes, namely, a BGRU without transfer learning, TCA-NN, and FC-DANN. We can see in Fig. <ns0:ref type='figure' target='#fig_16'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_17'>8</ns0:ref> that the RUL prediction results of BGRU-DANN are significantly better than those of the other three methods, and the declining trend can best reflect the real RUL value. However, the other three methods cannot reflect the degradation trend of the RUL effectively. <ns0:ref type='table' target='#tab_9'>6</ns0:ref>, it can still be seen that BGRU-DANN achieves the best effect.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_9'>6</ns0:ref> shows that BGRU-DANN achieves the best results in terms of the three evaluations, the MAE, MSE, and R2_score, which further proves the effectiveness of the method proposed in this paper. 
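For readers who wish to reproduce the entries of Table 6, the three metrics of Eqns. (2)-(4) can be computed directly from the normalized true and predicted RUL curves. The snippet below is a hedged illustration; the use of scikit-learn and the synthetic example data are our choices, not part of the paper.

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate_rul(y_true, y_pred):
    """MAE, MSE and R2_score between normalized true and predicted RUL curves."""
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mean_squared_error(y_true, y_pred),
        "R2_score": r2_score(y_true, y_pred),
    }

# Example with a synthetic, linearly decreasing RUL and a noisy prediction.
y_true = np.linspace(1.0, 0.0, 500)
y_pred = y_true + 0.05 * np.random.randn(500)
print(evaluate_rul(y_true, y_pred))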
Regarding the MSE, the calculated results of the proposed method for bearing 2-1, bearing 2-4, bearing 2-6, bearing 3-1, bearing 3-2 and bearing 3-3 are 0.0283, 0.0193, 0.0217, 0.0298, 0.0503, and 0.0472, respectively, which are far smaller than the errors of the other three methods. For the MAE, the corresponding results are 0.1157, 0.0928, 0.0875, 0.1215, 0.1569, and 0.1238, respectively, which are again better than those of the other three models. In terms of the R2_score, the results are 0.6576, 0.7664, 0.7367, 0.6379, 0.3935, and 0.4252, respectively; this indicates that the model has a certain explanatory ability regarding the relationship between the independent and dependent variables in the regression analysis and is superior to the three compared methods.

Conclusions

In this article, a domain-adaptive prediction method based on deep learning with a BGRU and a DANN is proposed. The validity of the proposed method is demonstrated by an experiment on the 2012 IEEE PHM dataset. The objective of this study is to propose a domain-adaptive RUL prediction method. When the input bearing is transferred from the source domain with label information to a target domain with only sensor information, a more accurate estimate of the RUL can be obtained. From the results of the experiment, we can draw the following conclusions: (1) Compared with the BGRU without TL, the proposed method has a better effect in terms of RUL prediction. This indicates that the model obtained by adversarial training has better generalization ability and can adapt to data with different distributions. (2) The comparison with TCA-NN proves that the deep, domain-adaptive BGRU-DANN method has better performance. This indicates that the transfer method based on deep learning has a stronger feature extraction ability than the traditional nondeep transfer method, and it can extract better features with domain invariance. (3) Using FC layers for feature extraction, this paper constructs an FC-DANN network. A comparison of the results fully shows that the BGRU has a better effect in terms of feature extraction. Compared with the features extracted by the FC method, the features extracted by the BGRU for sequence data processing are more representative. (4) By means of domain adaptation, the generalization ability of the data-driven RUL prediction model can be effectively improved, and it can adapt to RUL prediction tasks under different working conditions to a certain extent.

In future work, we will take a closer look at the problem of time series transfer. Remaining life prediction problems with respect to bearings, aero engines, etc. can actually be regarded as time series transfer problems. However, research on time series transfer is still in its infancy. There are merely a few studies on such issues. Only Yu proposed two different time series transfer methods, in references [52] and [53], one based on an extreme learning machine and the other based on a CNN. However, most of the data monitored by sensors are time series data, and this is a very common data type in RUL forecasting research. Therefore, the authors intend to conduct related research in the future, hoping to obtain a better model and research results with more practical application value.

(Figure captions and tables follow.)
reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021) Manuscript to be reviewed Computer Science Problem definition We use to denote the training task and to denote the target task. The training and testing S T T T data are represented as the source domain dataset and the target domain dataset ,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>. A DANN combines domain adaptation with feature learning during the training process to better obtain distinctive and domain-invariant features. At the same time, the learned weights can also be directly used in the target field. The network structure of a DANN is mainly composed of three parts: a feature extractor , a f G PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021) extract the features with the greatest domain invariance. is used to classify the f G y G source domain data. is used to distinguish between the characteristic data of the source d G domain and the target domain. Its training objectives are mainly twofold: the first is to accurately classify the source domain dataset to minimize the category prediction error. The second is to confuse the source domain dataset with the target domain dataset to maximize the domain classification error. The loss function of the DANN can be expressed by the following formula:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>&#61553;</ns0:head><ns0:label /><ns0:figDesc>the parameter of the feature acquisition layer. The parameter of the category predictor is .y &#61553; d &#61553; is the parameter of the domain classifier. During the training process, to find the features with the best domain invariance, on the one hand, it is necessary to find and to minimize the f &#61553; y &#61553; category prediction error. On the other hand, it is also necessary to search to maximize the d &#61553; error of domain classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>G</ns0:head><ns0:label /><ns0:figDesc>. reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021) Manuscript to be reviewed Computer Science (source domain) through the fully connected (FC) layer. That is, . In the third part, &#710;( , ) y y y G f &#61553; &#61501; the same feature is mapped to the domain label through the FC layer, i.e., . three-layer BGRU and an FC layer. A nonlinear high-dimensional feature representation of the original data is learned through the BGRU and FC layers. is composed y G of FC layers, batch normalization (BN) layers, and a rectified linear unit (ReLU) layer; y G provides the regression value of the source domain data. The network form of is FC1+ BN1+ y ReLU1+ Dropout1+ FC2+ BN2+ ReLU2+ FC3. During the adversarial training process, is used to distinguish whether the observed feature d G comes from the source domain or the target domain. consists of a gradient reversal layer and d G three FC layers. Here, is trained to extract features so that the difference between the source f G domain and the target domain is maximized. The labels of the source domain and target domain are set to 1 and 0, respectively. The loss function of the training process is as follows:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Fig. 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>shows the RUL errors of BGRU-DANN, the BGRU, TCA-NN and FC-DANN. 
It can be clearly seen from Fig.9that the RUL error generated by the BGRU-DANN model is the smallest, especially for bearings 2-4, 3-1 and 3-3. At the same time, bearing 2-1 and bearing 2-6 in Fig.9clearly reflect that the RUL error generated by BGRU-DANN is smaller than that of the other three methods in most cases. Bearing 3-2 in Fig.9may not clearly indicate the superiority of BGRU-DANN due to the large amount of data involved. However, through the comparison of the three evaluation indicators in Table</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>( 1 )</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Compared with the BGRU without TL, the proposed method has a better effect in terms of RUL prediction. This indicates that the model obtained by adversarial training has better generalization ability and can adapt to data with different distributions.(2) The comparison with TCA-NN proves that the deep, domain-adaptive BGRU-DANN method has better performance. This indicates that the transfer method based on deep learning has a stronger feature extraction ability than the traditional nondeep transfer method, and it can extract better features with domain invariance.(3) Using FC layers for feature extraction, this paper constructs an FC-DANN network. A comparison of the results fully shows that the BGRU has a better effect in terms of feature extraction. Compared with the features extracted by the FC method, the features extracted by the BGRU for sequence data processing are more representative. (4) By means of domain adaptation, the generalization ability of the data-driven RUL prediction model can be effectively improved, and it can adapt to RUL prediction tasks under different working conditions to a certain extent.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>1 Table 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Descriptions of the experimental datasets</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure1. Features for Bearing 1- 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Features for Bearing 1-1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Sliding TW processing technique</ns0:figDesc><ns0:graphic coords='34,42.52,178.87,525.00,305.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. GRU memory cell</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,171.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The flowchart of DANN</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,276.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The flowchart of BGRU-DANN</ns0:figDesc><ns0:graphic coords='37,42.52,178.87,525.00,303.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. BGRU-DANN Structure</ns0:figDesc><ns0:graphic coords='38,42.52,178.87,525.00,215.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Prediction results for dataset 2:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. 
Prediction results for dataset 3:</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,525.00,280.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>.</ns0:head><ns0:label /><ns0:figDesc>In our proposed structure, a BGRU is used to obtain time series features from a TW . Here,</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>W T</ns0:cell></ns0:row><ns0:row><ns0:cell>t x</ns0:cell><ns0:cell cols='2'>is the input at time , and t</ns0:cell><ns0:cell>t h</ns0:cell><ns0:cell cols='5'>represents the output of the GRU at time . is the reset gate, t t r</ns0:cell></ns0:row><ns0:row><ns0:cell>and</ns0:cell><ns0:cell>t z</ns0:cell><ns0:cell cols='4'>is the update gate. These two parts determine how to obtain</ns0:cell><ns0:cell>t h</ns0:cell><ns0:cell>from</ns0:cell><ns0:cell>t h</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>. The hidden</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>layer of the GRU is defined as follows when running at time : t</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Forward propagation:</ns0:cell><ns0:cell /><ns0:cell>1 ( , , t t h f x h &#61553; t &#61485; &#61501; r r r BGRU r</ns0:cell><ns0:cell>)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>(5)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Feature set</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell cols='2'>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F1: Maximum</ns0:cell><ns0:cell>F8: Shape Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F2: Minimum</ns0:cell><ns0:cell>F9: Impulse Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F3: Mean</ns0:cell><ns0:cell>F10: Standard Deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>Time-domain features</ns0:cell><ns0:cell>F4: RMSE</ns0:cell><ns0:cell>F11: Clearance Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F5: Mean Absolute Value</ns0:cell><ns0:cell>F12: Crest Factor</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F6: Skewness</ns0:cell><ns0:cell>F13: Variance</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F7: Kurtosis</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Time-frequency domain features</ns0:cell><ns0:cell cols='2'>F14-F29: Energies of sixteen bands</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>F30: SD of the IHC</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trigonometric features</ns0:cell><ns0:cell>F31: SD of the IHS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>F32: SD of the IT</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Hyperparameter settings</ns0:figDesc><ns0:table><ns0:row><ns0:cell>BGRU Layers, (Units), [Dropout]</ns0:cell><ns0:cell>F (Units)</ns0:cell><ns0:cell>Source Regression [Dropout] Layers, (Units),</ns0:cell><ns0:cell>Domain Classification [Dropout] Layers, (Units),</ns0:cell><ns0:cell>&#120630;</ns0:cell><ns0:cell>&#120640;</ns0:cell></ns0:row><ns0:row><ns0:cell>3, (64, 256), [0.5]</ns0:cell><ns0:cell>(256)</ns0:cell><ns0:cell>3, (256, 128, 32), 
[0.5]</ns0:cell><ns0:cell>3, (256, 128, 32), [0.5]</ns0:cell><ns0:cell cols='2'>0.5 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Performance metrics for the datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Performance metric The proposed method</ns0:cell><ns0:cell>BGRU</ns0:cell><ns0:cell cols='2'>TCA-NN FC-DANN</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.0283</ns0:cell><ns0:cell>1.4652</ns0:cell><ns0:cell>0.5205</ns0:cell><ns0:cell>0.2865</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.0193</ns0:cell><ns0:cell>1.5442</ns0:cell><ns0:cell>1.8181</ns0:cell><ns0:cell>1.0214</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>MSE</ns0:cell><ns0:cell>0.0217 0.0298</ns0:cell><ns0:cell>1.0589 2.1957</ns0:cell><ns0:cell>1.4436 4.4414</ns0:cell><ns0:cell>0.9164 1.3557</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.0503</ns0:cell><ns0:cell>0.0883</ns0:cell><ns0:cell>0.0796</ns0:cell><ns0:cell>0.0606</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell /><ns0:cell>0.0472</ns0:cell><ns0:cell>8.9927</ns0:cell><ns0:cell>8.8676</ns0:cell><ns0:cell>2.9564</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.1157</ns0:cell><ns0:cell>0.9440</ns0:cell><ns0:cell>0.6316</ns0:cell><ns0:cell>0.4218</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.0928</ns0:cell><ns0:cell>1.0940</ns0:cell><ns0:cell>1.2917</ns0:cell><ns0:cell>0.9429</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>MAE</ns0:cell><ns0:cell>0.0875 0.1215</ns0:cell><ns0:cell>0.8491 1.3532</ns0:cell><ns0:cell>1.0226 1.9420</ns0:cell><ns0:cell>0.8063 1.0884</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.1569</ns0:cell><ns0:cell>0.2070</ns0:cell><ns0:cell>0.2377</ns0:cell><ns0:cell>0.2035</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell /><ns0:cell>0.1238</ns0:cell><ns0:cell>2.5813</ns0:cell><ns0:cell>2.6589</ns0:cell><ns0:cell>1.5394</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-1</ns0:cell><ns0:cell /><ns0:cell>0.6576</ns0:cell><ns0:cell>-16.6992</ns0:cell><ns0:cell>-5.2325</ns0:cell><ns0:cell>-2.4311</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-4</ns0:cell><ns0:cell /><ns0:cell>0.7664</ns0:cell><ns0:cell>-17.6799</ns0:cell><ns0:cell>-20.7599</ns0:cell><ns0:cell>-11.2249</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 2-6 bearing 3-1</ns0:cell><ns0:cell>R2_score</ns0:cell><ns0:cell>0.7367 0.6379</ns0:cell><ns0:cell>-11.8176 -25.6589</ns0:cell><ns0:cell>-16.2749 -52.0910</ns0:cell><ns0:cell>-9.9658 -15.2054</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-2</ns0:cell><ns0:cell /><ns0:cell>0.3935</ns0:cell><ns0:cell>-0.06367</ns0:cell><ns0:cell>0.0451</ns0:cell><ns0:cell>0.2733</ns0:cell></ns0:row><ns0:row><ns0:cell>bearing 3-3</ns0:cell><ns0:cell /><ns0:cell>0.4252</ns0:cell><ns0:cell cols='2'>-108.42439 -104.9228</ns0:cell><ns0:cell>-34.3136</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:04:60537:2:0:NEW 21 Jul 2021)</ns0:note></ns0:figure> </ns0:body> "
"Original Manuscript ID: CS-2021: 04:60537:1:0: NEW Original Article Title: “Data-Driven Remaining Useful Life Prediction Based on Domain Adaptation” To: Peer J Computer Science Editor Re: Response to reviewers Thank you very much for your valuable comments. We have revised the manuscript according to your comments. Your comments help us improve the quality of the paper a lot. Thank you again! All changes are marked by yellow highlight in the revised manuscript. Below, the original comments are in green, and our responses are in blue. The point-by-point explanations and responses to the reviewers’ comments are included as follows. Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). Best regards, Basic reporting: Reviewer#1, Concern # 1: Literature Review is now detailed. Few reference articles relevant to the manuscript are mentioned below. The authors can choose to add them in Literature Review section: Author response: We read the references recommended by the reviewers and decided to add references “Spatiotemporal non-negative projected convolutional network with bidirectional NMF and 3DCNN for remaining useful life estimation of bearings” Author action: References are added in the literature review section: There are other deep networks, such as CNN, that are also widely used in the PHM spaceError: Reference source not found. Experimental design: Reviewer#1, Concern # 1: The authors can discuss the relation of the extracted features such as kurtosis, skewness, etc with bearing life. Author response: Thank you for your opinion. The relationship between the extracted features and bearing life is not discussed in this paper. Feature extraction is very important for bearing life prediction, and the feature we used is widely used in the field of bearing life prediction. Liao used RMS, standard, deviation, mean value, variation, skewness, kurtosis, crest indicator, peak value, etc. to predict the bearing life[1]. Liu used the usual time-domain features, for example, mean, median, root mean square (RMS), kurtosis, maximum to-minimum difference (MD), standard deviation (SD), variance, maximum of signals, crest Indicator (CF), to predict the remaining life of the tool[2]. Wang extracted the time domain, frequency domain and time-frequency domain features of the signal, and screened out 11 features that could better reflect the degradation trend. They were the RMS, kurtosis, peak-peak value, peak indicator in time domain, the spectrum means, spectrum variance, spectrum RMS in frequency domain, the 3rd frequency bands normalized energy spectrum (E3) and sample entropy (S3), The 7th frequency bands normalized energy spectrum (E7) and sample entropy (S7) in time-frequency domain[3]. Zhu extracted the relevant features in the time domain and frequency domain at the same time. Includes entropy, energy, root mean square, kurtosis, square mean root, mean absolute, max absolute, skewness, shape indicator, clearance indicator, impulse indicator, crest indicator, standard deviation, energies of sixteen bands[4]. There are more literatures, for example [5][6]. I will not explain in detail here. How to construct a health indicator is very important for machine RUL prediction. 
In literature [7], the construction of health indicator is divided into two types, one is physical health indicator, the other is virtual health indicator. The features extracted in this paper are physical health indicator. For health indicator, literature[7]proposed some evaluation indexes, such as monotonicity, stability, etc. The feature (health indicator) used in this paper may not have good monotonicity and robustness. But for the purposes of this article, this does not affect the final result. This is because this paper uses BGRU and DANN networks for feature selection and fusion, that is, on the basis of physical health indicator, better features (virtual health indicator) are extracted through deep learning, and optimal results are achieved through network training. In fact, this paper does not carry out special screening and processing for the extracted features, because the research focus of this paper is not feature engineering. For feature extraction and health indicator construction, reference can be made to [8]. On the basis of statistical features, frequency domain features and time-frequency domain features, related similarity (RS) features were extracted, and a health indicator with better experimental effect was obtained by using RNN network. Meanwhile, genetic programming was also used to screen the extracted features in literature [1]. Author action: For the questions raised by reviewers, we analyze them here, that is, the relationship between the acquired features and bearing life: Figure 1: Features Figure 2: Bearing life From the feature list, we choose several representative features to illustrate: F1: Maximum F2: Minimum F4: RMSE F5: Mean Absolute Value F10: Standard Deviation F30: SD of the IHC F31: SD of the IHS F32: SD of the IT It can be seen from the figure that with the increase of service time, the life of the bearing is decreasing, and its corresponding characteristics such as RMSE, mean absolute value show corresponding monotonicity, through which the trend of bearing life degradation can be better reflected. We didn't filter for features. Because for the method proposed in this paper, features will be further fused and selected through the deep network. Reviewer#1, Concern # 2: Importance of the feature extraction step can be justified Author response: Why feature extraction, we can take a look at the original signal extracted by the sensor: Figure 3: Vibration signal It can be seen from the Fig.3 that the original characteristic signals cannot well reflect the degradation trend of the system. Although it is generally believed that the deep network can directly process the raw vibration signals, it will greatly increase the scale of the network and the difficulty of training[9]. Better features can be extracted from the preprocessed data to obtain better prediction performance. Through data processing and feature extraction, we can get the features that we've simply processed, which can reduce the training cost of network and improve the prediction effect. Figure 1: Features Author action: In this paper, we added relevant explanations on the necessity of feature extraction: The original signal extracted by the sensor cannot reflect the degradation trend of the system well. At the same time, using original data for network training will increase the cost of network training and affect the final output result. It is necessary to extract the degradation information of the system by corresponding methods, which is called feature extraction. References: [1] L. 
Liao, “Discovering Prognostic Features Using Genetic Programming in Remaining Useful Life Prediction,” IEEE Transactions on Industrial Electronics, vol. 61, no. 5, pp. 2464-2472, 2014. [2] C. Liu, and L. Zhu, “A two-stage approach for predicting the remaining useful life of tools using bidirectional long short-term memory,” Measurement, vol. 164, 2020. [3] F. Wang, X. Liu, G. Deng, X. Yu, H. Li, and Q. Han, “Remaining Life Prediction Method for Rolling Bearing Based on the Long Short-Term Memory Network,” Neural Processing Letters, vol. 50, no. 3, pp. 2437-2454, 2019. [4] J. Zhu, N. Chen, and C. Shen, “A new data-driven transferable remaining useful life prediction approach for bearing under different working conditions,” Mechanical Systems and Signal Processing, vol. 139, 2020. [5] D. An, J.-H. Choi, and N. H. Kim, “Remaining useful life prediction of rolling element bearings using degradation feature based on amplitude decrease at specific frequencies,” Structural Health Monitoring, vol. 17, no. 5, pp. 1095-1109, 2017. [6] P. S. Kumar, L. A. Kumaraswamidhas, and S. K. Laha, “Selection of efficient degradation features for rolling element bearing prognosis using Gaussian Process Regression method,” ISA Trans, vol. 112, pp. 386-401, Jun, 2021. [7] Y. Lei, N. Li, L. Guo, N. Li, T. Yan, and J. Lin, “Machinery health prognostics: A systematic review from data acquisition to RUL prediction,” Mechanical Systems and Signal Processing, vol. 104, pp. 799-834, 2018. [8] L. Guo, N. Li, F. Jia, Y. Lei, and J. Lin, “A recurrent neural network based health indicator for remaining useful life prediction of bearings,” Neurocomputing, vol. 240, pp. 98-109, 2017. [9] Y. Ding, M. Jia, Q. Miao, and P. Huang, “Remaining useful life estimation using deep metric transfer learning for kernel regression,” Reliability Engineering & System Safety, vol. 212, 2021. "
Here is a paper. Please give your review comments after reading it.
222
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Planes are the core geometric models present everywhere in the three-dimensional real world. There are many examples of manual constructions based on planar patches: facades, corridors, packages, boxes, etc. In these constructions, planar patches must satisfy orthogonal constraints by design (e.g., walls with a ceiling and floor). The hypothesis is that by exploiting orthogonality constraints when possible in the scene, we can perform a reconstruction from a set of points captured by 3D cameras with high accuracy and a low response time. We introduce a method that can iteratively fit a planar model in the presence of noise according to three main steps: a clustering-based unsupervised step that builds pre-clusters from the set of (noisy) points; a linear regression-based supervised step that optimizes a set of planes from the clusters; a reassignment step that challenges the members of the current clusters in a way that minimizes the residuals of the linear predictors. The main contribution is that the method can simultaneously fit different planes in a point cloud providing a good accuracy/speed trade-off even in the presence of noise and outliers, with a smaller processing time compared with previous methods. An extensive experimental study on synthetic data is conducted to compare our method with the most current and representative methods. The quantitative results provide indisputable evidence that our method can generate very accurate models faster than baseline methods. Moreover, two case studies for reconstructing planar-based objects using a Kinect sensor are presented to provide qualitative evidence of the efficiency of our method in real applications.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Fitting multiple plane-based models under geometric constraints to point clouds obtained with RGBD noisy sensors in time-constrained applications remains a challenging problem. More broadly, geometric primitive detection (planes, cuboids, spheres, cylinders, etc.) has been extensively studied for multiple applications (robotics, modeling, shape processing, rendering, interaction, animation, architecture, etc.) <ns0:ref type='bibr' target='#b25'>(Kaiser et al., 2019)</ns0:ref>. Multiple plane-based primitives are particularly interesting because they are very common in several engineering and architectonic projects in industry and construction, configuring objects, and scenarios designed with geometric characteristics between planes (angles, position, etc.). Building corridors, facades or rooms, manufactured components, packages, boxes, etc., are commonly formed by planar patches. These engineering and architectonic elements are present in different application domains, such as robot navigation, object reconstruction, and reverse engineering <ns0:ref type='bibr' target='#b40'>(Werghi et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b6'>Benko et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b1'>Anwer and Mathieu, 2016)</ns0:ref>.</ns0:p><ns0:p>Computer vision plays an important role in providing methods for modeling objects or scenes by means of processing 3D point clouds. Plane detection and model fitting are frequently used as the first stage in object and scene modeling pipelines. 
Robot navigation systems use plane detection and fitting to perform Simultaneous Localization and Mapping (SLAM) tasks <ns0:ref type='bibr' target='#b29'>(Lu and Song, 2015;</ns0:ref><ns0:ref type='bibr' target='#b41'>Xiao et al., 2011)</ns0:ref>. Indoor and outdoor scene reconstruction may be performed with plane detection and fitting phases A particular case of model fitting is planar model fitting, which estimates a model composed of a set of planes that are represented by a noisy 3D point cloud. Problems such as point cloud segmentation or clustering are aimed at grouping points in subsets with similar characteristics (geometric, radiometric, etc.), often with unsupervised processes, providing plane detection <ns0:ref type='bibr' target='#b17'>(Grilli et al., 2017)</ns0:ref>, without addressing model fitting. Several authors categorize model fitting methods into three groups: Hough transform-based methods, iterative methods (regression and RANSAC), and region-growing-based methods <ns0:ref type='bibr' target='#b25'>(Kaiser et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Xie et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jin et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Deschaud and Goulette, 2010)</ns0:ref>. This common classification includes methods not only related to model fitting but also for segmentation and clustering in plane detection.</ns0:p><ns0:p>Region-growing methods build regions by expanding the area from seeds as certain conditions are met. A plane detection algorithm was proposed in <ns0:ref type='bibr' target='#b35'>(Poppinga et al., 2008)</ns0:ref> using a two-point-seed-growing approach. Different clustering methods have been proposed, such as k-means <ns0:ref type='bibr' target='#b10'>(Cohen-Steiner et al., 2004)</ns0:ref>, which is one of the most commonly used methods. In some cases, these methods are focused on clustering or region segmentation <ns0:ref type='bibr' target='#b34'>(Nurunnabi et al., 2012)</ns0:ref>. The success of these methods usually depends on the selection of the initial seed. Furthermore, they are oriented toward plane detection rather than a complete model fitting process.</ns0:p><ns0:p>Hough transform <ns0:ref type='bibr' target='#b19'>(Hough, 1962)</ns0:ref> is a common method for parameterized object detection, such as lines or circles, formerly defined for 2D images <ns0:ref type='bibr' target='#b12'>(Duda and Hart, 1972)</ns0:ref>. Extensions to deal with 3D images have been proposed in several studies <ns0:ref type='bibr' target='#b21'>(Hulik et al., 2014)</ns0:ref>. The voting process of Hough transform-based methods results in a high computational cost, especially in the presence of large input data. To reduce the computational cost, the randomized Hough transform (RHT) satisfies the voting process in a probabilistic manner <ns0:ref type='bibr' target='#b7'>(Borrmann et al., 2011)</ns0:ref>. The RHT is adequate for detecting planes in large structures. However, these methods have not yet demonstrated their efficiency in complex model-fitting problems of scenes with geometric constraints between planes.</ns0:p><ns0:p>Random sample consensus (RANSAC) is a robust approach for fitting single models in an iterative manner <ns0:ref type='bibr' target='#b14'>(Fischler and Bolles, 1981)</ns0:ref>. The method randomly selects an initial subset of the data to calculate a tentative model. 
It is then validated by counting the remaining data whose distance is under a threshold Manuscript to be reviewed Computer Science (i.e., inliers). The process iterates, and the best model is finally selected. In computer vision, RANSAC and its variants are widely used owing to their robustness against noise in the input data. Some methods aimed at fitting planes in images or 3D data are based on RANSAC. In some cases, the least squares estimation (LSE) and RANSAC are used together as the LSE method is used to calculate the model from the initial subset. However, RANSAC tends to simplify complex planar structures <ns0:ref type='bibr' target='#b23'>(Jin et al., 2017)</ns0:ref>.</ns0:p><ns0:p>To overcome this problem, several variants have been proposed <ns0:ref type='bibr' target='#b37'>(Saval-Calvo et al., 2015a)</ns0:ref>. The CC-RANSAC <ns0:ref type='bibr' target='#b16'>(Gallo et al., 2011)</ns0:ref> and its variants <ns0:ref type='bibr' target='#b46'>(Zhou et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b45'>(Zhou et al., , 2013) )</ns0:ref> can obtain multiple surfaces by employing a modified RANSAC loss function to obtain better results. These approaches select multiple subsets (one per expected plane) and consider the relationships among them, instead of selecting one random data and trying to find the best planar model by counting the inliers. CC-RANSAC includes the largest connected component of inliers with 8-neighbor topology. The CC-RANSAC variants improve the basic method by adding vector normal information to allow the estimation of each cluster in the clustering and patch-joining steps. However, these methods do not consider the information related to the planes themselves. MC-RANSAC <ns0:ref type='bibr' target='#b37'>(Saval-Calvo et al., 2015a</ns0:ref>) uses a pre-clustering process by using a k-means algorithm to estimate the clusters and a search tree technique to improve the solutions while considering the prior constraints (angles between planes). Moreover, it extends traditional RANSAC by introducing a novel step for evaluating whether the inliers comply with the prior constraints among pre-clusters. Hence, this method outperforms the previous methods, achieving high accuracy in the final model estimation. However, the introduction of the search tree to calculate the pre-clusters and the step in RANSAC to check the constraints among plane models make the method too slow for some applications.</ns0:p><ns0:p>New methods have been developed based on RANSAC. Prior-MLESAC <ns0:ref type='bibr' target='#b44'>(Zhao et al., 2020)</ns0:ref> is based on the previous maximum likelihood estimation sampling consensus (MLESAC), which improves the extraction of vertical and non-vertical planar and cylindrical structures by exploiting prior knowledge of physical characteristics. Progressive-X (Prog-X) is an any-time algorithm for geometric multi-model fitting using the termination criterion adopted from RANSAC, improving similar methods in terms of accuracy <ns0:ref type='bibr' target='#b4'>(Barath and Matas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Barath et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Regression, which is one of the most referred approaches for plane fitting, is a statistical strategy to solve the problem by finding a model that minimizes the overall error in the data. Ordinary LSE and its variants such as least median of squares (LMS) regression are widely used methods. 
The least squares method is a standard approach used to estimate model parameters by minimizing the squared distances between the observed data and their expected values. In computer vision, it has been widely used as the most popular form of regression analysis in various tasks. However, LSE results are highly influenced by outliers, leading to inconsistent results <ns0:ref type='bibr' target='#b33'>(Mitra and Nguyen, 2003)</ns0:ref>. LMS <ns0:ref type='bibr' target='#b31'>(Massart et al., 1986)</ns0:ref> minimizes the median of the squares and has proved to be more robust than the original LSE. However, it still fails when more than 50% of the data are outliers. Although other robust approaches have been proposed to overcome this problem, such as the least K-th order of square (LKS) or adaptive LKS <ns0:ref type='bibr' target='#b26'>(Lee et al., 1998)</ns0:ref>, the estimation of the optimal parameters requires high computational effort. Consequently, they are not viable for many applications, whereas the original LSE remains one of the most used methods for this purpose, at least as part of more sophisticated systems. LSE has been widely used to estimate planes or planar patches in computer vision, or as a part of robust methods that calculate them. <ns0:ref type='bibr' target='#b2'>Ara&#250;jo and Oliveira (2020)</ns0:ref> provided a new robust statistical approach, robust statistic plane detection (RSPD), for detecting planes in unorganized point clouds to achieve better accuracy and response times than previous approaches.</ns0:p><ns0:p>In recent studies, the problem of multi-model fitting has been addressed using energy functions to balance geometric errors <ns0:ref type='bibr' target='#b22'>(Isack and Boykov, 2012)</ns0:ref> and solve multiple geometric models, where greedy approaches such as RANSAC do not perform properly. These global energy-based approaches search for an optimal solution to fit all models present in the multi-structured data set, usually at high computational costs with respect to the number of models fitted <ns0:ref type='bibr' target='#b0'>(Amayo et al., 2018)</ns0:ref>. Moreover, these methods (PEARL, T-linkage, and CORAL <ns0:ref type='bibr' target='#b22'>(Isack and Boykov, 2012;</ns0:ref><ns0:ref type='bibr' target='#b30'>Magri and Fusiello, 2014;</ns0:ref><ns0:ref type='bibr' target='#b0'>Amayo et al., 2018)</ns0:ref>) have shown their efficiency in solving 2D multi-model fitting problems and homographies, but do not show extensive experimentation in the reconstruction of 3D multiple plane-based models with geometric constraints. <ns0:ref type='bibr' target='#b27'>Lin et al. (2020)</ns0:ref> formulated the problem as a global gradient minimization, proposing an updated method (Global-L0) based on a constraint model that outperforms traditional plane fitting methods.</ns0:p><ns0:p>A review of related works shows that methods for model fitting considering both accuracy and computational cost are needed. We propose a method for reconstructing plane-based models from a set of 3D points, taking advantage of the geometric constraints that are present in the original scene, exhibiting high accuracy in the presence of noise and outliers, and reducing the processing time.
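To make the unconstrained baseline concrete, the following is a minimal sketch (not the authors' implementation, which is written in MATLAB) of an ordinary least-squares fit of a single plane of the form z = t0 + t1*x + t2*y, solved in closed form with numpy; all names and the toy data are illustrative.

```python
import numpy as np

def lse_plane(points):
    """Ordinary least-squares fit of a single plane z = t0 + t1*x + t2*y.

    points: (m, 3) array of 3D points. Returns (t0, t1, t2).
    As discussed above, this estimator is highly sensitive to outliers.
    """
    X = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
    z = points[:, 2]
    theta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return theta

# toy usage: noisy samples of the plane z = 1 + 2x - y
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 1 + 2 * xy[:, 0] - xy[:, 1] + rng.normal(0, 0.01, 200)
print(lse_plane(np.column_stack([xy, z])))  # approximately [1, 2, -1]
```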
We selected the most representative methods for comparison: LSE <ns0:ref type='bibr' target='#b33'>(Mitra and Nguyen, 2003)</ns0:ref> and RANSAC <ns0:ref type='bibr' target='#b14'>(Fischler and Bolles, 1981)</ns0:ref> as classic baseline methods and MC-RANSAC <ns0:ref type='bibr' target='#b37'>(Saval-Calvo et al., 2015a)</ns0:ref>, RSPD <ns0:ref type='bibr' target='#b2'>(Ara&#250;jo and Oliveira, 2020)</ns0:ref>, Prior-MLESAC <ns0:ref type='bibr' target='#b44'>(Zhao et al., 2020)</ns0:ref>, Prog-X <ns0:ref type='bibr' target='#b4'>(Barath and Matas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Barath et al., 2020)</ns0:ref>, and Global-L0 <ns0:ref type='bibr' target='#b27'>(Lin et al., 2020)</ns0:ref> are the most recent, providing a wide range of methods.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>MC-LSE: AN ITERATIVE MULTI-CONSTRAINT LEAST SQUARES ESTI-</ns0:head></ns0:div> <ns0:div><ns0:head>MATION ALGORITHM</ns0:head><ns0:p>In this section, we present our iterative method named MC-LSE for multi-constraint least-squares estimation. After the introduction of our notations in Section 2.1, we present the workflow of MC-LSE in Section 2.2, and it involves three main steps: clustering, linear regression, and reassignment.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Notations and Setting</ns0:head><ns0:p>Let us assume that we have access to a set of m points,</ns0:p><ns0:formula xml:id='formula_0'>P = {p i = (p x i , p y i , p z i ) &#8712; R 3 } m i=1</ns0:formula><ns0:p>, captured by using a 3D camera and assumed to be (likely noisy) representatives of some target planar model (e.g., cube, facade, corridor, and office). The task we are dealing with in this study consists of reconstructing this target model by learning n planes under orthogonality constraints. These constraints take the form of an n &#215; n-matrix, A, where A[ j, k] = 1 if planes j and k must be orthogonal, and 0 otherwise. In this supervised machine learning setting, (p x i , p y i ) plays the role of the input feature vector, and p z i is the dependent variable (corresponding to the depth). We approach this task by solving a joint constrained regression problem (which is described in the next section), where the minimizer takes the form of a set of n models, h &#952; j (p x i , p y i ) (with j = 1..n), that maps linearly from input (p x i , p y i ) to output p z i . h &#952; j (p x i , p y i ) is supposed to provide a good estimation of pz i of p z i as follows:</ns0:p><ns0:formula xml:id='formula_1'>&#952; 0 j + &#952; 1 j p x i + &#952; 2 j p y i + &#952; 3 j pz i = 0.</ns0:formula><ns0:p>Therefore, we deduce that h &#952; j (p x i , p y i ) is defined as follows:</ns0:p><ns0:formula xml:id='formula_2'>h &#952; j (p x i , p y i ) = &#952; 0 j + &#952; 1 j p x i + &#952; 2 j p y i &#8722;&#952; 3 j ,</ns0:formula><ns0:p>where h &#952; j is the j th plane, and &#952; j = (&#952; 0 j , &#952; 1 j , &#952; 2 j , &#952; 3 j ) is the corresponding set of parameters learned from a certain subset of points, P j &#8834; P.</ns0:p><ns0:p>Let N j = (&#952; 1 j , &#952; 2 j , &#8722;&#952; 3 j ) be the normal vector of the j th plane and &#209;l be the normal vector of any set of points P l &#8834; P. &#209;l can be easily computed by selecting the eigenvector corresponding to the smallest eigenvalue of the scatter matrix of P l . 
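As a concrete illustration of this normal estimation, the sketch below (a numpy-based assumption, not the authors' MATLAB code) computes the normal of a point subset as the eigenvector associated with the smallest eigenvalue of its scatter matrix.

```python
import numpy as np

def normal_from_points(P_l):
    """Estimate the normal of a point subset P_l (m, 3) as the eigenvector
    of its scatter matrix associated with the smallest eigenvalue."""
    centered = P_l - P_l.mean(axis=0)
    scatter = centered.T @ centered              # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)   # eigenvalues in ascending order
    n = eigvecs[:, 0]                            # smallest-eigenvalue eigenvector
    return n / np.linalg.norm(n)
```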
Finally, let kNN(p i ) &#8834; P be the set of k-nearest neighbors of p i , given a metric distance.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>MC-LSE</ns0:head><ns0:p>MC-LSE is an iterative algorithm that aims at reconstructing the target model from a set of points captured by using a 3D camera. The workflow of our method is presented in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, where the target model is a cube. It involves three main steps.</ns0:p><ns0:p>1. A clustering-based unsupervised step that consists of initializing n clusters from set P of points that are supposed to represent at most the n planes viewed by the 3D camera. Note that this step is performed only once.</ns0:p><ns0:p>2. A linear regression-based supervised step that learns, under orthogonal constraints (using matrix A), n planes from the clusters.</ns0:p><ns0:p>3. A reassignment step that challenges the membership of the points to the current clusters in a manner that minimizes the residuals of the regression tasks.</ns0:p><ns0:p>Steps 2 and 3 are repeated until no (or only a few) reassignments are performed. </ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.1'>Clustering Step</ns0:head><ns0:p>As illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>, performing clustering from set P might be a tricky task. The points captured by the 3D camera (i) are often noisy representatives of the actual faces of the cube and (ii) may represent overlapping areas, especially at the corners of the cube (Figure <ns0:ref type='figure'>2</ns0:ref> (left)). Therefore, using a standard clustering algorithm with the Euclidean distance (such as K-Means <ns0:ref type='bibr' target='#b15'>(Forgy, 1965)</ns0:ref> as used in this study)</ns0:p><ns0:p>would lead to irrelevant clusters that would tend to bring together points (see p 1 and p 2 in the figure <ns0:ref type='figure'>)</ns0:ref> that likely do not belong to the same plane. To overcome this drawback, an efficient solution consists of projecting a 3D-point, p i , onto a 6-dimensional space by considering not only the original features, (p x i , p y i , p z i ), but also the 3D-normal vector, &#209;kNN(p i ) , of set kNN(p i ) of the nearest neighbors of p i (see Figure <ns0:ref type='figure'>2</ns0:ref> (right)). Let pi &#8712; R 6 be the corresponding point. Set P = { pi &#8712; R 6 } m i=1 is then used as the input for the clustering algorithm that outputs n clusters, P 1 , ..., P n . Note that if the number of points assigned to a given cluster is not large enough (according to a threshold tuned by the user), the corresponding plane is (at least partially) hidden and cannot be properly captured by the camera. In such a case, the cluster is deleted, and only n &#8722; 1 planes are learned. Furthermore, note that the size of the neighborhood, k (which is tuned by cross-validation), affects the homogeneity of the orientation of the normal vectors.</ns0:p><ns0:p>The normal is smoothed for a large neighborhood, but the edges of the objects are also smoothed; thus, they are less descriptive. In contrast, if k is small, the normal vectors are significantly affected by noise and less uniform for a single plane surface. 
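A minimal sketch of this pre-clustering step is given below, assuming scikit-learn's NearestNeighbors and KMeans and the normal_from_points helper sketched earlier. The normal_weight parameter is a hypothetical knob anticipating the weighting of the normal vectors discussed next, and the sign-flip of the normals is a simplification not specified in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def precluster(P, n_planes, k=20, normal_weight=1.0, seed=0):
    """Build the 6D representation (point, kNN normal) and run k-means.

    P: (m, 3) point cloud. Returns cluster labels and the 6D features.
    """
    nbrs = NearestNeighbors(n_neighbors=k).fit(P)
    _, idx = nbrs.kneighbors(P)
    normals = np.array([normal_from_points(P[i]) for i in idx])
    # simple sign convention so nearby normals agree (a simplification)
    normals[normals[:, 2] < 0] *= -1
    P6 = np.hstack([P, normal_weight * normals])
    labels = KMeans(n_clusters=n_planes, n_init=10, random_state=seed).fit_predict(P6)
    return labels, P6
```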
To consider the specificity of the application at hand (quality of the 3D camera, level of noise in the data, and camera point of view), we suggest assigning a weight to the normal vector, which is also tuned by cross-validation.</ns0:p><ns0:p>The next step of the process is optimization under the constraints of parameters &#952; 1 , ..., &#952; n corresponding to the n planes of the target model. Note that even if the clustering step considers the normal vectors to prevent the algorithm from building irrelevant clusters, outliers may still have a considerable impact on the slope of the learned planes at the first iteration of MC-LSE (see Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.A in the case of a target cube). To address this limitation, we suggest selecting landmarks as a certain percentage of points from each cluster, P i , that are the closest (according to the Manhattan distance) to the centroid of P i . Thus, the initialization of our planes should be improved (see Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.B).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.2'>Regression Step</ns0:head><ns0:p>The regression step aims to use the points of the current clusters to fit parameters &#952; 1 , ..., &#952; n of the 3D planes such that the orthogonality constraints of matrix A are satisfied. To achieve this task, only the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>original coordinates of the points (p x , p y , p z ) are used. Planes h &#952; j ( j = 1..n) are of the following form:</ns0:p><ns0:formula xml:id='formula_3'>h &#952; j (p x i , p y i ) = &#952; 0 j + &#952; 1 j p x + &#952; 2 j p y . &#8722;&#952; 3 j</ns0:formula><ns0:p>The corresponding normal vector of each plane is given by N j = (&#952; 1 j , &#952; 2 j , &#8722;&#952; 3 j ). To correctly reconstruct the target planar model, h &#952; 1 , ..., h &#952; n must be learned under constraints such that the normal vectors, N i , N j , are orthogonal; that is,</ns0:p><ns0:formula xml:id='formula_4'>N t i N j = 0 -if A[i, j] = 1.</ns0:formula><ns0:p>Note that these constraints can be rewritten in terms of &#952; i and &#952; j . Let L be a 4 &#215; 4-matrix defined as follows:</ns0:p><ns0:formula xml:id='formula_5'>L = &#63726; &#63727; &#63727; &#63728; 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 &#63737; &#63738; &#63738; &#63739; .</ns0:formula><ns0:p>We can deduce that N t i N j can be rewritten as follows:</ns0:p><ns0:formula xml:id='formula_6'>N t i N j = &#952; i L&#952; j .<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Planes h &#952; j (p) ( j = 1..n) is learned to find the 4 &#215; n matrix &#952; = (&#952; 1 , ..., &#952; n ) as the minimizer of the following constrained optimization least-squares problem.</ns0:p><ns0:formula xml:id='formula_7'>min &#952; n &#8721; i=1 (&#952; 0 i ,&#952; 1 i ,&#952; 2 i ) T X i &#8722;&#952; 3 i &#8722; Z i 2 s.t. 
&#952; T i L&#952; j = 0 if A[i, j] = 1 0 &lt; &#952; 3 i &#8804; 1 (2)</ns0:formula><ns0:p>Here,</ns0:p><ns0:p>&#8226;</ns0:p><ns0:formula xml:id='formula_8'>X i = {(1, p x k , p y k )} |P i ]</ns0:formula><ns0:p>k=1 is the set of training examples of the i th current cluster, P i , used to learn the i th plane.</ns0:p><ns0:formula xml:id='formula_9'>&#8226; Z i = {p z k } |P i ]</ns0:formula><ns0:p>k=1 is the set of dependent values.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.3'>Reassignment Step</ns0:head><ns0:p>The objective of the reassignment is to challenge the membership of the points to the clusters, allowing us to better fit, using a step-by-step method, the parameters of the planes. Similar to the clustering step, the reassignment of p j , assumed to be currently part of cluster P l , accounts for both the original features, (p x j , p y j , p z j ), and the 3D-normal vector, &#209;kNN(p j ) , to prevent two close examples belonging to two different faces from being reassigned to the same cluster. The only difference is that the normal vector, &#209;kNN(p j ) , is calculated from the nearest neighbors that belong to P l instead of from the entire dataset, P. Let p j be the projection of p j in R 6 .</ns0:p><ns0:p>The reassignment of every point, p j , consists of looking for the closest plane, h &#952; i , in terms of the Euclidean distance and assigning it to the corresponding cluster. In other words, this procedure selects a plane that minimizes the residuals. As illustrated in Figure <ns0:ref type='figure'>4</ns0:ref>, this procedure allows us to significantly improve the quality of the clusters, having a positive impact on the planes optimized at the next iteration.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>EXPERIMENTS</ns0:head><ns0:p>In this section, we present an extensive experimental study of the proposed algorithm, MC-LSE. After the presentation of the experimental setup, we report the results of comparisons with previous methods in terms of several evaluation criteria. Then, we present an analysis of MC-LSE according to different levels of noise added to the data. The results were used for the cases presented in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>7/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_10'>p j kNN(p j ) &#209; kNN(p j ) N 1 N 2 p j h &#1012;1 p j h &#1012;2 N 1 N 2 Figure 4.</ns0:formula><ns0:p>Illustration of the reassignment step: distance calculation between the clusters and the estimated planes, h &#952; i (left); reassignment according to the minimum distance (right).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Experimental setup</ns0:head><ns0:p>Synthetic data from a target cube were used for the experiments. Without loss of generality, this setup allows us to quantitatively compare the methods on a simple target model by having access to the ground truth (analytical expression of the true planes of the cube). In this cube-based scenario, the number of learned planar models was set to n = 3, and matrix A was based on 2-by-2 orthogonality constraints, as presented in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>.</ns0:p><ns0:p>Synthetic data were generated by simulating a Microsoft Kinect sensor with Blensor <ns0:ref type='bibr' target='#b18'>(Gschwandtner et al., 2011)</ns0:ref>. 
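The constrained problem of Eq. (2) is solved by the authors with MATLAB and YALMIP; the following is only a rough, scipy-based sketch of the same idea, with the orthogonality requirements expressed as equality constraints and the bound on the fourth parameter handled through box bounds. All names are illustrative, and the initial parameters theta0 can come, for instance, from per-cluster unconstrained fits on the landmarks.

```python
import numpy as np
from scipy.optimize import minimize

def fit_constrained_planes(clusters, A, theta0):
    """Sketch of Eq. (2): jointly fit n planes h_j(x, y) = (t0 + t1*x + t2*y) / (-t3)
    subject to t_i^T L t_j = 0 whenever A[i, j] = 1 and 0 < t3 <= 1.

    clusters: list of (m_j, 3) arrays, one per current cluster.
    A: (n, n) binary constraint matrix (numpy array). theta0: (n, 4) initial guess.
    """
    n = len(clusters)
    L = np.diag([0.0, 1.0, 1.0, 1.0])   # the L matrix defined above

    def residuals(theta_flat):
        theta = theta_flat.reshape(n, 4)
        err = 0.0
        for j, P in enumerate(clusters):
            t0, t1, t2, t3 = theta[j]
            pred = (t0 + t1 * P[:, 0] + t2 * P[:, 1]) / (-t3)
            err += np.sum((pred - P[:, 2]) ** 2)
        return err

    cons = [{'type': 'eq',
             'fun': lambda t, i=i, j=j: t.reshape(n, 4)[i] @ L @ t.reshape(n, 4)[j]}
            for i in range(n) for j in range(i + 1, n) if A[i, j] == 1]
    bounds = [(None, None)] * (4 * n)
    for j in range(n):                   # strict lower bound on t3 relaxed to 1e-6
        bounds[4 * j + 3] = (1e-6, 1.0)
    res = minimize(residuals, theta0.ravel(), method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x.reshape(n, 4)
```

The reassignment step of Section 2.2.3 can then be sketched as assigning each 6-dimensional point to the plane that minimizes this residual/distance, which is what drives the next iteration.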
Blensor allows us to create a cube and obtain images from different points of view by moving a virtual camera. The experiments included eight points of view of the cube (see Fig. <ns0:ref type='figure'>5</ns0:ref>), simulating a counter-clockwise self-rotation of the object to validate the algorithm in terms of different geometrical and cluster characteristics. The method was implemented with MATLAB 2019 and YALMIP, a toolbox for modeling and optimization <ns0:ref type='bibr' target='#b28'>(L&#246;fberg, 2004)</ns0:ref>.</ns0:p><ns0:p>Note that different levels of Gaussian noise (mean &#181; = 0 and standard deviation &#963; from 1.00E-05 to 1.00E-04) were added to the synthetic data to evaluate the robustness of the methods (see Fig. <ns0:ref type='figure'>6</ns0:ref>).</ns0:p><ns0:p>Four performance criteria (see Fig. <ns0:ref type='figure'>7</ns0:ref>) were used to evaluate the methods. The first two aim at evaluating the global accuracy of the learned models with respect to the ground truth. The last two are used to assess the robustness of the methods in the presence of noisy data:</ns0:p><ns0:p>&#8226; Angle error: the root mean square (RMS) of the angles (in degrees) formed by each fitted plane and the corresponding ground-truth plane of the model.</ns0:p><ns0:p>&#8226; Model error: the mean of the differences between the angles (in degrees) among the learned planes that compose the model and the corresponding angles in the ground truth.</ns0:p><ns0:p>&#8226; Distance error: the RMS of the Euclidean distance in R 6 between the points and their closest plane. This quantity provides an insight into the residuals of the linear regressions.</ns0:p><ns0:p>&#8226; Cluster error: the percentage of points incorrectly clustered compared to the ground truth.</ns0:p><ns0:p>Table 1 (matrix of constraints A): A[P 1 , P 2 ] = A[P 1 , P 3 ] = A[P 2 , P 3 ] = 1, i.e., the three faces of the cube are pairwise orthogonal.</ns0:p><ns0:p>Figure 6. Synthetic data for different levels of Gaussian noise: &#963; = 1 &#8226; 10 &#8722;5 (A) up to &#963; = 10 &#8226; 10 &#8722;5 (J), in steps of 1 &#8226; 10 &#8722;5 .</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Experimental Comparison</ns0:head><ns0:p>MC-LSE is compared to seven other methods as mentioned previously. To assess the impact of the orthogonal constraints, we employed an ordinary LSE and RANSAC <ns0:ref type='bibr' target='#b14'>(Fischler and Bolles, 1981)</ns0:ref> as baseline methods.
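For illustration, the two accuracy criteria defined above can be computed from plane normals as in the short sketch below (an assumed numpy formulation, not the evaluation code used in the paper).

```python
import numpy as np

def angle_deg(n1, n2):
    """Unsigned angle (degrees) between two plane normals."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def angle_error(fitted_normals, gt_normals):
    """RMS of the angles between each fitted plane and its ground-truth plane."""
    a = [angle_deg(f, g) for f, g in zip(fitted_normals, gt_normals)]
    return float(np.sqrt(np.mean(np.square(a))))

def model_error(fitted_normals, gt_normals):
    """Mean absolute difference between the pairwise angles of the fitted model
    and the corresponding pairwise angles of the ground truth."""
    n = len(fitted_normals)
    diffs = [abs(angle_deg(fitted_normals[i], fitted_normals[j]) -
                 angle_deg(gt_normals[i], gt_normals[j]))
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(diffs))
```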
We also compared MC-LSE with state-of-the-art methods such as multi-constraint RANSAC (MC-RANSAC) <ns0:ref type='bibr' target='#b37'>(Saval-Calvo et al., 2015a)</ns0:ref>, RSPD <ns0:ref type='bibr' target='#b2'>(Ara&#250;jo and Oliveira, 2020)</ns0:ref>, Prior-MLESAC <ns0:ref type='bibr' target='#b44'>(Zhao et al., 2020)</ns0:ref>, Prog-X <ns0:ref type='bibr' target='#b5'>(Barath et al., 2020)</ns0:ref>, and Global-L0 <ns0:ref type='bibr' target='#b27'>(Lin et al., 2020)</ns0:ref>, providing an extensive comparison of our method.</ns0:p><ns0:p>For the sake of comparison, the methods that allow a previous clustering (LSE, MC-RANSAC, and RANSAC) use the same pre-clusters as proposed in this paper. Hence, the RANSAC method used in the comparison could be considered as a variant of the related methods based on CC-RANSAC <ns0:ref type='bibr' target='#b16'>(Gallo et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b46'>Zhou et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b45'>Zhou et al., , 2013))</ns0:ref>. Because these methods consider the consensus set, the largest connected Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_14'>&#209; 1 &#209; 2 N 1 N 2 &#945; &#946; &#945; &#946; (A) &#945; &#945; &#946; (B) p j kNN(p j ) &#209; kNN(p j ) p j h &#1012;1 &#209; 1 &#209; 2 kNN(p i ) &#209; kNN(p i ) p i h &#1012;2 p i (C)</ns0:formula><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>. Performance criteria used to evaluate methods. The angle error (A) is calculated using the angles between the ground truth (solid line) and the estimated plane (dotted line). In this example, it is (&#945; 2 + &#946; 2 ) /2. The model error (B) considers the angles among the planes that compose the object compared with those of the ground truth (solid line). In this example, for an object composed of two planes, it is &#945; &#8722; &#946; /1. Finally, the distance error (C) can be calculated as</ns0:p><ns0:formula xml:id='formula_15'>(p j , NkNN(p j ) ) &#8722; (p h &#952; 1 j , N1 ) 2 + (p i , NkNN(p i ) ) &#8722; (p h &#952; 2 i , N2 ) 2 /2</ns0:formula><ns0:p>component of inliers within the spatial neighborhood and within normal vector information coherence, the use of these clusters allows them to be pre-calculated. In other words, the RANSAC checking step to conform to the consensus set considers only the patches previously calculated by k-means that contain coherence in the spatial neighborhood as well as in the normal vector. The RSPD, Prior-MLESAC, Prog-X, and Global-L0 methods do not allow the use of the initial pre-clustering because they perform a different pre-treatment that is part of the proposed method such as Prior-MLESAC (calculation of curvature characteristics, normals, etc.), a progressive sampling schema (Prog-X), global optimization approach considering the whole data (Global-L0), or a growth scheme (RSPD).</ns0:p><ns0:p>The results are reported in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>, based on which, we can make the following remarks. MC-LSE outperforms all the other methods in terms of the angle error and model error (being the most reliable performance parameters in terms of model fitting). 
It is worth noting that this is true regardless of the level of noise added to the data, and the advantage of MC-LSE is even higher with an increasing level of noise.</ns0:p><ns0:p>Moreover, our constrained-optimization problem allows us to automatically satisfy the orthogonality requirements (i.e., the model error is always equal to 0), whereas the other methods suffer from an increasing inability to fulfill the 90-degrees constraint. Specifically, the RSPD method suffers more due to its use of the bottom-up patch growth scheme that detects several planes in the same cluster with a large angle deviation. The two other criteria (distance error and cluster error) provide some insight into the capacity of the methods to fit models from relevant clusters, to reduce the residuals and to segment the input data.</ns0:p><ns0:p>For the former, the two best performing methods are MC-RANSAC and MC-LSE, with very similar results for all noise levels, being the best on average. Except for LSE and RANSAC, all obtained results are very similar because the methods make use, among others, of the point distances to the estimated plane in their optimization processes to perform the fitting. In terms of the cluster error, the worst results are obtained by RSPD because it detects several planar patches per plane. LSE and RANSAC obtain the second-worst results because they only make use of partial data (pre-clusters) to obtain the planes and, consequently, have local minima. The methods that consider global constraints (MC-RANSAC, Global-LO, Prog-X, and ours) obtain very similar results, with a variation of approximately 1% of errors in the obtained clusters.</ns0:p><ns0:p>Finally, it is worth noting that the Prior-MLESAC (also considering the whole data) does not perform a complete assignment of the points of the scene to a cluster, and the results are better than the others. Thus, although MC-LSE does not obtain the best results in terms of clusters, it is capable of achieving the best fit.</ns0:p><ns0:p>Our objective is to compare MC-LSE with other methods, in terms of not only its error, but also its computational cost. It is worth remembering that MC-LSE is based on an iterative process that is repeated until convergence. Formally, convergence is reached if no reassignment is performed between the two iterations. However, note that if only a few points are assigned to the wrong cluster, MC-LSE is not prevented from learning a good model. To evaluate this behavior, we performed three additional Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>experiments consisting of stopping MC-LSE when no more than 3%, 5%, and 10% of the points change between two iterations. Figure <ns0:ref type='figure' target='#fig_5'>8</ns0:ref> shows a plot of the joint behavior of MC-LSE in terms of both the running time (in seconds) and angle error in the four settings: convergence until no (red ball), no more than 3% (cyan ball), no more than 5% (brown ball), and no more than 10% (blue point) of reassignment.</ns0:p><ns0:p>We also report the results of MC-RANSAC (blue diamond), RANSAC variant (purple square), Global-L0 (orange right arrow), RSPD (purple left arrow), Prior-MLESAC (green triangle), and Prog-X (blue star). It is important to note that LSE, RANSAC, Prior-MLESAC, MC-RANSC, and MC-LSE were implemented in MATLAB 2019, whereas Global-L0, RSPD, and Prog-X were implemented in C++, as described by the authors of the papers. 
Therefore, the latter obtains results in a faster time than the MATLAB version.</ns0:p><ns0:p>We can see that by relaxing the convergence constraint, MC-LSE is still very accurate with an execution time very close to the minimum provided by Global-L0, Prog-X, and RANSAC (not counting LSE and RSPD as the fastest but worst results). The RANSAC variant is faster but much less accurate, whereas MC-RANSAC yields a small error, but is much more time-consuming. The implementations of Prog-X and Global-L0 (in C++) provide similar processing times with better performance for Prog-X. Although the median of Prog-X is similar to that of MC-LSE, the results are very scattered, ranging from less than 1 degree to more than 10 for certain acquisitions. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Noise</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Specific Analysis of MC-LSE</ns0:head><ns0:p>Once the comparative analysis is performed, in this section, we focus on the analysis of MC-LSE. First, the pre-clustering results obtained by the clustering-based unsupervised step (Section 2.2.1) are analyzed.</ns0:p><ns0:p>Subsequently, the behavior of MC-LSE according to the number of iterations and processing time is analyzed. Next, view V5 is analyzed in detail because the results are the most different compared to the rest of the views. The main difference from the other views is that the data are highly imbalanced.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Pre-clustering results</ns0:head><ns0:p>The first step of MC-LSE is to obtain a set of pre-clusters in the scene. The calculation of this first step is critical in the final result because the entire process starts from it. Therefore, in this section, we analyze the results obtained for the four most representative clustering methods: Gaussian mixture model-based expectation and maximization (GMM-EM) <ns0:ref type='bibr' target='#b3'>(Ari and Aksoy, 2010)</ns0:ref>, hierarchical clustering (HC) <ns0:ref type='bibr' target='#b8'>(Cai et al., 2014)</ns0:ref>, self-organizing maps (SOM) <ns0:ref type='bibr' target='#b32'>(Mingoti and Lima, 2006)</ns0:ref>, and k-means <ns0:ref type='bibr' target='#b15'>(Forgy, 1965)</ns0:ref>. It should be noted that the clustering performed with SOM applies k-means to the map generated from the set P of normal vectors and 3D points (see Section 2.2.1). Manuscript to be reviewed Computer Science the model error. It is not calculated as the proposed method meets the geometrical constraints (see results in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>). Additionally, the cluster error is calculated for the pre-clusters resulted of the unsupervised methods (pre-cluster error in the Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref>).</ns0:p><ns0:p>First, the error of the clusters generated in the first step (pre-cluster error) and the result of the whole process (cluster error) are analyzed. It can be seen that the best method is GMM-EM, which optimizes the GMM distributions. This is the best result in all cases except for the lowest noise level (1.E-05). Because the noise used for testing is Gaussian noise, it can explain the very good results. However, in comparison to the final result of the process, it can be seen that all methods, on average, generate similar results, with k-means being the best. 
In addition, the GMM-EM algorithm provides the best results for noise levels of 2.E-05, 4.E-05, 8.E-05, and 1.E-04. As can be seen, the GMM-EM and HC methods show an excessively high deviation, resulting in very good results in some views and very bad results in others. Therefore, good pre-cluster results do not guarantee a good final result because it depends on the set of points in the pre-cluster that may be in different planes of the real model. This case will be analyzed later with View 5 in Section 3.3.3.</ns0:p><ns0:p>Finally, according to the distance error variable for all noise levels, similar results were obtained (again, except for GMM-EM with high standard deviations). In any case, the angle error and model error could be the most reliable variables for defining the accuracy of the method. However, as previously mentioned, because the method meets the geometrical constraints, the model error is 0. However, angle error shows that both SOM and k-means provide the best results, with k-means being the one with the best average error (0.66 degrees). In all cases for k-means, the RMS error is less than 1 degree except for the maximum level of noise (1.E-04) and having small deviations, indicating its capability to obtain very accurate results regardless of the viewpoint.</ns0:p><ns0:p>To conclude the study of the pre-clustering results, the influence of the input data on the pre-clustering calculation step was also analyzed. As discussed in Section 2.2.1, the clustering algorithm considers not only the original features, p i = (p x i , p y i , p z i ), but also the 3D-normal vector, &#209;kNN(p i ) , of set kNN(p i ) of the nearest neighbors of p i to conform to the corresponding points, pi &#8712; R 6 , used as inputs of the clustering algorithm. In the Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>, the results of using only &#209;kNN(p i ) (Normals), only original features p i or both, pi (Normals and points), according to the variables studied previously are shown. As can be seen, the use of normal vectors improves the performance of MC-LSE. In addition, the most reliable parameter angle error is less than 1 degree using normal vectors, and more than 14 degrees if only the original features p i are used. In this case, MC-LSE is not capable of extracting accurate plane models. From these data, it can be concluded that the use of normal vectors is very important in the clustering step and, consequently, in the results obtained by the method. Furthermore, in this case, although practically the same results are obtained, it is better to use pi as inputs to separate planes that have the same orientation but are in different positions of the scene. We explain this case in the room reconstruction of the case study described in Section 3.4.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2'>Convergence and processing time results</ns0:head><ns0:p>The method converges very quickly, as shown in Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref> (mean and standard deviation) and Fig. <ns0:ref type='figure'>9</ns0:ref>. The behavior in terms of convergence of the method is similar for the different levels of noise and views, except for view V5. It can converge in six or fewer iterations, even for high levels of noise (up to 7E-5).</ns0:p><ns0:p>For the highest levels of noise (from 8E-5 to 1E-4) slightly increased, with a mean of approximately 6.5 iterations for this range. The worst case is view V8 for the 1E-4 level of noise, reaching up to nine iterations. 
Regarding view V5, it requires the largest number of iterations to converge for all levels of noise (except for level 2E-5), reaching the maximum for noise levels 6.0E-05 and 10.0E-05, where it requires a total of 12 iterations. Hence, this view is analyzed in depth in the following subsection.</ns0:p><ns0:p>Given only the number of iterations to converge, the variation in the quality of the solution in each iteration is not considered. Hence, it is interesting to analyze the number of iterations according to the angle error as the most precise accuracy parameter studied in the previous section. Fig. <ns0:ref type='figure' target='#fig_1'>10</ns0:ref> shows the behavior of this variable with respect to the number of iterations. It represents the mean of the quality variable for the different views at a specific level of noise. The error drops significantly in the first iteration, with a behavior similar to an exponential decrease. Moreover, the error does not decrease significantly after approximately 4-5 iterations for any level of noise.</ns0:p><ns0:p>It is important to remember that the method converges when the number of points reassigned to a different cluster with respect to the previous iteration is lower than a certain threshold. In the experiments, the threshold is 0, which means that the method does not stop as long as changes in the assignment continue to occur. However, as shown in Fig. <ns0:ref type='figure' target='#fig_1'>11</ns0:ref> for a noise level of 10.0E-05 (the level with most iterations), in the first four iterations, the percentage of points that change is less than 1% except for views V5 and V6. In iteration 5, view V6 decreases from 2.62% to 0.75%, and view V5 decreases from 3.58% to 2.08%. From iteration 6 onwards, in all cases, it is less than 1%. After iteration 8, the change in view V5 was 0.3%.</ns0:p><ns0:p>The fitting time and reassignment time are completely dependent on the iterations made by the method, and they are not affected by noise. In the experiments, the fitting time is also not dependent on the number of points in the input data: the method can provide a model fitted to the input data within 0.2463 &#177; 0.0151 s per iteration. The reassignment time depends on the number of planes that compose the model to be fitted and on the number of input data points.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.3'>Analysis of view V5: a difficult case</ns0:head><ns0:p>The behavior of the results for view V5 is different from that of the rest. The face P 2 in view V5 (Fig. <ns0:ref type='figure'>5E</ns0:ref>) is highly imbalanced with respect to the other faces P 1 and P 3 . Moreover, the normal vector of P 2 is approximately orthogonal to the point of view of the camera. In this section, we focus on the results for the highest level of noise (Fig. <ns0:ref type='figure' target='#fig_8'>12</ns0:ref>).</ns0:p><ns0:p>First, the pre-clusters calculated by the clustering-based unsupervised step (Sect. 2.2.1) can be analyzed in Fig. <ns0:ref type='figure' target='#fig_8'>12A</ns0:ref>. Although the pre-clusters of planes P 1 (green) and P 3 (blue) are mainly distributed around the corresponding planes, the points pre-clustered as P 2 (red) are distributed on planes P 2 and P 3 . These results can be analyzed quantitatively in Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref> (left). The points predicted as P 2 are 33.7% for the actual face, whereas 55% of the predictions are for points belonging to P 3 .
In other words, the pre-cluster is distributed over the two faces P 2 and P 3 , with most of the points incorrectly classified.</ns0:p><ns0:p>For the improved initialization of the planes, landmarks are selected using the 50% of the points of each cluster P i that are closest to its centroid. The results are shown in Fig. <ns0:ref type='figure' target='#fig_8'>12B</ns0:ref> and quantitatively analyzed in Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref> (middle). The landmarks for planes P 1 and P 3 are well calculated, but the results for P 2 are even worse: the points predicted as P 2 are 33.3% for the actual face, whereas 63.9% of the predictions are for points belonging to P 3 .</ns0:p><ns0:p>Finally, Fig. <ns0:ref type='figure' target='#fig_8'>12C</ns0:ref> shows the fitted model after convergence, together with the corresponding confusion matrix. The model was fitted perfectly to the data. The errors for the clusters were mainly distributed at the intersection of the planes and at the edges of the faces. For example, 21% of the points predicted as P 2 for actual P 3 points are those at the edges. Because the number of points of cluster P 2 is smaller than that of the others, approximately 10% of the captured points, its relative error is higher.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Cases of study: reconstructing scenes</ns0:head><ns0:p>This section presents two case studies to qualitatively evaluate the performance of MC-LSE in realistic situations. Specifically, this experiment consists of registering a set of 3D point clouds using a state-of-the-art method, &#181;-MAR <ns0:ref type='bibr' target='#b38'>(Saval-Calvo et al., 2015b)</ns0:ref>, which uses multiple 3D planar surfaces to find the transformations between them. To obtain an accurate model, the estimation of the transformations to align views is critical. &#181;-MAR uses the model of the planes instead of the actual 3D data to reduce noise effects and, hence, improve registration. By using multiple non-coplanar planes, the method estimates the correspondences between views and calculates the transformation using the normals and centroids of the planes. The method uses the normals of both the fixed and moving sets of plane models to determine the rotation. Next, the translation is iteratively calculated by projecting the centroids of the moving set onto the fixed set and minimizing the distances. For more details, refer to the original paper <ns0:ref type='bibr' target='#b38'>(Saval-Calvo et al., 2015b)</ns0:ref>. Because the final registration result highly depends on the accuracy of the planar models, the better those models are, the more accurate the result will be.</ns0:p><ns0:p>Section 3.4.1 shows the reconstruction of two objects that are part of the original dataset of &#181;-MAR. As in the original study, the evaluation was performed by visual inspection because there is no ground truth to compare with.
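&#181;-MAR itself is considerably more involved; the following is only a toy sketch of the plane-based alignment idea described above: a rotation estimated from corresponding plane normals (a Kabsch/SVD-style solution) followed by a translation derived from the plane centroids. In &#181;-MAR the translation is refined iteratively by projecting the moving centroids onto the fixed planes; that refinement is omitted here, and all names are illustrative.

```python
import numpy as np

def rotation_from_normals(moving_normals, fixed_normals):
    """Kabsch-style rotation aligning corresponding plane normals ((n, 3) arrays)."""
    H = np.asarray(moving_normals).T @ np.asarray(fixed_normals)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # keep a proper rotation (det = +1)
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def align_views(moving_normals, moving_centroids, fixed_normals, fixed_centroids):
    """Toy plane-based alignment: rotation from normals, translation from centroids."""
    R = rotation_from_normals(moving_normals, fixed_normals)
    rotated = (R @ np.asarray(moving_centroids).T).T
    t = np.mean(np.asarray(fixed_centroids) - rotated, axis=0)
    return R, t
```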
Next, section 3.4.2 presents a reconstruction of a scene composed of multiple orthogonal planes (walls and ceiling), the models of which have been estimated using MC-LSE, and the &#181;-MAR has been used to align the views.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1'>Object reconstruction</ns0:head><ns0:p>As previously mentioned, here, we show the reconstruction of two objects using a 3D planar marker registration method, where both MC-RANSAC and MC-LSE methods are used to estimate the planes of the markers (cubes) under orthogonal constraints. The planes extracted for the cubes were registered using the &#181;-MAR method. Finally, the transformations calculated for the cube are applied to the point clouds of the objects to reconstruct them.</ns0:p><ns0:p>The objects were a bomb and a taz toy. Figure <ns0:ref type='figure' target='#fig_2'>13</ns0:ref> shows the two objects and the configuration with the 3D markers around them. The set-up includes a turntable that rotates using a stepper motor and an RGB-D camera that takes color and depth images in every step of the motor. The table was covered with a blue fabric to ease the segmentation of the objects and markers.</ns0:p><ns0:p>After applying the &#181;-MAR registration method, we obtained an aligned point cloud. The result is shown in Figure <ns0:ref type='figure' target='#fig_1'>14</ns0:ref>, where the markers were removed from the scene for a clearer interpretation. Although both methods provide good results, small improvements can be observed using MC-LSE. In the taz toy, the right leg showed better alignment. The bomb toy in 14-(b) shows a more rounded shape. For better visualization, Figure <ns0:ref type='figure' target='#fig_3'>15</ns0:ref> shows the details of the reconstruction. The first row presents a zoomed view of the leg in the Taz toy, where the MC-LSE planes achieve better reconstruction than the planes of MC-RANSAC. The second row shows the bomb toy, where the object reconstructed by MC-LSE is more round, and the eyes are more defined (i.e., the views are better aligned).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>16</ns0:ref> shows a bottom view where the markers (cubes) have not been removed to provide another reference to evaluate the accuracy of the alignment. In general, the markers are better aligned in MC-LSE, with the right cube a clear example with a more compact shape.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.2'>Room reconstruction</ns0:head><ns0:p>In the second part of the case study, we present an indoor scene reconstruction composed of multiple orthogonal planes (walls and ceiling) registered using the &#181;-MAR algorithm, to estimate planes extracted by the proposed method. This application is related to the SLAM problem in robot localization and indoor building reconstruction.</ns0:p><ns0:p>The point cloud was captured using factory calibration for the Kinect camera. The reconstruction is more challenging than in the previous case of study because it has to deal with the same problems as before, but with larger errors; the optical aberration is worse than the fact that most points are closer to the border of the image (Fig. <ns0:ref type='figure' target='#fig_1'>17B</ns0:ref>) and the planes are further (about 4 meters in some views) increasing the noise in the point cloud (Fig. <ns0:ref type='figure' target='#fig_1'>17C</ns0:ref>).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>18</ns0:ref> shows the five acquired data viewpoints of the room. 
&#181;-MAR uses pairwise alignment; hence, every two consecutive frames have some planes in common. Overlapped on the images are the clusters corresponding to the planes. For this case study, the preclusters corresponding to planes have been manually segmented because it is not a core contribution of the study. Moreover, the constraints have been relaxed, not being strictly orthogonal due to the curves present in the planes as shown in Fig. <ns0:ref type='figure' target='#fig_1'>17B</ns0:ref>.</ns0:p><ns0:p>The final reconstruction is shown in Figure <ns0:ref type='figure' target='#fig_10'>19</ns0:ref>. As can be seen, the final alignment is correct, and the geometrical constraints are preserved using the plane models estimated by MC-LSE. The capabilities and robustness of the proposed method for estimating planes even in the presence of high noise and optical aberration can be seen in the planes estimated in Figure <ns0:ref type='figure' target='#fig_11'>20</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>DISCUSSION AND CONCLUSIONS</ns0:head><ns0:p>In this paper, we propose MC-LSE, an iterative method to accurately reconstruct objects composed of planes from 3D point clouds captured by cameras. The fundamental aspect of this method is the use The generalization of the proposal for many orthogonal planes is direct, and strict constraints could be incorporated without changing the proposal, as we analyzed in the scene reconstruction case study. This case is frequent for non-calibrated cameras, which are not able to provide orthogonal planes even though they exist because of the geometric distortions of the camera. Hence, it is recommended that intrinsic calibration be used to minimize the effects of optical distortions on the image.</ns0:p><ns0:p>In future research, we plan to model other geometrical constraints for planes that can be useful for other applications (for example, we plan to use other geometrical objects for calibration purposes of a multi-camera system). At the same time, we would like to introduce a random selection of the landmarks of the first iteration to not depend on the first selection of points. This could be approached from a RANSAC point of view but speeding up the solution based on the characteristics of the method. This is because it could provide the hypothesis of the planar model considering all planes while reducing the number of iterations that the method should perform. Finally, the clustering-based unsupervised step could be improved by incorporating other characteristics of the scene as the Prior-MLESAC does to reduce the iterations needed to obtain the model fitting and, consequently, the processing time.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>ACKNOWLEDGMENTS</ns0:head><ns0:p>This work was supported by the French ANR project LIVES (ANR-15-CE23-0026-03) and the Spanish State Research Agency (AEI) and the European Regional Development Fund (FEDER) under project TIN2017-89069-R. There was no additional external funding received for this study.</ns0:p></ns0:div> <ns0:div><ns0:head>26/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
Workflow of MC-LSE involving three main steps.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3. Selection of landmarks to improve the initialization of the iterative procedure. On the left: normal vectors of the three visible faces (black arrows) that would be obtained from the planes generated from the whole pre-clusters. Because of the presence of noise, the quality of the reconstructed target model (here, a cube) is low. On the right: the quality of the induced normal vectors is much higher by selecting landmarks from each cluster.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 Figure 5 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 1 Figure 5. Point clouds captured by the camera from different views of the target cube: View 1 (A), View 2 (B), View 3 (C), View 4 (D), View 5 (E), View 6 (F), View 7 (G), View 8 (H)</ns0:figDesc><ns0:graphic coords='10,348.79,199.74,88.68,94.19' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Comparison of plane estimation methods with respect to processing time and angle error</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Iterations needed for MC-LSE to converge</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure12. Noisy points (10E-5) captured in view V5, preclusters (A), landmarks (B), and fitted model (C) for view 5. Planes P 1 (green), P 2 (red), and P 3 (blue)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>from the frame in Figure18, which are far from the camera and close to the borders in the image; hence, the proposed method is very noisy and curved. In this figure, the three planes of the frame are shown along with the fitted model planes. The ceiling is the top part of the figure with the red model; on the right with turquoise is the frontal wall with shelves; and in green is the side wall beside the window. Note the high amount of noise and curved data in which accurate plane models are estimated by MC-LSE.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19. Full model reconstruction of the scene using the data acquired by the Kinect.</ns0:figDesc><ns0:graphic coords='27,141.73,63.78,413.57,171.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20. Detail of a noisy frame a) in Fig. 18. 
The figure shows the three models of the planes in red, turquoise, and green properly estimated despite the noise.</ns0:figDesc><ns0:graphic coords='27,152.07,269.03,392.90,158.86' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,167.11,288.81,362.84,204.16' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,141.73,63.78,413.57,151.07' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Construction of the pre-clusters using the normal vectors of the points. For simplicity, only two faces are considered here. On the left: a set of noisy points captured by the 3D camera. Points p 1 and p 2 are very close to each other according to the Euclidean distance in R</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>p 1</ns0:cell><ns0:cell>p 2</ns0:cell><ns0:cell>&#209; kNN(p 1 )</ns0:cell><ns0:cell /><ns0:cell>&#209; kNN(p 2 )</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>kNN(p 1 )</ns0:cell><ns0:cell>p 1</ns0:cell><ns0:cell>p 2</ns0:cell><ns0:cell>kNN(p 2 )</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Landmarks Figure 2. Pre-clusters Pre-clusters 1 st iteration Model</ns0:cell><ns0:cell>1 st iteration Model</ns0:cell></ns0:row><ns0:row><ns0:cell>(A)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(B)</ns0:cell></ns0:row></ns0:table><ns0:note>5/29PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)Manuscript to be reviewed 3 . On the right: a 6-dimensional feature vector is used to represent each point. The three additional features correspond to the normal 3D-vector, &#209;kNN(p i ) , calculated from the neighborhood, kNN(p i ), of each point, p i . In this way, p 1 and p 2 are no longer close and will probably belong to two different clusters, C 1 and C 2 , shown in blue and green, respectively.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Target cube and the corresponding matrix of constraints A.</ns0:figDesc><ns0:table /><ns0:note>8/29PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 -</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>continued from the previous page Noise Method Distance error Cluster error Angle error Model error RSPD 5.10 &#177; 1.34 36.92 &#177; 16.45 12.35 &#177; 11.77 10.39 &#177; 10.42</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>5.24 &#177; 0.69</ns0:cell><ns0:cell>2.23 &#177; 0.93</ns0:cell><ns0:cell>5.87 &#177; 3.75</ns0:cell><ns0:cell>3.29 &#177; 1.95</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>6.35 &#177; 4.25</ns0:cell><ns0:cell>6.49 &#177; 2.30</ns0:cell><ns0:cell>6.86 &#177; 16.84</ns0:cell><ns0:cell>3.74 &#177; 9.75</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>3.37 &#177; 0.36</ns0:cell><ns0:cell>5.62 &#177; 2.27</ns0:cell><ns0:cell>0.57 &#177; 0.40</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>6,00E-05 LSE</ns0:cell><ns0:cell>12.17 &#177; 1.15</ns0:cell><ns0:cell>28.06 &#177; 9.92</ns0:cell><ns0:cell>34.70 &#177; 3.65</ns0:cell><ns0:cell>58.43 &#177; 2.95</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell cols='2'>5.26 &#177; 2.40 10.06 &#177; 10.86</ns0:cell><ns0:cell>4.36 &#177; 2.64</ns0:cell><ns0:cell>6.72 &#177; 9.00</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>4.19 &#177; 0.94</ns0:cell><ns0:cell>6.77 &#177; 2.33</ns0:cell><ns0:cell>1.86 &#177; 1.39</ns0:cell><ns0:cell>0.48 &#177; 0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>4.82 &#177; 0.33</ns0:cell><ns0:cell>5.50 &#177; 2.14</ns0:cell><ns0:cell>10.87 &#177; 2.65</ns0:cell><ns0:cell>10.08 &#177; 1.94</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell cols='4'>6.11 &#177; 2.19 40.92 &#177; 10.48 28.24 &#177; 13.33 24.01 &#177; 13.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>5.86 &#177; 0.86</ns0:cell><ns0:cell>1.94 &#177; 1.11</ns0:cell><ns0:cell>9.11 &#177; 3.85</ns0:cell><ns0:cell>5.38 &#177; 2.72</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>6.45 &#177; 5.73</ns0:cell><ns0:cell>6.59 &#177; 2.36</ns0:cell><ns0:cell>1.04 &#177; 0.53</ns0:cell><ns0:cell>0.13 &#177; 0.69</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>3.65 &#177; 0.34</ns0:cell><ns0:cell>5.70 &#177; 2.15</ns0:cell><ns0:cell>0.79 &#177; 0.47</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>7,00E-05 LSE</ns0:cell><ns0:cell cols='2'>12.31 &#177; 1.19 29.41 &#177; 10.83</ns0:cell><ns0:cell>34.33 &#177; 3.49</ns0:cell><ns0:cell>57.66 &#177; 3.69</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell cols='2'>18.60 &#177; 14.16 19.09 &#177; 13.27</ns0:cell><ns0:cell cols='2'>3.58 &#177; 1.11 41.53 &#177; 41.22</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>4.19 &#177; 0.94</ns0:cell><ns0:cell>6.77 &#177; 2.33</ns0:cell><ns0:cell>1.86 &#177; 1.39</ns0:cell><ns0:cell>0.48 &#177; 0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>5.09 &#177; 0.58</ns0:cell><ns0:cell>5.63 &#177; 2.09</ns0:cell><ns0:cell>11.08 &#177; 1.62</ns0:cell><ns0:cell>10.46 &#177; 2.41</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell cols='4'>4.86 &#177; 0.89 38.50 &#177; 10.47 28.86 &#177; 18.97 24.11 &#177; 16.26</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>6.17 &#177; 1.10</ns0:cell><ns0:cell>2.17 &#177; 
0.96</ns0:cell><ns0:cell>10.45 &#177; 6.38</ns0:cell><ns0:cell>6.48 &#177; 6.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>6.06 &#177; 4.01</ns0:cell><ns0:cell>6.88 &#177; 2.09</ns0:cell><ns0:cell>5.78 &#177; 12.65</ns0:cell><ns0:cell>1.46 &#177; 5.35</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>3.90 &#177; 0.44</ns0:cell><ns0:cell>6.01 &#177; 2.16</ns0:cell><ns0:cell>0.69 &#177; 0.31</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>8,00E-05 LSE</ns0:cell><ns0:cell cols='2'>13.10 &#177; 1.37 30.40 &#177; 11.50</ns0:cell><ns0:cell>35.22 &#177; 3.61</ns0:cell><ns0:cell>61.73 &#177; 4.16</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell cols='2'>12.27 &#177; 13.33 14.42 &#177; 14.18</ns0:cell><ns0:cell cols='2'>4.51 &#177; 1.81 24.33 &#177; 36.41</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>4.04 &#177; 0.50</ns0:cell><ns0:cell>5.95 &#177; 2.03</ns0:cell><ns0:cell>2.54 &#177; 2.49</ns0:cell><ns0:cell>0.58 &#177; 0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>5.34 &#177; 0.61</ns0:cell><ns0:cell cols='2'>6.18 &#177; 2.29 21.11 &#177; 26.03</ns0:cell><ns0:cell>12.18 &#177; 4.82</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell>5.39 &#177; 1.50</ns0:cell><ns0:cell cols='3'>41.94 &#177; 9.84 34.65 &#177; 19.54 27.89 &#177; 12.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>5.61 &#177; 0.37</ns0:cell><ns0:cell>3.17 &#177; 1.38</ns0:cell><ns0:cell>6.96 &#177; 1.72</ns0:cell><ns0:cell>4.49 &#177; 2.26</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>4.80 &#177; 1.07</ns0:cell><ns0:cell>8.64 &#177; 2.84</ns0:cell><ns0:cell>1.50 &#177; 0.71</ns0:cell><ns0:cell>0.17 &#177; 0.54</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>4.18 &#177; 0.43</ns0:cell><ns0:cell>6.56 &#177; 2.81</ns0:cell><ns0:cell>0.67 &#177; 0.46</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>9,00E-05 LSE</ns0:cell><ns0:cell cols='2'>14.18 &#177; 1.75 31.80 &#177; 12.02</ns0:cell><ns0:cell>35.78 &#177; 4.42</ns0:cell><ns0:cell>64.16 &#177; 3.82</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell cols='2'>10.99 &#177; 11.82 15.46 &#177; 15.30</ns0:cell><ns0:cell cols='2'>8.53 &#177; 5.03 18.43 &#177; 29.55</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>4.54 &#177; 0.99</ns0:cell><ns0:cell>6.89 &#177; 2.76</ns0:cell><ns0:cell>3.00 &#177; 2.59</ns0:cell><ns0:cell>0.63 &#177; 0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>5.71 &#177; 0.33</ns0:cell><ns0:cell cols='2'>6.13 &#177; 2.27 21.49 &#177; 27.30</ns0:cell><ns0:cell>10.65 &#177; 2.35</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell>6.82 &#177; 3.74</ns0:cell><ns0:cell cols='2'>42.32 &#177; 9.64 37.27 &#177; 17.87</ns0:cell><ns0:cell>29.12 &#177; 7.44</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>6.61 &#177; 1.62</ns0:cell><ns0:cell cols='2'>3.76 &#177; 3.06 22.89 &#177; 39.52</ns0:cell><ns0:cell>8.78 &#177; 9.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>5.18 &#177; 0.96</ns0:cell><ns0:cell cols='2'>7.88 &#177; 2.72 10.31 &#177; 16.30</ns0:cell><ns0:cell>4.81 &#177; 10.03</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>4.41 &#177; 0.45</ns0:cell><ns0:cell>7.16 &#177; 2.82</ns0:cell><ns0:cell>0.71 &#177; 0.26</ns0:cell><ns0:cell>0.00 &#177; 
0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>1,00E-04 LSE</ns0:cell><ns0:cell cols='2'>14.93 &#177; 1.83 34.34 &#177; 11.28</ns0:cell><ns0:cell>36.24 &#177; 5.04</ns0:cell><ns0:cell>66.21 &#177; 3.72</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell cols='2'>8.65 &#177; 5.68 12.58 &#177; 12.01</ns0:cell><ns0:cell>5.25 &#177; 2.48</ns0:cell><ns0:cell>7.91 &#177; 9.46</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>4.49 &#177; 0.51</ns0:cell><ns0:cell>6.97 &#177; 2.37</ns0:cell><ns0:cell>2.48 &#177; 1.36</ns0:cell><ns0:cell>1.02 &#177; 0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>5.76 &#177; 0.55</ns0:cell><ns0:cell>6.81 &#177; 2.46</ns0:cell><ns0:cell>12.29 &#177; 3.86</ns0:cell><ns0:cell>10.73 &#177; 2.51</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell>5.45 &#177; 1.17</ns0:cell><ns0:cell cols='3'>41.85 &#177; 9.84 45.35 &#177; 23.14 24.45 &#177; 12.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>5.93 &#177; 0.53</ns0:cell><ns0:cell>3.16 &#177; 1.59</ns0:cell><ns0:cell>8.21 &#177; 2.34</ns0:cell><ns0:cell>5.32 &#177; 4.52</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>5.50 &#177; 1.30</ns0:cell><ns0:cell>8.42 &#177; 3.12</ns0:cell><ns0:cell>6.11 &#177; 11.58</ns0:cell><ns0:cell>4.08 &#177; 9.80</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>5.05 &#177; 1.49</ns0:cell><ns0:cell>8.00 &#177; 2.66</ns0:cell><ns0:cell>1.54 &#177; 1.71</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Average LSE</ns0:cell><ns0:cell>10.72 &#177; 1.12</ns0:cell><ns0:cell>24.41 &#177; 9.04</ns0:cell><ns0:cell>29.54 &#177; 3.87</ns0:cell><ns0:cell>48.36 &#177; 4.05</ns0:cell></ns0:row><ns0:row><ns0:cell>RANSAC</ns0:cell><ns0:cell>7.39 &#177; 6.02</ns0:cell><ns0:cell>10.04 &#177; 8.22</ns0:cell><ns0:cell cols='2'>3.40 &#177; 1.75 11.55 &#177; 15.69</ns0:cell></ns0:row><ns0:row><ns0:cell>MC-RANSAC</ns0:cell><ns0:cell>3.48 &#177; 0.53</ns0:cell><ns0:cell>5.80 &#177; 2.21</ns0:cell><ns0:cell>1.78 &#177; 1.29</ns0:cell><ns0:cell>0.57 &#177; 0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Global-L0</ns0:cell><ns0:cell>4.46 &#177; 0.40</ns0:cell><ns0:cell>5.56 &#177; 2.25</ns0:cell><ns0:cell>11.68 &#177; 6.87</ns0:cell><ns0:cell>9.36 &#177; 2.43</ns0:cell></ns0:row><ns0:row><ns0:cell>RSPD</ns0:cell><ns0:cell cols='4'>6.41 &#177; 3.84 39.27 &#177; 11.81 26.63 &#177; 17.70 22.02 &#177; 15.46</ns0:cell></ns0:row><ns0:row><ns0:cell>Prior-MLESAC</ns0:cell><ns0:cell>5.39 &#177; 0.62</ns0:cell><ns0:cell>2.39 &#177; 1.49</ns0:cell><ns0:cell>7.80 &#177; 6.55</ns0:cell><ns0:cell>4.10 &#177; 3.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Prog-X</ns0:cell><ns0:cell>5.38 &#177; 3.28</ns0:cell><ns0:cell>6.78 &#177; 2.48</ns0:cell><ns0:cell>3.45 &#177; 5.94</ns0:cell><ns0:cell>1.52 &#177; 3.76</ns0:cell></ns0:row></ns0:table><ns0:note>12/29PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 -</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>continued from the previous page</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Noise</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell cols='2'>Distance error Cluster error</ns0:cell><ns0:cell>Angle error</ns0:cell><ns0:cell>Model error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MC-LSE (ours)</ns0:cell><ns0:cell>3.41 &#177; 0.46</ns0:cell><ns0:cell>5.75 &#177; 2.30</ns0:cell><ns0:cell>0.66 &#177; 0.47</ns0:cell><ns0:cell>0.00 &#177; 0.00</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Results (mean &#177; standard deviation) of the comparative study in terms of four evaluation criteria (see Section 3.1) and with respect to various levels of Gaussian noise (from 1.E-05 to 1.E-04). The results in bold font indicate the best method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>10 1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>LSE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>LSE (median)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RANSAC</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RANSAC (median)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MC-RANSAC</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MC-RANSAC (median)</ns0:cell></ns0:row><ns0:row><ns0:cell>(s) time</ns0:cell><ns0:cell /><ns0:cell>Global-L0 Global-L0 (median) RSPD</ns0:cell></ns0:row><ns0:row><ns0:cell>Estimation</ns0:cell><ns0:cell /><ns0:cell>RSPD (median) Prior-MLSAC Prior-MLSAC (median) Progressive-X</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Progressive-X (median)</ns0:cell></ns0:row><ns0:row><ns0:cell>10 0</ns0:cell><ns0:cell /><ns0:cell>MC-LSE MC-LSE (median)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MC-LSE (3%, median)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MC-LSE (5%, median)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MC-LSE (10%, median)</ns0:cell></ns0:row><ns0:row><ns0:cell>10 0</ns0:cell><ns0:cell>10 1</ns0:cell><ns0:cell>10 2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Angle error (rms degrees)</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>lists the results of the entire MC-LSE for the evaluation criteria (see Section 3.1) with respect to various levels of Gaussian noise (from 1.E-05 to 1.E-04) for the eight tested different views, except for</ns0:figDesc><ns0:table /><ns0:note>13/29PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>error (percentage) 16.416 &#177; 2.880 14.189 &#177; 3.104 17.365 &#177; 12.024 Cluster error (percentage) 5.693 &#177; 2.279 5.753 &#177; 2.248 17.015 &#177; 13.151 Distance error (rms distance) 3.246 &#177; 0.351 3.236 &#177; 0.353 7.635 &#177; 3.183 Angle error (rms degrees) 0.640 &#177; 0.246 0.677 &#177; 0.536 14.639 &#177; 13.867 Results (mean &#177; standard deviation) of the pre-clustering method considering all experimental data (for different various levels of Gaussian noise and viewpoints). 
The results in bold font indicate the best method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>14/29PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Convergence and processing time are evaluated according to the following features: Iterations as the number of iterations used to learn the models. Fitting time as the time (in seconds) required to fit the linear models and the Reassignment time as the time (in seconds) used to recalculate the clusters at each iteration with respect to the learned planes.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>(right) the 16/29 PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Confusion matrices for clustering of input data from view 5 and noise level 10.0E-05.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Pre-clustered based on k-means (left), landmarks (middle), and results obtained by MC-LSE (right)</ns0:cell></ns0:row></ns0:table><ns0:note>18/29PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='15'>/29 PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note> <ns0:note place='foot' n='20'>/29 PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57554:1:2:NEW 6 Jul 2021)</ns0:note> </ns0:body> "
"Dear Editor of 'PeerJ' journal. Attached you can find our response to the reviewers comments regarding our paper entitled 'Iterative Multilinear Optimization for Planar Model Fitting under Geometric Constraints'. The changes have been highlighted in the paper in blue font. We would also like to express our gratitude to the reviewers for their valuable comments, which have indeed helped us to improve the paper. Looking forward to hearing from you, The authors Sincerely, Dr. Jorge Azorín López Computer Technology Department Universidad de Alicante PO Box 99 03080 Alicante Spain phone: +34 965903400 fax: +34 965909643 e-mail: jazorin@dtic.ua.es Reviewer #1 Basic reporting 1. The literature review focused on three principal categories, but there were few related research papers from 2018-2020. I suggest that the manuscript include state-of-the-arts to help readers understand the latest developments in this field. Thank you very much for your valuable input. We have rewritten much of the introduction and updated the state of the art by incorporating new references corresponding to reviews and research papers that have been published in the last two years 2019, 2020. In addition, following your suggestions, we have considered restructuring the initial sections of the paper to differentiate the state of the art. In this way, the scope and objectives of the research are more clearly specified in the introduction. Then, in the 'related work' section, the relevant works in the field of model fitting of planes with orthogonality constraints and considering temporal constraints are reviewed in an updated way. The entire introduction has been highlighted in blue as most of it has been rewritten, reorganised and updated. Moreover, the abstract has been modified in consequence. In particular, the following new references have been added [1], [2], [3], [4], [5], [6], [7], :[8]: [1] [2] [3] [4] [5] [6] A. Kaiser, J. A. Ybanez Zepeda, and T. Boubekeur, “A Survey of Simple Geometric Primitives Detection Methods for Captured 3D Data,” Comput. Graph. Forum, vol. 38, no. 1, pp. 167–196, Feb. 2019, doi: 10.1111/cgf.13451. D. Barath and J. Matas, 'Progressive-X: Efficient, Anytime, Multi-Model Fitting Algorithm'. Proceedings of the IEEE International Conference on Computer Vision. 2019. Barath, Daniel and Noskova, Jana and Ivashechkin, Maksym and Matas, Jiří. 'MAGSAC++, a fast, reliable and accurate robust estimator' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2020. Y. Xie, J. Tian, and X. X. Zhu, “Linking Points with Labels in 3D: A Review of Point Cloud Semantic Segmentation,” IEEE Geoscience and Remote Sensing Magazine, vol. 8, no. 4. Institute of Electrical and Electronics Engineers Inc., pp. 38–59, 01-Dec-2020, doi: 10.1109/MGRS.2019.2937630. B. Zhao, X. Hua, K. Yu, W. Xuan, X. Chen, and W. Tao, “Indoor Point Cloud Segmentation Using Iterative Gaussian Mapping and Improved Model Fitting,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 11, pp. 7890–7907, Nov. 2020, doi: 10.1109/TGRS.2020.2984943. A. M. C. Araújo and M. M. Oliveira, “A robust statistics approach for plane detection in unorganized point clouds,” Pattern Recognit., vol. 100, p. 107115, Apr. 2020, doi: 10.1016/j.patcog.2019.107115. [7] [8] L. Magri and A. Fusiello, “T-linkage: A continuous relaxation of J-linkage for multi-model fitting,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, pp. 3954– 3961, doi: 10.1109/CVPR.2014.505. Y. Lin, J. Li, C. Wang, Z. 
Chen, Z. Wang, and J. Li, “Fast regularityconstrained plane fitting,” ISPRS J. Photogramm. Remote Sens., vol. 161, pp. 208–217, Mar. 2020, doi: 10.1016/j.isprsjprs.2020.01.009. 2. In line 83, “Region growing methods are robust to noisy point clouds” should be explained in detail. Generally speaking, region growing methods suffer from noise. The idea that “Region growing methods are robust to noisy point clouds” has been derived from the works (Nurunnabi et al., 2012), (Cohen-Steiner et al., 2004) and (Poppinga et al., 2008). However, it seems to be a poorly substantiated statement and is not a central idea for the development of the paper. For these reasons, following your suggestions we have considered removing this sentence from the paper. Experimental design 1. In table 2-3, with the increase of noise, MC-LSE gradually loses its advantage in Dist. Error Cluster Error, but which didn’t influence the algorithm accuracy in terms of angle error and model error. The author needs to explain it in the paper. In addition, the positions of Tables 2 and 3 may need to be switched. Because you mentioned Table 3 first in the article instead of Table 2. Distance error and cluster error provide some insight about the capacity of the methods to fit models from relevant clusters, to reduce the residuals and to segment the input data. For the distance error, the two best performing methods are MCRANSAC and the proposed MC-LSE with very close results for all noise levels. In any case, except for LSE and RANSAC, all obtained results are very similar since the methods make use, among others, of the point distances to the estimated plane in their optimization processes to perform the fitting. Regarding the Cluster error, the worst results are obtained by RSPD since it detects several planar patches per plane. After that, LSE and RANSAC are the worst results since they only make use of partial data (pre-clusters) to obtain the planes, and in consequence having local minima. From there, the methods that consider global constraints (MC-RANSAC, Global-LO, Prog-X and ours) obtain very similar results with a variation of approximately 1% of errors in the obtained clusters. Finally, it is worth noting that the Prior-MLESAC (also considering the whole data), since it does not perform a complete assignment of the points of the scene to a cluster, the results are better than the others. Thus, although MC-LSE does not obtain the best results in terms of clusters, it can achieve the best fits. These arguments have been included in the paper. 2. In table4,“Table 4. Results (mean ± standard deviation) of the pre-clustering method considering different input data with respect to various levels of Gaussian noise (from 1.E-05 to 1.E-04) and viewpoints”, but there is no Gaussian noise information in the table. It has been rewritten to clarify the caption: Table 4. Results (mean ± standard deviation) of the pre-clustering method considering all experimental data (for different various levels of Gaussian noise and viewpoints) Validity of the findings Although the results showed that the proposed method was better, the methods for comparisons were old methods. Comparisons with state-of-the-art methods are necessary. As a consequence of updating the state of the art, new methods have been included in the experimental comparison: • Global-L0: Y. Lin, J. Li, C. Wang, Z. Chen, Z. Wang, and J. Li, “Fast regularity-constrained plane fitting,” ISPRS J. Photogramm. Remote Sens., vol. 161, pp. 208–217, Mar. 
2020, doi: 10.1016/j.isprsjprs.2020.01.009. • RSPD: A. M. C. Araújo and M. M. Oliveira, “A robust statistics approach for plane detection in unorganized point clouds,” Pattern Recognit., vol. 100, p. 107115, Apr. 2020, doi: 10.1016/j.patcog.2019.107115. • Prior-MLSAC: B. Zhao, X. Hua, K. Yu, W. Xuan, X. Chen, and W. Tao, “Indoor Point Cloud Segmentation Using Iterative Gaussian Mapping and Improved Model Fitting,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 11, pp. 7890–7907, Nov. 2020, doi: 10.1109/TGRS.2020.2984943. • Prog-X: D. Barath and J. Matas, 'Progressive-X: Efficient, Anytime, MultiModel Fitting Algorithm'. Proceedings of the IEEE International Conference on Computer Vision. 2019. Barath, Daniel and Noskova, Jana and Ivashechkin, Maksym and Matas, Jiří. 'MAGSAC++, a fast, reliable and accurate robust estimator' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2020. Comments for the author The manuscript presented a planar fitting model based on the least-squares estimation method (LEM). Different from the ordinary LEM, the model was robust to noises by involving the geometric constraints. The method is simple but interesting, and the article is well organized. In addition to a few comments, I encourage the authors to check the mistakes in their manuscript to enhance the levels of paper presentation. For example, In line42, “Computer vision plays an important role (in) providing methods for…” In line 135, “It allow (allows) us to provide a high accuracy…” In line 136, “with respect (to) other iterative methods…” In line 464, “edges that belongs (belong) to different faces.” The paper has been proofread by the Elsevier proofreading services. Reviewer #2 Basic reporting Some formatting problems will be improved such as formula annotation, table header. Thank you very much for your comments. A detailed revision of language and formatting has been carried out with the support of the Elsevier proofreading services. In response to reviewers' comments, an update of the introduction and state of the art has been made, as well as an addition of new methods to the experimental comparison. The new version of the article shows the changes in blue. Experimental design no comment Validity of the findings no comment Comments for the author This paper proposed an approach to simultaneously fit different planes in a point cloud using linear regression estimators and normal vectors to the planes with three steps: a clustering-based unsupervised step, regression-based supervised step and a reassignment step. Experimental results show the effectiveness of the proposed method. It is a nice job of providing good accuracy/speed trade-off in the presence of noise and outliers. Thank you very much for your comments. Reviewer #3 Basic reporting The English need some polishing in terms of sentence construction. Literature references are appropriate. Figures and tables of acceptable quality. The manuscript is self-contained with relevant results to hypotheses. Thank you very much for your comments. A detailed revision of language and formatting has been carried out with the support of the Elsevier proofreading services. Experimental design The experimental design appears to be original. The research question is well defined, relevant and meaningful. A rigorous investigation has been carried out. The methods have been described in sufficient details. Thank you very much for your comments. Validity of the findings The method appears to be novel. 
Data have been provided and appears to be statistically sound. Conclusions are well stated. Thank you very much for your comments. Comments for the author The English should be checked and few bad sentences here and there should be corrected which will make the manuscript more attractive. Thank you very much for your comments. A detailed revision of language and formatting has been carried out with the support of the Elsevier proofreading services. In response to reviewers' comments, an update of the introduction and state of the art has been made, as well as an addition of new methods to the experimental comparison. The new version of the article shows the changes in blue. Certificate of Elsevier Language Editing Services The following article was edited by Elsevier Language Editing Services: 'Iterative multilinear optimization for planar1model fitting under geometric constraints' Authored by: Jorge Azorin-Lopez Date: 26-May-2021 Serial number: LELA-3198-E2B3CD0DC657 "
Here is a paper. Please give your review comments after reading it.
223
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In recent years, Graph Convolutional Networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent researches show that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model's classification of the target nodes, or even cause a degradation of the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on Derivative-Free Optimization (DFO) to generate graph adversarial examples without using gradient and apply advanced DFO algorithms conveniently. Second, we implement a direct attack algorithm (DFDA) using Nevergrad library based on the framework. Additionally, we overcome the problem of large search space by redesigning the perturbation vector using constraint size. Finally, we conducted a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases, and it can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most 8 edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node classification adversarial attacks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>26 discontinuous and the features may also have discrete values. This makes it difficult to use gradient information to attack <ns0:ref type='bibr' target='#b21'>(Z&#252;gner et al., 2018)</ns0:ref>, especially under a black-box condition where only classification output vector can be obtained.</ns0:p><ns0:p>Derivative-free optimization (DFO) algorithms <ns0:ref type='bibr' target='#b2'>(Conn et al., 2009)</ns0:ref> are a class of algorithms that do not compute the gradient but only use the value of the objective function to optimize. These algorithms are often used in cases where the derivative of the objective function is undefined, or where it is difficult to obtain a reliable value of the derivative. It has been successfully applied to attack traditional deep neural networks <ns0:ref type='bibr' target='#b18'>(Ughi et al., 2020)</ns0:ref>. There is also some DFO attack work on GNNs. For example, <ns0:ref type='bibr' target='#b3'>Dai et al. (2018)</ns0:ref> implemented a black-box GCN adversarial attack algorithm based on a genetic algorithm by setting the population, fitness, selection, crossover and mutation in detail. <ns0:ref type='bibr' target='#b1'>Chen et al. (2019)</ns0:ref> proposed a community detection attack algorithm with a genetic algorithm and verified their algorithm has good transferability. However, without a uniform framework, these works usually have to implement custom versions of the algorithms for a certain problem. To the author's best knowledge, there is no general framework that can quickly apply various DFO algorithms in the field of graph adversarial attacks.</ns0:p><ns0:p>In this paper, we propose a black-box adversarial attack framework based on the idea of DFO. It consists of three steps: Input Setting (design the loss function, perturbation vector, constraints and so on), Iterative Query (generate perturbation vectors and query the black-box GCN model iteratively) and Final Perturbation (modify graph data with perturbation that minimize the loss function). 
As facing the difficulty of using gradient and the inconvenience of applying and comparing DFO algorithms, the key idea insight of our approach is (1) regarding graph adversarial attacks as a search problem in a discrete solution space and using derivative-free optimizers (DFOers) to solve it; (2) abstracting the specific task of graph adversarial attacks as an optimization problem about a certain form perturbation vector in order to switch and compare various DFOers conveniently.</ns0:p><ns0:p>Moreover, we use the Nevergrad <ns0:ref type='bibr' target='#b12'>(Rapin and Teytaud, 2018)</ns0:ref> library to implement a black-box direct adversarial attack algorithm (called DFDA) on GCN-based node classification tasks. Following the framework above, we set Attack Loss Function, Perturbation Vector, Perturbation Constraint, Mapping Function and Derivative-Free Optimizer separately.</ns0:p><ns0:p>We conducted a series of experiments on Cora, Citeseer and Polblogs. Without loss of generality, we attack node 0 of the Cora dataset with five different DFOers to compare the classification margin and comprehensive performance. Then, we randomly select 50 nodes to attack separately to study the average attack success rate of DFDA with different iteration numbers, perturbation constraints and perturbation types. Finally, we compare DFDA with a classical algorithm Nettack <ns0:ref type='bibr' target='#b21'>(Z&#252;gner et al., 2018</ns0:ref>) -a well-performing greedy algorithm-under different defense models. Nettack is a well-performing adversarial attack algorithm based on greedy approach. The results show that all the selected DFOers can search for effective perturbations, and DFDA is superior to Nettack in most cases.</ns0:p><ns0:p>The contributions of this paper are listed below: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>graph adversarial samples without using the gradient. Specifically, we use DFO methods to perform attacks effectively. Additionally, the uniform outputs of DFOers (perturbation vectors) make it convenient to use and compare different DFO algorithms.</ns0:p><ns0:p>&#8226; We have implemented a direct adversarial attack algorithm on node classification tasks for GCNs based on the framework above. Facing the potential problem of too large search space, we set the perturbation vector dimension to the constraint size and set the elements of the perturbation vector as the pointers indicating the perturbed position of the original matrix. This approach reduces the search space from exponential level to power level and enables the perturbation vector to pass the constraint check more easily.</ns0:p><ns0:p>&#8226; We have conducted a series of experiments under various conditions. The results show that we can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most 8 edges. We compare our algorithm with the classical algorithm Nettack under different defense models and find that DFDA outperforms Nettack in most cases. During the experiments, instead of copying an original graph at each perturbation iteration, we use inverse perturbation to restore the perturbed graph to its original state. This can effectively reduce the computation cost under a large number of iterations. This paper is organized as follows. 'Preliminaries' gives the basic concepts of GCN-based node classification and adversarial attacks. 
'Derivative-Free Adversarial Attack on GCNs' describes our framework and algorithms, followed by the experimental results in 'Evaluation'. 'Related Work' introduces the related work, followed by some concluding remarks in 'Conclusion'.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>There has been some classical work on adversarial attacks against node classification. The concept of graph adversarial attacks was first introduced by <ns0:ref type='bibr' target='#b21'>Z&#252;gner et al. (2018)</ns0:ref>. They proposed Nettack, a gray-box attack algorithm based on a greedy approach. In this setting, the attacker can obtain the training labels to train a surrogate model, generate an adversarial sample by attacking the surrogate model, and then transfer it to the target model. Subsequently, they proposed Mettack <ns0:ref type='bibr' target='#b22'>(Z&#252;gner and G&#252;nnemann, 2019)</ns0:ref>, an attack algorithm that reduces the global classification accuracy; it attacks based on the gradient information of the adjacency matrix. <ns0:ref type='bibr' target='#b3'>Dai et al. (2018)</ns0:ref> proposed RL-S2V, a black-box attack algorithm based on reinforcement learning that performs attacks by injecting nodes into the graph.</ns0:p><ns0:p>Mettack is a global attack algorithm that aims to degrade the classification accuracy of the whole graph rather than of a target node, and black-box queries in the RL-S2V setting only return the predicted classes rather than class probability vectors. Therefore, we do not compare with the above algorithms. We compare DFDA with Nettack because the attack settings are more similar: both can perform direct perturbations of the edges and features of a target node. For a fair comparison, we unify Nettack's settings in terms of node selection and constraints with those of DFDA.</ns0:p><ns0:p>DFO methods such as genetic algorithms have been used in some graph adversarial attack tasks. For example, <ns0:ref type='bibr' target='#b3'>Dai et al. (2018)</ns0:ref> applied genetic algorithms to node classification adversarial attacks and adopted them as a baseline for their main algorithm RL-S2V. Experiments in their paper demonstrate that genetic algorithms are effective for combinatorial optimization problems such as graph adversarial attacks. <ns0:ref type='bibr' target='#b1'>Chen et al. (2019)</ns0:ref> proposed a community detection attack algorithm based on genetic algorithms and verified that the algorithm has good transferability. In this paper, we propose an adversarial attack framework that makes it convenient to apply and compare different DFO algorithms.</ns0:p><ns0:p>Nevergrad <ns0:ref type='bibr' target='#b12'>(Rapin and Teytaud, 2018)</ns0:ref> is a Python 3-based open-source framework developed by Facebook, which provides a large number of implementations of DFO optimizers, such as differential evolution algorithms, fast genetic algorithms, covariance matrix adaptation algorithms, and particle swarm optimization algorithms.</ns0:p><ns0:p>When DFO algorithms need to be applied, researchers usually have to implement custom versions of the algorithms for their problems, which consumes a lot of time and does not facilitate comparisons among algorithms.</ns0:p>
The algorithm DFDA we proposed is based on Nevergrad, using which the strengths and weaknesses of various DFO algorithms can be easily compared. It can help researchers find suitable algorithms that deserve to be further customized.</ns0:p></ns0:div> <ns0:div><ns0:head>PRELIMINARIES</ns0:head><ns0:p>In this section, we introduce the notions about GCN-based node classification and the general form of adversarial attack on graph.</ns0:p></ns0:div> <ns0:div><ns0:head>Node Classification with GCN</ns0:head><ns0:p>The classification task on graph data mainly consists of two cases, one is graph-level classification, where the graph G is a whole with a label, and the other is node-level classification, where each node in the graph G belongs to a class in the label set Y Y Y . In this paper, we focus on node-level semi-supervised classification. Semi-supervised node classification is a task in which the labels of unknown nodes are derived by training under the condition that the training set labels are known.</ns0:p><ns0:p>In the following we give the mathematical definition of semi-supervised node classification. For an undirected graph G, given the adjacency matrix A A A &#8712; {0, 1} N&#215;N , the node feature matrix X X X &#8712; R N&#215;d (the feature of the node i is x x x i &#8712; R d ), the label of the i-th node </ns0:p><ns0:formula xml:id='formula_0'>( f &#952; (G)) = &#8721; v i &#8712;V L &#8467; ( f &#952; (A A A, X X X) i , y i ) (1)</ns0:formula><ns0:p>where f &#952; (A A A, X X X) i denotes the class probilities of node v i and y i denotes the true label of node v i . The loss function &#8467;(&#8226;, &#8226;) denotes the cross-entropy error.</ns0:p></ns0:div> <ns0:div><ns0:head>4/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We use the 2-layer GCN proposed by <ns0:ref type='bibr' target='#b7'>(Kipf and Welling, 2017)</ns0:ref> as the target model. The 2-layer GCN is one of the victim models commonly used in adversarial attack experiments. It is defined as follows:</ns0:p><ns0:formula xml:id='formula_1'>f (A A A, X X X) = Softmax &#194; A A ReLU &#194; A AX X XW W W (1) W W W (2) (2)</ns0:formula><ns0:p>where &#194;</ns0:p><ns0:formula xml:id='formula_2'>A A = D D D &#8722; 1 2 &#195; A A D D D &#8722; 1 2 , &#195; A A = A A A + I I I and D D D ii = &#8721; j &#195; A A i j . Each row vector f (A A A, X X X) i &#8712; [0, 1]</ns0:formula><ns0:p>k of the output matrix represents the class probability vector of node v i .</ns0:p></ns0:div> <ns0:div><ns0:head>General Forms of Adversarial Attack</ns0:head><ns0:p>Given a graph G = (A A A, X X X) and a set of attack target nodes V t &#8838; V . Let y i denote the true class of node v i . For node v i in the test set, since we do not know its true label, we can adopt the method of self-learning <ns0:ref type='bibr' target='#b22'>(Z&#252;gner and G&#252;nnemann, 2019)</ns0:ref>, that is, regarding the model output on the clean graph as the true labels of unknown nodes. 
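To make the target model concrete, the following is a minimal PyTorch sketch of the two-layer GCN forward pass in Equation (2); it is an illustrative dense implementation (variable names and shapes are assumptions), not the DeepRobust code used in the experiments.

```python
import torch
import torch.nn.functional as F

def normalize_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, following the definitions given with Equation (2)."""
    A_tilde = A + torch.eye(A.shape[0])
    d = A_tilde.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: Softmax(A_hat ReLU(A_hat X W1) W2).

    A: (N, N) adjacency, X: (N, d) features, W1: (d, h), W2: (h, k).
    Each row of the output is the class-probability vector of one node.
    """
    A_hat = normalize_adjacency(A)
    H = F.relu(A_hat @ X @ W1)
    return F.softmax(A_hat @ H @ W2, dim=1)
```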
The attack objective is to find a perturbed graph</ns0:p><ns0:formula xml:id='formula_3'>G &#8242; = (A A A &#8242; , X X X &#8242; ), which makes the attack target function L atk minimum, i.e., min L atk ( f &#952; (G &#8242; )) = &#8721; v i &#8712;V t &#8467; atk ( f &#952; * (G &#8242; ) i , y i ) s.t., &#952; * = arg min &#952; L train f &#952; &#284; (3)</ns0:formula><ns0:p>where &#8467; atk is the loss function of the attack. &#284; can be chosen as either the original graph G or the perturbed graph G &#8242; , corresponding to the poisoning attack scenario and the escape attack scenario, respectively. Poisoning attack means that the GCN will be retrained with the perturbed graph while evasion attack represents the cases that the GCN will not be retrained <ns0:ref type='bibr' target='#b17'>(Sun et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In particular, for an adversarial attack with a certain single node v i , the objective function can be formulated as:</ns0:p><ns0:formula xml:id='formula_4'>min L atk ( f &#952; (G &#8242; )) = &#8467; atk ( f &#952; * (G &#8242; ) i , y i ) .<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>However, the perturbation of G &#8242; is constrained and does not allow unrestricted modification on the graph. A realistic assumption is that the attack needs to generate as small and indistinguishable perturbations as possible, i.e., G &#8712; &#934;(G), where &#934;(G)</ns0:p><ns0:p>denotes the constraint domain. If a perturbation upper bound &#8710; is given, a typical perturbation constraint can be expressed as</ns0:p><ns0:formula xml:id='formula_5'>A A A &#8722; A A A &#8242; 0 + X X X &#8722; X X X &#8242; 0 &#8804; &#8710;.<ns0:label>(5)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>DERIVATIVE-FREE ADVERSARIAL ATTACK ON GCNS</ns0:head><ns0:p>In this section, we introduce a black-box adversarial attack framework for GCNs based on the idea of DFO. And then, we implement a direct attack algorithm on the GCN node classification task using the Nevergrad algorithm library. Finally, we optimize the attack algorithm to solve the problem of excessive dimensionality encountered during the algorithm implementation.</ns0:p></ns0:div> <ns0:div><ns0:head>5/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Final Perturbation</ns0:head><ns0:p>At the end of the iterative query, the perturbation vector that minimizes the loss function will be imposed to the graph G to generate G f inal . At this point, the whole adversarial sample generation process is finished. Subsequently, we put G f inal in (3) as G &#8242; for poisoning attacks or evasion attacks.</ns0:p></ns0:div> <ns0:div><ns0:head>Derivative-Free Direct Attack (DFDA)</ns0:head><ns0:p>The Derivative-Free Direct Attack (DFDA) algorithm is a black-box adversarial attack method on GCN node classification tasks. In this algorithm, the adversary can directly modify the connections and features of the target node to mislead GCNs to misclassify the node as a chosen class (called target class). 
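As a small worked example of the perturbation budget in Equation (5), the sketch below counts modified adjacency and feature entries and compares them with the bound; treating each undirected edge change as a single modification is an assumption of this example.

```python
import numpy as np

def within_budget(A, A_pert, X, X_pert, delta):
    """Check the perturbation budget of Equation (5).

    Counts how many adjacency and feature entries differ between the clean and
    perturbed graph; each undirected edge touches two symmetric entries of A,
    so edge changes are counted once here.
    """
    edge_changes = int((A != A_pert).sum()) // 2
    feature_changes = int((X != X_pert).sum())
    return edge_changes + feature_changes <= delta
```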
DFDA can perform structural perturbation (modifying the adjacency matrix) and feature perturbation (modifying the feature matrix).</ns0:p><ns0:p>First, we set up the five subsections of Input Setting of the framework.</ns0:p><ns0:p>Attack Loss Function: Considering the success rate of the attack, the second most probable class originally predicted by the clean graph is selected as the target class. The loss function is designed as, Perturbation Vector: For structural perturbations, the perturbation vector needs to be set to {0, 1} N&#8722;1 to describe all possible perturbations that can be generated, since the target node may have edges with all other N &#8722; 1 nodes. The size of the search space for this problem is O(2 N ), which is of exponential level. Since the number of edges that an attacker can modify (usually set to &#8710;) is very small to ensure the invisibility of the perturbation in practical situations, the perturbation constraint is usually very tight. The vast majority of perturbations are not qualified. To reduce the search space and improve the probability of passing the constraint check, we define the structural perturbation</ns0:p><ns0:formula xml:id='formula_6'>L atk = f (A A A, X X X) v t ,c 1 &#8722; f (A A A, X X X) v t ,c 2 (6) f (A A A, X X X) v t is</ns0:formula><ns0:formula xml:id='formula_7'>vector as &#951; A &#8712; {x &#8712; N | x &lt; N} &#8710; .</ns0:formula><ns0:p>As is shown in Figure <ns0:ref type='figure'>2</ns0:ref>, each element u &#8712; &#951; A like a position pointer represents that the connection between the target node and the node v u is changed. This method reduces the search space significantly (down to O(N &#8710; )).</ns0:p><ns0:p>Similarly, we set the feature perturbation vector to &#951; X &#8712; {x &#8712; N | x &lt; d} &#8710; . Here, we only consider the case where the feature matrix is of the form X X X = {0, 1} N&#215;d .</ns0:p></ns0:div> <ns0:div><ns0:head>7/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. The setting of Perturbation Vector. The upper vector is in the form of {0, 1} N . 0 means that the connection status remains unchanged, and 1 means that the edge is added or removed. We design the lower vector which uses the position pointers as elements to decline the dimension.</ns0:p><ns0:p>Perturbation Constraint: Based on the design of the above perturbation vector, we define the constraints as: a) no duplicate elements in &#951; A and &#951; X ; and b) v t / &#8712; &#951; A .</ns0:p><ns0:p>Constraint a guarantees that no duplicate modifications will be made to a particular edge while b guarantees that the graph structure will not generate self-loops.</ns0:p><ns0:p>Mapping Function: The perturbation mapping function is defined as follows.</ns0:p><ns0:p>For each element i in &#951; A ,</ns0:p><ns0:formula xml:id='formula_8'>A v t i := 1 &#8722; A v t i .<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>For each element j in &#951; F ,</ns0:p><ns0:formula xml:id='formula_9'>X v t j := 1 &#8722; X v t j .<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Derivative-Free Optimizer: We choose OnePlusOne, DiscreteOnePlusOne, Dou-bleFastGAOnePlusOne, DE and RandomSearch in Nevergrad as DFOers to generate the perturbation vectors.</ns0:p><ns0:p>After defining the output, we follow the framework for Iterative Query and Final Perturbation. 
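Putting the five Input Setting components together, a minimal sketch of the resulting query loop for structural perturbations is given below. It assumes the black-box model is exposed as a function returning the class-probability vector of the target node, uses Nevergrad's OnePlusOne optimizer for brevity (other DFOers from the list above can be swapped in through ng.optimizers.registry), and simply penalizes infeasible perturbation vectors instead of re-sampling them as the full algorithm does. It is an illustrative sketch, not the authors' DeepRobust implementation.

```python
import numpy as np
import nevergrad as ng

def attack_target_node(A, query_proba, v_t, delta, budget):
    """Search for a structural perturbation of node v_t with a Nevergrad DFOer.

    A           : (N, N) binary adjacency matrix (NumPy array).
    query_proba : black-box function mapping an adjacency matrix to the
                  class-probability vector of v_t.
    delta       : constraint size (number of edge pointers in eta_A).
    budget      : resource budget, i.e., the number of optimizer iterations.
    """
    N = A.shape[0]
    p_clean = query_proba(A)
    c1 = int(np.argmax(p_clean))        # original class on the clean graph
    c2 = int(np.argsort(p_clean)[-2])   # target class: second most probable

    def attack_loss(eta):
        eta = np.asarray(eta, dtype=int)
        # Perturbation Constraint: no duplicate pointers, no self-loop on v_t.
        if len(set(eta.tolist())) < len(eta) or v_t in eta:
            return 1.0                                # worst possible margin -> rejected
        A_pert = A.copy()
        A_pert[v_t, eta] = 1 - A_pert[v_t, eta]       # Mapping Function, Equation (7)
        A_pert[eta, v_t] = A_pert[v_t, eta]           # keep the graph undirected
        p = query_proba(A_pert)
        return float(p[c1] - p[c2])                   # Attack Loss Function, Equation (6)

    # Pointer-style perturbation vector: delta integers in [0, N - 1].
    param = ng.p.Array(shape=(delta,)).set_bounds(0, N - 1).set_integer_casting()
    optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=budget)
    best = optimizer.minimize(attack_loss)
    return np.asarray(best.value, dtype=int)          # final perturbation pointers
```

Feature perturbations follow the same pattern, with the pointer vector indexing columns of the feature matrix as in Equation (8).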
In the Iterative Query step, our approach is slightly different from the framework. In each iteration, we let the DFOer generate three perturbation vectors simultaneously, perform constraint checking and loss query separately, and finally select the perturbation vector with the smallest attack loss as the output of this iteration. These can make the optimization process more stable.</ns0:p></ns0:div> <ns0:div><ns0:head>EVALUATION</ns0:head><ns0:p>We have implemented our algorithm on DeepRobust <ns0:ref type='bibr' target='#b8'>(Li et al., 2020)</ns0:ref>, an adversarial attack algorithm library developed on PyTorch. As DeepRobust integrates classical attack and defense algorithms on the image and graph domains, it can support the comparison between our algorithm and other algorithms.</ns0:p><ns0:p>We consider the following three research questions:</ns0:p><ns0:p>&#8226; RQ1: Which DFOer is the most suitable in the setting of DFDA? Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 1 The Derivative-Free Direct Attack (DFDA) algorithm Generate perturbation vector &#951;</ns0:p><ns0:formula xml:id='formula_10'>Input: Original clean graph G = (A A A, X X X), target node v t &#8712; V t , resource budget &#946; , pertur- bation constraint &#8710;, black-box target model M Output: Perturbed adversarial sample G = (A A A &#8242; , X X X &#8242; ) 1: p v t &#8592;</ns0:formula><ns0:formula xml:id='formula_11'>(t) A &#8712; {x &#8712; N | x &lt; N} &#8710; , &#951; (t) X &#8712; {x &#8712; N | x &lt; d} &#8710; with a chosen DFOer 7: until v t / &#8712; &#951; (t)</ns0:formula><ns0:p>A and &#951;</ns0:p><ns0:formula xml:id='formula_12'>(t) A , &#951;<ns0:label>(t)</ns0:label></ns0:formula><ns0:p>X does not contain duplicate elements 8:</ns0:p><ns0:formula xml:id='formula_13'>A A A (t) &#8592; A A A, X X X (t) &#8592; X X X 9:</ns0:formula><ns0:p>for each element i in vector &#951; (t)</ns0:p><ns0:p>A do 10:</ns0:p><ns0:formula xml:id='formula_14'>A A A (t) [v t ] [i] &#8592; 1 &#8722; A A A [v t ] [i] 11:</ns0:formula><ns0:p>end for 12:</ns0:p><ns0:p>for each element j in vector &#951;</ns0:p><ns0:formula xml:id='formula_15'>(t) X do 13: X X X (t) [v t ] [i] &#8592; 1 &#8722; X X X [v t ] [i] 14:</ns0:formula><ns0:p>end for 15:</ns0:p><ns0:formula xml:id='formula_16'>p v t &#8592; Query(M , v t ) 16: loss (t) &#8592; p (t) v t [c 1 ] &#8722; p (t) v t [c 2 ] 17:</ns0:formula><ns0:p>Inform the optimizer of loss 18: end while</ns0:p><ns0:formula xml:id='formula_17'>19: A A A &#8242; &#8592; A A A, X X X &#8242; &#8592; X X X 20: Select &#951; (m) A , &#951; (m)</ns0:formula><ns0:p>X that minimizes the loss 21: for each element i in vector &#951; (m)</ns0:p><ns0:formula xml:id='formula_18'>A do 22: A A A &#8242;(t) [v t ] [i] &#8592; 1 &#8722; A A A [v t ] [i] 23: end for 24: for each element j in vector &#951; (m) X do 25: X X X &#8242;(t) [v t ] [i] &#8592; 1 &#8722; X X X [v t ] [i] 26: end for 27: return G &#8242; = A A A &#8242; , X X X &#8242;</ns0:formula></ns0:div> <ns0:div><ns0:head>9/20</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; RQ3: Compare with Nettack, how does our method perform under different defense models and scenarios?</ns0:p><ns0:p>To answer these questions, we design three kinds of experiments in the next section.</ns0:p><ns0:p>To answer RQ1, we conducted a single-node experiment, attacking only one node, to analyze the attack results. Without loss of generality, we compare the effect of different DFOers on node 0 in Cora. In this experiment, we select the most appropriate optimizer for subsequent experiments.</ns0:p><ns0:p>To answer RQ2, we conducted multi-node experiments in which all nodes in a target node set were attacked separately. The percentage of 'successful' and 'misleading' nodes were counted. In the multi-node experiments, we investigate the effects of resource budget &#946; , perturbation constraint &#8710;, and perturbation type on the success rate of the attack.</ns0:p><ns0:p>To answer RQ3, we conducted comprehensive attack experiments on different datasets and different defense models. In these experiments, we investigate the attack effect of DFDA under three defense models in evasion attack and poisoning attack scenarios. We also compare the DFDA with the greedy algorithm Nettack in poisoning attack cases.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset and Settings</ns0:head><ns0:p>Dataset The commonly used datasets in the field of graph adversarial attacks are Cora, Citeseer and Polblogs <ns0:ref type='bibr' target='#b13'>(Rossi and Ahmed, 2015)</ns0:ref>. We present the statistics for each dataset in the following Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>. Among them, Cora and Citeseer are attribute graphs, i.e., each node in the graph has a specific dimensional attribute; the Polblogs dataset is a directed weighted graph with no node features. Cora and Citeseer are more sparse, while Polblogs is denser and has only two categories. Due to their strong representation, we perform attacks based on these three datasets. Setting We describe some of the parameters and their range of values during the experiment in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. The poisoning attack considered in this paper refers to the case where the model is not retrained at query time and is retrained at test time after the attack. In this case, our approach is equivalent to generating an adversarial sample with the training parameters of the original model and then transferring this sample to the model with the new parameters after retraining.</ns0:p><ns0:p>We define 'original class' as the maximum probability class of the target node's black-box query before the attack. We define 'successful attack' as the case that the maximum probability class after the attack is different from the original class. We define 'misleading success' as the maximum probability class of the node after the attack is the same as the selected target class. 'Classification margins' here are defined The maximum number of iterations of the gradient-free optimizer. 
This parameter controls the number of computational resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Positive integers perturbation type</ns0:head><ns0:p>Perturbation of structure or of features structure, feature, both constraint size &#8710;</ns0:p><ns0:p>The maximum number of edges or features that can be modified for each node manipulated by the attacker; this parameter controls the strength of the perturbation.</ns0:p><ns0:p>For structural perturbations: 1 &#8764; N For feature perturbations: 1 &#8764; d scenario Control whether the final perturbation is injected before training(poisoning) or after training(escape)</ns0:p></ns0:div> <ns0:div><ns0:head>Poisoning or Evasion</ns0:head></ns0:div> <ns0:div><ns0:head>DFOer</ns0:head><ns0:p>Indicates which derivative-free optimizer in Nevergrad is selected OnePlusOne, DiscreteOne-PlusOne, DoubleFastGA, etc.</ns0:p><ns0:p>1 The same hyperparameters are chosen for the target model 2-layer GCN, GCN-Jaccard and GCN-SVD: hidden layer dimension is 16, dropout rate is 0.5, learning rate is 0.1, and weight decay is 5 * 10 &#8722;4 .</ns0:p><ns0:p>Besides DeepRobust, we also use packages include: Python 3.7; PyTorch 1.8.1;</ns0:p><ns0:p>Nevergrad 0.4.3 post2; H5py 3.2.1, etc. In terms of hardware, the processor used for the experiment was a 2.6 GHz hexa-core Intel Core i7.</ns0:p></ns0:div> <ns0:div><ns0:head>Single-node Experiments (RQ1)</ns0:head><ns0:p>In this part, we attack just one node with different DFOers. We conducted Experiment The horizontal coordinate is the number of iterations while the vertical coordinate is the value of the loss function. The smaller the loss function value, the greater the difference between the target class probability value and the original class probability value, the better the attack effect. The figures marked 'B' and 'C' show the class probabilities before and after the attacks. The horizontal coordinate is class labels, and the vertical coordinate refers to the probability that the node belongs to a certain class.</ns0:p><ns0:p>The figures marked 'B' shows the classification probabilities on unperturbed graphs and the figures marked 'C' shows the probabilities on perturbed graphs. In this experiment, the greater the probability of class 6 and the smaller the probability of class 5, the better the experiment result is. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science than -0.7 of the attack power. However, during the running stage, DiscreteOnePlusOne often generates perturbations that violate the constraints, which is very inefficient. Dou-bleFastGA can combine speed and accuracy, so this optimizer is selected for the later experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Multi-node Experiments (RQ2)</ns0:head><ns0:p>In this part, we choose a set of target nodes from the test set and attack them separately.</ns0:p><ns0:p>Every time we attack a node, we will test and record whether the attack was successful.</ns0:p><ns0:p>We define Success Rate (SR) as the ratio of the 'successful attack' number to the total attacks number, Misleading Rate (MR) as the ratio of the 'misleading success' number to the number of the total attacks. In Experiment 2 &amp; 3, we try to find the effect of different parameter settings on SR and MR.</ns0:p><ns0:p>Experiment 2: The effect of different resource budget &#946; The parameter settings and results of Experiment 2 are shown in Table <ns0:ref type='table' target='#tab_9'>5 and 6</ns0:ref>. 
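For clarity, the attack loss minimized by the DFOer and the classification margin reported in Table 4 can be written as two small helpers; `probs_before` and `probs_after` are assumed to be the class-probability vectors returned by the black-box query on the clean and perturbed graphs.

```python
import numpy as np

def attack_loss(probs_before, probs_after):
    # c1/c2: most and second-most probable classes of the clean query;
    # the optimizer minimizes p[c1] - p[c2] (more negative = stronger attack).
    order = np.argsort(probs_before)
    c1, c2 = order[-1], order[-2]
    return float(probs_after[c1] - probs_after[c2])

def classification_margin(probs_after, original_class):
    # Z_{v,c} - max_{c' != c} Z_{v,c'}: negative once the node is misclassified.
    others = np.delete(np.asarray(probs_after), original_class)
    return float(probs_after[original_class] - others.max())
```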
The SR directly reflects the attack effectiveness of the algorithm: the higher the SR, the better the effectiveness. The MR reflects the directional misleading ability of the attack: the higher the ratio of MR to SR, the more directional the attack is.</ns0:p><ns0:p>The SR gradually increases with the increase of resource budget (i.e., the number of iterations), but the time consumption is proportional to the number of iterations.</ns0:p><ns0:p>The ratio of MR to SR increases with the resource budget, which indicates that the DoubleFastGA is capable to find the appropriate optimization direction to make it more probable to classify nodes to the target class. When &#946; = 100, the attack reaches the balance of efficiency and performance. It can be seen that even with the smallest resource budget (&#946; = 10), 68% of the nodes are attacked successfully and 54% are misled successfully. When &#946; &#8805; 25, the SR can exceed 80%.</ns0:p><ns0:p>During the experiment, we found that each iteration needs to perturb at the original image, which requires many deep copy operations on the original image which consumes Manuscript to be reviewed</ns0:p><ns0:p>Computer Science a lot of computational time. We use 'inverse perturbation' to solve this problem. During each iteration, we directly perturb the original graph data, and then 'inversely perturb' the graph after each query to restore it to the original data. This approach greatly reduces the computational time of deep copies for a large number of iterations. As shown in the last two columns of Table <ns0:ref type='table' target='#tab_9'>6</ns0:ref>, the time consumption after improvement is about 50% of that before.</ns0:p><ns0:p>Experiment 3: Influence of constraint &#8710; and perturbation type on attack effect Table <ns0:ref type='table' target='#tab_11'>7</ns0:ref> shows the parameter settings of Experiment 3. It can be seen from Tabel 8 that the structure perturbation effect is better than the feature perturbation. The combination of structure perturbation and feature perturbation can achieve a stronger attack effect but requires a more tolerant constraint (the sum of structural constraint and feature constraint). Only structure perturbations were used in subsequent experiments.</ns0:p><ns0:p>As the perturbation constraint size increases, the optimizer can search in a larger space. Obviously, a larger constraint size leads to a better attack effect.</ns0:p></ns0:div> <ns0:div><ns0:head>Comprehensive Experiments (RQ3)</ns0:head><ns0:p>In this part we conduct Experiment 4 &amp; 5 to find how DFDA performs under 3 different models: base model 2-layer GCN mentioned in Equation (2), GCN-Jaccard <ns0:ref type='bibr' target='#b20'>(Wu et al., 2019)</ns0:ref> and GCN-SVD <ns0:ref type='bibr' target='#b4'>(Entezari et al., 2020)</ns0:ref>. Experiment 4 is conducted in an evasion scenario where the model doesn't retrain, while Experiment 5 is in a poisoning scenario that needs retraining before attacks. Moreover, we compare DFDA with classic method Nettack in Experiment 5. We randomly attack 30 test set nodes with DFDA (using the DoubleFastGA optimizer) and Nettack respectively. We repeated the process five times and calculated the mean and standard deviation of the SRs to make the comparison more Of the parameters involved in the above experiments, the three parameters -scenario, perturbation type and target model -are used to distinguish between different types of adversarial attacks, in decreasing order of importance. 
The scenario distinguishes between poisoning attacks and evasion attacks, the perturbation type distinguishes between structure and feature perturbations, and the target model distinguishes between attacks under different defense models.</ns0:p><ns0:p>There are three numerical type parameters that influence the effectiveness of the attack. In descending order of importance, they are constraint size, resource budget and number of target nodes. Constraint size determines the number of the edges or features that can be manipulated and directly controls the ease of the attack task. The resource budget determines the number of iterations. A large resource budget can increase the success rate of the attack to some extent. The number of target nodes determines the stability of the success rate of the attack. The larger the nodes number, the less the fluctuation in success rate.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we focus on the use of derivative-free optimization (DFO) ideas in graph adversarial attacks. We first introduce a DFO-based black-box adversarial attack framework against GCNs. Then we implement a direct attack algorithm (DFDA) using Nevergrad library, using which we can easily compare the performance of various derivative-free optimizers on node classification attack tasks. Moreover, we solve the problem of large search space by declining the perturbation vector dimension. Finally, we conducted three kinds of experiments on Cora, Citeseer and Polblogs. The results show that DFDA outperforms Nettack in most cases. It can achieve an average success rate of more than 95% on Cora when perturbing at most 8 edges, which demonstrates</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>y i &#8712; {1, 2, ..., k}, a set of labeled nodes V L (training set, where |V L | = N L ), and a set of unlabeled nodes V U (test set, where |V U | = N U ). The objective is to train a model f &#952; (G) on the graph G with training parameter &#952; that predicts the label of each node in V U . The general idea of semi-supervised learning model training is to minimize the loss function of the model on the training set as much as possible, i.e., min L train</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Derivative-Free Black-Box Attack Framework</ns0:figDesc><ns0:graphic coords='7,141.73,63.79,425.15,201.63' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>the output of the black-box query of the target node v t as a class probability vector. c 1 and c 2 denote the first and second largest probability classes of the black-box query before the attack. When the loss function decreases, the target class probability increases and the correct class probability decreases.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021) Manuscript to be reviewed Computer Science as Z v,c &#8722; max c &#8242; =c Z v,c &#8242; where c is the original class, Z v,c is the probability of the class c given to the node v by the attacked model. The lower the classification margins, the better the attack performance.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>in order to find an appropriate DFOer by comparing their attack loss curves and 11/20 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021) Manuscript to be reviewed Computer Science classification margins. In this experiment, different DFOers are taken to attack node 0 of Cora to let the model classify the node as class 6 instead of its original class 5.Experiment 1: Attack by 5 different DFOers on node 0 of Cora</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 to 7 show the loss curves and class probability results of the 5 different DFOers. The figures with mark 'A' show the attack loss curves of different DFOers.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>From</ns0:head><ns0:label /><ns0:figDesc>the experimental result figures, we can see that various DFOers can all successfully mislead the model in our algorithm. As shown in Figure 3A, Figure 6A, and Figure 7A, OnePlusPne, DE, and RandomSearch can all obtain minimum values of -0.4&#8764;-0.6, which can achieve less than -0.65 of the classification margin. However, judging from the oscillation degree of the attack loss curves, they do not have the potential of continuous optimization and cannot continuously reduce the value of the loss function with the increase of iteration number. From Figure 4A and Figure 5A, it can be seen that DiscreteOnePlusOne and Dou-bleFastGA can quickly find the appropriate optimization direction and optimize the loss function stably, and can reach a minimum value of about -0.8, corresponding to less 12/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 3. OnePlusOne</ns0:figDesc><ns0:graphic coords='14,206.79,63.78,283.43,98.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 4. DiscreteOnePlusOne</ns0:figDesc><ns0:graphic coords='14,206.79,202.74,283.43,98.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 6. DE</ns0:figDesc><ns0:graphic coords='14,206.79,480.68,283.43,98.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .Figure 14 .</ns0:head><ns0:label>814</ns0:label><ns0:figDesc>Figure 8. Average SR on 2-layer GCN &amp; Cora Figure 9. Average SR on 2-layer GCN &amp; Citeseer Figure 10. 
Average SR on 2-layer GCN &amp; Polblogs</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Query(M , v t ) 2: Get the most probable class c 1 and the second most probable class c 2 in p v t</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>3: t &#8592; 0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>4: while t &lt; &#946; do</ns0:cell></ns0:row><ns0:row><ns0:cell>5:</ns0:cell><ns0:cell>repeat</ns0:cell></ns0:row><ns0:row><ns0:cell>6:</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset Statistics</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>Dataset Nodes Edges Features Classes</ns0:cell></ns0:row><ns0:row><ns0:cell>Cora</ns0:cell><ns0:cell>2708</ns0:cell><ns0:cell>5429</ns0:cell><ns0:cell>1433</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Citeseer</ns0:cell><ns0:cell>3327</ns0:cell><ns0:cell>4732</ns0:cell><ns0:cell>3703</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Polblogs 1490 19025</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of experimental parameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>Meaning</ns0:cell><ns0:cell>Range</ns0:cell></ns0:row><ns0:row><ns0:cell>dataset</ns0:cell><ns0:cell>Dataset used to train the graph neural network</ns0:cell><ns0:cell>Cora, Citeseer and Polblogs</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>target model 1 The graph neural network model that will be attacked</ns0:cell><ns0:cell>2-layer GCN, GCN-Jaccard and GCN-SVD</ns0:cell></ns0:row><ns0:row><ns0:cell>node ID</ns0:cell><ns0:cell>Target nodes attacked in single-node experiments</ns0:cell><ns0:cell>0 &#8764; N &#8722; 1(only test set nodes are taken)</ns0:cell></ns0:row><ns0:row><ns0:cell>number of target nodes</ns0:cell><ns0:cell>Number of target nodes for multi-node experiments</ns0:cell><ns0:cell>0 &#8764; N U &#8722; 1</ns0:cell></ns0:row><ns0:row><ns0:cell>resource budget</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameter settings of Experiment 1</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Model 1</ns0:cell><ns0:cell cols='2'>Node ID &#946;</ns0:cell><ns0:cell>Type 2 &#8710; Scenario</ns0:cell></ns0:row><ns0:row><ns0:cell>Cora</ns0:cell><ns0:cell>2-layer GCN</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell cols='2'>100 Structure 5 Poisoning</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>1,2 Refers to 'target model' and 'perturbation type'.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Results of Experiment 1</ns0:figDesc><ns0:table><ns0:row><ns0:cell>DFOer</ns0:cell><ns0:cell cols='5'>Origin Class Target Class New Class Result Margin</ns0:cell></ns0:row><ns0:row><ns0:cell>OnePlusOne</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>MS 
1</ns0:cell><ns0:cell>-0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>DiscreteOnePlusOne</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>MS</ns0:cell><ns0:cell>-0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>DoubleFastGA 2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>MS</ns0:cell><ns0:cell>-0.70</ns0:cell></ns0:row><ns0:row><ns0:cell>DE</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>MS</ns0:cell><ns0:cell>-0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>RandomSearch</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>MS</ns0:cell><ns0:cell>-0.67</ns0:cell></ns0:row></ns0:table><ns0:note>1 Means 'misleading success'.2 The full name is 'DoubleFastGAOnePlusOne'.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>and 4 show the parameter settings and results of Experiment 1 respectively.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Parameter settings of Experiment 2</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Dataset Number 1 Type</ns0:cell><ns0:cell>&#8710;</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Scenario</ns0:cell><ns0:cell>DFOer</ns0:cell></ns0:row><ns0:row><ns0:cell>Cora</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell cols='4'>Structure 5 2-layer GCN Poisoning DoubleFastGA</ns0:cell></ns0:row></ns0:table><ns0:note>1 Number of the target nodes.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results of Experiment 2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>SR(MR) MR/SR</ns0:cell><ns0:cell>Running Time</ns0:cell><ns0:cell>Running Time (Improved)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>10 0.68(0.54) 79.41%</ns0:cell><ns0:cell>5m3s</ns0:cell><ns0:cell>3m39s</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>25 0.82(0.72) 87.80%</ns0:cell><ns0:cell>9m15s</ns0:cell><ns0:cell>5m28s</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>50 0.80(0.72) 90.00%</ns0:cell><ns0:cell>18m33s</ns0:cell><ns0:cell>10m30s</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>100 0.94(0.86) 91.49%</ns0:cell><ns0:cell>37m11s</ns0:cell><ns0:cell>18m37s</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>200 0.94(0.86) 91.49%</ns0:cell><ns0:cell>1h14m37s</ns0:cell><ns0:cell>33m35s</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Parameter settings of Experiment 3</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Dataset Number &#8710;</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Scenario</ns0:cell><ns0:cell>DFOer</ns0:cell></ns0:row><ns0:row><ns0:cell>Cora</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell cols='3'>5 2-layer GCN Poisoning DoubleFastGA</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Results of Experiment 3</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#8710;</ns0:cell><ns0:cell>Feature Structure</ns0:cell><ns0:cell>Both 1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>2 0.10(0.08) 2 0.60(0.56) 0.70(0.68)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>4 0.14(0.14) 0.88(0.84) 0.92(0.88)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>6 0.22(0.20) 0.96(0.86) 
0.94(0.94)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>8 0.24(0.24) 0.92(0.80) 0.98(0.80)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>10 0.30(0.30) 0.98(0.92) 0.98(0.94)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>1 When structure perturbations and feature pertur-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>bations are carried out simultaneously, each type</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>of perturbations has a constraint of &#8710;.</ns0:cell></ns0:row></ns0:table><ns0:note>2 Represents 'SR(MR)'.</ns0:note></ns0:figure> <ns0:note place='foot'>* The data represent the means and standard deviations of SRs. The bold indicates the average SR of the algorithm that performs better under the same conditions.</ns0:note> <ns0:note place='foot' n='18'>/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='20'>/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62580:1:1:NEW 30 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Rebuttal Dear Editors We thank reviewers for their detailed and helpful feedback and have edited the manuscript to address their concerns. We list the suggestion in the reviews and respond to them respectively. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Review 1 1. Authors are suggested to shift the related work after the Introduction part. Authors are suggested to include future work in the conclusion section. This is a helpful suggestion. We have added such future work about indirect attacks and global attacks in the Conclusion section (line 506-508 in the highlighted manuscript, the same as below) and shifted the related work after the Introduction part (line 109-145). 2. Authors are suggested to give the order of these parameters based on their importance towards classifying adversarial attacks on graph nodes, after the experimental evaluation. Many thanks to the reviewers for this suggestion. We have divided the main parameters into two categories and ordered each in descending order of importance. One category is scenario, perturbation type and target model, which are used to distinguish between different types of adversarial attacks. The other category is constraint size, resource budget and number of target nodes, which are numerical parameters that have influences on the effectiveness of the attack. As suggested, we have placed the specific details in the last two paragraphs of the Evaluation section. Review 2 - Line 43 > uppercase - Line 235 > “DeeoRobust” - DeepRobust - Figure 3, 4, 5 and 6 should be improved. In overall the labels are too small. Review 2 propose a detailed list of comments. We agree to these comments and will modify the final version according to them. We have corrected the spelling mistakes and submitted clear figures with large labels. Details are listed below: • In line 43 'derivative-free' has been changed to “Derivative-free”. • In line 272 (line 235 of the original manuscript) “DeeoRobust” has been changed to “DeepRobust”. • In Page 13, 17 and 18 of the revised manuscript with tracked changes, Figure 3 to 16 have been improved by using larger labels and re-structuring. The subgraphs of the old Figure 3 to 7 were changed to horizontal arrangement and labeled “A” “B” and “C”. Subfigures in Page 17 and 18 have been independently configured as figures. Other Changes • All images have white borders removed and image files have been named in numerical order. • Missing citations for tables and figures have been filled in (line 326, 371, 392, 411, 421-422). • In line 351 “-0.40” has been changed to “-0.4” to be consistent with “-0.6”. • In line 466 “Conclusions” has been changed to “Conclusion” for consistency with the previous section. • “Parameter setting” in Table 3, 5, 7, 9 and 11 have been changed to “Parameter settings”. • We have removed the Acknowledgement section because it includes the funding number. • We have set up a code repository (https://github.com/yangrunze1013/DFDA.git) to store DeepRobust (an open-source tool for adversarial attacks) embedded in our algorithm, which is convenient for researchers to run our program. We will only submit code files that are entirely our own on PeerJ. Note All the tables are within the TEX file instead of submitting DOCX files. Teng Long, On behalf of all authors. "
Here is a paper. Please give your review comments after reading it.
224
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The emergence of the novel coronavirus pneumonia (Covid-19) pandemic at the end of 2019 led toworldwide chaos. However, the world breathed a sigh of relief when a few countries announced the development of a vaccine and gradually began to distribute it.</ns0:p><ns0:p>Nevertheless, the emergence of another wave of this pandemic returned us to the starting point. At present, early detection of infected people is the paramount concern of both specialists and health researchers. This paper proposes a method to detect infected patients through chest x-ray images by using the large dataset available online for Covid-19 (COVIDx), which consists of 2128 X-ray images of Covid-19 cases, 8066 normal cases, and 5575 cases of pneumonia. A hybrid algorithm is applied to improve image quality before undertaking neural network training. This algorithm combines two different noise-reduction filters in the image, followed by a contrast enhancement algorithm. To detect Covid-19, we propose a novel convolution neural network (CNN) architecture called KL-MOB (Covid-19 detection network based on the MobileNet structure). The performance of KL-MOB is boosted by adding the Kullback-Leibler (KL) divergence loss function when trained from scratch. The KL divergence loss function is adopted for content-based image retrieval and fine-grained classification to improve the quality of image representation. The results are impressive: the overall benchmark accuracy, sensitivity, specificity, and precision are 98.7%, 98.32%, 98.82%, and 98.37%, respectively. These promising results should help other researchers develop innovative methods to aid specialists. The tremendous potential of the method proposed herein can also be used to detect Covid-19 quickly and safely in patients throughout the world.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 33</ns0:head><ns0:p>The novel coronavirus 2019 researchers to use pretrained networks to build their own models <ns0:ref type='bibr' target='#b40'>(Narin et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Ozturk et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Apostolopoulos and Mpesiana, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Civit-Masot et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>Albahli, 2020;</ns0:ref><ns0:ref type='bibr' target='#b58'>Sethy and Behera, 2020;</ns0:ref><ns0:ref type='bibr'>Apostolopoulos et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>Chowdhury et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b16'>Farooq and Hafeez, 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Maghdid et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hemdan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b62'>Taresh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b44'>Punn and Agarwal, 2021)</ns0:ref>. Given that Covid-19 infected millions of people worldwide within a few months of its detection, a mid-range dataset of positive cases was made available for public use <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref>. This dataset can be uploaded from https://github. com/lindawangg/COVID-Net/blob/master/docs/COVIDx.md. 
This, in turn, has enabled further progress in developing new, accurate, in-depth models for Covid-19 recognition <ns0:ref type='bibr' target='#b2'>(Ahmed et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Afshar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ucar and Korkmaz, 2020;</ns0:ref><ns0:ref type='bibr' target='#b33'>Luz et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hirano et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rezaul Karim et al., 2020)</ns0:ref>. However, some medical imaging issues usually pose difficulties in the recognition task, reducing the performance of these models. These issues include, but are not limited to, insufficient training data, inter-class ambiguity, intra-class variation, and visible noise. These problems oblige us to significantly enhance the discrimination capability of the associated model.</ns0:p><ns0:p>One way around these issues is to use proper image preprocessing techniques for noise reduction and contrast enhancement. A closer look at the available images reveals the presence of various types of noise, such as impulsive, Poisson, speckle, and Gaussian noise [see Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> for the most common types of noise in x-ray images <ns0:ref type='bibr' target='#b43'>(Paul et al., 2018)</ns0:ref>]. However, the most prevalent studies have focused only on some of these types of noise (e.g., Gaussian and Poisson). In particular, among many other techniques, histogram equalization (HE) <ns0:ref type='bibr' target='#b13'>(Civit-Masot et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b63'>Tartaglione et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rezaul Karim et al., 2020)</ns0:ref>, contrast limited adaptive histogram equalization (CLAHE) <ns0:ref type='bibr' target='#b15'>(El-bana et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b55'>Saiz and Barandiaran, 2020;</ns0:ref><ns0:ref type='bibr' target='#b36'>Maguolo and Nanni, 2021;</ns0:ref><ns0:ref type='bibr' target='#b52'>Ramadhan et al., 2020)</ns0:ref>, adaptive total variation method(ATV) <ns0:ref type='bibr' target='#b44'>(Punn and Agarwal, 2021)</ns0:ref>, white balance followed by CLAHE <ns0:ref type='bibr' target='#b59'>(Siddhartha and Santra, 2020)</ns0:ref>, intensity normalization followed by CLAHE (N-CLAHE) <ns0:ref type='bibr' target='#b23'>(Horry et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>El Asnaoui and Chawki, 2020)</ns0:ref>, Perona-Malik filter (PMF), unsharp masking (UM) (Rezaul <ns0:ref type='bibr' target='#b53'>Karim et al., 2020)</ns0:ref>, Bi-histogram equalization with adaptive sigmoid function (BEASF) <ns0:ref type='bibr' target='#b20'>(Haghanifar et al., 2020)</ns0:ref>, The gamma correction (GC) <ns0:ref type='bibr'>(Rahman et al., 2021)</ns0:ref>, Moment Exchange algorithm (MoEx), CLAHE <ns0:ref type='bibr' target='#b34'>(Lv et al., 2021)</ns0:ref>, local phase enhancement (LPE) <ns0:ref type='bibr' target='#b45'>(Qi et al., 2021)</ns0:ref>, image contrast enhancement algorithm (ICEA) <ns0:ref type='bibr' target='#b10'>(Canayaz, 2021)</ns0:ref>, and Gaussian filter <ns0:ref type='bibr' target='#b37'>(Medhi et al., 2020)</ns0:ref> are, as far as we are aware, the only adopted techniques in Covid-19 recognition to date. An overview of these works is listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. It should be noted that the CLAHE algorithm has widely used by the majority, while some pursued a hybridization method. 
Moreover, the utilized filters can result in blurry (by Gaussian filter) or blocky (by PMF) features in the processed image. Accordingly, there is still room to incorporate more effective preprocessing techniques to further increase the accuracy of these systems. Motivated by the outstanding results in the previously mentioned works as well as the need for closeto-perfect recognition models, this paper integrates novel image preprocessing enhancement with deep learning to meet the challenges arising from data deficiency and complexity. Specifically, we combine an adaptive median filter (AMF) and a non-local means filter (NLMF) to remove the noise from the images. Numerous works have already analyzed the performance of these two filters for denoising x-ray imagery <ns0:ref type='bibr' target='#b30'>(Kim et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Raj and Venkateswarlu, 2012;</ns0:ref><ns0:ref type='bibr' target='#b46'>Rabbouch et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b56'>Sawant et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b38'>Mirzabagheri, 2017)</ns0:ref>, demonstrating their superiority over various filters, including the ones in the cited works in terms of removing impulsive, Poisson, and speckle noise while preserving the useful image details. We then utilize the CLAHE approach that has been already applied for the enhancement of contrast in medical images <ns0:ref type='bibr' target='#b69'>(Zhou et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b60'>Sonali et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wen et al., 2016)</ns0:ref>, to enhance the contrast of the denoised images. The enhanced images are finally fed into the state-of-the-art convolution neural network (CNN) called MobileNet <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017)</ns0:ref>, which has been recently utilized for the same classification task by <ns0:ref type='bibr'>(Apostolopoulos et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Apostolopoulos and Mpesiana, 2020)</ns0:ref>. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. The motivation behind choosing a MobileNet CNN is that it not only helps to reduce overfitting but also runs faster than a regular CNN and has significantly fewer parameters (4.24) <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b67'>Yu et al., 2020)</ns0:ref>. Moreover, MobileNets employ two global hyperparameters based on depthwise separable convolutions to strike a balance between efficiency and accuracy.</ns0:p><ns0:p>KL divergence is one of the measures that reflect the distribution divergence between different probabilities, which has been widely used in the problem of classification imbalanced datasets <ns0:ref type='bibr' target='#b61'>(Su et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Feng et al., 2018)</ns0:ref>. The KL divergence loss function is more commonly used when using models that learn to approximate a more complex function than simply multiclass classification, such as in the case of an autoencoder used for learning a dense feature representation under a model that must reconstruct the original input. Indeed, the lack of necessary extracted features from the images sometimes cannot provide expected accuracy in the classification result. 
In this work, inspired by the variational autoencoder learning <ns0:ref type='bibr' target='#b32'>(Kingma and Welling, 2013;</ns0:ref><ns0:ref type='bibr' target='#b4'>Alfasly et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Alghaili et al., 2020)</ns0:ref> the Kullback-Leibler (KL)</ns0:p><ns0:p>divergence is adopted to devise more efficient and accurate representations and measure how far we are from the optimal solution during the iterations. We evaluated the performance of the proposed framework on the COVIDx dataset in terms of a wide variety of metrics: accuracy, sensitivity, specificity, precision, area under the curve, and computational efficiency. Simulation results reveal that the proposed framework significantly outperforms state-of-the-art models from both quantitative and qualitative perspectives.</ns0:p><ns0:p>The novelty of this study is not only to clarify significant features in the CXR images by developing a hybrid algorithm but also proposes a novel approach in how to devise more efficient and accurate by using KL loss. The intent behind this study is not only to achieve a high classification accuracy but to achieve this by training an automated end-to-end deep learning framework based on CNN. This method is superior to transfer learning for evaluating the importance of features derived from imagery, as it is not relying on features previously learned by the pretrained model, which was first trained on nonmedical images. The main contributions of this work can be summarized as follows:</ns0:p><ns0:p>&#8226; For Covid-19 recognition, we propose an automated end-to-end deep learning framework based on MobileNet CNN with KL divergence loss function.</ns0:p><ns0:p>&#8226; We propose an impressive approach to ensure a sufficiently diverse representation by predicting the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science output of the mean &#181; and standard-deviation &#963; of the Gaussian distribution.</ns0:p><ns0:p>&#8226; We incorporate a novel preprocessing enhancement technique consisting of AMF, NLMF, and CLAHE to meet the challenges arising from data deficiency and complexity.</ns0:p><ns0:p>&#8226; We analyze the performance of the preprocessing enhancement scheme to demonstrate its role in enhancing the discrimination capability of the proposed model.</ns0:p><ns0:p>The rest of this paper is organized as follows: Section (2) describes the phases of the proposed method.</ns0:p><ns0:p>Section (3) highlights the experimental results. Section (4) discusses these results, and the conclusion is presented in Section (5).</ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED METHOD</ns0:head><ns0:p>In this section, we briefly describe the scenario of the methodology used to achieve the purpose of this study. The proposed method is depicted in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Data Acquisition</ns0:head><ns0:p>In this work, we used the COVIDx dataset used by <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> to train and evaluate the proposed model. In brief, the COVIDx dataset is an open-source dataset that can be downloaded from https: //github.com/lindawangg/COVID-Net/blob/master/docs/COVIDx.md. The instructions given by <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> were followed to set up the new dataset. 
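As an illustration only, a minimal sketch of assembling (image path, label) pairs from a COVIDx-style split file is given below; the file name and the whitespace-separated `patient_id filename class source` layout are assumptions about the generated split and may need adjusting to the actual output of the dataset scripts.

```python
# Hypothetical loader for a COVIDx-style split file; names and layout are assumed.
from pathlib import Path

LABELS = {"COVID-19": 0, "normal": 1, "pneumonia": 2}   # class ids used in this work

def load_split(split_file, image_dir):
    samples = []
    for line in Path(split_file).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] in LABELS:
            samples.append((str(Path(image_dir) / parts[1]), LABELS[parts[2]]))
    return samples

# e.g. train = load_split("train_split.txt", "data/train")   # hypothetical names
```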
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preprocessing Method</ns0:head><ns0:p>In this study, we attempt to provide an algorithm that would increase the image quality by using a hybrid technique consisting of noise reduction and contrast enhancement. Specifically, two efficient filters are used for noise reduction while CLAHE is used for contrast enhancement. The first filter is the AMF, which removes impulse noise <ns0:ref type='bibr' target='#b41'>(Ning et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b29'>Khare and Chugh, 2014</ns0:ref>). This filter is followed by the NLMF algorithm that calculates similarity based on patches instead of pixels. Given a discrete noisy image u = u(i) for pixel I, the estimated value of NL[u](i) is the weighted average of all pixels:</ns0:p><ns0:formula xml:id='formula_0'>NL[u](i) = &#8721; j&#8712;i w(i, j).u( j), (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>where the weight family w(i, j) j depends on the similarity between the pixels i and j.</ns0:p><ns0:p>The similarity between the two pixels i and j is defined by the similarity of the intensity of gray-level vectors u(N i ) and u(N j ), where N l signifies a square neighborhood of fixed size and centered at a pixel L. The similarity is measured as a function to minimize the weighted Euclidean distance, u</ns0:p><ns0:formula xml:id='formula_2'>(N i )&#8722;u(N j ) 2 (2,a)</ns0:formula><ns0:p>where a &gt; 0 is the Gaussian kernel standard deviation. The pixels with a similar gray-level neighborhood to u(N i ) have larger weights in average. These weights are defined as;</ns0:p><ns0:formula xml:id='formula_3'>w(i, j) = 1 Z (i) e &#8722; u(N i )&#8722;u(N j ) 2 (2,a) h 2 , (<ns0:label>2</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>)</ns0:formula><ns0:p>where Z (i) is the normalizing constant, and the settings h works as a filtering degree.</ns0:p><ns0:p>Next, CLAHE is applied to the denoised images to achieve an acceptable visualization and to compensate for the effect of filtration that may contribute to some blurring on the images <ns0:ref type='bibr' target='#b26'>(Huang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Senthilkumar and Senthilmurugan, 2014)</ns0:ref>. Since there are many homogeneous regions in medical images, CLAHE is suitable for optimizing medical images as the CLAHE algorithm creates non-overlapping homogeneous regions.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Neural Network Model</ns0:head><ns0:p>We used a deep neural network structure called a MobileNet neural network <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017)</ns0:ref>. All images were resized to 224 &#215; 224 &#215; 3 before being used as input to the neural network. 
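The preprocessing steps described above, followed by the resize to the 224 × 224 × 3 network input, can be sketched as follows, assuming OpenCV/NumPy and 2-D grayscale uint8 CXRs; the mapping of the filter and CLAHE settings reported later in the Experiments section onto these arguments is only approximate.

```python
# Sketch of the preprocessing pipeline: adaptive median filter -> non-local
# means -> CLAHE -> resize to the 224x224x3 network input.
import cv2
import numpy as np

def adaptive_median_filter(img, max_window=5):
    """Plain (slow) adaptive median filter: the window grows until the median
    is not an impulse, and only impulse-like centre pixels are replaced."""
    pad = max_window // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            for k in range(1, pad + 1):
                win = padded[i + pad - k:i + pad + k + 1, j + pad - k:j + pad + k + 1]
                zmin, zmed, zmax = int(win.min()), int(np.median(win)), int(win.max())
                if zmin < zmed < zmax:                 # median is not an impulse
                    if not (zmin < int(img[i, j]) < zmax):
                        out[i, j] = zmed               # replace impulse-like pixel
                    break                              # decision made at this window size
                # otherwise grow the window and try again
    return out

def enhance_cxr(img):
    den = adaptive_median_filter(img, max_window=5)
    den = cv2.fastNlMeansDenoising(den, h=1.0,
                                   templateWindowSize=5, searchWindowSize=7)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))   # approximate settings
    out = clahe.apply(den)
    out = cv2.resize(out, (224, 224))
    return cv2.cvtColor(out, cv2.COLOR_GRAY2BGR)       # 224 x 224 x 3 network input
```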
Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Type Stride Filter Shape Size in Size out Conv1</ns0:note><ns0:formula xml:id='formula_5'>2 3&#215;3&#215;3&#215;32 224&#215;224&#215;3 112&#215;112&#215;32 Conv2 dw 1 3 &#215; 3 &#215; 32 112&#215;112&#215;32 112&#215;112&#215;32 Conv2 pw 1 1 &#215; 1 &#215; 32 &#215; 64 112&#215;112&#215;32 112&#215;112&#215;64 Conv3 dw 2 3 &#215; 3 &#215; 64 112&#215;112&#215;64 56&#215;56&#215;64 Conv3 pw 1 1 &#215; 1 &#215; 64 &#215; 128 56&#215;56&#215;64 56&#215;56&#215;128 Conv4 dw 1 3 &#215; 3 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv4 pw 1 1 &#215; 1 &#215; 128 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv5 dw 2 3 &#215; 3 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv5 pw 1 1 &#215; 1 &#215; 128 &#215; 256 28&#215;28&#215;128 28&#215;28&#215;128 Conv6 dw 1 3 &#215; 3 &#215; 256 28&#215;28&#215;256 28&#215;28&#215;265 Conv6 pw 1 1 &#215; 1 &#215; 256 &#215; 256 28&#215;28&#215;256 28&#215;28&#215;256 Conv7 dw 2 3 &#215; 3 &#215; 256 28&#215;28&#215;256 14&#215;14&#215;256 Conv7 pw 1 1 &#215; 1 &#215; 256 &#215; 512 14&#215;14&#215;256 14&#215;14&#215;512 Conv8-12 dw 1 3 &#215; 3 &#215; 512 14&#215;14&#215;512 14&#215;14&#215;512 Conv8-12 pw 1 1 &#215; 1 &#215; 512 &#215; 512 14&#215;14&#215;512 14&#215;14&#215;512 Conv13 dw 2 3 &#215; 3 &#215; 512 14&#215;14&#215;512 7&#215;7&#215;512 Conv13 pw 1 1 &#215; 1 &#215; 512 &#215; 1024 7&#215;7&#215;512 7&#215;7&#215;1024 Conv14 dw 2 3 &#215; 3 &#215; 1024 7&#215;7&#215;1024 7&#215;7&#215;1024 Conv14 pw 1 1&#215;1&#215;1024&#215;1024 7&#215;7&#215;1024 7&#215;7&#215;1024 GAP 1 Pool 7 &#215; 7 7&#215;7&#215;1024 1&#215;1&#215;1024 Dropout 1 Probability=0.001 1&#215;1&#215;1024 1&#215;1&#215;1024 FC (&#181;) 1 128&#215; 3 1&#215;1&#215;1024 1&#215;1&#215;128 FC (&#963; ) 1 128&#215; 3 1&#215;1&#215;1024 1&#215;1&#215;128 Softmax 1 Classifier 1&#215;1&#215;128 1&#215;1&#215;3</ns0:formula><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Layers of prposed CNN model architecture.</ns0:p><ns0:p>output is a feature vector of size 1024 for each time step. Then, a dropout layer is used with a probability of 0.001. The output of the dropout layer goes to two fully connected layers that generate an output of size 128. One fully connected layer is used to predict the mean &#181;, which is used to extract the most significant features from those features extracted in previous layers. The other is used to predict the standard deviation &#963; of a Gaussian distribution, which is used to calculate the KL loss function. The output of the fully connected layer, which used to predict the mean &#181; goes to the last layer (Softmax classifier), which is defined by</ns0:p><ns0:formula xml:id='formula_6'>L CE (o, v) = &#8722; v &#8721; i=1 o i log ( e pi &#8721; v j e p j ) ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where v indicates the output vector, o indicates the objective vector, and p j indicates the input to the 164 neuron j.</ns0:p><ns0:p>165</ns0:p><ns0:p>The categorical cross-entropy loss function is generally used to address such a multiclass classification problem. The three classes are provided with labels such as '0' being a Covid-19 case, '1' being a normal case, and '2' being pneumonia. We adopted Kullback-Leibler divergence loss function to devise more efficient and accurate representations. 
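For reference, the closed form of this regularizer, stated below as Eq. (4), follows directly from the KL divergence between the predicted diagonal Gaussian and a standard-normal prior; a short derivation sketch, assuming the usual convention that each σ_i denotes a per-dimension variance:

```latex
% KL divergence between q = N(mu, diag(sigma)) (sigma_i = variance of dimension i)
% and the standard-normal prior p = N(0, I):
D_{KL}\big(q \,\|\, p\big)
  = \frac{1}{2}\sum_{i=1}^{n}\left(\mu_i^{2} + \sigma_i - \log \sigma_i - 1\right)
  = -\frac{1}{2}\sum_{i=1}^{n}\left(1 + \log \sigma_i - \mu_i^{2} - \sigma_i\right).
```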
Moreover, the combined KL loss with the categorical cross-entropy loss function would enforce the network to give a consistent output, in addition to the preprocessing applied to the input image. The KL divergence distribution between the &#181;;&#963; and the prior is considered as a regularization that aids in addressing the issue of overfitting. KL loss function is defined by</ns0:p><ns0:formula xml:id='formula_7'>D KL = &#8722; 1 2 n &#8721; i=1 (1 + log (&#963; i ) &#8722; &#181; 2 i &#8722; &#963; i ) , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>)</ns0:formula><ns0:p>where n is the output vector of the average pooling layer with the size of 1024, &#181; is the mean, which is predicted from one fully connected layer, and &#963; is the standard deviation of a Gaussian distribution, which is predicted from the other fully connected layer in the network, Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. The multitask learning loss function for our proposed network is now defined as; Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_9'>L = &#945;D KL + L CE (o, v) ,<ns0:label>(5</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We use a weighted loss function as illustrated in Equation <ns0:ref type='formula' target='#formula_9'>5</ns0:ref>. The weight of KL loss &#945; is empirically set to (0:1) to be used as a one-hot vector, which not only ensures a clear representation of the true class, but also helps in addressing the large variance arising due to unbalanced data.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head><ns0:p>window size was taken to be 5 &#215; 5 for effective filtering. The resultant image was then subjected to the NLMF technique. The performance of the NLMF was depended on 7 &#215; 7 of the search window, 5 x 5 of the similarity window, and a degree of filtering h = 1. Furthermore, we increased the contrast using CLAHE with the bin of 256 and block size of 128 in slope 3 to get the enhanced images. We passed the images to KL-MOB as the input to predict the CXR image (Covid-19, normal, or pneumonia). Because many functions are not built-in functions from deep learning libraries, such as the relu6 activation function with a max value of six, we built an interface for the evaluation process that contains all layers in the network, as in a training network, but which is not used for training. Instead, it is used to pass on the input image to produce the output.</ns0:p><ns0:p>The proposed model (KL-MOB) is implemented by using the Python programming language. All experiments were conducted on a Tesla K80 GPU graphics card on Google Collaboratory with an Intel&#169; i7-core @3.6GHz processor and 16GB RAM with 64-bit Windows 10 operating system. The original and enhanced images are used separately to train the KL-MOB. In the first stage, the baseline model is trained to verify the influence of the KL loss on performance. The network is trained by using a SoftMax classifier with an Adam optimizer <ns0:ref type='bibr' target='#b31'>(Kingma and Ba, 2014)</ns0:ref> with the initial learning rate set to 0.0001 and a batch size of 32. The dataset used for training is divided into 70% as a training set and 30% as a validation set. The total number of parameters is 3,488,426, where the number of trainable parameters is 3,466,660, and the nontrainable parameters are 21,766. 
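A minimal TensorFlow/Keras sketch of the network tail of Table 3 together with this combined objective is given below; the framework choice, the `backbone` argument (a MobileNet-style feature extractor ending in a 7 × 7 × 1024 map), the use of log σ instead of σ for numerical stability, and the value α = 0.1 are illustrative assumptions rather than the exact implementation.

```python
# Sketch of the KL-MOB head (Table 3 tail) with the combined loss of Eqs. (3)-(5).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_kl_mob_head(backbone, num_classes=3, alpha=0.1):
    feat = layers.GlobalAveragePooling2D()(backbone.output)     # 1x1x1024 features
    feat = layers.Dropout(0.001)(feat)
    mu = layers.Dense(128, name="mu")(feat)                     # FC predicting the mean
    log_sigma = layers.Dense(128, name="log_sigma")(feat)       # FC predicting log-variance
    probs = layers.Dense(num_classes, activation="softmax")(mu) # softmax classifier on mu

    model = Model(backbone.input, probs)
    # D_KL = -1/2 * sum_i (1 + log(sigma_i) - mu_i^2 - sigma_i), Eq. (4),
    # added on top of the categorical cross-entropy as in Eq. (5).
    d_kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + log_sigma - tf.square(mu) - tf.exp(log_sigma), axis=-1))
    model.add_loss(alpha * d_kl)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# e.g. base = tf.keras.applications.MobileNet(input_shape=(224, 224, 3),
#                                             include_top=False, weights=None)
#      model = build_kl_mob_head(base)
```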
In the training period, 200 epochs were completed to check the KL-MOB model accuracy and loss, which are shown in Figures <ns0:ref type='figure' target='#fig_9'>5 and 6</ns0:ref>.</ns0:p><ns0:p>Beforehand, we conducted a comprehensive investigation to determine the impact of various feature sizes. As is shown in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science noting that the results are relatively acceptable for all vectors in the enhanced data, but the best result is achieved when the output vector is 128 with an accuracy of 96.06 %. In contrast, the output vector 256 in the original data achieved the best value with an accuracy of 93.24 %. This can be attributed to that the KL divergence between &#181;;&#963; distribution and the prior is considered as a regularization which helps to overcome the overfitting problem. </ns0:p></ns0:div> <ns0:div><ns0:head>Model</ns0:head></ns0:div> <ns0:div><ns0:head>Performance Evaluation</ns0:head></ns0:div> <ns0:div><ns0:head>Preprocessing Performance Evaluation</ns0:head><ns0:p>The performance of the proposed preprocessing technique was quantified by using various evaluation metrics such as mean average error (MAE) and peak signal-to-noise Ratio (PSNR). These metrics are desirable because they can be rapidly quantified.</ns0:p><ns0:p>Definition: x(i, j) denotes the samples of the original image, y(i, j) denotes the samples of the output image.M and N are the number of pixels in row and column directions, respectively. MAE is calculated as in Equation <ns0:ref type='formula' target='#formula_10'>6</ns0:ref>, where a large value means that the images are of poor quality. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_10'>MAE = |E(x) &#8722; E(y)| , (<ns0:label>6</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The limited value PSNR implies that the images are of low quality. PSNR is described in terms of Mean Square Error MSE as follows:</ns0:p><ns0:formula xml:id='formula_11'>PSNR = 10 log 10 MAX 2 MSE ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where MAX 2 is the maximum possible pixel intensity value 255 when the pixel is represented by 8 bits.</ns0:p></ns0:div> <ns0:div><ns0:head>MSE</ns0:head><ns0:formula xml:id='formula_12'>= 1 MN M&#8722;1 &#8721; i=1 N&#8722;1 &#8721; j=1 [x(i, j) &#8722; y(i, j)] 2 , (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>)</ns0:formula><ns0:p>Neural Network Performance Evaluation</ns0:p><ns0:p>The test set described in the previous section was used to evaluate KL-MOB. The classification outcome has four cases: True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). The metrics used to measure the performance are accuracy (ACC), sensitivity (TPR), specificity (SPC), and precision (PPV) and are defined as follows:</ns0:p><ns0:formula xml:id='formula_14'>Accuracy (ACC) = T P + T N T P + FP + T N + FN ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>Sensitivity (T PR) = T P T P + FN , (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>) Speci f icity (SPC) = T N FP + T N ,<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>Precision (PPV ) = T P T P + FP ,<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>The graph of true positive rate (TPR) and false positive rate (FPR) is the receiver operating characteristic (ROC) curve. 
The FPR is calculated as follows:</ns0:p><ns0:formula xml:id='formula_18'>False Positive Rate (FPR) = FP FP + T N ,<ns0:label>(13)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>In the experiments, noise reduction and contrast enhancement performance were evaluated independently, since they are two separate issues. The average value was computed for all images in each class. Tables <ns0:ref type='table' target='#tab_6'>5 and 6</ns0:ref> show the results for noise reduction and image enhancement, respectively. Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows the noise reduction techniques that were applied to the original image and the hybrid method used in this work. Although the denoising filters could smooth and blur the resulting images, this can be enhanced by improving the image edges and by highlighting the high-frequency components to remove the residual noise. Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> displays the original images and their enhanced versions. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Method</ns0:head><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>This work proposes an approach that combines noise-reduction algorithms with contrast enhancement.</ns0:p><ns0:p>This approach introduces a type of hybrid filtering and contrast enhancement for the data set of images used for Covid-19 detection. The well-known measurable methods PSNR and MAE were used as image quality measurements for assessing and comparing image quality. The results of Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> show that using an AMF followed by a NLMF is entirely favorable for eliminating noise. The proposed hybrid algorithm is applied to the entire image instead of just parts of the image and preserves important details. Figure <ns0:ref type='figure' target='#fig_15'>11</ns0:ref> illustrates the difference between the original CXRs and CXRs enhanced by applying the method proposed herein. Furthermore, we judge the lung damage in the enhanced image to be more perspicuous than in the original image. In addition, CLAHE with a bin of 256 gives the best PSNR, as shown in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>.</ns0:p><ns0:p>To show the impact of the KL divergence loss on the efficacy of the proposed method, we performed several experiments using the categorical entropy loss function (CCE) and the mean square error (MSE) loss function. The results obtained in Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref> show that the proposed method has a great impact on the Manuscript to be reviewed In our experiment of 100 patients with Covid-19, only one was misclassified with a 99.0% PPV for Covid-19, which compares favorably with previous results of 98.9%, and 96.12% for <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> and (Rezaul <ns0:ref type='bibr' target='#b53'>Karim et al., 2020)</ns0:ref>, respectively. In addition, we compare the results obtained from the KL-MOB model with those from previous studies that used the same or similar datasets for evaluation (see Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>). 
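For reproducibility of these comparisons, the quantities reported in Tables 5-9 can be computed as in the following minimal sketch of Eqs. (6)-(13); inputs are assumed to be uint8 images and integer label arrays.

```python
import numpy as np

def mae(x, y):
    return abs(float(np.mean(x)) - float(np.mean(y)))            # |E(x) - E(y)|, Eq. (6)

def psnr(x, y, max_val=255.0):
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)      # Eq. (8)
    return 10.0 * np.log10(max_val ** 2 / mse)                   # Eq. (7)

def per_class_metrics(y_true, y_pred, cls):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    tn = np.sum((y_pred != cls) & (y_true != cls))
    return {"accuracy": (tp + tn) / (tp + fp + tn + fn),          # Eq. (9)
            "sensitivity": tp / (tp + fn),                        # Eq. (10)
            "specificity": tn / (fp + tn),                        # Eq. (11)
            "precision": tp / (tp + fp),                          # Eq. (12)
            "fpr": fp / (fp + tn)}                                # Eq. (13)
```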
Not included in the comparison are studies that used smaller datasets <ns0:ref type='bibr' target='#b16'>(Farooq and Hafeez, 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Afshar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hirano et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ucar and Korkmaz, 2020)</ns0:ref>. The results show that, for all performance metrics [accuracy, sensitivity (TPR), specificity, and PPV for overall detection], the KL-MOB model produces superior results compared with the models of <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> patterns in medical images to be recognized at a level comparable to that of an experienced radiologist.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The KL loss function is used to boost the performance of the KL-MOB model, which outperforms recent approaches, as shown by the results. Moreover, it is also believed that the notion of using KL divergence can be extended to other similar scenarios such as content-based image retrieval and finegrained classification to improve the quality of object representation. Considering several essential factors such as the pattern by which Covid-19 infections spread, image acquisition time, scanner availability, and costs, we hope that these findings will make a useful contribution to the fight against Covid-19 and increase the acceptance of artificial-intelligence-assisted applications in clinical practice.</ns0:p><ns0:p>In future work, we will further enhance the proposed method's performance by including lateral views of CXR images in the training data because, in some cases, frontal-view CXR images do not permit a clear diagnosis of pneumonia cases. Besides, this work lacked in applying some of the techniques such as progressive resizing <ns0:ref type='bibr' target='#b8'>(Bhatt et al., 2021a)</ns0:ref>, which can be applied to CNNs to carry out imaging-based diagnostics. Furthermore, visual ablation studies <ns0:ref type='bibr' target='#b9'>(Bhatt et al., 2021b;</ns0:ref><ns0:ref type='bibr' target='#b28'>Joshi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b18'>Gite et al., 2021)</ns0:ref> can be performed along with deep learning, which will significantly improve the detection of Covid-19 manifestations in the CXR images. Since only a limited number of CXR images are available for Covid-19 infection, out-of-distribution issues may arise, so more data from related distributions is needed for further evaluation. There are several techniques that would be another way to overcome this problem, include, but are not limited to data augmentation techniques <ns0:ref type='bibr' target='#b11'>(Chaudhari et al., 2019)</ns0:ref>, transfer learning <ns0:ref type='bibr' target='#b62'>(Taresh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Bhatt et al., 2021a)</ns0:ref>, domain-adaptation <ns0:ref type='bibr' target='#b68'>(Zhang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Jin et al., 2021)</ns0:ref> and adversarial learning <ns0:ref type='bibr' target='#b19'>(Goel et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b47'>Rahman et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Motamed et al., 2021)</ns0:ref>, etc. 
Finally, the image enhancement must be verified by a radiologist, which we have not yet been able to do due to the emerging conditions.</ns0:p></ns0:div> <ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head><ns0:p>This work was supported by the National Natural Science Foundation <ns0:ref type='bibr'>[61572177]</ns0:ref>. There was no additional external funding received for this study.</ns0:p></ns0:div> <ns0:div><ns0:head>13/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>is a recently recognized disease caused by the severe acute 34 respiratory syndrome coronavirus 2 (SARS-CoV-2). Being highly transmissible and life-threatening, it 35 has rapidly turned into a global pandemic, affecting worldwide health and well-being. Tragically, no 36 effective treatment has yet been approved for patients with Covid-19. However, patients can have a good 37 chance of survival if they are diagnosed sufficiently early. 38 As a widely available, time-and cost-effective diagnostic tool, chest x-rays (CXRs) can potentially be 39 used for early recognition of Covid-19. Nevertheless, Covid-19 can share similar radiographic features 40 with other types of pneumonia, making it difficult for radiologists to manually distinguish between the 41 two. As a result, manual detection of Covid-19 is time-consuming and mistake-prone because it is left 42 to the subjective judgment of the radiologist. It is thus highly desirable to develop automated detection 43 techniques. 44 With the rapid global spread of Covid-19, researchers have begun using state-of-the-art deep-learning techniques to automate the recognition of Covid-19. The initial lack of Covid-19 data compelled earlier</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Noisy images: (A) image with impulsive noise and (B) image with Gaussian noise.</ns0:figDesc><ns0:graphic coords='3,245.13,494.48,206.77,106.28' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>, which generally consists of two phases: (a) image preprocessing, to overcome the existing drawbacks mentioned in the previous section; (b) training and testing dedicated to image classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Framework of study.</ns0:figDesc><ns0:graphic coords='5,142.28,255.34,412.50,148.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Architecture of proposed neural network.</ns0:figDesc><ns0:graphic coords='6,159.72,452.06,377.60,172.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4 presents the curve comparisons of all training processes. With the maximum training epoch set to 200. A large gap between training and validation in both original and enhanced images indicates the presence of overfitting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Accuracy and loss graphs for baseline model:(A) training and validation accuracy of the original images, (B) training and validation loss of the original images, (C) training and validation accuracy of the enhanced images and (D) training and validation loss of the enhanced images.</ns0:figDesc><ns0:graphic coords='8,178.49,341.30,340.08,234.48' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Accuracy and loss graphs for KL-MOB on training and validation of the original images: (A) accuracy and (B) loss.</ns0:figDesc><ns0:graphic coords='9,186.53,63.77,324.00,118.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy and loss graphs for KL-MOB on training and validation of the enhanced images: (A) accuracy and (B) loss.</ns0:figDesc><ns0:graphic coords='9,186.53,229.63,324.00,118.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Result of noise-reduction techniques applied to images: (A) original image, (B) image denoised by AMF, (C) image denoised by NLMF, (D) image denoised by proposed method.</ns0:figDesc><ns0:graphic coords='11,186.01,191.94,325.00,117.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Results of image enhancement: (A) original image, (B) image enhanced by CLAHE, (C) image enhanced by proposed method.</ns0:figDesc><ns0:graphic coords='11,211.02,392.31,275.00,99.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. ROC curves of different classes for original images: (A) Covid-19, (B) normal, and (C) pneumonia.</ns0:figDesc><ns0:graphic coords='12,164.27,63.78,368.50,99.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. ROC curves of different classes for enhanced images: (A) Covid-19, (B) normal, and (C) pneumonia.</ns0:figDesc><ns0:graphic coords='12,164.27,220.03,368.50,99.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The first and third columns show the original images, and the second and fourth columns show the corresponding enhanced images.</ns0:figDesc><ns0:graphic coords='13,225.67,63.79,245.70,200.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Confusion matrix for KL-MOB applied to COVIDx test dataset.</ns0:figDesc><ns0:graphic coords='13,249.31,356.82,198.42,170.07' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>An overview of image enhancement techniques and the deep learning method used for Covid-19 detection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Image enhancement appraoch</ns0:cell><ns0:cell>Method</ns0:cell></ns0:row><ns0:row><ns0:cell>Civit-Masot et al. (2020)</ns0:cell><ns0:cell>HE</ns0:cell><ns0:cell>VGG16</ns0:cell></ns0:row><ns0:row><ns0:cell>Tartaglione et al. 
(2020)</ns0:cell><ns0:cell>HE</ns0:cell><ns0:cell>ResNet18, ResNet50, DenseNet121</ns0:cell></ns0:row><ns0:row><ns0:cell>Ramadhan et al. (2020)</ns0:cell><ns0:cell>CLAHE</ns0:cell><ns0:cell>COVIDLite</ns0:cell></ns0:row><ns0:row><ns0:cell>El-bana et al. (2020)</ns0:cell><ns0:cell>CLAHE</ns0:cell><ns0:cell>InceptionV3</ns0:cell></ns0:row><ns0:row><ns0:cell>Saiz and Barandiaran (2020)</ns0:cell><ns0:cell>CLAHE</ns0:cell><ns0:cell>VGG16</ns0:cell></ns0:row><ns0:row><ns0:cell>Maguolo and Nanni (2021)</ns0:cell><ns0:cell>CLAHE</ns0:cell><ns0:cell>AlexNet</ns0:cell></ns0:row><ns0:row><ns0:cell>Punn and Agarwal (2021)</ns0:cell><ns0:cell>ATV</ns0:cell><ns0:cell>ResNet, InceptionV3, InceptionResNetV2, DenseNet169, and NASNetLarge</ns0:cell></ns0:row><ns0:row><ns0:cell>Siddhartha and Santra (2020)</ns0:cell><ns0:cell>White balance, CLAHE</ns0:cell><ns0:cell>COVIDLite</ns0:cell></ns0:row><ns0:row><ns0:cell>Horry et al. (2020)</ns0:cell><ns0:cell>N-CLAHE</ns0:cell><ns0:cell>VGG19</ns0:cell></ns0:row><ns0:row><ns0:cell>El Asnaoui and Chawki (2020)</ns0:cell><ns0:cell>CLAHE</ns0:cell><ns0:cell>VGG16, VGG19, DenseNet201, InceptionResNetV2, InceptionV3, Resnet50, and MobileNetV2</ns0:cell></ns0:row><ns0:row><ns0:cell>Rezaul Karim et al. (2020)</ns0:cell><ns0:cell>HE, PMF, UM</ns0:cell><ns0:cell>DeepCOVIDExplainer</ns0:cell></ns0:row><ns0:row><ns0:cell>Medhi et al. (2020)</ns0:cell><ns0:cell>Gaussian filtering</ns0:cell><ns0:cell>Deep CNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Haghanifar et al. (2020)</ns0:cell><ns0:cell>CLAHE, BEASF</ns0:cell><ns0:cell>COVID-CXNet (UNet+DenseNet)</ns0:cell></ns0:row><ns0:row><ns0:cell>Rahman et al. (2021)</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>Seven different deep CNN networks for classification and modified Unet network for segmentation</ns0:cell></ns0:row><ns0:row><ns0:cell>Lv et al. (2021)</ns0:cell><ns0:cell>MoEx, CLAHE</ns0:cell><ns0:cell>Cascade-SEME net</ns0:cell></ns0:row><ns0:row><ns0:cell>Qi et al. (2021)</ns0:cell><ns0:cell>LPE</ns0:cell><ns0:cell>Fus-ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell>Canayaz (2021)</ns0:cell><ns0:cell>ICEA</ns0:cell><ns0:cell>MH-COVIDNet</ns0:cell></ns0:row></ns0:table><ns0:note>2/16PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>We used the same test set that was used for evaluation by<ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref>, making only a slight change by increasing the number of Covid-19 images to 100 instead of 92.We further split the training data keeping 70% data for training and 30% data for validation. Table2summarizes the number of images in each class and the total number of images used for training and testing. 
The number of images for each class.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classes</ns0:cell><ns0:cell>Total</ns0:cell><ns0:cell>Training set 70%</ns0:cell><ns0:cell>Validation set 30%</ns0:cell><ns0:cell>Test set (unseen)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Covid-19 2128</ns0:cell><ns0:cell>1420</ns0:cell><ns0:cell>608</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>8066</ns0:cell><ns0:cell>5027</ns0:cell><ns0:cell>2154</ns0:cell><ns0:cell>885</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Pneumonia 5575</ns0:cell><ns0:cell>3487</ns0:cell><ns0:cell>1494</ns0:cell><ns0:cell>594</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>15769</ns0:cell><ns0:cell>9933</ns0:cell><ns0:cell>4257</ns0:cell><ns0:cell>1579</ns0:cell></ns0:row></ns0:table><ns0:note>Since few CXR images of positive Covid-19 cases are available, we downloaded more Covid-19 x-ray images from https://github. com/ml-workgroup/covid-19-image-repository, and from https://github.com/ armiro/COVID-CXNet/tree/master/chest_xray_images/covid19. Duplicated images were omitted from the new dataset to ensure that the proposed training model is more accurate. Thus, the actual number of images in the Covid-19 class is 2128 instead of the 1770 images from COVIDx (updated on January 28, 2021). 4/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>, the training accuracy differs with the size of the output vector. It is worth 7/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Model performance on different feature sizes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Output vector</ns0:cell><ns0:cell cols='2'>Accuracy% enhanced original</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>64</ns0:cell><ns0:cell>93.26</ns0:cell><ns0:cell>88.31</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>128</ns0:cell><ns0:cell>96.06</ns0:cell><ns0:cell>89.36</ns0:cell></ns0:row><ns0:row><ns0:cell>KL-MOB</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>95.87</ns0:cell><ns0:cell>93.24</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>512</ns0:cell><ns0:cell>94.83</ns0:cell><ns0:cell>91.08</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1024</ns0:cell><ns0:cell>94.47</ns0:cell><ns0:cell>90.38</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average PSNR (db) and MAE for the various noise-reduction methods. .35 17.12 25.98 21.91 16.20 Proposed method 19.14 23.13 17.28 25.45 22.11 16.01 </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Covid19</ns0:cell><ns0:cell>Normal</ns0:cell><ns0:cell>Pneumonia</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>PSNR MAE PSNR MAE PSNR MAE</ns0:cell></ns0:row><ns0:row><ns0:cell>AMF</ns0:cell><ns0:cell cols='3'>21.91 14.46 21.19 17.88 20.43 19.47</ns0:cell></ns0:row><ns0:row><ns0:cell>NLMF</ns0:cell><ns0:cell cols='3'>20.47 19.19 20.41 19.41 20.40 19.40</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Proposed method 22.04 14.38 21.21 17.59 20.45 19.32</ns0:cell></ns0:row></ns0:table><ns0:note>9/16PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Average PSNR (db) and MAE for the various contrast-enhancement methods.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Metrics for original images and for images enhanced by KL-MOB.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>217</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Enhanced image</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Original image</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='8'>ACC% PPV% SPC% TPR% ACC% PPV% SPC% TPR%</ns0:cell></ns0:row><ns0:row><ns0:cell>Covid19</ns0:cell><ns0:cell>99.87</ns0:cell><ns0:cell>99.00</ns0:cell><ns0:cell>99.93</ns0:cell><ns0:cell>99.00</ns0:cell><ns0:cell>92.61</ns0:cell><ns0:cell>96.83</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>74.39</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>98.24</ns0:cell><ns0:cell>98.30</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.64</ns0:cell><ns0:cell>97.11</ns0:cell><ns0:cell>98.17</ns0:cell><ns0:cell>98.99</ns0:cell><ns0:cell>93.86</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Pneumonia 97.99</ns0:cell><ns0:cell>97.81</ns0:cell><ns0:cell>98.68</ns0:cell><ns0:cell>97.31</ns0:cell><ns0:cell>91.00</ns0:cell><ns0:cell>81.30</ns0:cell><ns0:cell>86.74</ns0:cell><ns0:cell>98.26</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>98.70</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>93.57</ns0:cell><ns0:cell>92.10</ns0:cell><ns0:cell>94.95</ns0:cell><ns0:cell>88.84</ns0:cell></ns0:row></ns0:table><ns0:note>10/16PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Performance on the test set with different loss functions.Figure12shows the confusion matrix of the proposed network: all classes are identified with high true positives. Note that the Covid-19 cases are 99% correctly classified by the KL-MOB model. Only 1% of Covid-19 cases are misclassified as pneumonia (non-Covid-19), and 1.4% of the normal cases are misclassified as pneumonia. Only 0.2% of pneumonia (non-Covid-19) cases are wrongly classified as Covid-19. 
These results demonstrate that the proposed KL-MOB has a strong potential for detecting</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='10'>performance of KL-MOB, thereby justifying the selection of the proposed network architecture and its</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>associated training/learning schemes.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Loss function</ns0:cell><ns0:cell cols='8'>Enhanced image ACC% PPV% SPC% TPR% ACC% PPV% SPC% TPR% Original image</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CCE</ns0:cell><ns0:cell>96.79</ns0:cell><ns0:cell>95.22</ns0:cell><ns0:cell>97.60</ns0:cell><ns0:cell>95.42</ns0:cell><ns0:cell>90.14</ns0:cell><ns0:cell>87.94</ns0:cell><ns0:cell>92.23</ns0:cell><ns0:cell>83.05</ns0:cell></ns0:row><ns0:row><ns0:cell>KL-MOB</ns0:cell><ns0:cell>MSE</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell>89.70</ns0:cell><ns0:cell>94.16</ns0:cell><ns0:cell>86.92</ns0:cell><ns0:cell>85.12</ns0:cell><ns0:cell>94.53</ns0:cell><ns0:cell>97.50</ns0:cell><ns0:cell>95.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed method</ns0:cell><ns0:cell>98.70</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>93.57</ns0:cell><ns0:cell>92.10</ns0:cell><ns0:cell>94.95</ns0:cell><ns0:cell>88.84</ns0:cell></ns0:row></ns0:table><ns0:note>11/16PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative performance of the various models with the improvement percentage compared to the state of art.CONCLUSIONThis work proposes a novel CNN-based MobileNet-structured neural network for detecting Covid-19 using COVIDx, which is the most widely used public dataset of CXR images to date. The evaluation of this approach shows that it outperforms the recent approach in terms of accuracy, specificity, sensitivity, and precision (98.7%, 98.%, 98.32%, and 98.37%, respectively). The proposed method relies on image manipulation by applying a hybrid technique to enhance the visibility of CXR images. This advanced preprocessing technique facilitates the task of the KL-MOB model to extract features, allowing complex</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell cols='4'>ACC% SPC% TPR% PPV%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2020)</ns0:cell><ns0:cell>COVID-Net (large)</ns0:cell><ns0:cell>95.56</ns0:cell><ns0:cell>96.67</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell>93.55</ns0:cell></ns0:row><ns0:row><ns0:cell>Ahmed et al. (2020)</ns0:cell><ns0:cell>ReCoNet</ns0:cell><ns0:cell>97.48</ns0:cell><ns0:cell>97.39</ns0:cell><ns0:cell>97.53</ns0:cell><ns0:cell>96.27</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Rezaul Karim et al. 
(2020) DeepCOVIDExplainer 98.11</ns0:cell><ns0:cell>98.19</ns0:cell><ns0:cell>95.06</ns0:cell><ns0:cell>96.84</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed method</ns0:cell><ns0:cell>KL-MOB</ns0:cell><ns0:cell>98.7</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>98.37</ns0:cell></ns0:row><ns0:row><ns0:cell>% Improvement</ns0:cell><ns0:cell /><ns0:cell>0.60</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>3.43</ns0:cell><ns0:cell>1.58</ns0:cell></ns0:row></ns0:table><ns0:note><ns0:ref type='bibr' target='#b53'>and (Rezaul Karim et al., 2020)</ns0:ref>.The promising deep learning models used for the detection of Covid from radiography images indicate that deep learning likely still has untapped potential and can play a more significant role in fighting this pandemic. There is definitely still room for improvement through other processes such as increasing the number of images, implementing another preprocessing technique, i.e., data augmentation, utilizing different noise filters, and enhancement techniques.12/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:note place='foot' n='16'>/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:1:2:NEW 8 Jul 2021)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
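As a side note on reproducing the image-quality scores in Tables 5 and 6, PSNR (in dB) and MAE between an original and a processed image can be computed as in the sketch below. This assumes 8-bit grayscale inputs and is not the authors' evaluation code.

```python
# Illustrative sketch: PSNR (dB) and MAE between a reference and a
# processed 8-bit grayscale image, as used to score the preprocessing.
import numpy as np

def psnr(reference, processed, max_val=255.0):
    diff = reference.astype(np.float64) - processed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def mae(reference, processed):
    return np.mean(np.abs(reference.astype(np.float64)
                          - processed.astype(np.float64)))
```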
"Mundher Taresh Researcher College of Information Science and Engineering Hunan University, Chang Sha, 400013, China mundhert@hnu.edu.cn Dr Ketan Kotecha Academic Editor, PeerJ Computer Science peer.review@peerj.com June 20th, 2021 Subject: Revision and resubmission of manuscript 61123 Dear editors and reviewers, Thank you for your letter and for the reviewers’ comments on our manuscript entitled “KL-MOB: automated COVID-19 recognition using a novel approach based on image enhancement and a modified MobileNet CNN” (Article ID: 61123). All of these comments were very helpful for revising and improving our paper. We have studied these comments carefully and have made corresponding corrections that we hope will meet with your approval. The responses to the reviewers’ comments are provided below. We would like to express our great appreciation to you and the reviewers for the comments on our paper. If you have any further queries, please do not hesitate to contact us. Kind regards, MUNDHER TARESH On behalf of all authors. I would like to thank the anonymous reviewers for their time and effort in reviewing our manuscript and providing constructive comments. All comments raised by the referees have been taken into consideration in preparing the revised version of the manuscript. In the following paragraphs, we provide point-by-point responses to the comments: Reviewer 1 (Dweepna Garg) Our thanks for your in-depth and thoughtful comments and suggestions. We appreciate your time and have addressed all of them as follows: Basic reporting Grammatical mistakes need to be rectified. There are some writing mistakes such as Covid1-9 in line number 145. The manuscript should be thoroughly checked. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript thoroughly and have corrected all grammar and sentence errors. Furthermore, we have submitted the manuscript to a professional English language editor, https://authorservices.aip.org, to help correcting the grammars and sentences. Kindly see the EDITORIAL CERTIFICATE attached to the response letter. We hope you will find these amendments satisfactory. Equation-2 should be rewritten to differentiate - sign. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and rewritten equations 2 to differentiate - sign. For your convenience, we present the equation below and highlight the amendment in the manuscript. We hope you will find these amendments satisfactory. Figure 6 and 7 should be enlarged enough to read values. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and enlarged Fig. 6(9) and 7(10) to be easily readable. For your convenience, we present these figures below and highlight them in the revised manuscript as well. We hope you will find these amendments satisfactory. Experimental design Explanation of proposed architecture needs to be re-write and it should meet academic writing criteria. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and re-written the explanation of the proposed architecture to be more clear and to meet academic writing. 
To guarantee this, we have further submitted the manuscript to a professional English language editor, https://authorservices.aip.org, For your convenience, we present the revised explanation below and highlight the amendment in the manuscript. “We used a deep neural network structure called a MobileNet neural network (Howard et al., 2017). All images were resized to 224 × 224 × 3 before being used as input to the neural network. Figure 3 depicts the architecture of the proposed neural network. Apart from the first layer, which is a full convolution, the MobileNets are constructed using depthwise separable convolutions. Depthwise separable convolution is a factorized convolution that factorizes standard convolution into a depthwise convolution and a 1 × 1 convolution called pointwise convolution. This procedure reduces the computations and model size drastically. The overall architecture of the MobileNet is shown in Table 3. The deep convolutional neural network is used to extract high context features per input instance. The global average pooling layer is used to reduce the spatial dimensions of the features extracted. The output is a feature vector of size 1024 for each time step. Then, a dropout layer is used with a probability of 0.001. The output of the dropout layer goes to two fully connected layers that generate an output of size 128. One fully connected layer is used to predict the mean μ, which is used to extract the most significant features from those features extracted in previous layers. The other is used to predict the standard deviation σ of a Gaussian distribution, which is used to calculate the KL loss function. The output of the fully connected layer used to predict the mean μ goes to the last layer, which is a fully connected layer containing the SoftMax classifier.” We hope you will find these amendments satisfactory. How KL-Divergence used and by using it, how your work has been enhanced? Justify. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and clearly demonstrated how KL-Divergence has been utilized, and justified how it enhanced our work. For your convenience, we present that below and highlight the amendment in the manuscript at lines 164 and 194. “The output of the dropout layer goes to two fully connected layers that generate an output of size 128. One fully connected layer is used to predict the mean μ, which is used to extract the most significant features from those features extracted in previous layers. The other is used to predict the standard deviation σ of a Gaussian distribution, which is used to calculate the KL loss function. The output of the fully connected layer used to predict the mean μ goes to the last layer (Softmax classifier), which is defined by……” “We conducted a comprehensive investigation to determine the impact of various feature sizes. As is shown in Table 4, the training accuracy differs with the size of the output vector. It is worth noting that the results are relatively acceptable for all vectors in the enhanced data, but the best result is achieved when the output vector is 128 with an accuracy of 96.06 %. In contrast, the output vector 256 in the original data achieved the best value with an accuracy of 93.24 %. This can be attributed to that the KL divergence between the µ; σ distribution and the prior is considered as a regularization which helps to overcome the overfitting problem.” Table 4. 
Model performance on different feature sizes We hope you will find these amendments satisfactory. Authors have added KL-Divergence loss, work should also be compared with other loss methods to show work efficiency. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and explained that. For your convenience, we present that below and highlight the amendment in the manuscript at line 229. “In order to show the impact of the KL divergence loss on the efficacy of the proposed method, we performed several experiments using the categorical entropy loss function (CCE) and the mean square error (MSE) loss function. The results obtained in Table 8 show that the proposed method has a great impact on the performance of KL-MOB, thereby justifying the selection of the proposed network architecture and its associated training/learning schemes.” Table 8. Performance on the test set with different loss functions We hope you will find these amendments satisfactory. Validity of the findings Table 1 shows that classes are imbalanced and large variance. How it is handled during training? Response: Thank you for providing constructive comments. In this work, we recommend the following performance measures that can give more insight into the accuracy of the model than traditional classification accuracy: confusion matrix, precision, sensitivity, specificity, and area under the curve. Based on the reviewer’s comment, we have revised the manuscript and clearly demonstrated that. For your convenience, we present that below and highlight the amendment in the manuscript at line 166. “We use a weighted loss function as illustrative in equation 5. The weight of KL loss α is empirically set to (0:1) to be used as a one-hot vector which not only ensures a clear representation of the true class, but also helps in addressing the large variance arising due to unbalanced data.” We hope you will find these amendments satisfactory. The loss and accuracy graphs need to be presented. Epoch and training explanation needs to present. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and presented the loss and accuracy graphs. Furthermore, epoch and training also have been explained and presented. For your convenience, we present that below and highlight the amendment in the manuscript. “The network is trained by using a SoftMax classifier with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate set to 0.0001, and a batch size of 32. The total number of parameters is 3,488,426, where the number of trainable parameters is 3,466,660, and the nontrainable parameters are 21,766. In the training period, 200 epochs were completed to check the KL-MOB model loss and accuracy which are shown in Figures 5 and 6.” We hope you will find these amendments satisfactory. Authors can compare his work with baseline Deep CNN classifier models. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and conducted a comparison of our work with baseline Deep CNN classifier models. For your convenience, we present that below and highlight the amendment in the manuscript at line 183. In the first stage, the baseline model is trained to verify the influence of the KL loss on performance. Figure 4 presents the curve comparisons of all training processes. With the maximum training epoch set to 200. 
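A minimal sketch of the weighted objective described in this response (categorical cross-entropy plus a KL term weighted by α = 0.1) is given below. Since Eq. (5) itself is not reproduced in the letter, the closed-form KL against a standard-normal prior and the log-variance parameterisation are assumptions carried over from the VAE literature the authors cite, not a transcription of their code.

```python
# Illustrative sketch (TensorFlow/Keras): cross-entropy plus a VAE-style
# KL regulariser on the predicted Gaussian (mu, log_var), weighted by
# alpha = 0.1 as reported in the text. The exact form of Eq. (5) is assumed.
import tensorflow as tf

alpha = 0.1  # KL weight reported as empirically chosen

def kl_mob_loss(y_true, y_pred, mu, log_var):
    # supervised term: standard categorical cross-entropy on the softmax output
    cce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
    # regulariser: closed-form KL( N(mu, exp(log_var)) || N(0, 1) ) over the 128 units
    kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
    return tf.reduce_mean(cce + alpha * kl)
```

Under such a sketch, the Adam optimiser with learning rate 0.0001 and batch size 32 quoted above would be supplied when compiling the model.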
A large gap between training and validation in both original and enhanced images, indicating the presence of overfitting. We hope you will find these amendments satisfactory. Reviewer 2 Our thanks for your in-depth and thoughtful comments and suggestions. We appreciate your time. Based on the reviewer’s comment, we have revised the manuscript and added the following paragraphs. For your convenience, we present that below and highlight the amendment in the manuscript at lines 103 and 113. “The lack of necessary extracted features from the images sometimes cannot provide expected accuracy in the classification result. in this work, the Kullback-Leibler (KL) divergence is adopted to devise more efficient and accurate representations and measure how far we are from the optimal solution during the iterations.” “The novelty of this study is not only to clarify significant features in the CXR images by developing a hybrid algorithm but also proposed a novel approach in how to devise more efficient and accurate representations by using KL loss. The intent behind this study is not only to achieve a high classification accuracy but to achieve this by training an automated end-to-end deep learning framework based on CNN. This method is superior to transfer learning for evaluating the importance of features derived from imagery, as it is not relying on features previously learned by the pre-trained model, which was first trained on non-medical116images. The main contributions of this work can be summarized as follows: • For Covid-19 recognition, we propose an automated end-to-end deep learning framework based on MobileNet CNN with KL divergence loss function. • We propose an impressive approach to ensure a sufficiently diverse representation by predicting the output of the meanμand standard-deviationσof the Gaussian distribution. • We incorporate a novel preprocessing enhancement technique consisting of AMF, NLMF, and CLAHE to meet the challenges arising from data deficiency and complexity. • We analyze the performance of the preprocessing enhancement scheme to demonstrate its role in enhancing the discrimination capability of the proposed model.” We hope you will find these amendments satisfactory. Reviewer: Shilpa Gite Our thanks for your in-depth and thoughtful comments and suggestions. We appreciate your time and have addressed all of them as follows: Basic reporting Clear, unambiguous, technical English language has been used in the paper. Still the manuscript can be improved by avoiding a few grammatical errors. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript thoroughly and have corrected all grammar and sentence errors. Furthermore, we have submitted the manuscript to a professional English language editor, https://authorservices.aip.org, to help correcting the grammars and sentences. Kindly see the EDITORIAL CERTIFICATE attached to the response letter. We hope that the revised paper contains clear and accurate expressions. Sufficient field background/context provided, however I feel Literature review part(preferably in a tabular format) can be enhanced further by adding related papers from the current year i.e. 2021. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and tabulated the related work. For your convenience, we present the related paragraph below and highlight the amendment in the manuscript at line 75. 
“An overview of these works is listed in Table 1. It should be noted that the CLAHE algorithm has widely used by the majority, while some pursued a hybridization method.” We hope that the revised paper contains clear and accurate expressions. A short paragraph on MobileNet can form a baseline for the proposed research so authors are advised to add it in the literature. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and added a short paragraph on MobileNet. For your convenience, we present that below and highlight the amendment in the manuscript at line 90. “The enhanced images are finally fed into the state-of-the-art CNN called MobileNet (Howard et al., 2017), which has been recently utilized for the same classification task by (Apostolopoulos et al., 2020; Apostolopoulos and Mpesiana, 2020). MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. The motivation behind choosing a MobileNet CNN is that it not only helps to reduce overfitting but also runs faster than a regular CNN and has significantly fewer parameters (4.24) (Howard et al., 2017; Yu et al., 2020). Moreover, MobileNets employ two global hyperparameters based on depthwise separable convolutions to96strike a balance between efficiency and accuracy.” We hope that the revised paper contains clear and accurate expressions. Professional article structure, figures, tables. Raw data shared. Line no. 85 states 'The motivation behind choosing Mobile CNN is that it not only helps to reduce overfitting but also runs faster than regular CNN with many fewer parameters (Howard et al., 2017; Yu et al., 2020)'.Please specify. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and specified that. For your convenience, we present that below and highlight the amendment in the manuscript at line 94. “The motivation behind choosing Mobile CNN is that it not only helps to reduce overfitting but also runs faster than regular CNN with many fewer parameters(4.24 m) (Howard et al., 2017; Yu et al., 2020).” We hope that the revised paper contains clear and accurate expressions. KL divergence loss function must be explained with proper citations and references. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and explained KL divergence loss. For your convenience, we present that below and highlight the amendment in the manuscript at line 98. “KL divergence is one of the measures that reflect distribution divergences between different probabilities, which has been widely used in the problem of classification with imbalanced datasets (Su et al.,2015; Feng et al., 2018). The KL divergence loss function is more commonly used when using models that learn to approximate a more complex function than simply multiclass classification, such as in the case of an autoencoder used for learning a dense feature representation under a model that must reconstruct the original input. Indeed, the lack of necessary extracted features from the images sometimes cannot provide expected accuracy in the classification result. 
In this work, inspired by the variational autoencoder learning (Kingma and Welling, 2013; Alfasly et al., 2019; Alghaili et al., 2020) the Kullback-Leibler (KL) divergence is adopted to devise more efficient and accurate representations and measure how far we are from the optimal solution during the iterations.” We hope that the revised paper contains clear and accurate expressions. The section 'Classification Neural Network Model' line no.127 should be presented in a tabular format to make it readable and understandable. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and presented that in a tabular format . For your convenience, we highlight the amendment in the manuscript line 163. Table 3. Layers of prposed CNN model architecture We hope that the revised paper contains clear and accurate expressions. Experimental design Original primary research within Aims and Scope of the journal. Research question well defined, relevant & meaningful. However, the explicit summary can be included which should state the important aspects of the proposed research to fill an identified knowledge gap. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and included more aspect of the proposed research. For your convenience, we present that below and highlight the amendment in the manuscript at lines 103 and 113. “The lack of necessary extracted features from the images sometimes cannot provide expected accuracy in the classification result. in this work, the Kullback-Leibler (KL) divergence is adopted to devise more efficient and accurate representations and measure how far we are from the optimal solution during the iterations.” “The novelty of this study is not only to clarify significant features in the CXR images by developing a hybrid algorithm but also proposed a novel approach in how to devise more efficient and accurate representations by using KL loss. The intent behind this study is not only to achieve a high classification accuracy but to achieve this by training an automated end-to-end deep learning framework based on CNN. This method is superior to transfer learning for evaluating the importance of features derived from imagery, as it is not relying on features previously learned by the pre-trained model, which was first trained on non-medical116images. The main contributions of this work can be summarized as follows: • For Covid-19 recognition, we propose an automated end-to-end deep learning framework based on MobileNet CNN with KL divergence loss function. • We propose an impressive approach to ensure a sufficiently diverse representation by predicting the output of the meanμand standard-deviationσof the Gaussian distribution. • We incorporate a novel preprocessing enhancement technique consisting of AMF, NLMF, and CLAHE to meet the challenges arising from data deficiency and complexity. • We analyze the performance of the preprocessing enhancement scheme to demonstrate its role in enhancing the discrimination capability of the proposed model.” We hope that the revised paper contains clear and accurate expressions. Methods described with sufficient detail & information to replicate. However there are many Contrast enhancement techniques such as Filtering with morphological operator or Histogram equalization or Median filtering. The notion of choosing CLAHE should be justified with more details. 
Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and justified the notion of choosing CLAHE in more details. For your convenience, we present that below and highlight the amendment in the manuscript at lines 88 and 151. “We then utilize the CLAHE approach that has been already applied for the enhancement of contrast in medical images (Zhou et al., 2016; Sahu et al., 2019; Wen et al., 2016), to enhance the contrast of the denoised images.” “Next, CLAHE is applied to the denoised images to achieve an acceptable visualization and to compensate for the effect of the filtration that may contribute to some blurring to the images (Huang et al., 2016; Senthilkumar and Senthilmurugan, 2014). Since there are many homogeneous regions in medical images, CLAHE is suitable for optimizing medical images as the CLAHE algorithm creates non-overlapping homogeneous.” We hope that the revised paper contains clear and accurate expressions. Training testing percentage not mentioned clearly in Table 2. The number of images for each class Based on the reviewer’s comment, we have revised the manuscript and mentioned that. For your convenience, we present that below and highlight the amendment in the manuscript. Table 2. The number of images for each class. We hope that the revised paper contains clear and accurate expressions. Line no. 164 needs a full stop. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and corrected that. For your convenience, we present that below and highlight the amendment in the manuscript. “Figure 5 displays the original images and their enhanced versions.” We hope that the revised paper contains clear and accurate expressions. Line no 173 'Adaptation of such an approach introduced…' reframe this. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and reframed that. For your convenience, we present that below and highlight the amendment in the manuscript. “This work proposes an approach that combines noise-reduction algorithms with contrast enhancement. This approach introduces a type of hybrid filtering and contrast enhancement for the data set of images used for Covid-19 detection.” We hope that the revised paper contains clear and accurate expressions. Same nomenclature should be used throughout the paper such as table 3 contains 'our method' whereas table 5 uses 'proposed method' Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and use the same nomenclature (Proposed method). We hope that the revised paper contains clear and accurate expressions. Validity of the findings Conclusions are well stated, linked to original research question & limited to supporting results. % improvement in the evaluation can be added in the results table as the last row to state the impact of the proposed model. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and added a raw in the results table to state the impact of the proposed model. For your convenience, we present that below and highlight the amendment in the manuscript. Table 9. Comparative performance of the various models with the improvement percentage compared to the state of art. 
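To make the hybrid preprocessing defended in this response concrete (impulse-noise filtering, then non-local means denoising, then CLAHE), one possible OpenCV implementation is sketched below. A fixed-window median filter stands in here for the adaptive median filter, and every parameter value is an assumption rather than the authors' setting.

```python
# Illustrative sketch (OpenCV) of the hybrid enhancement pipeline:
# impulse-noise suppression -> non-local means -> CLAHE.
import cv2

def enhance_cxr(gray_u8):
    # 1) impulse-noise suppression (plain median filter as a stand-in
    #    for the adaptive median filter)
    denoised = cv2.medianBlur(gray_u8, 3)
    # 2) non-local means filtering (patch-based weighted averaging)
    denoised = cv2.fastNlMeansDenoising(denoised, None, h=10,
                                        templateWindowSize=7,
                                        searchWindowSize=21)
    # 3) contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```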
Author should include some small portion of future work mentioning following: https://www.sciencedirect.com/science/article/pii/S2405844021013141 https://doi.org/10.1155/2021/8828404 What about using Grad CAM for visual explanations of the detection as given in https://peerj.com/articles/cs-348/ and https://ieeexplore.ieee.org/abstract/document/9391727 https://doi.org/10.7717/peerj-cs.340 for Lime tool in XAI If data is not balanced authors must state some data augmentation techniques and particularly the advanced using GANs as in https://link.springer.com/article/10.1007%2Fs00500-019-04602-2 or in the upcoming domains such as transfer learning/domain adaptation/adversarial learning etc. Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and included some portion of future work. For your convenience, we present that below and highlight the amendment in the manuscript. “Besides, this work lacked in applying some of the techniques such as progressive resizing (Bhatt et al., 2021a), which can be applied on CNNs to carry out imaging-based diagnostics. Furthermore, visual ablation studies (Bhatt et al., 2021b; Joshi et al., 2021; Gite et al.,2021) can be performed along with deep learning, which will significantly improve the detection of Covid-19 CXR images manifestations. Since only a limited number of CXR images are available for Covid-19 infections, out-of-distribution issues may arise, so more data from related distributions is needed for further evaluation. There are several techniques that would be another way to overcome this problem, include, but are not limited to data augmentation techniques (Chaudhari et al., 2019), transfer learning (Taresh et al., 2021; Bhatt et al., 2021a), domain-adaptation (Zhang et al., 2020; Jin et al., 2021) and adversarial learning (Goel et al., 2021; Rahman et al., 2020; Motamed et al., 2021), etc. Finally, the image enhancement must be verified by a radiologist, which we have not yet been able to do due to the emerging conditions.” We hope that the revised paper contains clear and accurate expressions. "
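As one concrete instance of the data-augmentation route mentioned in the future-work passage above, a Keras generator could be configured as in the following sketch; the transform ranges and the directory name are illustrative assumptions only and are not part of the manuscript.

```python
# Illustrative sketch: on-the-fly augmentation for CXR images.
# Transform ranges are arbitrary assumptions; flips are disabled because
# mirroring a chest x-ray reverses anatomy.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.05,
    height_shift_range=0.05,
    zoom_range=0.1,
    horizontal_flip=False,
    rescale=1.0 / 255.0,
)
# Hypothetical usage (directory name assumed):
# train_gen = augmenter.flow_from_directory("covidx/train",
#                                           target_size=(224, 224),
#                                           batch_size=32,
#                                           class_mode="categorical")
```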
Here is a paper. Please give your review comments after reading it.
225
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The emergence of the novel coronavirus pneumonia (Covid-19) pandemic at the end of 2019 led toworldwide chaos. However, the world breathed a sigh of relief when a few countries announced the development of a vaccine and gradually began to distribute it.</ns0:p><ns0:p>Nevertheless, the emergence of another wave of this pandemic returned us to the starting point. At present, early detection of infected people is the paramount concern of both specialists and health researchers. This paper proposes a method to detect infected patients through chest x-ray images by using the large dataset available online for Covid-19 (COVIDx), which consists of 2128 X-ray images of Covid-19 cases, 8066 normal cases, and 5575 cases of pneumonia. A hybrid algorithm is applied to improve image quality before undertaking neural network training. This algorithm combines two different noise-reduction filters in the image, followed by a contrast enhancement algorithm. To detect Covid-19, we propose a novel convolution neural network (CNN) architecture called KL-MOB (Covid-19 detection network based on the MobileNet structure). The performance of KL-MOB is boosted by adding the Kullback-Leibler (KL) divergence loss function when trained from scratch. The KL divergence loss function is adopted for content-based image retrieval and fine-grained classification to improve the quality of image representation. The results are impressive: the overall benchmark accuracy, sensitivity, specificity, and precision are 98.7%, 98.32%, 98.82%, and 98.37%, respectively. These promising results should help other researchers develop innovative methods to aid specialists. The tremendous potential of the method proposed herein can also be used to detect Covid-19 quickly and safely in patients throughout the world.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 33</ns0:head><ns0:p>The novel coronavirus 2019 With the rapid global spread of Covid-19, researchers have begun using state-of-the-art deep-learning techniques to automate the recognition of Covid-19. The initial lack of Covid-19 data compelled earlier researchers to use pretrained networks to build their own models <ns0:ref type='bibr' target='#b41'>(Narin et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Ozturk et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Apostolopoulos and Mpesiana, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Civit-Masot et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>Albahli, 2020;</ns0:ref><ns0:ref type='bibr' target='#b58'>Sethy and Behera, 2020;</ns0:ref><ns0:ref type='bibr'>Apostolopoulos et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>Chowdhury et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b16'>Farooq and Hafeez, 2020;</ns0:ref><ns0:ref type='bibr' target='#b36'>Maghdid et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hemdan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b62'>Taresh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b45'>Punn and Agarwal, 2021)</ns0:ref>. Given that Covid-19 infected millions of people worldwide within a few months of its detection, a mid-range dataset of positive cases was made available for public use <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref>. This dataset can be uploaded from https://github. com/lindawangg/COVID-Net/blob/master/docs/COVIDx.md. 
This, in turn, has enabled further progress in developing new, accurate, in-depth models for Covid-19 recognition <ns0:ref type='bibr' target='#b2'>(Ahmed et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Afshar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ucar and Korkmaz, 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Luz et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hirano et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rezaul Karim et al., 2020)</ns0:ref>. However, some medical imaging issues usually pose difficulties in the recognition task, reducing the performance of these models. These issues include, but are not limited to, insufficient training data, inter-class ambiguity, intra-class variation, and visible noise. These problems oblige us to significantly enhance the discrimination capability of the associated model. Specifically, regarding the x-ray image, the common characteristics are grayscale color space, high noise, low intensity, poor contrast, and weak boundary representation, which will normally affect the information of the image <ns0:ref type='bibr' target='#b27'>(Ikhsan et al., 2014)</ns0:ref>.</ns0:p><ns0:p>One way around these issues is to use proper image preprocessing techniques for noise reduction and contrast enhancement. A closer look at the available images reveals the presence of various types of noise, such as impulsive, Poisson, speckle, and Gaussian noise [see Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> for the most common types of noise in x-ray images <ns0:ref type='bibr' target='#b44'>(Paul et al., 2018)</ns0:ref>]. However, the most prevalent studies have focused only on some of these types of noise (e.g., Gaussian and Poisson). In particular, among many other techniques, histogram equalization (HE) <ns0:ref type='bibr' target='#b13'>(Civit-Masot et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b63'>Tartaglione et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rezaul Karim et al., 2020)</ns0:ref>, contrast limited adaptive histogram equalization (CLAHE) <ns0:ref type='bibr' target='#b15'>(El-bana et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b55'>Saiz and Barandiaran, 2020;</ns0:ref><ns0:ref type='bibr' target='#b37'>Maguolo and Nanni, 2021;</ns0:ref><ns0:ref type='bibr' target='#b52'>Ramadhan et al., 2020)</ns0:ref>, adaptive total variation method(ATV) <ns0:ref type='bibr' target='#b45'>(Punn and Agarwal, 2021)</ns0:ref>, white balance followed by CLAHE <ns0:ref type='bibr' target='#b59'>(Siddhartha and Santra, 2020)</ns0:ref>, intensity normalization followed by CLAHE (N-CLAHE) <ns0:ref type='bibr' target='#b23'>(Horry et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>El Asnaoui and Chawki, 2020)</ns0:ref>, Perona-Malik filter (PMF), unsharp masking (UM) (Rezaul <ns0:ref type='bibr' target='#b53'>Karim et al., 2020)</ns0:ref>, Bihistogram equalization with adaptive sigmoid function (BEASF) <ns0:ref type='bibr' target='#b20'>(Haghanifar et al., 2020)</ns0:ref>, the gamma correction (GC) <ns0:ref type='bibr' target='#b50'>(Rahman et al., 2021)</ns0:ref>, histogram stretching (HS) <ns0:ref type='bibr' target='#b66'>(Wang et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b70'>Zhang et al., 2021)</ns0:ref>, Moment Exchange algorithm (MoEx), CLAHE <ns0:ref type='bibr' target='#b35'>(Lv et al., 2021)</ns0:ref>, local phase enhancement (LPE) <ns0:ref type='bibr' target='#b46'>(Qi et al., 2021)</ns0:ref>, image contrast enhancement algorithm (ICEA) <ns0:ref type='bibr' 
target='#b10'>(Canayaz, 2021)</ns0:ref>, and Gaussian filter <ns0:ref type='bibr' target='#b38'>(Medhi et al., 2020)</ns0:ref> are, as far as we are aware, the only adopted techniques in Covid-19 recognition to date. An overview of these works is listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. It should be noted that the CLAHE algorithm has widely used by the majority, while some pursued a hybridization method. Moreover, the utilized filters can result in blurry (by Gaussian filter) or blocky (by PMF) features in the processed image. Accordingly, there is still room to incorporate more effective preprocessing techniques to further increase the accuracy of these systems. an adaptive median filter (AMF) and a non-local means filter (NLMF) to remove the noise from the images. Numerous works have already analyzed the performance of these two filters for denoising x-ray imagery <ns0:ref type='bibr' target='#b31'>(Kim et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Raj and Venkateswarlu, 2012;</ns0:ref><ns0:ref type='bibr' target='#b47'>Rabbouch et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b56'>Sawant et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b39'>Mirzabagheri, 2017)</ns0:ref>, demonstrating their superiority over various filters, including the ones in the cited works in terms of removing impulsive, Poisson, and speckle noise while preserving the useful image details. We then utilize the CLAHE approach that has been already applied for the enhancement of contrast in medical images <ns0:ref type='bibr'>(Zhou et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b60'>Sonali et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b67'>Wen et al., 2016)</ns0:ref>, to enhance the contrast of the denoised images. The enhanced images are finally fed into the state-of-the-art convolution neural network (CNN) called MobileNet <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017)</ns0:ref>, which has been recently utilized for the same classification task by <ns0:ref type='bibr'>(Apostolopoulos et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Apostolopoulos and Mpesiana, 2020)</ns0:ref>. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. The motivation behind choosing a MobileNet CNN is that it not only helps to reduce overfitting but also runs faster than a regular CNN and has significantly fewer parameters (4.24) <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b68'>Yu et al., 2020)</ns0:ref>. Moreover, MobileNets employ two global hyperparameters based on depthwise separable convolutions to strike a balance between efficiency and accuracy.</ns0:p><ns0:p>KL divergence is one of the measures that reflect the distribution divergence between different probabilities, which has been widely used in the problem of classification imbalanced datasets <ns0:ref type='bibr' target='#b61'>(Su et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Feng et al., 2018)</ns0:ref>. The KL divergence loss function is more commonly used when using models that learn to approximate a more complex function than simply multiclass classification, such as in the case of an autoencoder used for learning a dense feature representation under a model that must reconstruct the original input. Indeed, the lack of necessary extracted features from the images sometimes cannot provide expected accuracy in the classification result. 
In this work, inspired by the variational autoencoder learning <ns0:ref type='bibr' target='#b33'>(Kingma and Welling, 2013;</ns0:ref><ns0:ref type='bibr' target='#b4'>Alfasly et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Alghaili et al., 2020)</ns0:ref> the Kullback-Leibler (KL)</ns0:p><ns0:p>divergence is adopted to devise more efficient and accurate representations and measure how far we are from the optimal solution during the iterations. We evaluated the performance of the proposed framework on the COVIDx dataset in terms of a wide variety of metrics: accuracy, sensitivity, specificity, precision, area under the curve, and computational efficiency. Simulation results reveal that the proposed framework significantly outperforms state-of-the-art models from both quantitative and qualitative perspectives.</ns0:p><ns0:p>The novelty of this study is not only to clarify significant features in the CXR images by developing a hybrid algorithm but also proposes a novel approach in how to devise more efficient and accurate by using KL loss. The intent behind this study is not only to achieve a high classification accuracy but to Manuscript to be reviewed</ns0:p><ns0:p>Computer Science achieve this by training an automated end-to-end deep learning framework based on CNN. This method is superior to transfer learning for evaluating the importance of features derived from imagery, as it is not relying on features previously learned by the pretrained model, which was first trained on nonmedical images. The main contributions of this work can be summarized as follows:</ns0:p><ns0:p>&#8226; For Covid-19 recognition, we propose an automated end-to-end deep learning framework based on MobileNet CNN with KL divergence loss function.</ns0:p><ns0:p>&#8226; We propose an impressive approach to ensure a sufficiently diverse representation by predicting the output of the mean &#181; and standard-deviation &#963; of the Gaussian distribution.</ns0:p><ns0:p>&#8226; We incorporate a novel preprocessing enhancement technique consisting of AMF, NLMF, and CLAHE to meet the challenges arising from data deficiency and complexity.</ns0:p><ns0:p>&#8226; We analyze the performance of the preprocessing enhancement scheme to demonstrate its role in enhancing the discrimination capability of the proposed model.</ns0:p><ns0:p>The rest of this paper is organized as follows: Section (2) describes the phases of the proposed method.</ns0:p><ns0:p>Section (3) highlights the experimental results. Section (4) discusses these results, and the conclusion is presented in Section (5).</ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED METHOD</ns0:head><ns0:p>In this section, we briefly describe the scenario of the methodology used to achieve the purpose of this study. The proposed method is depicted in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Data Acquisition</ns0:head><ns0:p>In this work, we used the COVIDx dataset used by <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> to train and evaluate the proposed model. In brief, the COVIDx dataset is an open-source dataset that can be downloaded from https: //github.com/lindawangg/COVID-Net/blob/master/docs/COVIDx.md. 
The instructions given by <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Data Preprocessing Method</ns0:head><ns0:p>In this study, we attempt to provide an algorithm that would increase the image quality by using a hybrid technique consisting of noise reduction and contrast enhancement. Specifically, two efficient filters are used for noise reduction while CLAHE is used for contrast enhancement. The first filter is the AMF, which removes impulse noise <ns0:ref type='bibr' target='#b42'>(Ning et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Khare and Chugh, 2014</ns0:ref>). This filter is followed by the NLMF algorithm that calculates similarity based on patches instead of pixels. Given a discrete noisy image u = u(i) for pixel I, the estimated value of NL[u](i) is the weighted average of all pixels:</ns0:p><ns0:formula xml:id='formula_0'>NL[u](i) = &#8721; j&#8712;i w(i, j).u( j), (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>where the weight family w(i, j) j depends on the similarity between the pixels i and j.</ns0:p><ns0:p>The similarity between the two pixels i and j is defined by the similarity of the intensity of gray-level vectors u(N i ) and u(N j ), where N l signifies a square neighborhood of fixed size and centered at a pixel L.</ns0:p><ns0:p>The similarity is measured as a function to minimize the weighted Euclidean distance, u(N i )&#8722;u(N j ) 2</ns0:p><ns0:p>(2,a)</ns0:p><ns0:p>where a &gt; 0 is the Gaussian kernel standard deviation. The pixels with a similar gray-level neighborhood with u(N i ) have larger weights in average. These weights are defined as;</ns0:p><ns0:formula xml:id='formula_2'>w(i, j) = 1 Z(i) e &#8722; u(N i )&#8722;u(N j ) 2 (2,a) h 2 ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where Z(i) is the normalizing constant:</ns0:p><ns0:formula xml:id='formula_3'>Z(i) = &#8721; j e &#8722; u(N i )&#8722;u(N j ) 2 (2,a) h 2</ns0:formula><ns0:p>, and the parameter h acts as a degree of filtering.</ns0:p><ns0:p>Next, CLAHE is applied to the denoised images to achieve an acceptable visualization and to compensate for the effect of filtration that may contribute to some blurring on the images <ns0:ref type='bibr' target='#b26'>(Huang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Senthilkumar and Senthilmurugan, 2014)</ns0:ref>. Since there are many homogeneous regions in medical images, CLAHE is suitable for optimizing medical images as the CLAHE algorithm creates non-overlapping homogeneous regions.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Neural Network Model</ns0:head><ns0:p>We used a deep neural network structure called a MobileNet neural network <ns0:ref type='bibr' target='#b25'>(Howard et al., 2017)</ns0:ref>. All images were resized to 224 &#215; 224 &#215; 3 before being used as input to the neural network. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. 
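Before detailing the network, the denoise-then-enhance pipeline described above (adaptive median filter, then the non-local means filter of Eqs. (1)-(2), then CLAHE) can be illustrated with a minimal Python sketch. This is not the authors' implementation: OpenCV's fastNlMeansDenoising is used as a stand-in for the patch-weighted average of Eqs. (1)-(2), a fixed-size median filter replaces the adaptive median filter (which OpenCV/SciPy do not provide out of the box), and the CLAHE tile grid is an assumed value, since OpenCV parameterizes CLAHE by tile count rather than by the block size quoted in the paper. The numeric parameters (5x5 similarity window, 7x7 search window, h = 1, clip limit 3) follow the values reported later in the Experiments section.

import cv2
import numpy as np
from scipy.ndimage import median_filter

def enhance_cxr(gray):
    # Step 1: impulse-noise removal. A fixed 5x5 median filter stands in for the
    # paper's adaptive median filter (AMF).
    denoised = median_filter(gray.astype(np.uint8), size=5)
    # Step 2: non-local means denoising (the weighted patch average of Eqs. 1-2),
    # with a 5x5 similarity window, a 7x7 search window and filtering degree h = 1.
    denoised = cv2.fastNlMeansDenoising(denoised, None, h=1,
                                        templateWindowSize=5, searchWindowSize=7)
    # Step 3: CLAHE contrast enhancement; clipLimit=3 mirrors the reported "slope 3",
    # and the 8x8 tile grid is an assumed value.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

# Hypothetical usage (the file name is illustrative only):
# enhanced = enhance_cxr(cv2.imread("cxr_sample.png", cv2.IMREAD_GRAYSCALE))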
Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>Type Stride Filter Shape Size in Size out Conv1 2 3&#215;3&#215;3&#215;32 224&#215;224&#215;3 112&#215;112&#215;32 Conv2 dw 1 3 &#215; 3 &#215; 32 112&#215;112&#215;32 112&#215;112&#215;32 Conv2 pw 1 1 &#215; 1 &#215; 32 &#215; 64 112&#215;112&#215;32 112&#215;112&#215;64 Conv3 dw 2 3 &#215; 3 &#215; 64 112&#215;112&#215;64 56&#215;56&#215;64 Conv3 pw 1 1 &#215; 1 &#215; 64 &#215; 128 56&#215;56&#215;64 56&#215;56&#215;128 Conv4 dw 1 3 &#215; 3 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv4 pw 1 1 &#215; 1 &#215; 128 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv5 dw 2 3 &#215; 3 &#215; 128 56&#215;56&#215;128 56&#215;56&#215;128 Conv5 pw 1 1 &#215; 1 &#215; 128 &#215; 256 28&#215;28&#215;128 28&#215;28&#215;128 Conv6 dw 1 3 &#215; 3 &#215; 256 28&#215;28&#215;256 28&#215;28&#215;265 Conv6 pw 1 1 &#215; 1 &#215; 256 &#215; 256 28&#215;28&#215;256 28&#215;28&#215;256 Conv7 dw 2 3 &#215; 3 &#215; 256 28&#215;28&#215;256 14&#215;14&#215;256 Conv7 pw 1 1 &#215; 1 &#215; 256 &#215; 512 14&#215;14&#215;256 14&#215;14&#215;512 Conv8-12 dw 1 3 &#215; 3 &#215; 512 14&#215;14&#215;512 14&#215;14&#215;512 Conv8-12 pw 1 1 &#215; 1 &#215; 512 &#215; 512 14&#215;14&#215;512 14&#215;14&#215;512 Conv13 dw 2 3 &#215; 3 &#215; 512 14&#215;14&#215;512 7&#215;7&#215;512 Conv13 pw 1 1 &#215; 1 &#215; 512 &#215; 1024 7&#215;7&#215;512 7&#215;7&#215;1024 Conv14 dw 2 3 &#215; 3 &#215; 1024 7&#215;7&#215;1024 7&#215;7&#215;1024 Conv14 pw 1 1&#215;1&#215;1024&#215;1024 7&#215;7&#215;1024 7&#215;7&#215;1024 GAP 1 Pool 7 &#215; 7 7&#215;7&#215;1024 1&#215;1&#215;1024 Dropout 1 Probability=0.001 1&#215;1&#215;1024 1&#215;1&#215;1024 FC (&#181;) 1 128&#215; 3 1&#215;1&#215;1024 1&#215;1&#215;128 FC (&#963; ) 1 128&#215; 3 1&#215;1&#215;1024 1&#215;1&#215;128 Softmax 1 Classifier 1&#215;1&#215;128 1&#215;1&#215;3</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>standard deviation &#963; of a Gaussian distribution, which is used to calculate the KL loss function. The output of the fully connected layer, which used to predict the mean &#181; goes to the last layer (Softmax classifier), which is defined by</ns0:p><ns0:formula xml:id='formula_5'>L CE (o, v) = &#8722; v &#8721; i=1 o i log ( e pi &#8721; v j e p j ) ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where v indicates the output vector, o indicates the objective vector, and p j indicates the input to the neuron j.</ns0:p><ns0:p>The categorical cross-entropy loss function is generally used to address such a multiclass classification problem. The three classes are provided with labels such as '0' being a Covid-19 case, '1' being a normal case, and '2' being pneumonia. We adopted Kullback-Leibler divergence loss function to devise more efficient and accurate representations. Moreover, the combined KL loss with the categorical cross-entropy loss function would enforce the network to give a consistent output, in addition to the preprocessing applied to the input image. The KL divergence distribution between the &#181;;&#963; and the prior is considered as a regularization that aids in addressing the issue of overfitting. 
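To make the layer stack of Table 3 and Figure 3 concrete, the following minimal tf.keras sketch assembles a MobileNet backbone trained from scratch, global average pooling, dropout, the two fully connected heads predicting the mean and the standard deviation, and a softmax classifier on top of the mean head. It is an illustrative reconstruction rather than the authors' released code; in particular, the softplus activation on the sigma head is an assumption added here to keep the predicted standard deviation positive, since the paper does not state how positivity is enforced. The KL term defined next is computed from these two heads.

import tensorflow as tf
from tensorflow.keras import layers

def build_kl_mob(num_classes=3, feat_dim=128):
    # MobileNet-v1 backbone trained from scratch (weights=None) on 224x224x3 inputs.
    backbone = tf.keras.applications.MobileNet(include_top=False, weights=None,
                                               input_shape=(224, 224, 3))
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = backbone(inputs)                        # 7x7x1024 feature map
    x = layers.GlobalAveragePooling2D()(x)      # 1024-d feature vector
    x = layers.Dropout(0.001)(x)
    mu = layers.Dense(feat_dim, name="mu")(x)   # head predicting the mean
    sigma = layers.Dense(feat_dim, activation="softplus",  # assumed activation: sigma > 0
                         name="sigma")(x)       # head predicting the standard deviation
    probs = layers.Dense(num_classes, activation="softmax", name="probs")(mu)
    # mu and sigma are exposed as extra outputs so the KL term of the loss
    # (defined in the next section) can be computed from them during training.
    return tf.keras.Model(inputs, [probs, mu, sigma])

model = build_kl_mob()

At inference time only the softmax output is needed; the mu head doubles as the compact feature representation on which the classifier operates.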
KL loss function is defined by</ns0:p><ns0:formula xml:id='formula_6'>D KL = &#8722; 1 2 n &#8721; i=1 (1 + log (&#963; i ) &#8722; &#181; 2 i &#8722; &#963; i ) , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>)</ns0:formula><ns0:p>where n is the output vector of the average pooling layer with the size of 1024, &#181; is the mean, which is predicted from one fully connected layer, and &#963; is the standard deviation of a Gaussian distribution, which is predicted from the other fully connected layer in the network, Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. The multitask learning loss function for our proposed network is now defined by</ns0:p><ns0:formula xml:id='formula_8'>L = &#945;D KL + L CE (o, v) . (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>We use a weighted loss function as illustrated in Equation <ns0:ref type='formula' target='#formula_8'>5</ns0:ref>. The weight of KL loss &#945; is empirically set to (0:1) to be used as a one-hot vector, which not only ensures a clear representation of the true class, but also helps in addressing the large variance arising due to unbalanced data.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head><ns0:p>window size was taken to be 5 &#215; 5 for effective filtering. The resultant image was then subjected to the NLMF technique. The performance of the NLMF was depended on 7 &#215; 7 of the search window, 5 x 5 of the similarity window, and a degree of filtering h = 1. Furthermore, we increased the contrast using CLAHE with the bin of 256 and block size of 128 in slope 3 to get the enhanced images. We passed the images to KL-MOB as the input to predict the CXR image (Covid-19, normal, or pneumonia). Because many functions are not built-in functions from deep learning libraries, such as the relu6 activation function with a max value of six, we built an interface for the evaluation process that contains all layers in the network, as in a training network, but which is not used for training. Instead, it is used to pass on the input image to produce the output.</ns0:p><ns0:p>The proposed model (KL-MOB) is implemented by using the Python programming language. All experiments were conducted on a Tesla K80 GPU graphics card on Google Collaboratory with an Intel&#169; i7-core @3.6GHz processor and 16GB RAM with 64-bit Windows 10 operating system. The original and enhanced images are used separately to train the KL-MOB. In the first stage, the baseline model is trained to verify the influence of the KL loss on performance. The network is trained by using a SoftMax classifier with an Adam optimizer <ns0:ref type='bibr' target='#b32'>(Kingma and Ba, 2014)</ns0:ref> with the initial learning rate set to 0.0001 and a batch size of 32. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Beforehand, the impact of different feature sizes on training accuracy has been investigated via conducting extensive experiments. Original images perform best when the length is set to 256, with an accuracy of 93.24%, whereas the enhanced images perform best when the length is set to 128 with an accuracy of 96.06%, as shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. This can be attributed to the fact that the KL divergence between &#181;;&#963; distribution and the prior is considered as a regularization which helps to overcome the overfitting problem. 
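Putting Eqs. (4) and (5) into code, the sketch below defines the per-sample KL term, the combined loss, and one manual tf.keras training step using the model from the previous sketch together with the reported optimizer settings (Adam, learning rate 0.0001, batch size 32). This is an illustrative reading, not the authors' implementation: alpha = 0.1 is an assumed interpretation of the "(0:1)" weight, the small epsilon inside the logarithm is a numerical-stability assumption, and labels are assumed to be one-hot encoded over the three classes.

import tensorflow as tf

def kl_term(mu, sigma, eps=1e-8):
    # Eq. (4): D_KL = -1/2 * sum_i (1 + log(sigma_i) - mu_i^2 - sigma_i), per sample
    return -0.5 * tf.reduce_sum(1.0 + tf.math.log(sigma + eps)
                                - tf.square(mu) - sigma, axis=-1)

def total_loss(y_true, probs, mu, sigma, alpha=0.1):
    # Eq. (5): L = alpha * D_KL + L_CE (categorical cross-entropy of Eq. 3)
    ce = tf.keras.losses.categorical_crossentropy(y_true, probs)
    return tf.reduce_mean(alpha * kl_term(mu, sigma) + ce)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

@tf.function
def train_step(model, images, labels):
    # model is the three-output network from the architecture sketch above;
    # images: (32, 224, 224, 3) batch, labels: (32, 3) one-hot vectors.
    with tf.GradientTape() as tape:
        probs, mu, sigma = model(images, training=True)
        loss = total_loss(labels, probs, mu, sigma)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

Because the KL term penalizes deviation of the predicted mu and sigma from the standard Gaussian prior, it acts as the regularizer described above, alongside the cross-entropy term that drives the classification itself.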
</ns0:p></ns0:div> <ns0:div><ns0:head>Model</ns0:head></ns0:div> <ns0:div><ns0:head>Performance Evaluation</ns0:head></ns0:div> <ns0:div><ns0:head>Preprocessing Performance Evaluation</ns0:head><ns0:p>The performance of the proposed preprocessing technique was quantified by using various evaluation metrics such as mean average error (MAE) and peak signal-to-noise Ratio (PSNR). These metrics are desirable because they can be rapidly quantified.</ns0:p><ns0:p>Definition: x(i, j) denotes the samples of the original image, y(i, j) denotes the samples of the output image.M and N are the number of pixels in row and column directions, respectively. MAE is calculated as in Equation <ns0:ref type='formula' target='#formula_10'>6</ns0:ref>, where a large value means that the images are of poor quality.</ns0:p><ns0:formula xml:id='formula_10'>MAE = |E(x) &#8722; E(y)| ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The limited value PSNR implies that the images are of low quality. PSNR is described in terms of Mean Square Error MSE as follows:</ns0:p><ns0:formula xml:id='formula_11'>PSNR = 10 log 10 MAX 2 MSE ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where MAX 2 is the maximum possible pixel intensity value 255 when the pixel is represented by 8 bits.</ns0:p></ns0:div> <ns0:div><ns0:head>MSE</ns0:head><ns0:formula xml:id='formula_12'>= 1 MN M&#8722;1 &#8721; i=1 N&#8722;1 &#8721; j=1 [x(i, j) &#8722; y(i, j)] 2 , (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>)</ns0:formula><ns0:p>Neural Network Performance Evaluation</ns0:p><ns0:p>The test set described in the previous section was used to evaluate KL-MOB. The classification outcome has four cases: True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). The metrics used to measure the performance are accuracy (ACC), sensitivity (TPR), specificity (SPC), and precision (PPV) and are defined as follows: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_14'>Accuracy (ACC) = T P + T N T P + FP + T N + FN ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>Sensitivity (T PR) = T P T P + FN ,<ns0:label>(10) 9</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Speci f icity (SPC) =</ns0:p><ns0:formula xml:id='formula_16'>T N FP + T N , (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>)</ns0:formula><ns0:formula xml:id='formula_18'>Precision (PPV ) = T P T P + FP ,<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>The graph of true positive rate (TPR) and false positive rate (FPR) is the receiver operating characteristic (ROC) curve. The FPR is calculated as follows:</ns0:p><ns0:formula xml:id='formula_19'>False Positive Rate (FPR) = FP FP + T N . (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>In the experiments, noise reduction and contrast enhancement performance were evaluated independently, since they are two separate issues. The average value was computed for all images in each class. Tables <ns0:ref type='table' target='#tab_6'>5 and 6</ns0:ref> show the results for noise reduction and image enhancement, respectively. Figure <ns0:ref type='figure'>7</ns0:ref> shows the noise reduction techniques that were applied to the original image and the hybrid method used in this work. 
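For reference, the image-quality and classification metrics of Eqs. (6)-(13) reduce to a few lines of NumPy. The sketch below follows the equations exactly as written (note that Eq. (6) defines MAE as the absolute difference of the two images' mean intensities) and is provided only as an illustration, not as the evaluation code used in the experiments.

import numpy as np

def mae(x, y):
    # Eq. (6): |E(x) - E(y)|, the absolute difference of mean intensities
    return abs(float(x.mean()) - float(y.mean()))

def psnr(x, y, max_val=255.0):
    # Eqs. (7)-(8): peak signal-to-noise ratio from the mean squared error
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def classification_metrics(tp, fp, tn, fn):
    # Eqs. (9)-(13): accuracy, sensitivity (TPR), specificity, precision (PPV), FPR
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (fp + tn),
        "precision":   tp / (tp + fp),
        "fpr":         fp / (fp + tn),
    }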
Although the denoising filters could smooth and blur the resulting images, this can be compensated for by enhancing the image edges and highlighting the high-frequency components to remove the residual noise. Figure <ns0:ref type='figure'>8</ns0:ref> displays the original images and their enhanced versions.</ns0:p><ns0:p>MAE and PSNR are the quality measurements used for assessing and comparing image quality. The results of Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> show that using an AMF followed by an NLMF is highly effective for eliminating noise. The proposed hybrid algorithm is applied to the entire image instead of just parts of it and preserves important details. Figure <ns0:ref type='figure' target='#fig_16'>11</ns0:ref> illustrates the difference between the original CXRs and the CXRs enhanced by the method proposed herein. Furthermore, we judge the lung damage in the enhanced images to be more perspicuous than in the original images. In addition, CLAHE with a bin of 256 gives the best PSNR, as shown in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>. In our experiment of 100 patients with Covid-19, only one was misclassified, giving a 99.0% PPV for Covid-19, which compares favorably with the previous results of 98.9% and 96.12% for <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Rezaul Karim et al., 2020)</ns0:ref>, respectively. In addition, we compare the results obtained from the KL-MOB model with those from previous studies that used the same or similar datasets for evaluation (see Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>). Not included in the comparison are studies that used smaller datasets <ns0:ref type='bibr' target='#b16'>(Farooq and Hafeez, 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Afshar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hirano et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ucar and Korkmaz, 2020)</ns0:ref>. The results show that, for all performance metrics (accuracy, sensitivity, specificity, and PPV for overall detection), the KL-MOB model produces superior results compared with the models of <ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Rezaul Karim et al., 2020)</ns0:ref>; see also <ns0:ref type='bibr' target='#b70'>(Zhang et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This work proposes a novel CNN-based MobileNet-structured neural network for detecting Covid-19 using COVIDx, which is the most widely used public dataset of CXR images to date. The evaluation of this approach shows that it outperforms recent approaches in terms of accuracy, specificity, sensitivity, and precision (98.7%, 98.82%, 98.32%, and 98.37%, respectively). The proposed method relies on image manipulation by applying a hybrid technique to enhance the visibility of CXR images. This advanced preprocessing technique facilitates the feature-extraction task of the KL-MOB model, allowing complex patterns in medical images to be recognized at a level comparable to that of an experienced radiologist.</ns0:p><ns0:p>The KL divergence is used to boost the performance of the KL-MOB model, which outperforms recent approaches, as shown by the results: the KL divergence between the &#181;;&#963; distribution and the prior acts as a regularization term, which helps to overcome the overfitting problem.
Moreover, it is also believed that the notion of using KL divergence can be extended to other similar scenarios such as content-based image retrieval and fine-grained classification to improve the quality of object representation.</ns0:p><ns0:p>Considering several essential factors such as the pattern by which Covid-19 infections spread, image acquisition time, scanner availability, and costs, we hope that these findings will make a useful contribution Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to the fight against Covid-19 and increase the acceptance of artificial-intelligence-assisted applications in clinical practice.</ns0:p><ns0:p>In future work, we will further enhance the proposed method's performance by including lateral views of CXR images in the training data because, in some cases, frontal-view CXR images do not permit a clear diagnosis of pneumonia cases. Besides, this work lacked in applying some of the techniques such as progressive resizing <ns0:ref type='bibr' target='#b8'>(Bhatt et al., 2021a)</ns0:ref>, which can be applied to CNNs to carry out imaging-based diagnostics. Furthermore, visual ablation studies <ns0:ref type='bibr' target='#b9'>(Bhatt et al., 2021b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Joshi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b18'>Gite et al., 2021)</ns0:ref> can be performed along with deep learning, which will significantly improve the detection of Covid-19 manifestations in the CXR images. Since only a limited number of CXR images are available for Covid-19 infection, out-of-distribution issues may arise, so more data from related distributions is needed for further evaluation. There are several techniques that would be another way to overcome this problem, include, but are not limited to data augmentation techniques <ns0:ref type='bibr' target='#b11'>(Chaudhari et al., 2019)</ns0:ref>, transfer learning <ns0:ref type='bibr' target='#b62'>(Taresh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Bhatt et al., 2021a)</ns0:ref>, domain-adaptation <ns0:ref type='bibr' target='#b69'>(Zhang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Jin et al., 2021)</ns0:ref> and adversarial learning <ns0:ref type='bibr' target='#b19'>(Goel et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b49'>Rahman et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Motamed et al., 2021)</ns0:ref>, etc. Finally, the image enhancement must be verified by a radiologist, which we have not yet been able to do due to the emerging conditions.</ns0:p></ns0:div> <ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head><ns0:p>This work was supported by the National Natural Science Foundation <ns0:ref type='bibr'>[61572177]</ns0:ref>. There was no additional external funding received for this study.</ns0:p></ns0:div> <ns0:div><ns0:head>14/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>is a recently recognized disease caused by the severe acute 34 respiratory syndrome coronavirus 2 (SARS-CoV-2). Being highly transmissible and life-threatening, it 35 has rapidly turned into a global pandemic, affecting worldwide health and well-being. Tragically, no 36 effective treatment has yet been approved for patients with Covid-19. 
However, patients can have a good 37 chance of survival if they are diagnosed sufficiently early, where they would undergo the plan of remedial 38 measures correctly. 39 As a widely available, time-and cost-effective diagnostic tool, chest x-rays (CXRs) can potentially be 40 used for early recognition of Covid-19. Nevertheless, Covid-19 can share similar radiographic features 41 with other types of pneumonia, making it difficult for radiologists to manually distinguish between the techniques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Noisy images: (A) image with impulsive noise and (B) image with Gaussian noise.</ns0:figDesc><ns0:graphic coords='3,245.13,554.27,206.77,106.28' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>, which generally consists of two phases: (a) image preprocessing, to overcome the existing drawbacks mentioned in the previous section; (b) training and testing dedicated to image classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Framework of study.</ns0:figDesc><ns0:graphic coords='5,142.28,364.01,412.50,148.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Architecture of proposed neural network.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4 presents the curve comparisons of all training processes. With the maximum training epoch set to 200. A large gap between training and validation in both original and enhanced images indicates the presence of overfitting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Accuracy and loss graphs for baseline model:(A) training and validation accuracy of the original images, (B) training and validation loss of the original images, (C) training and validation accuracy of the enhanced images and (D) training and validation loss of the enhanced images.</ns0:figDesc><ns0:graphic coords='9,178.49,63.77,340.08,234.48' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>230</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Accuracy and loss graphs for KL-MOB on training and validation of the original images: (A) accuracy and (B) loss.</ns0:figDesc><ns0:graphic coords='9,186.53,399.19,324.00,118.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy and loss graphs for KL-MOB on training and validation of the enhanced images: (A) accuracy and (B) loss.</ns0:figDesc><ns0:graphic coords='9,186.53,575.55,324.00,118.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>/ 18 PeerJ</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Comput. Sci. 
reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Result of noise-reduction techniques applied to images: (A) original image, (B) image denoised by AMF, (C) image denoised by NLMF, (D) image denoised by proposed method.</ns0:figDesc><ns0:graphic coords='11,186.01,542.12,325.00,117.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. ROC curves of different classes for original images: (A) Covid-19, (B) normal, and (C) pneumonia.</ns0:figDesc><ns0:graphic coords='12,164.27,367.51,368.50,99.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. ROC curves of different classes for enhanced images: (A) Covid-19, (B) normal, and (C) pneumonia.</ns0:figDesc><ns0:graphic coords='12,164.27,526.05,368.50,99.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The first and third columns show the original images, and the second and fourth columns show the corresponding enhanced images.</ns0:figDesc><ns0:graphic coords='13,225.67,148.22,245.70,200.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='7,159.72,63.78,377.60,172.80' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Manuscript to be reviewed An overview of image enhancement techniques and the deep learning method used for Covid-19 detection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The number of images for each class.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classes</ns0:cell><ns0:cell>Total</ns0:cell><ns0:cell>Training set 70%</ns0:cell><ns0:cell>Validation set 30%</ns0:cell><ns0:cell>Test set (unseen)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Covid-19 2128</ns0:cell><ns0:cell>1420</ns0:cell><ns0:cell>608</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>8066</ns0:cell><ns0:cell>5027</ns0:cell><ns0:cell>2154</ns0:cell><ns0:cell>885</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Pneumonia 5575</ns0:cell><ns0:cell>3487</ns0:cell><ns0:cell>1494</ns0:cell><ns0:cell>594</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>15769</ns0:cell><ns0:cell>9933</ns0:cell><ns0:cell>4257</ns0:cell><ns0:cell>1579</ns0:cell></ns0:row></ns0:table><ns0:note>were followed to set up the new dataset. Since few CXR images of positive Covid-19 cases are available, we downloaded more Covid-19 x-ray images from https://github. com/ml-workgroup/covid-19-image-repository, and from https://github.com/ armiro/COVID-CXNet/tree/master/chest_xray_images/covid19. Duplicated images were omitted from the new dataset to ensure that the proposed training model is more accurate. Thus, the actual number of images in the Covid-19 class is 2128 instead of the 1770 images from COVIDx (updated on January 28, 2021). 
We used the same test set that was used for evaluation by<ns0:ref type='bibr' target='#b65'>(Wang et al., 2020)</ns0:ref>, making only a slight change by increasing the number of Covid-19 images to 100 instead of 92.We further split the training data keeping 70% data for training and 30% data for validation. Table2summarizes the number of images in each class and the total number of images used for training and testing.4/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Layers of prposed CNN model architecture.</ns0:figDesc><ns0:table /><ns0:note>The deep convolutional neural network is used to extract high context features per input instance. The global average pooling layer is used here to reduce the spatial dimensions of the features extracted. The output is a feature vector of size 1024 for each time step. Then, a dropout layer is used with a probability of 0.001. The output of the dropout layer goes to two fully connected layers that generate an output of size 128. One fully connected layer is used to predict the mean &#181;, which is used to extract the most significant features from those features extracted in previous layers. The other is used to predict the6/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Model performance on different feature sizes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Output vector</ns0:cell><ns0:cell cols='2'>Accuracy% enhanced original</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>64</ns0:cell><ns0:cell>93.26</ns0:cell><ns0:cell>88.31</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>128</ns0:cell><ns0:cell>96.06</ns0:cell><ns0:cell>89.36</ns0:cell></ns0:row><ns0:row><ns0:cell>KL-MOB</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>95.87</ns0:cell><ns0:cell>93.24</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>512</ns0:cell><ns0:cell>94.83</ns0:cell><ns0:cell>91.08</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1024</ns0:cell><ns0:cell>94.47</ns0:cell><ns0:cell>90.38</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average PSNR (db) and MAE for the various noise-reduction methods. 
.35 17.12 25.98 21.91 16.20 Proposed method 19.14 23.13 17.28 25.45 22.11 16.01 </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Covid19</ns0:cell><ns0:cell>Normal</ns0:cell><ns0:cell>Pneumonia</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>PSNR MAE PSNR MAE PSNR MAE</ns0:cell></ns0:row><ns0:row><ns0:cell>AMF</ns0:cell><ns0:cell cols='3'>21.91 14.46 21.19 17.88 20.43 19.47</ns0:cell></ns0:row><ns0:row><ns0:cell>NLMF</ns0:cell><ns0:cell cols='3'>20.47 19.19 20.41 19.41 20.40 19.40</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Proposed method 22.04 14.38 21.21 17.59 20.45 19.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Covid19</ns0:cell><ns0:cell>Normal</ns0:cell><ns0:cell>Pneumonia</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>PSNR MAE PSNE MAE PSNR MAE</ns0:cell></ns0:row><ns0:row><ns0:cell>CLAHE</ns0:cell><ns0:cell>17.83 27</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Average PSNR (db) and MAE for the various contrast-enhancement methods.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Metrics for original images and for images enhanced by KL-MOB.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>266</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Enahnced image</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Original image</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='10'>ACC% PPV% SPC% TPR% MCC% ACC% PPV% SPC% TPR% MCC%</ns0:cell></ns0:row><ns0:row><ns0:cell>Covid19</ns0:cell><ns0:cell>99.87</ns0:cell><ns0:cell>99.00</ns0:cell><ns0:cell>99.93</ns0:cell><ns0:cell>99.00</ns0:cell><ns0:cell>98.93</ns0:cell><ns0:cell>92.61</ns0:cell><ns0:cell>96.83</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>74.39</ns0:cell><ns0:cell>80.60</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>98.24</ns0:cell><ns0:cell>98.30</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.64</ns0:cell><ns0:cell>96.53</ns0:cell><ns0:cell>97.11</ns0:cell><ns0:cell>98.17</ns0:cell><ns0:cell>98.99</ns0:cell><ns0:cell>93.86</ns0:cell><ns0:cell>93.77</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Pneumonia 97.99</ns0:cell><ns0:cell>97.81</ns0:cell><ns0:cell>98.68</ns0:cell><ns0:cell>97.31</ns0:cell><ns0:cell>96.03</ns0:cell><ns0:cell>91.00</ns0:cell><ns0:cell>81.30</ns0:cell><ns0:cell>86.74</ns0:cell><ns0:cell>98.26</ns0:cell><ns0:cell>82.53</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>98.70</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>96.60</ns0:cell><ns0:cell>93.57</ns0:cell><ns0:cell>92.10</ns0:cell><ns0:cell>94.95</ns0:cell><ns0:cell>88.84</ns0:cell><ns0:cell>85.90</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Table8show that the proposed method has a great impact on the performance of KL-MOB, thereby justifying the selection of the proposed network architecture and its associated training/learning schemes. 
Performance on the test set with different loss functions.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Loss function</ns0:cell><ns0:cell cols='8'>Enhanced image ACC% PPV% SPC% TPR% ACC% PPV% SPC% TPR% Original image</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CCE</ns0:cell><ns0:cell>96.79</ns0:cell><ns0:cell>95.22</ns0:cell><ns0:cell>97.60</ns0:cell><ns0:cell>95.42</ns0:cell><ns0:cell>90.14</ns0:cell><ns0:cell>87.94</ns0:cell><ns0:cell>92.23</ns0:cell><ns0:cell>83.05</ns0:cell></ns0:row><ns0:row><ns0:cell>KL-MOB</ns0:cell><ns0:cell>MSE</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell>89.70</ns0:cell><ns0:cell>94.16</ns0:cell><ns0:cell>86.92</ns0:cell><ns0:cell>85.12</ns0:cell><ns0:cell>94.53</ns0:cell><ns0:cell>97.50</ns0:cell><ns0:cell>95.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed method</ns0:cell><ns0:cell>98.70</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>93.57</ns0:cell><ns0:cell>92.10</ns0:cell><ns0:cell>94.95</ns0:cell><ns0:cell>88.84</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>Figure 12 shows the confusion matrix of the proposed network: all classes are identified with high</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>true positives. Note that the Covid-19 cases are 99% correctly classified by the KL-MOB model. Only</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>1% of Covid-19 cases are misclassified as pneumonia (non-Covid-19), and 1.4% of the normal cases are</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>misclassified as pneumonia. Only 0.2% of pneumonia (non-Covid-19) cases are wrongly classified as</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>Covid-19. These results demonstrate that the proposed KL-MOB has a strong potential for detecting</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>Covid-19. In particular, with limited Covid-19 cases, the results show that no confusion arises between</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>normal patients and Covid-19 patients.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative performance of the various models with the improvement percentage compared to the state of art.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell cols='4'>ACC% SPC% TPR% PPV%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2020)</ns0:cell><ns0:cell>COVID-Net (large)</ns0:cell><ns0:cell>95.56</ns0:cell><ns0:cell>96.67</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell>93.55</ns0:cell></ns0:row><ns0:row><ns0:cell>Ahmed et al. (2020)</ns0:cell><ns0:cell>ReCoNet</ns0:cell><ns0:cell>97.48</ns0:cell><ns0:cell>97.39</ns0:cell><ns0:cell>97.53</ns0:cell><ns0:cell>96.27</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Rezaul Karim et al. 
(2020) DeepCOVIDExplainer 98.11</ns0:cell><ns0:cell>98.19</ns0:cell><ns0:cell>95.06</ns0:cell><ns0:cell>96.84</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed method</ns0:cell><ns0:cell>KL-MOB</ns0:cell><ns0:cell>98.7</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>98.37</ns0:cell></ns0:row><ns0:row><ns0:cell>% Improvement</ns0:cell><ns0:cell /><ns0:cell>0.60</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>3.43</ns0:cell><ns0:cell>1.58</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>The promising deep learning models used for the detection of Covid-19 from radiography images</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>indicate that deep learning likely still has untapped potential and can play a more significant role in</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>fighting this pandemic. There is definitely still room for improvement through: (a) the other preprocesses</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>such as increasing the number of images, implementing another preprocessing technique, i.e., data</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>augmentation, utilizing different noise filters, and enhancement techniques. (b) design a model that deals</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>with multiple inputs simultaneously, where utilizing multiple modalities may achieve superior outcomes</ns0:cell></ns0:row><ns0:row><ns0:cell>than the individual modality</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>12/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61123:2:0:NEW 1 Aug 2021) Manuscript to be reviewed Computer Science Figure 12. Confusion matrix for KL-MOB applied to COVIDx test dataset. performance metrics [accuracy, sensitivity (TPR), specificity, and PPV for overall detection], the KL-MOB model produces superior results compared with the models of (Wang et al., 2020) and (Rezaul Karim et al., 2020).</ns0:note></ns0:figure> </ns0:body> "
"Mundher Taresh Researcher College of Information Science and Engineering Hunan University, Chang Sha, 400013, China mundhert@hnu.edu.cn Dr Davide Chicco Academic Editor, PeerJ Computer Science peer.review@peerj.com Aug 1st, 2021 Subject: Revision and resubmission of manuscript 61123 Dear editors and reviewers, Thank you for your letter and for the reviewers’ comments on our manuscript entitled “KL-MOB: automated COVID19 recognition using a novel approach based on image enhancement and a modified MobileNet CNN” (Article ID: 61123). All of these comments were very helpful for revising and improving our paper. We have studied these comments carefully and have made corresponding corrections that we hope will meet with your approval. The responses to the reviewers’ comments are provided below. We would like to express our great appreciation to you and the reviewers for the comments on our paper. If you have any further queries, please do not hesitate to contact us. Kind regards, MUNDHER TARESH On behalf of all authors. I would like to thank the anonymous reviewers for their time and effort in reviewing our manuscript and providing constructive comments. All comments raised by the referees have been taken into consideration in preparing the revised version of the manuscript. In the following paragraphs, we provide point-by-point responses to the comments: 1 Editor comments (Davide Chicco) Please measure all the performances of the binary classification through the Matthews correlation coefficient (MCC) besides the rates already employed. Response: Thank you for providing constructive comments. Based on the editor’s comment, we have revised the manuscript and measured all the performances through the MCC (Table 7). We hope that the revised paper contains clear and accurate expressions. 2 Reviewer 4 (Anonymous) Basic reporting The emergence of the novel coronavirus pneumonia (Covid-19) pandemic at the end of 2019 led to worldwide chaos. However, the world breathed a sigh of relief when a few countries announced the development of a vaccine and gradually began to distribute it. Nevertheless, the emergence of another wave of this pandemic returned us to the starting point. At present, early detection of infected people is the paramount concern of both specialists and health researchers. Experimental design This paper proposes a method to detect infected patients through chest x-ray images by using the large dataset available online for Covid-19 (COVIDx), which consists of 2128 X-ray images of Covid-19 cases, 8066 normal cases, and 5575 cases of pneumonia. A hybrid algorithm is applied to improve image quality before undertaking neural network training. This algorithm combines two different noise-reduction filters in the image, followed by a contrast enhancement algorithm. To detect Covid-19, we propose a novel convolution neural network (CNN) architecture called KL-MOB (Covid-19 detection network based on the MobileNet structure). Validity of the findings The performance of KL-MOB is boosted by adding the Kullback–Leibler (KL) divergence loss function when trained from scratch. The KL divergence loss function is adopted for content-based image retrieval and fine-grained classification to improve the quality of image representation. The results are impressive: the overall benchmark accuracy, sensitivity, specificity, and precision are 98.7%, 98.32%, 98.82%, and 98.37%, respectively. 
3 Additional comments (i) Pls give the reasons why “patients can have a good chance of survival if they are diagnosed sufficiently early.”? Response: Thank you for providing constructive comments. The National Institutes of Health (NIH) published guidelines on prophylaxis use, testing, and management of patients with COVID-19 (https://www.covid19treatmentguidelines.nih.gov). The recommendation external icon is based on scientific evidence and expert opinion and is regularly updated as more data become available (https://www.covid19treatmentguidelines.nih.gov/management/clinical-management/hospitalized-adults--therapeuticmanagement). The U.S. Food and Drug Administration (FDA) has approved redeliver (Veklury) for the treatment of COVID-19 in certain situations. Clinical management of COVID-19 includes infection prevention and control measures and supportive care, including supplemental oxygen and mechanical ventilatory support when indicated. Based on the editor’s comment, we have revised the manuscript and demonstrated that briefly in line 37. We hope that the revised paper contains clear and accurate expressions. (ii) Why do we need noise reduction here in “One way around these issues is to use proper image preprocessing techniques for noise reduction and contrast enhancement.”? Response: Thank you for providing constructive comments. There is significant noise in original recorded X-ray medical images due to the instability of the electric power, sensor, or X-ray generator, etc., which degrades the image quality by blurring the edges and obscuring important information. Software-based image improvement as noise reduction is essential and required for next-edge differential enhancement, especially for digital medical X-ray images. Based on the editor’s comment, we have revised the manuscript and demonstrated that briefly in line 60. We hope that the revised paper contains clear and accurate expressions. (iii) Some COVID-19 papers could be discussed, see “MIDCAN: A multiple input deep convolutional attention network for Covid-19 diagnosis based on chest CT and chest X-ray” and “PSSPNN: PatchShuffle stochastic pooling neural network for an explainable diagnosis of COVID-19 with multiple-way data augmentation” Response: Thank you for providing constructive comments. Based on the reviewer’s comment, we have revised the manuscript and added the relevant papers, lines 76 and 302. We hope that the revised paper contains clear and accurate expressions. 4 (iv) Eq.2 what does Z mean? Response: Thank you for providing constructive comments. Z(i) is the normalizing constant, which is defined at line 166. Based on the reviewer’s comment, we have revised the manuscript and showed that in line 164. We hope that the revised paper contains clear and accurate expressions. (v) How do you design the model? Response: Thank you for providing constructive comments. Inspired by the variational autoencoder learning, we trained the model with KL (Kullback-Leibler) divergence. Variational feature learning is employed to derive highly discriminating representations for images, which helps to improve the performance of Covid-19 re-identification. Variational feature learning learns Gaussian distribution from CNN-based features. Two fully connected layers are used to predict the means µ and the standard deviation σ of a Gaussian distribution N (µ, σ). The outputs of this stage ensure the representation to diverse sufficiently. Besides, the outputs µ is well normalized. 
Along with KL divergence, Softmax classifier on top of the means µ layer is used to learn the model. Based on the reviewer’s comment, we have provided a link to where the training code is available, (https://github.com/puhpuh/KL-MOB/blob/main/TRAINING). Moreover, the training code was uploaded in supplemental files We hope that the revised paper contains clear and accurate expressions. (vi) What is the effect of KL divergence? Response: Thank you for providing constructive comments. The KL divergence is used to boost the performance of the KL-MOB model, which outperforms recent approaches, as shown by the results. Where the KL divergence between the µ; σ distribution and the prior is considered as a regularization which helps to overcome the overfitting problem. Based on the reviewer’s comment, we have revised the manuscript and showed that at line 313. We hope that the revised paper contains clear and accurate expressions. (vii) What can you find from “In contrast, the output vector 256 in the original data achieved the best value with an accuracy of 93.24 %.”? Response: Thank you for providing constructive comments. In our model, as illustrated in Figure 3, we only keep the Means µ layer as the final feature vector. We have first done extensive experiments to investigate the impact of different feature sizes. we found that the best results are achieved when the length is 256 for the original images, and 128 for the enhanced images. Based on the reviewer’s comment, we have revised the manuscript and rephrased that at line 231. We hope that the revised paper contains clear and accurate expressions. 5 "
Here is a paper. Please give your review comments after reading it.
226
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Microservices are an architectural approach of growing use, and the optimal granularity of a microservice directly affects the application's quality attributes and usage of computational resources. Determine microservice granularity is an open research topic.</ns0:p><ns0:p>Methodology. We conducted a systematic literature review to analyze literature that address the definition of microservice granularity. We searched in IEEE Xplore, ACM Digital Library and Scopus. The research questions were: Which approaches have been proposed to define microservice granularity and determine the microservices' size? Which metrics are used to evaluate microservice granularity? Which quality attributes are addressed when researching microservice granularity?</ns0:p><ns0:p>Results. We found 326 papers and selected 29 articles after applying inclusion and exclusion criteria. The quality attributes most often addressed are runtime properties (e.g. scalability and performance), not development properties (e.g. maintainability). Most proposed metrics were about the product, both static (coupling, cohesion, complexity, source code) and runtime (performance, and usage of computational resources), and a few were about the development team and process. The most used techniques for defining microservices granularity were machine learning (clustering), semantic similarity, genetic programming, and domain engineering. Most papers were concerned with migration from monoliths to microservices; and a few addressed green-field development, but none address improvement of granularity in existing microservice-based systems.</ns0:p><ns0:p>Conclusions. Methodologically speaking, microservice granularity research is at a Wild West stage: no standard definition, no clear development -operation trade-offs, and scarce conceptual reuse (e.g., few methods seem applicable or replicable in projects other than their initial proposal). These gaps in granularity research offer clear options to investigate on continuous improvement of the development and operation of microservice-based systems.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>meaning the number of operations exposed by the microservice, along with the number of microservices that are part of the whole application, and second by its complexity and dependencies. The goal is to have low coupling, low complexity, and high cohesion between microservices. <ns0:ref type='bibr'>Hassan, Bahsoon, and Kasman (2020)</ns0:ref> stated that a granularity level determines 'the service size and the scope of functionality a service exposes <ns0:ref type='bibr' target='#b30'>(Kulkarni &amp; Dwivedi, 2008)</ns0:ref>'. Granularity adaptation entails merging or decomposing microservices thereby moving to a finer or more coarse-grained granularity level <ns0:ref type='bibr' target='#b22'>(Hassan, Bahsoon, &amp; Kazman, 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b25'>Homay et al. (2020)</ns0:ref> stated that 'the problem in finding service granularity is to identify a correct boundary (size) for each service in the system. In other words, each service in the system needs to have a concrete purpose, as decoupled as possible, and add value to the system. A service has a correct or good granularity if it maximizes system modularity while minimizing the complexity. 
Modularity in the sense of flexibility, scalability, maintainability, and traceability, whereas complexity in terms of dependency, communication, and data processing' <ns0:ref type='bibr' target='#b25'>(Homay et al., 2020)</ns0:ref>. The quality of a microservices-based system is influenced by the granularity of its microservices, because their size and number directly affect the system's quality attributes. The optimal size or granularity of a microservice directly affects application performance, maintainability, storage (transactions and distributed queries), and usage and consumption of computational resources (mainly in the cloud, the usual platform to deploy and execute microservices). Although the size of microservice or optimal granularity is a discussion topic, few patterns, methods, or models exist to determine how small a microservice should be, as others have already pointed out: <ns0:ref type='bibr' target='#b60'>Soldani et al. (2018)</ns0:ref> noticed the difficulty of identifying the business capacities and delimited contexts that can be assigned to each microservice <ns0:ref type='bibr' target='#b60'>(Soldani, Tamburri &amp; Heuvel, 2018)</ns0:ref>. <ns0:ref type='bibr'>Bogner et al. (2017)</ns0:ref> claimed that 'the appropriate microservice granularity is still one of the most discussed properties (How small is small enough?), as shown in the difficulty of defining acceptable value ranges for source code metrics' <ns0:ref type='bibr' target='#b8'>(Bogner, Wagner &amp; Zimmermann, 2017a)</ns0:ref>. <ns0:ref type='bibr' target='#b71'>Zimmermann (2017)</ns0:ref> indicated that professionals request more concrete guidance than the frequent advice to 'define a limited context for each domain concept that will be exposed as a service' <ns0:ref type='bibr' target='#b71'>(Zimmermann, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b27'>Jamshidi et al. (2018)</ns0:ref> affirmed that the real challenge is finding the right modules, with the correct size, the correct assignment of responsibilities, and welldesigned interfaces, and besides, no agreement on the correct size of microservices exist <ns0:ref type='bibr' target='#b27'>(Jamshidi et al., 2018)</ns0:ref>.The aim of this article is to identify the main approaches in the literature that define microservice granularity or that use it in the process of designing microservice-based systems, either from scratch or migrated from monoliths. A systematic literature review was carried out on key scientific computing literature databases (IEEE Xplore, ACM Digital Library, and Scopus); we formulated three research questions, we defined inclusion and exclusion criteria, and current research trends to identify relevant works, challenges and gaps for future research were identified. Research papers that address the problem of microservice granularity are detailed; the research questions are (RQ1) which approaches have been proposed to define microservice granularity? (RQ2) which metrics are used to evaluate microservice granularity? and (RQ3) which quality attributes are addressed when researching microservice granularity?</ns0:p><ns0:p>Very few previous works have reviewed the definition of microservices granularity; we did not find any review that details the techniques, methods or methodologies used to define granularity, none describe the metrics used to evaluate it, and few addresses the quality attributes considered to define it. 
Contributions of this work are as follow: (1) we identified and classified research papers that address the problem of microservice granularity, therefore we defined the state of the art; (2) we identified and defined the metrics currently used to assess the granularity of microservices-based systems;</ns0:p><ns0:p>(3) we identified the quality attributes that researchers studied to define microservice granularity; (4) we identified the case studies used to validate the methods, which can serve as a dataset for future evaluations of methods or techniques to define granularity. The remainder of this article is organized as follows: Section 2 defines previous related works; Section 3 presents the survey design; Section 4 organizes the results; Section 5 discusses the trends and research gaps; and Section 6 summarizes and concludes.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related work</ns0:head><ns0:p>A small number of literature reviews have been published on microservice architecture research, some of these papers include analysis of application modeling and architecture; design and development patterns; industrial adoption; state of practice; grey literature review; and analysis and interviews with industry leaders, software architects, and application developers of microservices-based applications; whereas two papers focused on microservice granularity specifically <ns0:ref type='bibr' target='#b22'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b57'>(Schmidt &amp; Thiry, 2020)</ns0:ref>. The literature reviews are described in chronological order below. Di <ns0:ref type='bibr' target='#b14'>Francesco (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>Di Francesco, Lago and Malavolta (2019)</ns0:ref> focused on determining the publication trends of research studies on architecture with microservices and on the potential for industrial adoption of existing research. They point out that few studies have focused on design patterns and architectural languages for microservices, and research gaps exist in areas related to quality attributes <ns0:ref type='bibr' target='#b14'>(Di Francesco, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b15'>(Di Francesco, Lago &amp; Malavolta, 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b71'>Zimmermann (2017)</ns0:ref> extracted the principles of microservices from the literature and makes a comparison with SOA and highlights the critical points in the research on microservices, as a result of the review and discussions with industry opinion leaders, developers and members of the service-oriented community. He raises five research issues: (1) service interface design (contracting and versioning), (2) microservice assembly and hosting, (3) microservice integration and discovery, (4) service dependency management, and (5) service and end client application testing <ns0:ref type='bibr' target='#b71'>(Zimmermann, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b27'>Jamshidi et al. (2018)</ns0:ref> presented a technological and architectural perspective on the evolution of microservices. Their editorial introduction also set out future research challenges: (1) service modularization and refactoring; (2) service granularity; (3) front-end integration; (4) resource monitoring and management; (5) failure, recovery and self-repair; and (6) organizational culture and coordination <ns0:ref type='bibr' target='#b27'>(Jamshidi et al., 2018)</ns0:ref>. 
<ns0:ref type='bibr' target='#b60'>Soldani, Tamburri, and Heuvel (2018)</ns0:ref> systematically analyzed the grey industrial literature on microservices to define the state of practice, and identified the technical and operational problems and benefits of the microservice-based architectural style at an industrial level. When designing a microservice-based application, key issues include determining the right granularity of its microservices and designing its security policies. During development, managing distributed storage and application testing is challenging. Another pain point is the usage of network and computing resources during operation <ns0:ref type='bibr' target='#b60'>(Soldani, Tamburri &amp; Heuvel, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>Ghofrani and L&#252;bke (2018)</ns0:ref> focused on identifying challenges and gaps in the design and development of microservices; they described the main factors favoring and preventing the usage of systematic approaches in microservice architectures, and the suggestions or solutions that improve aspects of the microservice architecture. Ghofrani and L&#252;bke provided an updated map of the state of practice in microservice architecture and its complexities for future research. According to the results of their survey, optimization of security, response time, and performance had higher priority than resilience, reliability, fault tolerance, and memory usage; the latter therefore remain research gaps <ns0:ref type='bibr' target='#b17'>(Ghofrani &amp; L&#252;bke, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b43'>Osses, M&#225;rquez, and Astudillo (2018)</ns0:ref> summarized 44 architectural patterns for microservices and proposed a microservice architectural pattern taxonomy: front-end, back-end, orchestration, migration, Internet of Things, and DevOps. There was no specific pattern to define the adequate microservice granularity; they just proposed designing the application as a set of modules, each one an independent business function with its data, developed by a separate team and deployed in a separate process <ns0:ref type='bibr' target='#b43'>(Osses, M&#225;rquez &amp; Astudillo, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b19'>Hamzehloui, Sahibuddin &amp; Salah (2019)</ns0:ref> aimed to identify the common trends and directions of research in microservices. They stated that infrastructure-related issues were more common than software-related issues, and that the cloud was the most common platform for running microservices. At the infrastructure level, automation and monitoring require more research, as do software development and design in microservices; safety, maintenance, and costs were three other areas that have been studied relatively little compared to other topics <ns0:ref type='bibr' target='#b19'>(Hamzehloui, Sahibuddin &amp; Salah, 2019)</ns0:ref>. Vera-Rivera, Gaona, and Astudillo (2019) identified the challenges and research trends present in the phases of the development process and in the management of quality attributes of microservice-based applications (Vera-Rivera, Gaona Cuevas &amp; <ns0:ref type='bibr'>Astudillo, 2019)</ns0:ref>. That article was more general; it did not emphasize granularity.
<ns0:ref type='bibr' target='#b22'>Hassan, Bahsoon, and Kazman (2020)</ns0:ref> carried out a systematic mapping study to provide a better understanding of the transition to microservices; they consolidated various views (industrial, research/academic) of the principles, methods, and techniques commonly adopted to assist in the transition to microservices. They identified gaps in the state of the art and the state of practice related to reasoning about microservice granularity. In particular, they identified possible research topics concerning (1) systematic architecture-oriented modeling support for microservice granularity, (2) a dynamic architectural assessment approach for reasoning about the cost and benefit of granularity adaptation, and (3) effective decision support for informing reasoning about microservice granularity at runtime <ns0:ref type='bibr' target='#b22'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>. They focused on understanding the transition to microservices and the microservice granularity problem (a direct antecedent of this study). They considered quality attributes but not metrics. Their sources were grey literature (blog articles, presentations, and videos, as means of reporting first-hand industrial experiences) and research papers, whereas our study puts emphasis only on white literature (articles published in journals and scientific events). Our work is more specific and detailed in evaluating the methods and techniques used to define granularity, whereas their work described those methods and techniques in general terms. Our work is complementary to theirs and takes a deeper look at the definition of granularity. <ns0:ref type='bibr' target='#b57'>Schmidt and Thiry (2020)</ns0:ref> carried out a systematic literature review in which they found proposals for identification, decomposition, partitioning, or breaking down of the application domain to reach an adequate granularity for microservices. Moreover, their research aimed to highlight the usage of Model-Driven Engineering (MDE) or Domain-Driven Design (DDD) approaches <ns0:ref type='bibr' target='#b57'>(Schmidt &amp; Thiry, 2020)</ns0:ref>. They emphasized DDD and MDE: whether the selected studies cover DDD or apply MDE, and which elements, principles, practices, and patterns the authors applied; they did not include metrics or quality attributes. Therefore, our work is also complementary to their work. Most previous literature reviews do not emphasize granularity; they concern general topics of microservice architecture.
To our knowledge, this is the first study to focus specifically on classifying and detailing research papers on microservice granularity, including the quality attributes that motivate working on it, the methods/techniques to improve it, and the metrics to measure it.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Survey methodology</ns0:head><ns0:p>A systematic literature review was carried out following the approach introduced by Kitchenham.</ns0:p><ns0:p>Kitchenham defines systematic literature reviews as 'a form of secondary study that uses a well-defined methodology to identify, analyze and interpret all available evidence related to a specific research question in a way that is unbiased and (to a degree) repeatable' <ns0:ref type='bibr' target='#b29'>(Kitchenham, 2004)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Planning the systematic literature review</ns0:head><ns0:p>The objectives of this systematic literature review are defined as follows: first, to identify the proposals that address the microservice granularity problem; second, to identify the metrics that have been used to evaluate microservice granularity; and third, to analyze the quality attributes addressed in those works to evaluate microservice granularity. Few studies or reviews specifically address the problem of microservice granularity, and very few identify the metrics along with the quality attributes addressed to assess microservice granularity. A review protocol specifies the methods that will be used to undertake a specific systematic review. We selected research papers through two query strings used in IEEE Xplore, ACM Digital Library, and Scopus; the papers were then screened and reviewed, we applied the inclusion and exclusion criteria, next we tabulated the papers, and the contribution of each selected paper was identified.</ns0:p><ns0:p>The main objective of the work was to identify the methods, techniques, or methodologies used to determine microservice granularity. QS1 and QS2 address all research questions; for each of the proposals selected in each QS, the metrics used and the quality attributes addressed were identified. The query strings were used in IEEE Xplore 1 , ACM Digital Library 2 and Scopus 3 , searching in papers' titles, abstracts, and keywords. The search in these databases yielded 969 results for QS1 and 146 results for QS2. The search was performed in July 2020.</ns0:p><ns0:p>3. Data extraction strategy. First, papers were tabulated; second, duplicated papers were removed; third, the title, abstract, and conclusions of all papers were reviewed and analyzed. Each co-author of this report carried out this process.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Study selection criteria and procedures.</ns0:head><ns0:p>We selected primary research papers that make a specific proposal (methodology, model, technique, or method) about microservice granularity, including migrations from monolith to microservices and decompositions of systems into microservices. After obtaining the relevant studies, the inclusion and exclusion criteria were applied (see table <ns0:ref type='table'>1</ns0:ref>). We excluded any papers about monolith migrations that were not directly related to the definition of microservice granularity.
We also excluded papers that proposed methods, techniques, or models for SOA, web services, or mobile services.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Synthesis of the extracted data.</ns0:head><ns0:p>Each of the selected papers was evaluated in full-text form, taking detailed note of the work and the contributions made.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>Conducting the systematic literature review</ns0:head><ns0:p>The review process was carried out as follows:</ns0:p><ns0:p>1. We downloaded the full-text paper.</ns0:p><ns0:p>2. Each co-author read and reviewed the paper.</ns0:p><ns0:p>3. Each co-author applied the classification criteria to the paper, using the table presented in Appendix A; this was carried out by each co-author independently.</ns0:p><ns0:p>4. We discussed and analyzed the results obtained by each author, resolving doubts and contradictions; the results are presented in Appendix A.</ns0:p><ns0:p>The results of applying the research protocol are presented in figure <ns0:ref type='figure'>1</ns0:ref>. To analyze the works presenting definitions of the granularity of microservices, classification criteria were defined. These criteria were based on the classification performed by <ns0:ref type='bibr' target='#b70'>(Wieringa et al., 2006)</ns0:ref>, and have been widely used in previous systematic literature reviews: <ns0:ref type='bibr' target='#b15'>(Di Francesco, Lago &amp; Malavolta, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>(Hamzehloui, Sahibuddin &amp; Salah, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b22'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b68'>(Vural, Koyuncu &amp; Guney, 2017)</ns0:ref>. To answer the research questions, we applied the following classification criteria to each paper: metrics, stage of the development process, technique used, and quality attributes studied or analyzed; namely:</ns0:p><ns0:p>• Approach: The structural or behavioral aspects proposed in the papers to define the granularity of microservices <ns0:ref type='bibr' target='#b22'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>.</ns0:p><ns0:p>• Quality attribute studied: The quality attributes considered in the proposal, such as performance, availability, reliability, scalability, maintainability, security, and complexity.</ns0:p><ns0:p>• Research contribution: The type of contribution made in the article; namely, method, application, problem formulation, reference architecture, middleware, architectural language, design pattern, evaluation, or comparison.</ns0:p><ns0:p>• Experimentation type: The type of experimentation used to validate the proposal; namely, experiment, case study, theoretical, or other.</ns0:p><ns0:p>• Technique used: This criterion describes the technique, method, or model used to define the granularity of the microservices.</ns0:p><ns0:p>• Input data: The type of input data used to identify the microservices (e.g., use cases, logs, source code, execution traces, among others).</ns0:p><ns0:p>• Type of case study: This criterion determines whether the case study is a toy example (hypothetical case) or a real-life case study.
We identified the specific case study used in each paper.</ns0:p><ns0:p>• Automation level: This criterion determines the level of automation of the proposed technique, that is, whether it is manual, automatic, or semi-automatic.</ns0:p><ns0:p>Finally, the results are presented in four sections: first, the classification of the selected papers; second, the main contributions and research gaps in the sizing and definition of microservice granularity are detailed; third, the metrics are described and ordered by year and type; and fourth, the quality attributes are detailed and the results are discussed, leading to the conclusions presented in this article.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Results</ns0:head><ns0:p>The search process took place in July 2020. The search in the databases of scientific publications, when applying the search strings (QS1 and QS2) related to the granularity of microservices, yielded 969 and 146 works respectively (see table 2).</ns0:p><ns0:p>After applying the inclusion and exclusion criteria, 29 papers were selected that address the definition of the granularity of microservices (see table <ns0:ref type='table'>3</ns0:ref>). The summarized results of this systematic literature review are synthesized in figure <ns0:ref type='figure'>2</ns0:ref>. For RQ1, we identified the papers that propose a method, model, or methodology to define microservice granularity; metrics are fundamental because they allow one to measure, monitor, and evaluate any aspect of a microservice, and thus to define or determine its appropriate granularity. For RQ2, we identified the metrics used to evaluate microservice granularity and decomposition. Figure <ns0:ref type='figure'>2</ns0:ref> shows the type and number of metrics and whether they were applied to the microservice, the system, the development process, or the development team. These metrics are detailed in section 4.3. Finally, for RQ3 we synthesized the works that address quality attributes to evaluate microservice granularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.'>Classification of the selected papers</ns0:head><ns0:p>Appendix A shows the tabulated data and the results of the evaluation of the classification criteria. Most papers were published in conference proceedings (86%), and only four (14%) were published in journals. All selected papers were published between 2016 and the beginning of <ns0:ref type='bibr'>2020 (2 in 2016, 7 in 2017, 6 in 2018, 12 in 2019, and 2 in 2020)</ns0:ref>. The development process phases addressed by each proposal are shown in figure <ns0:ref type='figure'>3</ns0:ref>. Several papers emphasize more than one phase (e.g., P10 focuses on development and deployment, as befits a method for migration from monolith to microservices). Most of the proposed methods focus on the design (79%) and development (38%) phases, with only one addressing testing (3%). Migrations from monolithic architectures to microservices are very common and important: 19 of 29 papers (66%) address them. The papers that do not address migration focus on identifying microservices in the design phase; therefore, defining the size and granularity of microservices from the design phase on is key, because it has implications for development, testing, and deployment. Further, most papers (79%) focus on the design phase; implicitly or explicitly, they suggest that defining the 'right' microservice granularity from the design phase on is fundamental.
However, some authors affirm that reasoning about microservice size and performance is not possible at design time; indeed, <ns0:ref type='bibr' target='#b21'>(Hassan, Ali &amp; Bahsoon, 2017)</ns0:ref> affirm that the expected behavior of the system cannot be fully captured at design time. Regarding the research strategy (see figure <ns0:ref type='figure'>4</ns0:ref>), validation research and solution proposals account for almost all papers (14 and 11, respectively); proposals that have been tested and validated in practice are very few, namely P5 (a reference architecture) and P16 (a method for candidate microservice identification from monolithic systems). Regarding the type of contribution (see figure <ns0:ref type='figure'>5</ns0:ref>), the vast majority (17 papers) proposed methods (59%), some proposed methodologies (24%), a few proposed reference architectures (7%) or problem formulations (7%), and only one proposed an evaluation or comparison (3%). Regarding the validation approach (see figure <ns0:ref type='figure'>6</ns0:ref>), most papers (69%) used case studies for validation and evaluation; other papers used experiments (37%), and most of those also used case studies.</ns0:p><ns0:p>More than half of the studies (13 of 29) validated their proposals using realistic (but not real) hypothetical case studies, and the remaining almost-half (14 of 29) used real-life case studies; real-life case studies achieve better validation than hypothetical ones. Even better, some studies (8) used actual open-source projects. The case studies found in the reviewed articles are summarized in table 4; they are valuable resources to validate future research and to compare new methods with those identified in this review. In any case, other microservice-based datasets have been found that are beyond the reach of this study; for example, <ns0:ref type='bibr' target='#b49'>(Rahman, Panichella &amp; Taibi, 2019)</ns0:ref>. The most used case studies to validate the proposals were Kanban boards (P6, P20, P28) and Money transfer (P6, P20, P28), each used by three papers, followed by JPetStore (P16, P29) and Cargo tracking (P6, P24), each used by two papers.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>RQ1: Approaches to define microservices granularity.</ns0:head><ns0:p>The granularity of microservices involves defining their size and the number that will be part of the application. From the proposal of S. Newman <ns0:ref type='bibr' target='#b40'>(Newman, 2015)</ns0:ref>, microservices follow the single responsibility principle, which says 'Gather things that change for the same reason and separate things that change for different reasons'. The size, dimension, or granularity of microservices has traditionally been defined as follows:</ns0:p><ns0:p>1. Trial and error, depending on the experience of the architect or developer.</ns0:p><ns0:p>2. According to the number of lines of code.</ns0:p><ns0:p>There is no specific pattern that helps to determine the number of microservices of a microservice-based application and their size, that is, the number of operations each one must contain <ns0:ref type='bibr' target='#b72'>(Zimmermann et al., 2019)</ns0:ref>. The size of a microservice, or its optimal granularity, is one of the most discussed properties, and there are few patterns, methods, or models to determine how small a microservice should be.
In this respect, some authors have addressed this problem and proposed the solutions summarized in table 5. The proposed techniques were classified into manual, semi-automatic, or automatic techniques; manual techniques are methods, procedures, or methodologies performed by the architect or developer, who decomposes the system following a few steps. Automatic techniques use some type of algorithm so that the system generates the decomposition. Semi-automatic techniques combine one part performed manually with another performed automatically. Most papers proposed manual procedures to identify the microservice granularity (15 papers); some proposals were automatic (8 papers), and a few were semi-automatic (6 papers). The most used case studies to validate the proposals were Kanban boards and Money Transfer (P6, P20, P28). The papers from 2017 and 2018 are mostly manual methods or methodologies that detail the way to decompose or determine microservices, using DDD, domain engineering, or a specific methodology. Later, the papers from 2019 and 2020 propose semi-automatic and automatic methods that use intelligent algorithms and machine learning, mostly focused on migrations from monolith to microservices. We can therefore observe a chronological evolution in the proposals. The types of techniques used to define the granularity of the microservices that are part of an application are presented in figure <ns0:ref type='figure'>7</ns0:ref>; semantic similarity, machine learning, and genetic programming were the most important techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.'>RQ2: Metrics to evaluate the microservice granularity.</ns0:head><ns0:p>Software metrics allow us to measure and monitor different aspects and characteristics of a software process and product, and there are metrics at the level of design, implementation, testing, maintenance, and deployment. These metrics allow understanding, controlling, and improving what happens during the development and maintenance of the software, in order to take corrective and preventive actions. Most of the identified methods and models (59%) used some metrics to determine microservice granularity. We would have expected metrics to play a greater role in automatic methods, to validate granularity in microservice-based applications and to evaluate the decompositions yielded by the methods. We identified metrics for coupling, cohesion, granularity, complexity, performance, use of computational resources, the development team, source code, and so on (see Table <ns0:ref type='table'>6</ns0:ref>). We classified them into four groups: about the development team, about the microservice development process, about the system, and about each microservice. Most identified metrics (40) focused on a microservice, and only two addressed the microservice development process. There is a research gap for metrics to evaluate the full development process of microservice-based applications and their impact on the granularity of microservices. The most used metrics are related to coupling (14 proposed metrics), followed by performance and cohesion (13 metrics), then computational resource metrics (8 metrics), and complexity and source code metrics (7 metrics) (see figure <ns0:ref type='figure'>8</ns0:ref>).
Nine papers used coupling metrics (P11, P12, P13, P14, P15, P16, P21, P22, P24), and seven papers used cohesion metrics (P12, P14, P16, P21, P24, P27, P28), whereas performance metrics were used by five papers (P4, P10, P12, P21, P23). Complexity metrics were considered by only two papers (P8, P22), although complexity is a fundamental characteristic of microservices. More proposals that include complexity metrics are required, as well as metrics related to the microservice development process. The other metrics were used by only one paper each. We found that 11 papers used coupling or cohesion metrics, and 5 papers used both. Only one (P24) used coupling, cohesion, and complexity metrics. The size and number of microservices that compose an application directly affect its maintainability. Automation of tests, continuous integration, and continuous deployment are essential, especially when many distributed microservices must be managed independently. <ns0:ref type='bibr' target='#b9'>(Bogner, Wagner &amp; Zimmermann, 2017b</ns0:ref>) performed a literature review on measuring the maintainability of software and identified metrics in four dominant design properties: size, complexity, coupling, and cohesion. For service-based systems, they also analyzed the applicability of these metrics to systems based on microservices and presented a maintainability model for services (MM4S), consisting of service properties related to automatically collectible service metrics. The metrics proposed by them can be used or adapted to determine the adequate granularity of the microservices that are going to be part of an application. Considering <ns0:ref type='bibr' target='#b9'>(Bogner, Wagner &amp; Zimmermann, 2017b)</ns0:ref>, <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>, and related papers, we detail the following metrics, which can be used or adapted to define the right granularity of microservices and to evaluate decompositions.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.1.'>Coupling metrics</ns0:head><ns0:p>Coupling measures the degree of dependence of one software component in relation to another. If there is a high degree of coupling, a software component cannot function properly without the other component; furthermore, when we change one software component, we must also change the other. For these reasons, when designing microservice-based applications, we should look for a low degree of coupling between microservices. <ns0:ref type='bibr' target='#b39'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref> represented the information in the monolith and created an undirected, edge-weighted graph G. Each graph edge has a weight defined by a weight function; this weight function determines how strong the coupling between the two classes connected by the edge is, according to the coupling strategy in use. These coupling strategies can be used as metrics to define the granularity. They are defined as follows:</ns0:p><ns0:p>Dependency weight. <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref> said 'dependency weight indicates the frequency of using the dependency. For example, the dependency weight between a billing and shopping cart is high, because with each call to the former a call is required to the latter. On the other hand, the dependency weight between the billing service and a service managing the metadata of payment methods is low, because the former calls the latter only once a day'.</ns0:p>
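<ns0:p>To make this kind of weighting more concrete, the following minimal sketch (ours, not taken from any of the reviewed papers) derives dependency weights by counting observed calls between services; the service names and the call-log format are illustrative assumptions.</ns0:p>

```python
from collections import Counter

# Hypothetical runtime call log: (caller, callee) pairs observed between services.
call_log = [
    ("shopping-cart", "billing"),
    ("shopping-cart", "billing"),
    ("shopping-cart", "billing"),
    ("billing", "payment-metadata"),
]

def dependency_weights(calls):
    """Dependency weight of each (caller, callee) pair = how often the dependency is used."""
    return Counter(calls)

for (caller, callee), weight in dependency_weights(call_log).items():
    print(f"{caller} -> {callee}: weight = {weight}")
```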
<ns0:p>Logical coupling. <ns0:ref type='bibr' target='#b16'>(Gall, Jazayeri &amp; Krajewski, 2003)</ns0:ref> coined the term logical coupling as a retrospective measure of implicit coupling based on the revision history of an application's source code. <ns0:ref type='bibr' target='#b39'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref> define the value of the logical coupling as 1 if two classes (C1, C2) have changed together in a certain commit. They use the aggregated logical coupling, which is the sum of the logical coupling values over the revision history for each pair of classes.</ns0:p><ns0:p>Semantic coupling. Basically, semantic coupling couples together classes that contain code about the same things, i.e., domain model entities. The semantic coupling strategy can compute a score that indicates how related two files are, in terms of domain concepts or 'things' expressed in code and identifiers <ns0:ref type='bibr' target='#b39'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p><ns0:p>Contributor count and contributor coupling. The contributor coupling strategy aims to incorporate team-based factors into a formal procedure that can be used to cluster class files according to those factors (reducing communication overhead to external teams and maximizing internal communication and cohesion inside developer teams). It does so by analyzing the authors of changes to the class files in the version control history of the monolith. The procedure to compute the contributor coupling is applied to all class files. In the graph G representing the original monolith M, the weight on any edge is equal to the contributor coupling between the two classes Ci and Cj that are connected in the graph. The weight is defined as the cardinality of the intersection of the sets of developers that contributed to classes Ci and Cj <ns0:ref type='bibr' target='#b39'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Structural coupling.</ns0:head><ns0:p>Structural coupling consists of the number of classes outside package Pj referenced by classes in package Pj, divided by the number of packages <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Afferent coupling (Ca).</ns0:head><ns0:p>The number of classes in other packages (services) that depend upon classes within the package (service) itself; as such, it indicates the package's (service's) responsibility <ns0:ref type='bibr' target='#b38'>(Martin, 2002)</ns0:ref>, cited by <ns0:ref type='bibr' target='#b32'>(Li et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Efferent coupling (Ce).</ns0:head><ns0:p>The number of classes in other packages (services) that the classes in a package (service) depend upon; it thus indicates its dependence on others <ns0:ref type='bibr' target='#b38'>(Martin, 2002)</ns0:ref>, cited by <ns0:ref type='bibr' target='#b32'>(Li et al., 2019)</ns0:ref>. The score is divided by the total number of microservices within the system <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>.</ns0:p><ns0:p>Silhouette score. <ns0:ref type='bibr' target='#b41'>(Nunes, Santos &amp; Rito Silva, 2019</ns0:ref>) define the silhouette score as 'the difference between the mean nearest-cluster distance a and the mean intra-cluster distance b divided by the greatest of a and b. This score ranges its values from −1 to 1, representing incorrect clustering (samples on wrong clusters) and highly dense clustering, respectively. This metric creates a parallelism with the overall coupling of the clusters of the system, as our objective was to obtain a high intra-cluster similarity and a low inter-cluster similarity, so the partition between clusters is well defined'. <ns0:ref type='bibr'>(Perepletchikov et al., 2007)</ns0:ref> proposed a set of design-level metrics to measure the structural attribute of coupling in service-oriented systems, which can be adapted to microservices.</ns0:p>
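<ns0:p>As an illustration of how such history-based coupling strategies can be computed, the sketch below (ours, under simplifying assumptions about the commit-history format) derives logical and contributor coupling weights for pairs of classes; these weights could then label the edges of the class graph that a clustering algorithm partitions into microservice candidates.</ns0:p>

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical commit history: each commit records its author and the classes it changed.
commits = [
    {"author": "alice", "classes": {"Order", "Invoice"}},
    {"author": "bob",   "classes": {"Order", "Customer"}},
    {"author": "alice", "classes": {"Order", "Invoice", "Payment"}},
]

def logical_coupling(history):
    """Aggregated logical coupling: number of commits in which a pair of classes changed together."""
    weights = defaultdict(int)
    for commit in history:
        for pair in combinations(sorted(commit["classes"]), 2):
            weights[pair] += 1
    return dict(weights)

def contributor_coupling(history):
    """Contributor coupling: size of the intersection of the contributor sets of two classes."""
    contributors = defaultdict(set)
    for commit in history:
        for cls in commit["classes"]:
            contributors[cls].add(commit["author"])
    return {
        (a, b): len(contributors[a] & contributors[b])
        for a, b in combinations(sorted(contributors), 2)
    }

print(logical_coupling(commits))      # e.g. ('Invoice', 'Order'): 2
print(contributor_coupling(commits))  # e.g. ('Customer', 'Order'): 1
```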
</ns0:div> <ns0:div><ns0:head n='5.2.2.'>Cohesion metrics</ns0:head><ns0:p>Cohesion and coupling are two contrasting forces; a solution balancing high cohesion and low coupling is the goal for developers. <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref> employed a two-objective approach aimed at maximizing package cohesion and minimizing package coupling. They used class dependencies and structural information to measure structural cohesion, which can be adapted to microservices. The metrics used in the related work are listed below:</ns0:p><ns0:p>Lack of cohesion. The lack of cohesion of classes for the j-th package (Pj), measured as the number of pairs of classes in Pj with no dependency between them <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>. (Al-Debagy &amp; Martinek, 2020) used the metric in a different way: 'It is based on Henderson-Sellers's lack of cohesion metrics. But the proposed version is modified to be applicable for microservices' APIs. It works by finding how many times a microservice has used a specific operation's parameter, divided by the product of the number of operations multiplied by the number of unique parameters'. <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref> used lack of structural cohesion to measure the lack of cohesion between classes divided by the number of packages, and they also defined a lack of conceptual cohesion as a metric.</ns0:p><ns0:p>Relational cohesion. Relational cohesion was defined as 'the ratio between the number of internal relations and the number of types in a package (service). Internal relations include inheritance between classes, invocation of methods, access to class attributes, and explicit references like creating a class instance. Higher numbers of RC indicate higher cohesion of a package (service)' <ns0:ref type='bibr' target='#b31'>(Larman, 2012)</ns0:ref>, cited by <ns0:ref type='bibr' target='#b32'>(Li et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Cohesion at the domain level (CHD).</ns0:head><ns0:p>The cohesiveness of the interfaces provided by a service at the domain level. The higher the CHD, the more functionally cohesive that service is <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average cohesion at the domain level (Avg. CHD).</ns0:head><ns0:p>The average of all CHD values within the system <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Cohesion at the message level (CHM). The cohesiveness of the interfaces published by a service at the message level. The higher a service's CHM, the more cohesive the service is, from an external perspective. CHM is the average functional cohesiveness <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p>
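<ns0:p>A minimal sketch of the structural lack-of-cohesion idea described at the beginning of this subsection (counting class pairs inside a service that share no dependency) is shown below; the classes and dependencies are invented for illustration.</ns0:p>

```python
from itertools import combinations

def lack_of_cohesion(classes, dependencies):
    """Number of class pairs within one service that have no dependency between them.
    dependencies: set of (class_a, class_b) pairs with a structural dependency."""
    related = {frozenset(pair) for pair in dependencies}
    return sum(
        1 for pair in combinations(sorted(classes), 2)
        if frozenset(pair) not in related
    )

service_classes = ["Order", "Invoice", "Shipment"]
service_dependencies = {("Order", "Invoice")}   # only Order and Invoice are related
print(lack_of_cohesion(service_classes, service_dependencies))   # -> 2 unrelated pairs
```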
</ns0:div> <ns0:div><ns0:head>Service interface data cohesion (SIDC).</ns0:head><ns0:p>The cohesion of a given service S with respect to the similarity of the parameter data types of its interface's operations <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Service interface usage cohesion (SIUC).</ns0:head><ns0:p>The cohesion of a given service S based on the invocation behavior of clients using operations from its interface <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Entities composition.</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>, 'entities composition assesses whether the entities are equally distributed among the proposed microservices and no duplicates, which might break the cohesion, exist'. They define an entity as the class, or action, of the service.</ns0:p></ns0:div> <ns0:div><ns0:head>Relation composition.</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019</ns0:ref>), 'relation composition assesses the quantitative variation in published language per relation. It applies the concept of relative assessment to entities shared between the services via their communication paths. The test identifies services communicating much more data than their peers, and thus potential communication bottlenecks'.</ns0:p><ns0:p>Responsibilities composition. <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019</ns0:ref>) stated that the responsibilities composition 'assesses to what extent the use case responsibilities are equally distributed among the proposed microservices. It uses the coefficient of variation between the number of use case responsibilities of each microservice. Services having relatively more responsibility may imply low cohesion: a service providing multiple actions violates the single responsibility principle'.</ns0:p><ns0:p>Semantic similarity. According to <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019</ns0:ref>), 'semantic similarity uses lexical distance assessment algorithms to flag the services that contain unrelated components or unrelated actions hindering cohesion'. <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007</ns0:ref>) 'reviewed categories of cohesion initially proposed for object-oriented software in order to determine their conceptual relevance to service-oriented designs' and proposed a set of cohesion metrics that can be adapted for microservices.</ns0:p></ns0:div> <ns0:div><ns0:head>Interaction number (IRN).</ns0:head><ns0:p>The number of calls for methods among all pairs of extracted microservices. The smaller the IRN, the better the quality of the candidate microservices, as a low IRN reflects loose coupling <ns0:ref type='bibr' target='#b53'>(Saidani et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of executions.</ns0:head><ns0:p>The number of test requests sent to the system or microservices.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum request time.</ns0:head><ns0:p>The maximum time for a request (output) made from one microservice to another.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum response time.</ns0:head><ns0:p>The maximum response time is that of a call (input) or request to the system or microservice; it is the time taken to process the request and return a response to the calling microservice.</ns0:p>
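<ns0:p>The sketch below shows one simple way such runtime measurements (number of executions and maximum response time) could be collected for a single endpoint; the endpoint URL and the number of test requests are placeholders, not values used by any reviewed paper.</ns0:p>

```python
import time
import urllib.request

ENDPOINT = "http://localhost:8080/orders"   # hypothetical microservice endpoint
EXECUTIONS = 50                             # number of test requests to send

def measure(endpoint, executions):
    durations = []
    for _ in range(executions):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(endpoint, timeout=5).read()
        except OSError:
            continue                        # this sketch only times successful calls
        durations.append(time.perf_counter() - start)
    return {
        "number_of_executions": len(durations),
        "max_response_time_s": max(durations, default=0.0),
        "avg_response_time_s": sum(durations) / len(durations) if durations else 0.0,
    }

print(measure(ENDPOINT, EXECUTIONS))
```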
</ns0:div> <ns0:div><ns0:head>Number of packets sent.</ns0:head><ns0:p>The number of packets sent to the system or microservice. <ns0:ref type='bibr' target='#b50'>(Ren et al., 2018</ns0:ref>) (P29) used package analysis (PA), static structure analysis (SSA), class hierarchy analysis (CHA), static call graph analysis (SCGA), and combined static and dynamic analysis (CSDA) to evaluate migration performance. However, they did not explain the details of the performance analysis test or the metrics they used.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.5.'>Other quality attribute metrics.</ns0:head><ns0:p>Few metrics were directly related to quality attributes. The metrics proposed in the reviewed works are defined as follows:</ns0:p><ns0:p>Cost of quality assurance. It can be calculated by adding up the time spent by testers validating not only the new features but also the non-regression of existing ones, along with the time spent on release management <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Cost of deployment.</ns0:head><ns0:p>The time spent by operational teams to deploy a new release, in man-days; it decreases greatly as teams automate deployment <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>.</ns0:p><ns0:p>Security impact. The security policy applied to requirements or services. The assets and threats identified lead to deployed security mechanisms, which form security policies. <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref> mapped the identified policies to the corresponding functional requirements, mainly based on their access to the system assets. Security impact is a qualitative value (low, medium, high).</ns0:p><ns0:p>Scalability impact. <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref> define the scalability impact as the required level of scalability (high, medium, low) needed to implement a functional requirement or service. Defining the requirements at design time for a software system to be scalable is a challenging task. <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref> think that a requirements engineer should answer a question such as 'What is the anticipated number of simultaneous users for this functionality?'.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.6.'>Computational resources metrics</ns0:head><ns0:p>The computational resources are all the software and hardware elements necessary for the operation of microservice-based applications. The proposed metrics are listed below.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of memory.</ns0:head><ns0:p>The average memory consumption for each microservice or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of disk.</ns0:head><ns0:p>The average disk consumption for each microservice or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of network.</ns0:head><ns0:p>The average network bandwidth consumption for the entire system, in Kb/s used by the system or microservice <ns0:ref type='bibr' target='#b4'>(De Alwis et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of CPU.</ns0:head><ns0:p>The average CPU consumption of the system or microservice.</ns0:p>
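<ns0:p>A minimal sketch of how these resource-usage averages could be computed from periodic samples reported by a monitoring agent is shown below; the sample values are invented for illustration.</ns0:p>

```python
from statistics import mean

# Hypothetical monitoring samples for one microservice (CPU %, memory MB, disk MB, network Kb/s).
samples = [
    {"cpu_pct": 12.0, "memory_mb": 310, "disk_mb": 1250, "network_kbps": 420},
    {"cpu_pct": 35.5, "memory_mb": 328, "disk_mb": 1251, "network_kbps": 610},
    {"cpu_pct": 22.1, "memory_mb": 315, "disk_mb": 1253, "network_kbps": 505},
]

def resource_averages(samples):
    """Average each sampled resource measure over the observation window."""
    keys = samples[0].keys()
    return {f"avg_{key}": round(mean(s[key] for s in samples), 2) for key in keys}

print(resource_averages(samples))
# {'avg_cpu_pct': 23.2, 'avg_memory_mb': 317.67, 'avg_disk_mb': 1251.33, 'avg_network_kbps': 511.67}
```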
</ns0:div> <ns0:div><ns0:head>Service composition cost (SCC).</ns0:head><ns0:p><ns0:ref type='bibr' target='#b26'>(Homay et al., 2019)</ns0:ref> stated that 'identifying which existing functionalities in the service are consuming more resources is not an easy task. Therefore, we suggest relying on each request that a service provider receives from a service consumer. Because each request is a chain of stats or activities that needs to be satisfied inside of the service provider to generate a related response. The cost-of-service composition for the service s will be equal to the maximum cost of requests (routes)'.</ns0:p></ns0:div> <ns0:div><ns0:head>Service decomposition cost (SDC).</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b26'>(Homay et al., 2019)</ns0:ref>, 'By refining a service into smaller services, we will make some drawbacks. The SDC is a function that calculates the overhead of refining the service s into smaller pieces'.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.7.'>Team metrics</ns0:head><ns0:p>Each microservice can be developed by a different team, and with different programming languages and database engines. It is important to consider metrics that allow analysis of microservice granularity and its impact on the development team. The metrics proposed in the analyzed works are as follows:</ns0:p><ns0:p>Team size reduction (TSR). A reduced team size translates into reduced communication overhead and thus more productivity, and the team's focus can be directed toward the actual domain problem and service for which it is responsible. TSR is a proxy for this team-oriented quality aspect. Let RM be a microservice recommendation for a monolith M. TSR is computed as the average team size across all microservice candidates in RM divided by the team size of the original monolith M <ns0:ref type='bibr' target='#b39'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p><ns0:p>Commit count. The number of commits in the code repository made by the developers.</ns0:p><ns0:p>We found very few metrics related to the development team. This can be an interesting topic for future research.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.8.'>Source code metrics</ns0:head><ns0:p>The source code is one of the most important sources for analyzing certain characteristics of an application. Some authors have used it to identify microservices and define their granularity. The proposed metrics are described below:</ns0:p><ns0:p>Code size in lines of code. The total size of the code in the repository, in terms of lines of code, the microservices' lines of code, or the application's lines of code.</ns0:p><ns0:p>The number of classes per microservice. This metric helps to understand how large the identified microservice is and to identify whether any microservice is too big compared to others. The number of classes should be minimized because a smaller number of classes implies more independent development of the microservice <ns0:ref type='bibr' target='#b61'>(Taibi &amp; Syst, 2019)</ns0:ref>.</ns0:p><ns0:p>The number of duplicated classes. In some cases, two execution traces will have several classes in common. The number of duplicated classes helps one to reason about the different slicing options, considering not only the size of the microservices but also the number of duplications, which will then be reflected in the microservices' development. Duplicated classes should be avoided, since duplication adds to the system's size and maintenance <ns0:ref type='bibr' target='#b61'>(Taibi &amp; Syst, 2019)</ns0:ref>.</ns0:p>
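<ns0:p>The following minimal sketch (ours, with invented data) illustrates the two preceding metrics for a set of candidate microservices derived from execution traces: the number of classes per candidate and the classes duplicated across candidates.</ns0:p>

```python
from collections import Counter

# Hypothetical candidate microservices, each described by the classes its execution traces touch.
candidates = {
    "orders":   {"Order", "OrderItem", "Customer"},
    "payments": {"Payment", "Invoice", "Customer"},
    "shipping": {"Shipment", "Order"},
}

classes_per_microservice = {name: len(classes) for name, classes in candidates.items()}

class_usage = Counter(cls for classes in candidates.values() for cls in classes)
duplicated_classes = sorted(cls for cls, count in class_usage.items() if count > 1)

print(classes_per_microservice)   # {'orders': 3, 'payments': 3, 'shipping': 2}
print(duplicated_classes)         # ['Customer', 'Order'] appear in more than one candidate
```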
</ns0:div> <ns0:div><ns0:head>Internal co-change frequency (ICF).</ns0:head><ns0:p>How often entities within a service change together, as recorded in the revision history. A higher ICF means that the entities within the service are more likely to evolve together. The system-level ICF is the average of all ICF values within the system <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>External co-change frequency (ECF).</ns0:head><ns0:p>How often entities assigned to different services change together, according to the revision history. A lower ECF score means that entity pairs located in different services are expected to evolve more independently. Similarly, the system-level ECF is the average ECF value of all services within the system <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>The ratio of ECF to ICF (REI).</ns0:head><ns0:p>The ratio of the co-change frequency across services to the co-change frequency within services. The ratio is expected to be less than 1.0 if co-changes happen more often inside a service than across different services. The smaller the ratio, the less likely co-changes are across services, and the more independently the extracted services tend to evolve. Ideally, all co-changes should happen inside the services. REI is calculated as ECF divided by ICF <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Modularity quality measure. The modularity of a component or service can be measured from multiple perspectives, such as the structural, conceptual, historical, and dynamic dimensions <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>. They extend the modularity quality (MQ), as defined by <ns0:ref type='bibr' target='#b34'>(Mancoridis et al., 1998)</ns0:ref>, to structural and conceptual dependencies, using structural modularity quality and conceptual modularity quality to assess the modularity of service candidates. Structural modularity quality (SMQ) measures the quality of modularity from a structural perspective: the higher the SMQ, the better modularized the service is. Conceptual modularity quality (CMQ) similarly measures modularity quality from a conceptual perspective: the higher the CMQ, the better <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.9.'>Granularity metrics</ns0:head><ns0:p>Measuring granularity is complex. Granularity is related to size, including the number of functionalities or services that the application or microservice will have. It is also related to coupling and cohesion. Being more granular implies that a microservice has no dependencies and can function on its own, as an independent and encapsulated piece. Six granularity metrics were identified:</ns0:p><ns0:p>Weighted service interface count (WSIC). WSIC(S) is the number of exposed interface operations of service S. The default weight is set to 1. Alternate weighting methods, which need to be validated empirically, can take into consideration the number and the complexity of the data types of the parameters in each interface <ns0:ref type='bibr' target='#b24'>(Hirzalla, Cleland-Huang &amp; Arsanjani, 2009)</ns0:ref>. Operations can be weighted based on the number of parameters or their granularity (e.g., a complex nested object), with the default weight being set to 1 <ns0:ref type='bibr' target='#b8'>(Bogner, Wagner &amp; Zimmermann, 2017a)</ns0:ref>.</ns0:p>
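<ns0:p>A small sketch of WSIC is shown below; it assumes a service interface described as a mapping from exposed operations to their parameter lists (the interface itself is illustrative) and shows both the default weighting and a parameter-count weighting.</ns0:p>

```python
# Hypothetical exposed interface of one service: operation name -> parameter names.
order_service_interface = {
    "createOrder": ["customerId", "items", "shippingAddress"],
    "cancelOrder": ["orderId"],
    "getOrder":    ["orderId"],
}

def wsic(interface, weight=lambda params: 1):
    """Weighted Service Interface Count: sum of per-operation weights.
    With the default weight of 1 it is simply the number of exposed operations."""
    return sum(weight(params) for params in interface.values())

print(wsic(order_service_interface))              # default weight -> 3 operations
print(wsic(order_service_interface, weight=len))  # weighted by parameter count -> 5
```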
</ns0:div> <ns0:div><ns0:head>Component balance (CB).</ns0:head><ns0:p>The CB is a system-level metric to evaluate the appropriateness of granularity, i.e., whether the number and size uniformity of the components (in this case, services) are in a favorable range for maintainability <ns0:ref type='bibr' target='#b10'>(Bouwers et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Operation number (OPN).</ns0:head><ns0:p>The OPN is used to compute the average number of public operations exposed by an extracted microservice to other candidate microservices. The smaller the OPN, the better <ns0:ref type='bibr' target='#b53'>(Saidani et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of microservices.</ns0:head><ns0:p>The number of microservices that are part of the system or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Lines of code.</ns0:head><ns0:p>The lines of code metric measures the number of lines of code in the microservice. Additionally, it may consider the total size of the code in the repository.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of nanoentities.</ns0:head><ns0:p><ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref> stated that 'the number of nanoentities (attributes or fields of a class) computes the number of nanoentities assigned to each proposed service, storing the result as a floating-point parameterized list. The list's length is equal to the number of services found in the system model specification file'.</ns0:p><ns0:p>The fundamentals of microservices suggest that they must have low coupling, high cohesion, and low complexity. Based on the described metrics, a model or method could be defined that uses artificial intelligence to determine the most appropriate dimensioning and size for microservices. Some have already been defined, mainly for migrations from monoliths to microservices. The number of microservices, their size, and their computational complexity directly affect the use of computational resources and therefore the cost of deployment. This is an interesting topic for future research. In conclusion, some papers used metrics to evaluate the granularity of microservices, including coupling, cohesion, number of calls, number of requests, and response time, although few methods or techniques use complexity as a metric, even though it seems fundamental for microservices. More research that considers design-level metrics is needed to define the granularity of the microservices that are part of an application, as well as research proposing models, methods, or techniques to determine the most appropriate granularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.4.'>RQ3: Quality attributes to define the microservice granularity.</ns0:head><ns0:p>Quality attributes are essential for today's applications. Availability, performance, automatic scaling, maintainability, security, and fault tolerance are key features that every application must handle. An architecture based on microservices allows independent management of quality attributes, according to the specific needs of each microservice. This is one of its main advantages compared to monolithic architectures. The size and number of microservices that compose an application directly affect its quality attributes.
Creating more microservices may affect maintainability because testing costs will increase, even more so if automated testing is not available. Moreover, performance may also be affected by having to integrate and process data from several distributed applications. Clearly, quality attributes are impacted by microservice granularity and should be considered when defining a model, method, or technique to determine granularity (see Fig. <ns0:ref type='figure'>9</ns0:ref>). Surprisingly, 62% of the identified proposals did not consider or report any quality attributes at all. Of those that did, scalability and performance were the most considered (7 papers, 24%), followed by maintainability and availability (2 papers, 7%); and lastly, reliability (fault tolerance), security, functionality, and modularity, with only one paper each. More research is needed that considers quality attributes to define the granularity of the microservices comprising an application. Security and fault tolerance are key attributes that microservice-based applications must handle, yet few works addressed these features (see table <ns0:ref type='table'>7</ns0:ref>). We grouped the software quality attributes into the following two categories: firstly, runtime characteristics (scalability, performance, reliability, availability, and functionality), which are observable during execution; and secondly, software-as-an-artifact characteristics (maintainability, modularity, reusability), which are not observable during execution <ns0:ref type='bibr' target='#b7'>(Bass, Clemens &amp; Katzman, 1998)</ns0:ref> <ns0:ref type='bibr' target='#b6'>(Astudillo, 2005)</ns0:ref>. Runtime characteristics were the most used ones, having been addressed by 8 papers (P2, P4, P10, P11, P12, P13, P21 and P29); only two papers addressed software artifact characteristics (P3 and P16); and only one paper used both artifact and runtime characteristics (P16). Therefore, more proposals are required to define microservice granularity considering both runtime and software artifact characteristics.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.1.'>Runtime characteristics</ns0:head><ns0:p>In this section, we detail the runtime quality attributes, the way they were addressed by the papers, and whether the papers used metrics to evaluate their proposals.</ns0:p><ns0:p>• Scalability, performance, and reliability (fault tolerance) were used by only one paper (P2).</ns0:p><ns0:p>P2 proposed a re-implementation of otto.de (a real-life case study). They defined the granularity through vertical decomposition and used DevOps, including continuous deployment, to deliver features quickly to customers. Team organization is crucial for success; this organization was based on Conway's Law. Full automation of quality assurance and software deployment allows for early fault and error detection, thus reducing repair times both during development and during operations <ns0:ref type='bibr' target='#b23'>(Hasselbring &amp; Steinacker, 2017)</ns0:ref>. This paper did not propose metrics for evaluation.</ns0:p><ns0:p>• Scalability and performance were used by two papers (P10, P29); P10 proposed an automatic decomposition method based on a black-box approach that mines the application access logs using a clustering method to discover URL partitions having similar performance and resource requirements. Such partitions were mapped to microservices <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>.
The metrics used in this paper were performance metrics (response time SLO violations, number of calls, number of rejected requests, and throughput) and computational resource metrics (average CPU, number of virtual machines used, and allocated virtual machines). P29 used the source code and the runtime logs in a semi-automatic method; it used granularity, performance, and source code metrics to evaluate the decompositions. They presented a program-analysis-based method to migrate monolithic legacy applications to a microservices architecture; this method used a function call graph, a Markov chain model to represent migration characteristics, and a k-means hierarchical clustering algorithm <ns0:ref type='bibr' target='#b50'>(Ren et al., 2018)</ns0:ref>.</ns0:p><ns0:p>• Only performance was used by two papers (P4, P13). P4 examined the granularity problem of microservices and explored its effect on the latency of the application. Two approaches for the deployment of microservices were simulated: the first one with microservices in a single container, and the second one with microservices divided into separate containers. They discussed the findings in the context of Internet of Things (IoT) application architectures <ns0:ref type='bibr' target='#b58'>(Shadija, Rezai &amp; Hill, 2017)</ns0:ref>; that paper corresponds to an evaluation or comparison rather than a method to define microservice granularity, and it used performance metrics (response time and the number of calls). P13 presented three formal coupling strategies and embedded them in a graph-based clustering algorithm: (1) logical coupling, (2) semantic coupling, and (3) contributor coupling. The coupling strategies rely on meta-information from monolithic code bases to construct a graph representation of the monoliths, which is in turn processed by the clustering algorithm to generate recommendations for potential microservice candidates in a refactoring scenario. P13 was the only one that proposed development-team-based metrics; logical coupling, average domain redundancy, contributor coupling, semantic coupling, commit count, contributor count, and lines of code were the metrics used by this paper.</ns0:p><ns0:p>• Scalability and security were used by one paper (P11). P11 proposed a methodology consisting of a series of steps and activities that must be carried out to identify the microservices that will be part of the system. It is based on the use cases and on the analysis made by the architect in terms of the scalability and security of each use case, as well as the dependencies with the other use cases <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>; this paper used the following metrics: dependency weight, security impact, and scalability impact (qualitative metrics).</ns0:p><ns0:p>• Scalability, performance, and availability were addressed by two papers (P12, P21). P12 presented discovery techniques that help identify the appropriate parts of consumer-oriented business systems that could be redesigned as microservices with desired characteristics such as high cohesion, low coupling, high scalability, high availability, and high processing efficiency <ns0:ref type='bibr' target='#b5'>(De Alwis et al., 2018)</ns0:ref>. They proposed microservice discovery algorithms and heuristics.
It was an automatic method that used coupling (structural coupling), cohesion (lack of cohesion), computational resource (average memory, average disk), and performance metrics (number of requests, execution time). P21 was a semi-automatic method: a genetic algorithm with semantic similarity based on DISCO and the non-dominated sorting genetic algorithm-II (NSGA-II). That paper presented four microservice patterns, namely object association, exclusive containment, inclusive containment, and subtyping, for 'greenfield' (new) development of software, while demonstrating the value of the patterns for 'brownfield' (evolving) developments by identifying prospective microservices <ns0:ref type='bibr' target='#b4'>(De Alwis et al., 2019)</ns0:ref>. The metrics used by that paper were: structural coupling, lack of cohesion, average CPU, average network use, number of executions, and number of packets sent.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.2.'>Software as an artifact characteristic</ns0:head><ns0:p>Only two software-as-an-artifact characteristics were used: maintainability and modularity. Maintainability alone was used by one paper (P3), whereas maintainability and modularity were used by P16, which proposed the most complete method.</ns0:p><ns0:p>P3 used a balance between the cost of quality assurance and the cost of deployment for defining microservices granularity; it was a manual method. The choice of granularity should be based on the balance between the cost of quality assurance and the cost of deployment <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>. P16 presented a framework that consists of three major steps: (1) extracting representative execution traces, (2) identifying entities using a search-based functional atom grouping algorithm, and (3) identifying interfaces for service candidates <ns0:ref type='bibr' target='#b28'>(Jin et al., 2019)</ns0:ref>. They also presented a comprehensive measurement system to quantitatively evaluate service candidate quality in terms of functionality, modularity, and evolvability. P16 proposed an automatic method, which used a search-based functional atom grouping algorithm and a non-dominated sorting genetic algorithm-II (NSGA-II). The evaluation metrics were coupling (integrating interface number), cohesion at message level, cohesion at domain level, structural modularity quality, conceptual modularity quality, internal co-change frequency (ICF), external co-change frequency (ECF), and the ratio of ECF to ICF.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.3.'>Quality attributes and artificial intelligence</ns0:head><ns0:p>In some cases, artificial intelligence techniques are being used to improve the quality attributes of microservices. For example, <ns0:ref type='bibr' target='#b3'>(Alipour &amp; Liu, 2017)</ns0:ref> proposed two machine learning algorithms and predicted the resource demand of microservice backend systems, as emulated by a Netflix workload reference application. They proposed a microservice architecture that encapsulates monitoring functions of metrics and learning of workload patterns. This service architecture is then used to predict the future workload in order to make decisions about resource provisioning.
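As a rough illustration of this idea (our own simplified sketch, not the system described by Alipour and Liu), the snippet below fits a linear model on a sliding window of a toy request-rate series, forecasts the next interval, and maps the forecast to a replica count; the window size, the toy series, and the per-replica capacity are assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy request-rate series (requests per minute) observed for one microservice.
requests_per_min = [120, 130, 160, 200, 260, 330, 400, 470, 520, 560]
window = 3  # assumed sliding-window size

# Build (last `window` observations -> next observation) training pairs.
X = np.array([requests_per_min[i:i + window]
              for i in range(len(requests_per_min) - window)])
y = np.array(requests_per_min[window:])

model = LinearRegression().fit(X, y)
forecast = model.predict(np.array([requests_per_min[-window:]]))[0]

CAPACITY_PER_REPLICA = 150  # assumed requests/min a single replica can sustain
replicas = max(1, int(np.ceil(forecast / CAPACITY_PER_REPLICA)))
print(f"forecast ~{forecast:.0f} req/min -> provision {replicas} replicas")

The cited systems use richer models (neural and recurrent networks) and real monitoring data; the point here is only to show how a workload forecast can be turned into a proactive provisioning decision.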
<ns0:ref type='bibr' target='#b47'>(Prachitmutita et al., 2018)</ns0:ref> proposed a new self-scaling framework based on the predicted workload, with an artificial neural network, a recurrent neural network, and a resource scaling optimization algorithm used to create an automated system to manage the entire application on Infrastructure-as-a-Service (IaaS). <ns0:ref type='bibr' target='#b33'>(Ma et al., 2018)</ns0:ref> proposed an approach, called scenario-based microservice retrieval (SMSR), to recommend appropriate microservices to users based on the Behavior-Driven Development (BDD) test scenarios written by the user. The proposed service retrieval algorithm is based on word2vec, a machine learning method widely used in natural language processing (NLP), to perform service filtering and calculate service similarity. <ns0:ref type='bibr' target='#b0'>(Abdullah, Iqbal &amp; Erradi, 2019</ns0:ref>) proposed a complete automated system for breaking down an application into microservices, implementing microservices using appropriate resources, and automatically scaling microservices to maintain the desired response time. Artificial intelligence can help to improve and control different characteristics of microservices, especially those related to improving quality attributes. Some proposals have been made in this regard, but more research is needed. Finally, we identified the automatic and semi-automatic methods that used metrics and addressed at least one quality attribute to define the granularity (see table <ns0:ref type='table'>8</ns0:ref>). Only six papers met those conditions (P10, P12, P13, P16, P21, and P29), and they were the most suitable methods to define the granularity of microservices. We also identified semi-automatic methodologies that used metrics to define the granularity; only two papers were found (P14 and P22). There were no automatic methodologies: most were manual methodologies (P11, P17, P18, and P26), and only one was semi-automatic but did not use metrics (P25). P14 was a dataflow-driven decomposition algorithm. In their methodology, first, the use case specification and business logic are analyzed based on the requirements; second, the detailed dataflow diagrams (DFD) at different levels and the corresponding process-datastore version of the DFD (DFD PS) are constructed from the business logic based on the requirement analysis; third, an algorithm automatically condenses the DFD PS into a decomposable DFD, in which the sentences between processes and data stores are combined; finally, microservice candidates are identified and extracted automatically from the decomposable DFD <ns0:ref type='bibr' target='#b32'>(Li et al., 2019)</ns0:ref>. The metrics used by P14 were coupling (afferent coupling, efferent coupling, instability) and cohesion (relational cohesion) metrics. P14 did not address any quality attribute directly. P22 proposed a clustering algorithm applied to aggregate domain entities. It used coupling (silhouette score) and complexity (number of singleton clusters, maximum cluster size) metrics; a minimal sketch of how such cluster-quality metrics can be computed is shown below.
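This sketch uses hypothetical data and is not code from P22: it scores a candidate assignment of domain entities to microservices with the silhouette coefficient (a cohesion/coupling proxy) and reports the two complexity indicators mentioned above.

from collections import Counter
import numpy as np
from sklearn.metrics import silhouette_score

# Hypothetical entity feature vectors (e.g., how often each domain entity is
# accessed by three transactional contexts) and a candidate assignment of
# entities to microservices.
entity_features = np.array([
    [5, 0, 1], [4, 1, 0],   # entities mostly used by context 1
    [0, 6, 1], [1, 5, 0],   # entities mostly used by context 2
    [0, 0, 7], [1, 0, 6],   # entities mostly used by context 3
])
assignment = np.array([0, 0, 1, 1, 2, 2])  # entity -> candidate microservice

# Silhouette rewards cohesive, well-separated clusters.
sil = silhouette_score(entity_features, assignment)

# Simple complexity indicators: singleton clusters and the largest cluster.
sizes = Counter(assignment.tolist())
singletons = sum(1 for n in sizes.values() if n == 1)
max_size = max(sizes.values())
print(f"silhouette={sil:.2f}, singletons={singletons}, max cluster size={max_size}")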
The authors proposed an approach to the migration of monolith applications to a microservices architecture that focused on the impact of the decomposition on the monolith's business logic <ns0:ref type='bibr' target='#b41'>(Nunes, Santos &amp; Rito Silva, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Discussion</ns0:head><ns0:p>The use of artificial intelligence techniques to determine the appropriate granularity or to identify the microservices that will be part of an application is a growing trend; this is especially true of machine learning clustering algorithms and genetic algorithms, with an emphasis on semantic similarity to group the microservices that refer to the same entity. Domain engineering and DDD are still among the most used techniques. Migration of software systems implies many architectural decisions that should be systematically evaluated to assess concrete trade-offs and risks <ns0:ref type='bibr' target='#b13'>(Cruz et al., 2019)</ns0:ref>. In these cases, the starting point is a monolithic system that must be decomposed into microservices, and that monolithic system has important data sources that allow for the identification and evaluation of the candidate microservices. These sources are mainly the source code, the use cases, the database, the logs, and the execution traces. It should be noted that the development of microservice-based applications is closely related to agile practices and DevOps, yet none of the input data considered in the proposed methods correspond to agile artifacts such as user stories, the product backlog, iteration planning, and others. Therefore, research work is needed at this point. The migration from monoliths to microservices is a topic of much interest and is widely studied. In contrast, the design and development of microservice-based applications from scratch has few related proposals. The proposed methods emphasize artifacts available at runtime, development, deployment, or production, which are hardly available when starting a project from scratch at design time. The development of microservice-based applications from scratch resembles component-based development <ns0:ref type='bibr' target='#b65'>(Vera-Rivera &amp; Rojas Morales, 2010)</ns0:ref>, in which microservices are reusable software components. In (Vera-Rivera, 2018) we characterized the process of developing applications based on microservices, identifying two fundamental parts: first, the development of each microservice, and then the development of applications based on those microservices. The definition of adequate granularity is fundamental to the development of microservice-based applications <ns0:ref type='bibr' target='#b63'>(Vera-Rivera, 2018)</ns0:ref>. The granularity of a monolith is not optimal, and defining one operation per microservice is not optimal either: an application that offers 100 operations should not simply be split into 100 microservices, because of the latency, performance, and management overhead of such a large distributed system. The optimal granularity lies somewhere between the monolithic application and the one-operation-per-microservice system; this granularity should be defined according to the characteristics of the application, the development team, the non-functional requirements, the available resources, and design, development, and operation trade-offs.
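To make this trade-off concrete, the sketch below is a deliberately simplified scoring function of our own; none of the reviewed papers defines granularity this way. It ranks candidate decompositions by weighted cohesion, coupling, and a penalty for deviating from a target service count; the weights, the target, and the candidate values are all assumptions.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    avg_coupling: float   # normalized to [0, 1]; lower is better
    avg_cohesion: float   # normalized to [0, 1]; higher is better
    n_services: int

def granularity_score(c: Candidate, target_services: int = 12,
                      w_cohesion: float = 0.4, w_coupling: float = 0.4,
                      w_size: float = 0.2) -> float:
    # Penalize decompositions far from an assumed "comfortable" service count.
    size_penalty = abs(c.n_services - target_services) / target_services
    return (w_cohesion * c.avg_cohesion
            - w_coupling * c.avg_coupling
            - w_size * size_penalty)

candidates = [
    Candidate("monolith", avg_coupling=0.05, avg_cohesion=0.30, n_services=1),
    Candidate("one operation per service", avg_coupling=0.80, avg_cohesion=0.95, n_services=100),
    Candidate("domain-driven split", avg_coupling=0.25, avg_cohesion=0.75, n_services=12),
]
best = max(candidates, key=granularity_score)
print("best candidate under this toy scoring:", best.name)

Under these assumed values the intermediate, domain-driven split scores best, which is exactly the "somewhere in between" argument made above; a real method would derive the inputs from measured metrics rather than assumed ones.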
The research gaps focus on proposing techniques or methods that allow for the evaluation of granularity and its impact on testing, considering security controls, fault tolerance mechanisms, and DevOps. By managing more microservices, or larger microservices, testing can become slower and more tedious. Moreover, the continuous integration and deployment pipelines become more complex. Determining the appropriate number of microservices and their impact on continuous deployment is an interesting research topic; few works address these issues. In addition, few papers use as input data or analysis units the artifacts used in agile development, such as user stories, the product backlog, release planning, or the Kanban board and its data, to propose agile methods or new practices that allow for the determination or evaluation of the microservices that will be part of the application. None of the proposed works focuses on agile software development. Several interesting works have been proposed, but there are still few specific, actionable proposals; more research is needed to propose design patterns, good practices, and more complete models, methods, or tools that can be generalized to define microservices granularity considering metrics, quality attributes, and trade-offs.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.'>Research trends:</ns0:head><ns0:p>We detail the research trends according to the analyzed papers; the trends are summarized as follows:</ns0:p><ns0:p>• The most used techniques to define microservice granularity included machine learning clustering, semantic similarity, genetic programming, and domain engineering. • The most used research strategies were validation research and solution proposals. • The most used validation method was the case study, although some studies used experimental evaluations. We summarized the case studies found in the reviewed papers because they are valuable resources with which to validate future research and to compare new methods (see table 4). The most common case studies were Kanban boards, Money transfer, JPetsStore, and Cargo Tracking, which are either hypothetical or open-source projects. • The use of metrics to evaluate the granularity of the microservices comprising an application was evident. Performance and coupling were the most used metrics; they help to identify microservices and their granularity more objectively. • Migrations from monoliths to microservices have been widely studied. Methods and techniques have been proposed to decompose applications into microservices, with the source code, logs, execution traces, and even use cases used as input data. These methods are used mainly at design and development time. • Scalability and performance were the most addressed quality attributes in the reviewed papers; they are fundamental for microservice-based applications. Finally, the main reason to migrate a monolithic application to microservices is precisely to improve performance and scalability, followed by fault tolerance, maintainability, and modularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>Research gaps:</ns0:head><ns0:p>Research gaps point to new research and future work; the gaps we identified are listed as follows:</ns0:p><ns0:p>• Research works that include techniques or methods to evaluate granularity and its impact on testing, while also considering security controls, fault-tolerance mechanisms, and DevOps. • Metrics were grouped into four categories: development team, development process, microservice-based application (system), and the microservice itself. Few metrics were found for the development team or the development process; more research is necessary in these groups.
• Few methods have been proposed to define the most adequate microservices granularity at testing or deployment time. • More research is required that uses agile development artifacts as inputs (i.e., user stories, product backlog, release planning, Kanban boards, and their data) to propose new agile practices to define or assess microservices' granularity. None of the proposals identified in this survey focused on agile software development.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>Threats to validity</ns0:head></ns0:div> <ns0:div><ns0:head n='6.1.'>External validity</ns0:head><ns0:p>A threat to external validity concerns the search and selection of primary papers, which may not be representative of the state of the art in the definition of microservices granularity. To reduce this risk, we used a systematic and well-defined process with two search strings, so that the papers obtained are representative. To define and select the papers included in our review, each author made their selection and tabulation independently; the final set was then agreed upon through group discussion, applying the inclusion and exclusion criteria. In addition, the systematic literature review process we carried out corresponds to the classic process widely used in other reviews, proposed by Kitchenham <ns0:ref type='bibr' target='#b29'>(Kitchenham, 2004)</ns0:ref>. Moreover, our study includes only research papers that have undergone a rigorous peer-review process, which is a well-established requirement for high-quality publications, so the selected papers can be considered representative of the state of the art in defining microservices granularity. For each paper obtained from the query strings, the reason why it was included in or excluded from the review was recorded. We did not include grey literature. By using a systematic method that is already established and widely used in other reviews, and by following the process rigorously, the replicability of our study is supported and this threat is reduced.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.2.'>Internal validity</ns0:head><ns0:p>In order to reduce researcher bias, a pre-defined protocol was followed (see figure <ns0:ref type='figure'>1</ns0:ref>). The classification criteria for the selected papers were carefully chosen; they had been defined in other literature reviews, which are discussed in the related work section. We downloaded the selected papers and shared them with all authors for review. The papers were summarized, and we detailed their contributions and performed the classification and analysis based on the full-text papers. For each paper we recorded a paper ID, the technique used, the input data, the full paper summary and synthesis, the description of the proposal, the journal or conference where it was published, and observations and comments. We tabulated the papers using the classification criteria explained in section 4, reviewed each selected paper based on the interpretation of the contribution raised in it, and then grouped the papers. To reduce selection bias, this process was reviewed by each author independently. The threats to data synthesis and results were mitigated by having a unified classification and description scheme and by following a standard protocol in which a systematic process was carried out and externally evaluated.
The data extraction process was aligned with our research questions, and we applied the guidelines of a classic systematic literature review, following a research protocol, thus making our research easy to check and replicate.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This systematic literature review identified the main contributions and research gaps regarding the dimensioning and definition of the granularity of the microservices comprising an application. Methods, methodologies, and techniques to determine the granularity of microservices were identified. Microservice granularity research is at a Wild West stage: no standard definition exists, development-operation trade-offs are unclear, there is little notion of continuous granularity improvement, and conceptual reuse is scarce (e.g., few methods seem applicable or replicable in projects other than the first to use them). These gaps in granularity research offer clear options for research on continuous improvement of the development and operation of microservice-based systems. We propose a microservice granularity definition, first by its size or dimensions, meaning the number of operations (services) exposed by the microservice along with the number of microservices that are part of the whole application, and second by its complexity and dependencies. The goal is to have low coupling, low complexity, and high cohesion of the microservices. Defining the optimal granularity for microservices can significantly improve performance, maintainability, scalability, network use and consumption, computational resources, and cost, because microservices are mainly deployed in the cloud. As future work we will propose 'Microservice Backlog', a model and set of techniques to define and evaluate microservice granularity at design time, using metrics to evaluate the granularity. We want to develop a genetic programming technique and a semantic grouping algorithm to group the user stories of the product backlog into candidate microservices, so that the architect or development team can evaluate the candidate decomposition of the application.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1.
Inclusion and exclusion criteria</ns0:head></ns0:div> <ns0:div><ns0:head>Inclusion criteria Description</ns0:head><ns0:p>Primary research papers that make a specific proposal about the size, granularity, or decomposition of applications to microservices.</ns0:p><ns0:p>This criterion focuses on identifying primary research papers that propose or define the size or granularity of microservices, also we include migrations from monolith to microservices that carry out a proposal to decompose the monolithic application to microservices.</ns0:p><ns0:p>Papers that propose a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>The objective of the review is to identify the models, methods, methodologies, or techniques used to define the microservice granularity.</ns0:p><ns0:p>Migrations that include a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>We include migrations from monolithic applications to microservices that reason about the definition of microservice granularity, those migrations that focus on other aspects are not included.</ns0:p><ns0:p>Papers published in journals and conference proceedings in the field of software architecture, software engineering and computer science.</ns0:p><ns0:p>We focus on research papers published in international journals and conferences only in software architecture, software engineering, and computer science. We include only peer-reviewed papers. We did not include gray literature.</ns0:p></ns0:div> <ns0:div><ns0:head>Exclusion criteria Description</ns0:head><ns0:p>Tutorial, example, experience, and opinion articles. We do not include tutorials, examples, experiences, and opinion articles, because they do not correspond to primary research papers, or they do not carry out a new contribution in the definition of microservices granularity.</ns0:p><ns0:p>Survey and literature review. We exclude survey papers, and literature reviews because they are secondary research papers that list the contributions of other authors.</ns0:p><ns0:p>Use of microservices in other areas. The use of microservices architecture in other areas is evident and fundamental, for this review they were excluded because they do not directly address the problem of defining the microservice granularity.</ns0:p><ns0:p>Papers that do not include a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>Articles related to the microservice architecture, which do not make a specific proposal on the definition of microservices granularity are excluded.</ns0:p><ns0:p>Papers which propose a specific method, technique or model for SOA, web services or mobile services.</ns0:p><ns0:p>The fundamentals of SOA, web services, and mobile services are different from the fundamentals of microservices architecture, so specific proposals in these topics are not included.</ns0:p><ns0:p>Literature only in the form of abstracts, blogs, or presentations.</ns0:p><ns0:p>We used full-text articles, excluding those that are only available in abstract, blog, or presentation form (not peerreviewed).</ns0:p><ns0:p>Articles not written in English or Spanish.</ns0:p><ns0:p>Only we include papers written in English or Spanish in other languages are excluded. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>&#61623;</ns0:head><ns0:label /><ns0:figDesc>Metrics used: Which metrics are used to define the granularity of microservices? 
• Development process phases: Phases of the development process on which the work focuses. • Research strategies: Includes solution proposal, validation research, experience paper, opinion paper, philosophical paper, and evaluation research.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>shared a dataset composed of 20 open-source projects using specific microservice architecture patterns; and (Marquez &amp; Astudillo, 2018) shared a dataset of open-source microservice-based projects when investigating the actual use of architectural patterns.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>3.</ns0:head><ns0:label /><ns0:figDesc>By implementation units. 4. By business capabilities. 5. By capabilities of the development team or teams. 6. Using domain-driven design. 7. Number of methods or exposed interfaces. Richardson (2020) proposed four decomposition patterns, which allow for the decomposition of an application into services: (1) decompose by business capability: define services corresponding to business capabilities; (2) decompose by subdomain: define services corresponding to DDD subdomains; (3) self-contained service: design services to handle synchronous requests without waiting for other services to respond; (4) service per team: each service is owned by a team, which has sole responsibility for making changes, and ideally each team has only one service (Richardson &amp; microservices.io). Zimmermann et al. (2019) proposed microservice API patterns (MAP) for API design and evolution. The patterns are divided into five categories: (1) foundation, (2) responsibility, (3) structure, (4) quality, and (5) evolution. These patterns are an important reference for developing</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
Function-Splitting Heuristics for Discovery of Microservices in Enterprise Systems (De Alwis et al., 2018). Extraction of Microservices from Monolithic Software Architectures (Mazlami, Cito &amp; Leitner, 2017). From Monolith to Conference paper Conference paper Conference paper Conference paper Conference paper Conference paper Journal paper Conference paper Conference paper Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P15</ns0:cell><ns0:cell>From Monolithic Systems to Microservices: A Decomposition Framework</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Based on Process Mining (Taibi &amp; Syst, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P16</ns0:cell><ns0:cell>Service Candidate Identification from Monolithic Systems Based on</ns0:cell><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Execution Traces (Jin et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P17</ns0:cell><ns0:cell>The ENTICE Approach to Decompose Monolithic Services into</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Microservices (Kecskemeti, Marosi &amp; Kertesz, 2016).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Towards a Methodology to Form Microservices from Monolithic Ones</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Kecskemeti, Kertesz &amp; Marosi, 2017).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P18</ns0:cell><ns0:cell>Refactoring Orchestrated Web Services into Microservices Using</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Decomposition Pattern (Tusjunt &amp; Vatanawood, 2018).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P19</ns0:cell><ns0:cell>A logical architecture design method for microservices architectures (Santos</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P20</ns0:cell><ns0:cell>A New Decomposition Method for Designing Microservices (Al-Debagy &amp;</ns0:cell><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Martinek, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P21</ns0:cell><ns0:cell>Business Object Centric Microservices Patterns (De Alwis et al., 2019).</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P22</ns0:cell><ns0:cell>From a Monolith to a Microservices Architecture: An Approach Based on</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Transactional Contexts (Nunes, Santos &amp; Rito Silva, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P23</ns0:cell><ns0:cell>Granularity Cost Analysis for Function Block as a Service (Homay et al.,</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P24</ns0:cell><ns0:cell>MicroValid: A Validation Framework for Automatically Decomposed</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Microservices (Cojocaru, Uta &amp; Oprescu, 
2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P25</ns0:cell><ns0:cell>Migration of Software Components to Microservices: Matching and</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Synthesis (Christoforou, Odysseos &amp; Andreou, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P26</ns0:cell><ns0:cell>Microservice Decomposition via Static and Dynamic Analysis of the</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Monolith (Krause et al., 2020).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P27</ns0:cell><ns0:cell>Towards Automated Microservices Extraction Using Multi-objective</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Evolutionary Search (Saidani et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P28</ns0:cell><ns0:cell>Extracting Microservices' Candidates from Monolithic Applications:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Interface Analysis and Evaluation Metrics Approach (Al-Debagy &amp; Martinek,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>2020).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:1:1:NEW 15 Jul 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='1'>IEEE Xplore: https://ieeexplore.ieee.org/ 2 ACM Digital Library: https://dl.acm.org/ 3 Scopus: https://www.scopus.com/search/form.uri?display=basic PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:1:1:NEW 15 Jul 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:1:1:NEW 15 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"8 July 2021 Dear Editors and Reviewers PeerJ Computer Science Thank you for the comments and for the opportunity to address them in the paper. We send you the answer to the reviewers' comments and my corrected manuscript. We think we have an improved version of the paper now. We have tried to address the comments from all reviewers as much as we could. We address the most important issues reported by the reviewers. I appreciate the opportunity to present my work at this prestigious journal, the suggestions and corrections are very important to improve our research, we hope that the paper meets the expectations and can be published. Please, let us know about your final decision and additional comments. Thanks. Best regard, Dr. Fredy Humberto Vera Rivera Docente Ingeniería de Sistemas e Informática Universidad Francisco de Paula Santander. Correo Electrónico: fredyhumbertovera@ufps.edu.co, freve9@hotmail.com, freve9@gmail.com Phone: +57 301-6079412 City: Cúcuta – Colombia 1. EDITOR COMMENTS • Improving the overall structure (mostly R1, but all reviewers suggest some structural improvements) • Integrating and condensing (shortening) the initial sections of the paper (R1 and R3) Answer: We improve the overall structure of the paper; we move the research questions (RQ) to the introduction (line 128-131). We join section 2 in the introduction and review the redaction. We separate the section 5 in two sections, first the results (new section 4) and second the discussions of the results (section 5). the quantitative data are presented and second the analysis and discussion of the results, we add a more descriptive description and relations among the selected papers; also, we highlight the trends and the gaps in the microservices granularity. We organize all paper, the research questions were reviewed and justified, and the aim of the revision is detailed in the introduction section. • Rewriting the threats to validity section (R3) Answer: We review and rewrite the threats to validity section; we detail the external and internal validity. • Improving the description of the research method (mostly R3, but all reviewers had some methodological concerns) Answer: We improve the description of the research method. We explain the review process of the selected papers (lines 342 - 347); this was carried out by each co-author independently, resolving doubts and contradictions together. Table 1 list the inclusion/exclusion criteria, we edit the table 1 including a description of each criterion. • Substantially improving the related work section (R3). Particularly note that for a literature review, it is not uncommon to have a very short, related work section, since the topically 'related' research is discussed extensively in the main part of the paper. Answer: We review and restate the previous work section, we include the related papers that conducted a SLR in other topics of microservices, as well as related SLR in microservices granularity. The review of related works was carried out in chronological order, from 2017 to 2020, for each work the objectives or topics addressed by the authors were identified and the most important conclusions were summarized. For space and in order not to increase the length of the article, we do not include more details. Additionally, we compare and highlight the differences of these studies with our review. 2. 
REVIEWER 1 Basic reporting • An important improvement potential is the amount of text and reading time that is needed until reaching the objectives and Research Questions of the study. It is not easy to make the links between the motivation points, challenges, research gaps, objectives of the study and research questions. Therefore, i suggest for the research questions to be moved closer to the text that motivates the study and states the challenges and research gaps addressed. Answer: We move the research questions (RQ) to the introduction (line 128-131). We join section 2 in the introduction and review the redaction. • Another aspect that this paper has large potential for improvement is structure. The division of sections can be organized differently to improve readability and understandability of the content. Specifically, section 2 and section 3 seem to have a very similar purpose of providing the required Background in the available literature and on drawing the theoretical landscape of the topic. Hence, maybe they could be either merged or explicitly describing their respective value to justify and motivate their separation. Answer: We change the structure of the paper, the introduction joined with section 2. First, the introduction establishes the context and scope of the article, which is part of the microservices architecture; the microservices definition and importance for software development is established; then the research gaps present in the microservices architecture are defined, then the problem of microservices granularity is explained and its theoretical definitions; next the objective of this research paper, the research questions, and the contributions are established; finally the structure of the paper is detailed. We separate previous work of the introduction because we found 11 literature review papers, which are focused in microservice architecture; to analyze in more detail and highlight the differences and proposals made by the authors, it was included in its own section. For each review paper we identify the aim, the topic and the more important conclusion. • Section 5 even though it is organized nicely based on the objectives of the study, the separation of descriptive results and reflections or discussion on concepts would make it easier to read through and understand. For example,in the 'Classification of the selected papers', information / data gathered are described about the findings from reviewing the literature in the first and last 2 paragraphs, split by a paragraph that discusses and reflects upon migrations, agile practices and DevOps. I suggest that each piece of result is either presented and then discussed or reflected upon consistently, or all results are stated and then all reflections and discussions are made separately. The in-between format that is currently present is not optimal. Answer: We separate the section 5 in two sections, first the results (new section 4) and second the discussions of the results (section 5). We identified and detailed the trends and research gaps in the definition of microservices granularity. • The past-sentence is used in the paper sometimes inconsistently which makes it hard to comprehend. The paper can be written entirely in present-sentence. Answer: We review the grammatic of all paper, and we write the paper using present-sentence. • In section 2 and 3 the literature is reviewed in a very linear fashion, with small summaries of each study next to each other, supported by quotes. 
This is good but can be improved my synthesizing further the concepts from the different studies and making a more detailed review of the contents. Answer: The review of related works was carried out in chronological order, from 2017 to 2020, for each work the objectives or topics addressed by the authors were identified and the most important conclusions were summarized. For space and in order not to increase the length of the article, we do not include more details. Additionally, we compare and highlight the differences of these studies with our review. • In-text reference formatting is often incorrect. For example, in line 115: instead of '(Hassan, Bahsoon & Kazman, 2020) stated that a granularity....' should be 'Hassan, Bahsoon & Kazman (2020) stated that a granularity....'. I suggest you review the referencing formatting more thoroughly in the entire text. Answer: We review the citations and change it according to the instructions for authors in the entire text. • Titles in section 5 are very long and they could be shortened. Answer: We shortened the titles of section 5. 5.1. Classification of the selected papers. 5.2. Approaches to define microservices granularity. 5.3. Metrics to evaluate microservices granularity and 5.4. Quality attributes to define microservices granularity. Experimental design • The topics that are researched, could be specified and introduced earlier in the paper. So, it should be clear what the objectives are, what are the research questions and then describe how this study is designed to address these. The objectives and research questions should be the starting point of justifying the methodological choices made. In addition, this would make clear why is important to answer the research questions asked in this study and thus, to motivate further the study choices. Answer: We move the research questions (RQ) to the introduction (line 128-131). We join section 2 in the introduction and review the redaction, also we clarify the need to carry out this literature review. • Also, the queries that guide data extraction are nicely put and clear. However, a description presenting how they came into surface would provide clarity on why they are the right search terms. Some of them seem to be different and it is not clear what is the impact of that. For example, the term 'decomposition' seems to me not equivalent to the rest of the search terms used and hence, how this affects the queries and their accuracy? Answer: Hassan, Bahsoon and Kasman (2020) stated that a granularity level determines “the service size and the scope of functionality a service exposes (Kulkarni & Dwivedi, 2008)”. Granularity adaptation entails merging or decomposing microservices thereby moving to a finer or more coarse-grained granularity level (Hassan, Bahsoon & Kazman, 2020). So, we include the justification for each search string and its importance in the literature review, as well as how it relates to the research questions in line 302 and line 309. Query string 1 (QS1): ('micro service' OR microservice) AND (granularity OR size OR dimensioning OR decomposition). According to Hassan, Bahsoon and Kasman (2020), it is established that the granularity of microservices is related to the size and dimension of the microservice, so these terms are included in the search string. Additionally, the decomposition of monolithic applications to microservices is an important research topic, so we include this word. 
Query string 2 (QS2): ('micro service' OR microservice) AND 'granularity' AND ('method' OR 'technique' OR 'methodology'), targeting only research papers. The main objective of the work was to identify the methods, techniques or methodologies used to determine the microservices granularity, so we include these words. QS1 and QS2 addresses all research question; for each of the proposals selected in each QS; the metrics used and the quality attributes addressed were identified. • Furthermore, the selection criteria are clear but their reasoning is not explicitly justified. Some elaboration to describe why are those criteria selected, what other criteria were considered, why other criteria were not selected and what are other potentially valuable criteria would make the authors' rationale more clear. Answer: The table 1 was edited, we included a column where the description of the inclusion / exclusion criteria was detailed. • The selection of the papers is stated to be for studies that implicitly or explicitly discuss about granularity levels. This should be further discussed, both methodologically, but also in threats to validity. Answer: The threats to validity section were improved and rewritten, we included a discussion of the article selection process and how the bias was reduced. • Finally, more details about the analysis conducted on the content of the chosen papers could be described further, to make clear how the knowledge was utilized. Answer: We explain the review process of the selected papers (lines 342 - 347); this was carried out by each co-author independently, resolving doubts and contradictions together. • On the content of the research questions, I suggest to further motivate why we need to answer these questions, where are they going to specifically contribute and most importantly if this is the kind of study we currently need in the field. Having that said, one can argue that most of the studies included in the review do not explicitly investigate granularity (granularity is not their central point that they study). They investigate other matters of microservices, like migrations or decompositions and discussion eventually brings some points about granularity. Hence i am reluctant to fully accept that a review on granularity specifically is what we need. Answer: This study is important in the field because the issue of granularity definition is of great interest in both academia and industry, and several researchers have raised this problem, as stated in the introduction (lines 103 - 120). The definition of the correct size and the functionalities that each microservice must contain, affects the quality attributes of the system, and affects testing, deployment, and maintenance; therefore, identify the metrics that are being used to evaluate the granularity is very important, describe them and use them as references to propose other methods, techniques or methodologies that allow defining the granularity of the microservices in a more objective way. Identifying the quality attributes studied and how they were evaluated is an important reference for future work to define the granularity of the microservices that are part of an application. We include this paragraph after the research questions (lines 294 - 300). We disagree with the reviewer in the point of the studies included in the review do not explicitly investigate granularity (granularity is not their central point that they study). 
Our inclusion and exclusion criteria aim to select the works that address the definition of the granularity of microservices, these criteria were better explained in table 1, we include the migrations from monolith to microservices that use some software artifact to determine the microservices that are going to be part of the application, in which directly or indirectly the definition of the granularity of the microservices is addressed, granularity in the sense of determining the number of microservices that are part of the application, their size (functionalities or services that they handle) and the scope of the functionalities or services. • I cannot see that enough papers specifically investigate granularity in depth and therefore, it is not clear to me how we can extract valid definitions and metrics about granularity of microservices from the studies included. Most of the studies discuss about granularity and state it as a challenge but not many investigate in depth how granularity is decided. Hence, the gap identified in this paper is valid but the reasons about why this type of research is accurately covering this gap is not clear. Answer: We disagree with the reviewer; we perform a systematic review using the query strings and the inclusion and exclusion criteria for selecting the surveyed papers. Most of the selected articles correspond to manual, automatic, and semi-automatic methods used to identify the microservices that are part of the application, their size and scope, using machine learning techniques, genetic algorithms, semantic similarity, and domain driven design. • Another very important issue with the presented study is on the analysis of the observations from the reviewed literature. There is room for further analysis to present deeper insights into the concepts. While the results are doing a good job in describing the definitions, metrics and quality attributes for granularity, it would be very insightful to discuss more relations and also differences between different studies. For example, the differences between different categories of papers on the metrics that they use to evaluate granularity. Furthermore, even though the metrics are organized in metric types, they could be analyzed more consistently and systematically, to make clear how impactful they seem to be (e.g. some metrics can be seen in many papers or some metric types appear more frequently). • Deeper analysis of the results can be proven to be very insightful and also help making the current results less overwhelming and more organized. This in combination with the need to better motivate the research questions and adjust the research questions to be more accurate in addressing objectives and research gaps, leads me to my final verdict below. Answer: The results section is divided in two, first the quantitative data are presented and second the analysis and discussion of the results, we add a more descriptive description and relations among the selected papers; also, we highlight the trends and the gaps in the microservices granularity. Table 6 details the metrics by paper, which are divided in four groups (Metric relative to 1) Dev. Team, 2) Dev. Process, 3) System, or 4) microservice) and metric type (Coupling, cohesion, granularity, team, computation resource, source code, performance, and other quality attribute), in the discussion section we include a deeper analysis of table 6. 
We organize all paper, the research questions were reviewed and justified, and the aim of the revision is detailed in the introduction section. 3. REVIEWER 2 Abstract. • Some results are not related to the research questions of the survey. For example: “The most used approaches were machine learning, semantic similarity, genetic programming, and domain engineering. Most papers were concerned with migration from monoliths to microservices; and a few addressed greenfield development, but none address improvement of granularity in existing microservice-based systems.” Also, the quantitative results (as applicable) should be reported (e.g., number of metrics found). Some conclusions are also not related to results and research questions. Answer: The results section is divided in two, first the quantitative data are presented and second the analysis and discussion of the results, we review that the conclusions and discussions are according to the obtained data. In the results section, the data were better detailed, being specific in the findings that allowed us to obtain the conclusions and respond to the research questions. Introduction • It is not clear why the “granularity” important topic and why research questions are important and relevant. Also, the summary of the findings should be included. Some statements are wrong/unjustifiable. SOA was always a style for loosely-coupled distributed systems “SOA and web services are more related to monolithic applications that ….” Answer: We move the research questions (RQ) to the introduction (line 128-131). We join section 2 in the introduction and review the redaction, also we clarify the need to carry out this literature review. • Previous work => Related work. Summary of some related works needs to be simplified (without simply coping the research questions used by those studies). “(Ghofrani & Lübke, 2018) focused on solving the following questions …….” Answer: The review of related works was carried out in chronological order, from 2017 to 2020, for each work the objectives or topics addressed by the authors were identified and the most important conclusions were summarized. For space and in order not to increase the length of the article, we do not include more details. Additionally, we compare and highlight the differences of these studies with our review. • The relevance/importance/relationship of “”granularity”, “metrics”, and “quality attributes” need to be highlighted. This would lead to research gaps and questions. Answer: The definition of the correct size and the functionalities that each microservice must contain, affects the quality attributes of the system, and affects testing, deployment, and maintenance; therefore, identify the metrics that are being used to evaluate the granularity is very important, describe them and use them as references to propose other methods, techniques or methodologies that allow defining the granularity of the microservices in a more objective way. Identifying the quality attributes studied and how they were evaluated is an important reference for future work to define the granularity of the microservices that are part of an application. We include this paragraph after the research questions (lines 294 - 300). Experimental design • Why does RQ1 include “microservice granularity and determine the microservices’ size?” Is “size” not part of “granularity”? • Not sure how the keyword “dimensioning” is relevant. 
Answer: Hassan, Bahsoon and Kasman (2020) stated that a granularity level determines “the service size and the scope of functionality a service exposes (Kulkarni & Dwivedi, 2008)”. Granularity adaptation entails merging or decomposing microservices thereby moving to a finer or more coarse-grained granularity level (Hassan, Bahsoon & Kazman, 2020). The word size is part of the granularity definition, so we delete this word of the research question. The word dimension means “a measurement of something in a particular direction, especially its height, length, or width”, it is related to scope in the Hassan et al. definition, so this word is very important in the definition of the granularity. • The inclusion/exclusion criteria should be listed clearly. Answer: Table 1 list the inclusion/exclusion criteria, we edit the table 1 including a description of each criterion. Results and discussions • There are some mismatches between RQ1 and the results in Section 5.2. As per RQ1, 5.2 should report “approaches to defining microservice granularity and size”. Answer: Yes, the title was wrongly written, at the suggestion of another reviewer it was modified to make it shorter and more concrete. We shortened the titles of section 5 as follow: 5.1. Classification of the selected papers. 5.2. Approaches to define microservices granularity. 5.3. Metrics to evaluate microservices granularity and 5.4. Quality attributes to define microservices granularity. • As per RQ3, the reader will expect to see the relationship between the microservice granularity and quality attributes in Section 5.4. But, this perspective is not clear. In generals, the results should be related to RQs. Answer: Table 7 details the papers and the quality attributes addressed, which were divided into two groups: 1) Runtime characteristics, and 2) Software artifact characteristics. Also Table 8 details the paper, the metrics, and the quality attributes, therefore in the discussion section we included a deeper analysis of these tables. • There is no section on discussion of the results and their implications. A discussion section should be included. Answer: The results section is divided in two, first the quantitative data are presented and second the analysis and discussion of the results, we add a more descriptive description and relations among the selected papers; also, we highlight the trends and the gaps in the microservices granularity. • Threats to validity can be more elaborated and made comprehensive. The authors are advised to study Threats to validity reported in the good existing survey papers. Answer: The threats to validity section were improved and rewritten, we included a discussion of the article selection process and how the bias was reduced. Validity of the findings • A comprehensive catalog of microservice metrics are important. However, the application of these metrics should be highlighted. Answer: Table 6 details the metrics by paper, which are divided in four groups (Metric relative to 1) Dev. Team, 2) Dev. Process, 3) System, or 4) microservice) and metric type (Coupling, cohesion, granularity, team, computation resource, source code, performance, and other quality attribute), in the discussion section we include a deeper analysis of table 6. Table 8 details the paper, the metrics, and the quality attributes, also in the discussion section we included a deeper analysis of this table. 4. REVIEWER 3 Basic reporting • The references are good. 
However, in the text, when citing the authors of a paper, you have to use direct citation. For example: “(Zimmermann, 2017) extracted the principles...” (line 155) should be “Zimmermann (2017) extracted the principles…”. Another example: “(Hassan, Bahsoon & Kazman, 2020) stated that a granularity…” turns into “Hassanm, Bashsoon and Kazman (2020) states that a granularity…”. This should be fixed throughout the whole text! Answer: We corrected the citations as the reviewer requested. Experimental design • Introduction and Microservices granularity: I think those two sections should be merged in only one, the Introduction. The whole text of Section 2 could be placed at line 80. This will make clear the definitions of (microservice) granularity just after the authors affirm that that is still a challenge (lines 75 and 76). In addition, it will be easier to see the arguments for why addressing such research topic, i.e., the work’s motivation (currently lines 125 – 142). With this, the content of lines 81-86 should be adjusted to reference the body of text that will be introduced just before it. I also suggest stating the research questions after mentioning them at the Introductions (line 91). Answer: We include the section 2 at line 80 and merge the section 2 in the introduction, also we include the research questions in the introduction (line 129-131). We review and adjust the introduction. • Previous work: I didn’t understand what the criteria were to include the papers in that section. As you are presenting an SLR, I was expecting that related work were papers that conducted an SLR on other topics of microservices, showing that none of them has approached the subject of granularity, or even discussing grey literature about microservice granularity. If the authors want to bring a broader discussion on different topics (application modelling and architecture; design and development patterns; industrial adoption; state of practice; grey literature review; and analysis and interviews with industry leaders, software architects, and application developers of microservices-based applications – lines 146 to 149), then more studies on those topics should be described in this section. For instance, I missed work about migration to microservices, since this is one of the main situations in which a more precise/appropriate microservice granularity impacts the system evolution process. Answer: We review and restate the previous work section, we include the related papers that conducted a SLR in other topics of microservices, as well as related SLR in microservices granularity. The review of related works was carried out in chronological order, from 2017 to 2020, for each work the objectives or topics addressed by the authors were identified and the most important conclusions were summarized. For space and in order not to increase the length of the article, we do not include more details. Additionally, we compare and highlight the differences of these studies with our review. • Survey Methodology: Since the paper describes an SLR, I think the methodology could be more detailed. The authors followed a very classic schema proposed by Kitchenhan (2004). However, I also suggest the authors reading the following papers: Kai Petersen, Sairam Vakkalanka, and Ludwik Kuzniarz. 2015. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology 64 (2015). http://www.sciencedirect.com/science/article/pii/S0950584915000646. 
He Zhang, Muhammad Ali Babar, and Paolo Tell. 2011. Identifying relevant studies in software engineering. Information and Software Technology 53, 6 (2011). http://www.sciencedirect.com/science/article/pii/S0950584910002260 I think this section could be divided into (i) Planning the Mapping, where the authors could describe the Research Questions, Search Strategy, Study Selection and Data extraction; and (ii) Conducting the mapping, in which the process defined in the planning phase is executed and explained step-by-step. Answer: Kitchenham and Charters define systematic literature reviews as “a form of secondary study that uses a well-defined methodology to identify, analyze and interpret all available evidence related to a specific research question in a way that is unbiased and (to a degree) repeatable. We used the proposed methodology, which is not a mapping study. Wohling et al. stated that “A mapping study (also referred to as scoping study) have broader research questions, aiming to identify the state of practice or research on a topic and typically identify research trends. Due to their broader scope, the search and classification procedures are less stringent, and have more qualitative characteristics”. We used a Systematic Literature Review follow the Kitchenham and Charters methodology. Also, we used the book (Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A. 2012. Experimentation in software engineering. DOI: 10.1007/978-3-642-29044-2) to define our SLR methodology. • Why did not the authors search other bases, such as SpringerLink and ScienceDirect? Answer: Scopus indexes nearly the entire ScienceDirect database, but without the articles’ full text. Scopus builds the profiles and metrics using that data. (https://service.elsevier.com/app/answers/detail/a_id/28240/supporthub/agrm/p/15838/). Some papers that we found in our search string were consulted in ScienceDirect database for full text review; Also, Scopus indexed the SpringerLink publications. ACM and IEEE are the more important database of scientific publications in computer science, so we included these databases, some works published in ACM and IEEE are not in Scopus. • Was the snowball technique used? If not, why? Answer: No, the snowballing technique was not used, because our SLR methodology follow the guideline for systematic reviews proposed by Kitchenham B. in your book (Procedures for performing systematic reviews. DOI: 10.1.1.122.3308, 2004.), this book did not propose the snowballing technique. • I would expect more detailed exclusion criteria, such as: (i) studies published only as a (short) abstract; (ii) studies not written in English; (iii) studies to which it was not possible to have access; (iv) duplicate studies; Answer: The table 1 was edited, we included a column where the description of the inclusion / exclusion criteria was detailed. • How did the authors resolve possible conflicts during the paper reading step? Answer: We review each selected paper based on the interpretation of the contribution raised in each one, then we grouped the papers. To reduce the selection bias this process was reviewed for each author independently, the possible conflicts were resolved by group discussions among the co-authors after the classification, and if no common ground was found, the conflict was resolved by voting. We restated the section Threats to validity where we better explain how the bias and validity of the review was addressed. 
Results • Lines 365-366: “…and the remaining almost-half (14 of 29) used real-life case studies, thus achieving better validation”. What do you mean by better validation? Is that your feeling or something proved? Answer: The writing was improved by highlighting that real case studies imply a better validation. So, real-life case studies achieve better validation than hypothetical case studies. • About the size, dimension or granularity of microservices, the authors provide 6 possible metrics (lines 387-392). However, I missed metrics regarding the number of methods or exposed interfaces. Wouldn’t that be included? Answer: I agree with the reviewer, so we include the “number of methods or exposed interfaces” as a form of defining the microservices granularity. • Lines 414-415: “Most papers proposed manual procedures to identify the microservices (15 papers);”. Shouldn’t it be “identify microservice size”? Answer: Yes, we include “Most papers proposed manual procedures to identify the microservices granularity”. • Lines 461-462: “Several interesting works have been proposed, but there are still few specific, actionable proposals”. What do you mean by “still few specific, actionable proposals”? Answer: We refer to 'actionable proposals' to the proposals than can be applicable or replicable in projects other than their initial proposal. So, we change the sentence with a better redaction. • I liked the discussion of RQ1. However, how is it related to the analysed studies (P1 to P29)? For instance, how or which of those work address microservice decomposition, patterns, migration, and microservice-based application developed from scratch? Even though Table 5 summarizes this data, it has to be explained in the text. The authors did that when answering RQ3. Answer: We explain better the table 5 in the text, we did not found papers that address patterns, middleware or architectural language, the data is presented as follow: Contribution Type Problem formulation 2 7 Reference architecture 2 7 Method 17 59 Evaluation or comparison 1 3 Methodology 7 24 Middleware 0 0 Architectural language 0 0 Design Patter 0 0 We include a paragraph to explain that design pattern, middleware, and architectural language were not addressed. Additionally, we include in table 5 another column which detail if the paper addresses a migration or a microservice-based application developed from scratch. • Regarding answering RQ2, I would like to know which metrics are new and which ones are reused or adapted from general-purpose or other architectures, like SOA. Furthermore, it would also be very interesting if the authors could explain in which situation (e.g. migration, decomposition, development from scratch…) each metric has been used. Answer: We include a paragraph to explain in which situation (e.g. migration, decomposition, development from scratch…) each metric has been used, from table 5 and table 6 we obtain this information. To know which metrics are new and which ones are reused or adapted from general-purpose or other architectures, like SOA, it is very difficult to identify from the obtained data and it is out of scope of this SLR. • Some metrics are very general, such as logical coupling and semantical coupling. I wonder how those metrics have been applied to the microservice world, as the authors did in the dependency weight metrics. For instance, I am very curious to see how the metrics Function Point (line 689) has been used to estimate microservice granularity. 
Answer: Some authors do not show in detail the use of metrics in their work, they give a superficial definition or include a reference where the details of the metric are explained. For this reason, it was not possible to delve into the use of each metric, the scope of this work was to identify the metrics, catalog them and describe their use according to what was published by the authors in the selected works. • Lines 478-481: The authors said that classified the metrics into four groups, but it seems that there have been found metrics for only two groups. In that case, wouldn’t it be better to propose only two groups rather than four? Answer: We found few metrics related to development team (P13) and development process (P3) with only one paper, respectively. We kept those groups because a research gap can be highlighted in the proposed metrics related to the development team and the development process. Table 6 shows the papers and metrics by metric type. • Lines 484 – 485: The authors said that there have been found 14 coupling metrics and 13 cohesion metrics, but the text present 17 and eight metrics of each type, respectively. Please review the numbers for all types of metrics. Answer: We review the number of metrics, we included other metrics which were proposed by other authors; the paragraph that begins on line 588, it is explained that other authors were considered in addition to those that resulted in the literature review process, because we see that they make an interesting proposal of other metrics. Considering (Bogner, Wagner & Zimmermann, 2017b), (Candela et al., 2016), and related papers, we detail the following metrics, which can be used or adapted to define the right granularity of the microservices and to evaluate decompositions. • Please remove the period at the end of the names of subsections 5.2.1 to 5.2.8 Answer: We remove the period at the end of the names of subsections. • Line 513: “I change a software component, I must…”. Please use the passive voice. Answer: We correct the sentence and use de passive voice. • Why is “Contributor count and contributor coupling” (line 539) a coupling metrics rather than a team metrics? Answer: The authors of the papers defined these metrics as coupling metrics. Also, the contributor count may be a team metric. We include this metrics in the coupling metrics because the calculation of the metric is related to the coupling. • Lines 553-557: Repeated Metrics. Answer: We delete the repeated metrics. • Do the metrics Number of singleton clusters (line 702) and maximum cluster size (line 707) refer to containers rather than clusters when considering microservices? Answer: No, these metrics do not refer to containers, the metrics are used in a cluster algorithm for determining de microservice granularity. • Lines 711-713: Shouldn’t that paragraph be in Section 5.2.2? Answer: Yes, we move that paragraph to the section 5.2.2. • Can the authors explain the difference between the metrics maximum request time (line 732) and maximum response time (line 735)? Answer: The request (out) time corresponds to the response time of a request or call made by one microservice to another. The response (in) time corresponds to the time that the microservice takes to respond to a call. We better explain the definition of these metrics. • Can the authors explain how the metrics cost of quality assurance (line 749) is related to microservice granularity? 
Answer: A microservice with a large granularity implies a higher test cost, even more so when these are not automated, than a smaller microservice, which can be tested faster. Although also by having more microservices to test, we would have more distributed applications to test and maintain. Without having the testing process automated, quality assurance will be more difficult and expensive. The definition of microservice granularity affects the quality assurance process. We include these paragraphs in the discussion section. • Lines 762-763: “(Ahmadvand & Ibrahim, 2016) stated that the level of scalability required for each functional requirement or service.” -> Missing something in the sentence to make sense. Answer: We clarify the sentence with a better wording, as follow: Scalability impact. (Ahmadvand & Ibrahim, 2016) define the Scalability impact as the required level of scalability (high, medium, low) to implement a functional requirement or service. • Wouldn’t “Internal co-change frequency” (line 830) and “External co-change frequency” (line 835) be coupling metrics? Answer: Yes, they could be coupling metrics, but the authors of the reviewed articles used it specifically in the source code. • Lines 860-861: “Being more granular implies that microservice has no dependencies and can function only”. I disagree with that. One microservice can be very big in terms of LOC, providing many services related to the domain object, but has no dependencies. Therefore, it would not be granular. Answer: In the example given by the reviewer, a large microservice is granular when it has no dependencies and can function independently. But from the point of view of the number of operations exposed by having more services, its granularity is greater because there is a larger service interface with more operations to maintain. We review the redaction of the paragraph. • Where is the weight in the metrics “Weighted service interface count” (line 864)? What is the difference from these metrics to “operation number”, since both of them count the number of exposed public methods? Answer: We modify the metric definition as follow: Weighted service interface count (WSIC): WSIC(S) is the number of exposed interface operations of service S. The default weight is set to 1. Alternate weighting methods, which need to be validated empirically, can take into considera- tion the number and the complexity of data types of parameters in each interface. (Hirzalla, Cleland-Huang & Arsanjani, 2009). WSIC(S) is the number ofexposed interface operations of service S. Operations can be weighted based on the number of parameters or their granularity (e.g. a complex nested object) with the default weight being set to 1 (Bogner, Wagner & Zimmermann, 2017a). • Wouldn’t “Number of microservices” (line 875) be a system metrics rather than a microservice metrics? Answer: Yes, the number of microservices is a system or application-level metric. See Table 6. • What is the difference between the metrics “Code size in lines of code” (line 815), classified as a source code metrics, and “Lines of code” (line 878), classified as granularity metrics? Answer: Code size in lines of code is the same metric that lines of code, but we classify it in both groups: source code and granularity metrics. • What is a nanoentity (lines 881 and 882)? Answer: We explain in the text the meaning of nanoentity which is an attribute of a class. Also, Entity is a class. 
• Line 889: “Some have already been defined, mainly for migrations from monoliths to microservices”. Please add references to that sentence. Answer: It is a conclusion of the obtained data, see figure 2, table 5, and table 6. • Line 1045: What is a DFD PS? Answer: It is the process-datastore version of DFD ( DFD PS ) representing the business logics. We included this definition in the paper. Validity of the findings • Threats to Validity: This section has to be completely rewritten, since the text there does not present the threats to the validity of the work and how they have been minimized. Usually threats to validity concern external, internal, construction, and conclusion validity. To a more detailed description about that, please refer to the book: Experimentation in software engineering. Springer Science & Business Media. Claes Wohlin, Per Runeson, Martin Höst, Magnus C Ohlsson, Björn Regnell, and Anders Wesslén. Answer: The threats to validity section were improved and rewritten, we included a discussion of the article selection process and how the bias was reduced. • Conclusion: I think that section looks like more a summary of the whole paper rather than a conclusion section. Maybe the authors could create a subsection “Summary”, at the end of the Section “Results and Discussion” and move most of the text from this section to that new one. This way, the readers can find more easily what the conclusions of this work are. Answer: We improve and rewrite the conclusions section. • Lines 1114 – 1116: “Finally, the main reason to migrate a monolithic application to microservices is precisely to improve performance and scalability, followed by fault tolerance, maintainability, and modularity”. Where did you get that information from? Could please cite some references? Answer: This information come from the obtained data, see figure 2 and RQ3. It is a conclusion of our review. • Finally, although the authors mentioned research gaps, I would like to see possible future work of the presented paper. Answer: We include the future work of the presented papers, future directions of the microservices granularity research were detailed. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Microservices are an architectural approach of growing use, and the optimal granularity of a microservice directly affects the application's quality attributes and usage of computational resources. Determine microservice granularity is an open research topic.</ns0:p><ns0:p>Methodology. We conducted a systematic literature review to analyze literature that address the definition of microservice granularity. We searched in IEEE Xplore, ACM Digital Library and Scopus. The research questions were: Which approaches have been proposed to define microservice granularity and determine the microservices' size? Which metrics are used to evaluate microservice granularity? Which quality attributes are addressed when researching microservice granularity?</ns0:p><ns0:p>Results. We found 326 papers and selected 29 articles after applying inclusion and exclusion criteria. The quality attributes most often addressed are runtime properties (e.g. scalability and performance), not development properties (e.g. maintainability). Most proposed metrics were about the product, both static (coupling, cohesion, complexity, source code) and runtime (performance, and usage of computational resources), and a few were about the development team and process. The most used techniques for defining microservices granularity were machine learning (clustering), semantic similarity, genetic programming, and domain engineering. Most papers were concerned with migration from monoliths to microservices; and a few addressed green-field development, but none address improvement of granularity in existing microservice-based systems.</ns0:p><ns0:p>Conclusions. Methodologically speaking, microservice granularity research is at a Wild West stage: no standard definition, no clear development -operation trade-offs, and scarce conceptual reuse (e.g., few methods seem applicable or replicable in projects other than their initial proposal). These gaps in granularity research offer clear options to investigate on continuous improvement of the development and operation of microservice-based systems.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>meaning the number of operations exposed by the microservice, along with the number of microservices that are part of the whole application, and second by its complexity and dependencies. The goal is to have low coupling, low complexity, and high cohesion between microservices. <ns0:ref type='bibr'>Hassan, Bahsoon, and Kasman (2020)</ns0:ref> stated that a granularity level determines 'the service size and the scope of functionality a service exposes <ns0:ref type='bibr' target='#b29'>(Kulkarni &amp; Dwivedi, 2008)</ns0:ref>'. Granularity adaptation entails merging or decomposing microservices thereby moving to a finer or more coarse-grained granularity level <ns0:ref type='bibr' target='#b21'>(Hassan, Bahsoon, &amp; Kazman, 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b24'>Homay et al. (2020)</ns0:ref> stated that 'the problem in finding service granularity is to identify a correct boundary (size) for each service in the system. In other words, each service in the system needs to have a concrete purpose, as decoupled as possible, and add value to the system. A service has a correct or good granularity if it maximizes system modularity while minimizing the complexity. 
Modularity in the sense of flexibility, scalability, maintainability, and traceability, whereas complexity in terms of dependency, communication, and data processing' <ns0:ref type='bibr' target='#b24'>(Homay et al., 2020)</ns0:ref>. The quality of a microservices-based system is influenced by the granularity of its microservices, because their size and number directly affect the system's quality attributes. The optimal size or granularity of a microservice directly affects application performance, maintainability, storage (transactions and distributed queries), and usage and consumption of computational resources (mainly in the cloud, the usual platform to deploy and execute microservices). Although the size of microservice or optimal granularity is a discussion topic, few patterns, methods, or models exist to determine how small a microservice should be, as others have already pointed out: <ns0:ref type='bibr' target='#b59'>Soldani et al. (2018)</ns0:ref> noticed the difficulty of identifying the business capacities and delimited contexts that can be assigned to each microservice <ns0:ref type='bibr' target='#b59'>(Soldani, Tamburri &amp; Heuvel, 2018)</ns0:ref>. <ns0:ref type='bibr'>Bogner et al. (2017)</ns0:ref> claimed that 'the appropriate microservice granularity is still one of the most discussed properties (How small is small enough?), as shown in the difficulty of defining acceptable value ranges for source code metrics' <ns0:ref type='bibr' target='#b8'>(Bogner, Wagner &amp; Zimmermann, 2017a)</ns0:ref>. Zimmermann (2017) indicated that professionals request more concrete guidance than the frequent advice to 'define a limited context for each domain concept that will be exposed as a service' <ns0:ref type='bibr' target='#b43'>(Zimmermann, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b26'>Jamshidi et al. (2018)</ns0:ref> affirmed that the real challenge is finding the right modules, with the correct size, the correct assignment of responsibilities, and welldesigned interfaces, and besides, no agreement on the correct size of microservices exist <ns0:ref type='bibr' target='#b26'>(Jamshidi et al., 2018)</ns0:ref>.The aim of this article is to identify the main approaches in the literature that define microservice granularity or that use it in the process of designing microservice-based systems, either from scratch or migrated from monoliths. A systematic literature review was carried out on key scientific computing literature databases (IEEE Xplore, ACM Digital Library, and Scopus); we formulated three research questions, we defined inclusion and exclusion criteria, and current research trends to identify relevant works, challenges and gaps for future research were identified. Research papers that address the problem of microservice granularity are detailed; the research questions are (RQ1) which approaches have been proposed to define microservice granularity? (RQ2) which metrics are used to evaluate microservice granularity? and (RQ3) which quality attributes are addressed when researching microservice granularity?</ns0:p><ns0:p>Very few previous works have reviewed the definition of microservices granularity; we did not find any review that details the techniques, methods or methodologies used to define granularity, none describe the metrics used to evaluate it, and few addresses the quality attributes considered to define it. 
Contributions of this work are as follow: (1) we identified and classified research papers that address the problem of microservice granularity, therefore we defined the state of the art; (2) we identified and defined the metrics currently used to assess the granularity of microservices-based systems;</ns0:p><ns0:p>(3) we identified the quality attributes that researchers studied to define microservice granularity; (4) we identified the case studies used to validate the methods, which can serve as a dataset for future evaluations of methods or techniques to define granularity. The remainder of this article is organized as follows: Section 2 defines previous related works; Section 3 presents the survey design; Section 4 organizes the results; Section 5 discusses the trends and research gaps; and Section 6 summarizes and concludes.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related work</ns0:head><ns0:p>A small number of literature reviews have been published on microservice architecture research, some of these papers include analysis of application modeling and architecture; design and development patterns; industrial adoption; state of practice; grey literature review; and analysis and interviews with industry leaders, software architects, and application developers of microservices-based applications; whereas two papers focused on microservice granularity specifically <ns0:ref type='bibr' target='#b21'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b56'>(Schmidt &amp; Thiry, 2020)</ns0:ref>. The literature reviews are described in chronological order below. Di <ns0:ref type='bibr' target='#b14'>Francesco (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>Di Francesco, Lago and Malavolta (2019)</ns0:ref> focused on determining the publication trends of research studies on architecture with microservices and on the potential for industrial adoption of existing research. They point out that few studies have focused on design patterns and architectural languages for microservices, and research gaps exist in areas related to quality attributes <ns0:ref type='bibr' target='#b14'>(Di Francesco, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b15'>(Di Francesco, Lago &amp; Malavolta, 2019)</ns0:ref>. Zimmermann (2017) extracted the principles of microservices from the literature and makes a comparison with SOA and highlights the critical points in the research on microservices, as a result of the review and discussions with industry opinion leaders, developers and members of the service-oriented community. He raises five research issues: (1) service interface design (contracting and versioning), (2) microservice assembly and hosting, (3) microservice integration and discovery, (4) service dependency management, and (5) service and end client application testing <ns0:ref type='bibr' target='#b43'>(Zimmermann, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b26'>Jamshidi et al. (2018)</ns0:ref> presented a technological and architectural perspective on the evolution of microservices. Their editorial introduction also set out future research challenges: (1) service modularization and refactoring; (2) service granularity; (3) front-end integration; (4) resource monitoring and management; (5) failure, recovery and self-repair; and (6) organizational culture and coordination <ns0:ref type='bibr' target='#b26'>(Jamshidi et al., 2018)</ns0:ref>. 
<ns0:ref type='bibr' target='#b59'>Soldani, Tamburri, and Heuvel (2018)</ns0:ref> systematically analyzed the grey industrial literature on microservices to define the state of practice and identified the technical and operational problems and benefits of the microservice-based architectural style at an industrial level. When designing a microservice-based application, key issues involve determining the right granularity of its microservices and the design of its security policies. During development, managing distributed storage and application testing is challenging. Another pain point was the usage of network and computing resources during operation <ns0:ref type='bibr' target='#b59'>(Soldani, Tamburri &amp; Heuvel, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>Ghofrani and L&#252;bke (2018)</ns0:ref> focused on identifying challenges and gaps in the design and development of microservices; they described the main reasons that promote or prevent the usage of systematic approaches in microservice architectures, and the suggestions or solutions that improve aspects of the microservice architecture. Ghofrani and L&#252;bke provided an updated map of the state of practice in microservice architecture and its complexities for future research. According to the results of their survey, optimization of security, response time, and performance had higher priority than resilience, reliability, fault tolerance, and memory usage, which remain research gaps <ns0:ref type='bibr' target='#b17'>(Ghofrani &amp; L&#252;bke, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b42'>Osses, M&#225;rquez, and Astudillo (2018)</ns0:ref> summarized 44 architectural patterns of microservices, and proposed a microservice architectural pattern taxonomy: front-end, back-end, orchestration, migration, internet of things, and DevOps. There was no specific pattern to define the adequate microservice granularity; they just proposed designing the application as a set of modules, each one an independent business function with its data, developed by a separate team and deployed in a separate process <ns0:ref type='bibr' target='#b42'>(Osses, M&#225;rquez &amp; Astudillo, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b19'>Hamzehloui, Sahibuddin &amp; Salah (2019)</ns0:ref> aimed to identify the common trends and direction of research in microservices. They stated that infrastructure-related issues were more common than software-related issues, and the cloud was the most common platform for running microservices. At the infrastructure level, automation and monitoring require more research, as do software development and design in microservices; safety, maintenance, and costs were three other areas that have been studied relatively less compared to other topics <ns0:ref type='bibr' target='#b19'>(Hamzehloui, Sahibuddin &amp; Salah, 2019)</ns0:ref>. Vera-Rivera, Gaona, and Astudillo (2019) identified the challenges and research trends present in the phases of the development process and in the management of quality attributes of microservice-based applications (Vera-Rivera, Gaona Cuevas &amp; <ns0:ref type='bibr'>Astudillo, 2019)</ns0:ref>. That article was more general; it did not emphasize granularity.
<ns0:ref type='bibr' target='#b21'>Hassan, Bahsoon, and Kazman (2020)</ns0:ref> carried out a systematic mapping study to provide a better understanding of the transition to microservices; they consolidated various views (industrial, research/academic) of the principles, methods, and techniques commonly adopted to assist in the transition to microservices. They identified gaps in the state of the art and the practice related to reasoning about microservice granularity. In particular, they identified possible research topics concerning (1) systematic architecture-oriented modeling support for microservice granularity, (2) a dynamic architectural assessment approach for reasoning about the cost and benefit of granularity adaptation, and (3) effective decision support for informing reasoning about microservice granularity at runtime <ns0:ref type='bibr' target='#b21'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>. They focused on understanding the transition to microservices and the microservice granularity problem (a direct antecedent of this study). They considered quality attributes but not metrics. Their sources were gray literature (blog articles, presentations, and videos, as means of reporting first-hand industrial experiences) and research papers, whereas our study put emphasis only on white literature (articles published in journals and scientific events). Our work was more specific and detailed evaluating the methods and techniques to define granularity, whereas their work detailed those methods and techniques in general. Our work are complementary to their work and take a deeper look at the definition of granularity. <ns0:ref type='bibr' target='#b56'>Schmidt and Thiry (2020)</ns0:ref> carried out a systematic literature review, they found proposals for identification, decomposition, partitioning or breaking down the application domain to reach an adequate granularity for microservices. Moreover, the research aims to highlight the usage of Model-Driven Engineering (MDE) or Domain-Driven Design (DDD) approaches <ns0:ref type='bibr' target='#b56'>(Schmidt &amp; Thiry, 2020)</ns0:ref>. They emphasized on DDD and MDE; whether the selected studies cover DDD or apply MDE; and which elements, principles, practices, and patterns authors applied; they did not include metrics and quality attributes. Therefore, our work is complementary to their work. Most previous literature reviews do not emphasize granularity, they concern general topics of microservice architecture. 
To our knowledge, this is the first study that focuses specifically on classifying and detailing research papers on microservice granularity, including quality attributes that motivate working on it, methods/techniques to improve it, and metrics to measure it.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Survey methodology</ns0:head><ns0:p>A systematic literature review was carried out following the approach introduced by Kitchenham (2004), who defines systematic literature reviews as 'a form of secondary study that uses a well-defined methodology to identify, analyze and interpret all available evidence related to a specific research question in a way that is unbiased and (to a degree) repeatable' <ns0:ref type='bibr' target='#b28'>(Kitchenham, 2004)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Planning the systematic literature review</ns0:head><ns0:p>The objectives of this systematic literature review are defined as follows: first, to identify the proposals that address the microservice granularity problem; second, to identify the metrics that have been used to evaluate microservice granularity; and third, to analyze the quality attributes addressed in those works to evaluate microservice granularity. Few studies or reviews specifically address the problem of microservice granularity, and very few identify the metrics along with the quality attributes addressed to assess microservice granularity. A review protocol specifies the methods that will be used to undertake a specific systematic review. We selected research papers through two query strings used in IEEE Xplore, ACM Digital Library, and Scopus; then the papers were screened and reviewed, inclusion and exclusion criteria were applied, the papers were tabulated and classified, the contribution of each selected paper was detailed, the metrics were described, and the quality attributes were specified. The protocol components are listed below:</ns0:p><ns0:p>1. Define research questions. The definition of the correct size and of the functionalities that each microservice must contain affects the quality attributes of the system, as well as testing, deployment, and maintenance; therefore, identifying the metrics that are being used to evaluate granularity is very important, so that they can be described and used as references to propose other methods, techniques or methodologies that allow defining the granularity of microservices in a more objective way. Identifying the quality attributes studied and how they were evaluated is an important reference for future work on defining the granularity of the microservices that are part of an application.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Search strategy:</ns0:head><ns0:p>Two query strings were defined, which included alternative spellings for 'microservice' and any of the following words: granularity, size, dimensioning, or decomposition. Additionally, to make the search terms that correspond to the research questions more precise, another search string was included. In addition to granularity, it contained the words 'method', 'technique', and 'methodology'. Therefore, the search strings were as follows:</ns0:p><ns0:p>Query string 1 (QS1): ('micro service' OR microservice) AND (granularity OR size OR dimensioning OR decomposition). According to <ns0:ref type='bibr'>Hassan, Bahsoon, and Kazman (2020)</ns0:ref>, the granularity of microservices is related to the size and dimension of the microservice, so these terms are included in the search string.
Additionally, the decomposition of monolithic applications into microservices is an important research topic, so we included this word.</ns0:p><ns0:p>Query string 2 (QS2): ('micro service' OR microservice) AND 'granularity' AND ('method' OR 'technique' OR 'methodology'), targeting only research papers. The main objective of the work was to identify the methods, techniques or methodologies used to determine the microservice granularity.</ns0:p><ns0:p>QS1 and QS2 address all research questions; for each of the proposals selected in each QS, the metrics used and the quality attributes addressed were identified. The query strings were used in IEEE Xplore, ACM Digital Library and Scopus, searching papers' titles, abstracts and keywords. The search in these databases yielded 969 results for QS1 and 146 results for QS2. The search was performed in July 2020.</ns0:p><ns0:p>3. Data extraction strategy. First, papers were tabulated; second, duplicated papers were removed; third, the title, abstract, and conclusions of all papers were reviewed and analyzed. Each co-author of this report carried out this process.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Study selection criteria and procedures.</ns0:head><ns0:p>We selected primary research papers that make a specific proposal (methodology, model, technique, or method) about microservice granularity, including migrations from monoliths to microservices and decompositions of systems into microservices. After obtaining the relevant studies, the inclusion and exclusion criteria were applied (see table <ns0:ref type='table'>1</ns0:ref>). We excluded papers about monolith migrations that were not directly related to the definition of microservice granularity. We also excluded papers that proposed methods, techniques, or models for SOA, web services or mobile services.</ns0:p><ns0:p>5. Synthesis of the extracted data. Each of the selected papers was evaluated in full-text form, taking detailed note of the work and the contributions made.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>Conducting the systematic literature review</ns0:head><ns0:p>The review process was carried out as follows:</ns0:p><ns0:p>1. We downloaded the full-text papers.</ns0:p><ns0:p>2. Each co-author read and reviewed each paper.</ns0:p><ns0:p>3. Each co-author applied the classification criteria to the paper, using the table presented in Appendix A; this was carried out independently.</ns0:p><ns0:p>4. We discussed and analyzed the results obtained by each author, resolving doubts and contradictions; the results are presented in Appendix A.</ns0:p><ns0:p>The results of applying the research protocol are presented in figure <ns0:ref type='figure'>1</ns0:ref>. To analyze the works presenting definitions of the granularity of microservices, classification criteria were defined. These criteria were based on the classification performed by <ns0:ref type='bibr'>(Wieringa et al., 2006)</ns0:ref>, and have been widely used in previous systematic literature reviews: <ns0:ref type='bibr' target='#b15'>(Di Francesco, Lago &amp; Malavolta, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>(Hamzehloui, Sahibuddin &amp; Salah, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b21'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b67'>(Vural, Koyuncu &amp; Guney, 2017</ns0:ref>).
To answer the research questions, we added the following classification criteria: • Approach: Structural or behavioral aspects proposed in the papers to define the granularity of microservices <ns0:ref type='bibr' target='#b21'>(Hassan, Bahsoon &amp; Kazman, 2020)</ns0:ref>.</ns0:p><ns0:p>• Quality attribute studied: The quality attributes considered in the proposal, such as performance, availability, reliability, scalability, maintainability, security, and complexity.</ns0:p><ns0:p>• Research contribution: Type of contribution made in the article; namely, method, application, problem formulation, reference architecture, middleware, architectural language, design pattern, evaluation, or comparison.</ns0:p><ns0:p>• Experimentation type: Type of experimentation used to validate the proposal; namely experiment, case study, theoretical, or other.</ns0:p><ns0:p>• Technique used: This criterion describes the technique, method, or model used to define the granularity of the microservices.</ns0:p><ns0:p>• Input data: Type of input data used to identify the microservices (e.g., use cases, logs, source code, execution traces, among others).</ns0:p><ns0:p>• Type of case study: This criterion determines whether the case study is a toy example (hypothetical case) or a real-life case study. We also identified which case study was used.</ns0:p><ns0:p>• Automatization level: This criterion determines the level of automation of the proposed technique, that is, whether it is manual, automatic, or semi-automatic.</ns0:p><ns0:p>Finally, results were presented in four sections: first, the classification of the selected papers; second, the main contributions and research gaps in sizing and definition of microservice granularity were detailed; third, metrics were described and ordered by year and type; and fourth, quality attributes were detailed, and results were discussed, leading to the conclusions presented in this article.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Results</ns0:head><ns0:p>The search process took place in July 2020. The search in the databases of scientific publications, applying the search strings (QS1 and QS2) related to the granularity of microservices, yielded 969 and 146 works respectively (see table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>After applying the inclusion and exclusion criteria, 29 papers were selected that address the definition of the granularity of microservices (see table <ns0:ref type='table'>3</ns0:ref>). The summarized results of this systematic literature review are synthesized in figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>For RQ1, we identified the papers that propose a method, model, or methodology to define the microservice granularity; metrics are fundamental because they allow one to measure, monitor, and evaluate any aspect of a microservice, thus helping to define or determine the appropriate granularity of a microservice. For RQ2, we identified metrics used to evaluate microservice granularity and decompositions. Figure <ns0:ref type='figure'>2</ns0:ref> shows the type and number of metrics and whether they were applied to the microservice, the system, the development process, or the development team. These metrics are detailed in section 4.3.
Finally, for RQ3 we synthesized the works that address quality attributes to evaluate microservice granularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.'>Classification of the selected papers</ns0:head><ns0:p>Appendix A shows the tabulated data and the results of the evaluation of the classification criteria. Most papers were published in conferences (86%), and only four (14%) were published in journals. All selected papers were published between 2016 and the beginning of 2020 (2 in 2016, 7 in 2017, 6 in 2018, 12 in 2019, and 2 in 2020). The development process phases addressed by each proposal are shown in figure <ns0:ref type='figure'>3</ns0:ref>. Several papers emphasize more than one phase (e.g., P10 focuses on development and deployment, as befits a method for migration from monolith to microservices). Most of the proposed methods focus on the design (79%) and development (38%) phases, with only one addressing testing (3%). Migrations from monolithic architectures to microservices are very common and important: 19 of 29 papers (66%) address them. The papers that do not address migration focus on identifying microservices in the design phase; therefore, defining the size and granularity of microservices from the design phase on is key, because it has implications for development, testing and deployment. Further, most papers (79%) focus on the design phase; implicitly or explicitly, they suggest that defining the 'right' microservice granularity from the design phase on is fundamental. However, some authors affirm that reasoning about microservice size and performance is not possible at design time; indeed, <ns0:ref type='bibr' target='#b20'>(Hassan, Ali &amp; Bahsoon, 2017)</ns0:ref> affirm that the expected behavior of the system cannot be fully captured at design time. On the research strategy (see figure <ns0:ref type='figure'>4</ns0:ref>), validation research and solution proposals account for almost all papers (14 and 11, respectively); proposals that have been tested and validated in practice are very few, namely P5 (a reference architecture) and P16 (a method for candidate microservice identification from monolithic systems). On the type of contribution (see figure <ns0:ref type='figure'>5</ns0:ref>), the vast majority (17 papers) proposed methods (59%), some proposed methodologies (24%), few proposed reference architectures (7%) or problem formulations (7%), and only one proposed an evaluation or comparison (3%). On the validation approach (see figure <ns0:ref type='figure'>6</ns0:ref>), most papers (69%) used case studies for validation and evaluation; other papers used experiments (37%), and most of these also used case studies. More than half of the studies (13 of 29) validated their proposals using realistic (but not real) hypothetical case studies, and the remaining almost-half (14 of 29) used real-life case studies; real-life case studies achieve better validation than hypothetical ones. Even better, some studies (8) used actual open-source projects. The case studies found in the reviewed articles are summarized in table 4; they are valuable resources to validate future research and to compare new methods with those identified in this review.
In any case, other microservice-based datasets have been found to be beyond the reach of this study; for example, the dataset of <ns0:ref type='bibr' target='#b48'>(Rahman, Panichella &amp; Taibi, 2019)</ns0:ref>. The most used case studies to validate the proposals were Kanban boards (P6, P20, P28) and Money transfer (P6, P20, P28), each used by 3 papers, followed by JPetStore (P16, P29) and Cargo tracking (P6, P24), each used by 2 papers.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>RQ1: Approaches to define microservices granularity.</ns0:head><ns0:p>The granularity of microservices involves defining their size and the number that will be part of the application. From the proposal of S. Newman <ns0:ref type='bibr' target='#b39'>(Newman, 2015)</ns0:ref>, microservices follow the single responsibility principle, which says 'Gather things that change for the same reason and separate things that change for different reasons'. The size, dimension, or granularity of microservices have traditionally been defined as follows:</ns0:p><ns0:p>1. Trial and error, depending on the experience of the architect or developer.</ns0:p><ns0:p>2. According to the number of lines of code.</ns0:p><ns0:p>3. By implementation units.</ns0:p><ns0:p>4. By business capabilities.</ns0:p><ns0:p>5. By capabilities of the development team or teams.</ns0:p><ns0:p>6. Using domain-driven design.</ns0:p><ns0:p>7. By the number of methods or exposed interfaces.</ns0:p><ns0:p><ns0:ref type='bibr'>Richardson (2020)</ns0:ref> proposed four decomposition patterns, which allow for the decomposition of an application into services: (1) decompose by business capability: define services corresponding to business capabilities; (2) decompose by subdomain: define services corresponding to DDD subdomains; (3) self-contained service: design services to handle synchronous requests without waiting for other services to respond; and (4) service per team: each service is owned by a team, which has sole responsibility for making changes, and ideally each team has only one service <ns0:ref type='bibr'>(Richardson &amp; microservices.io)</ns0:ref>. <ns0:ref type='bibr'>Zimmermann et al. (2019)</ns0:ref> proposed microservice API patterns (MAP) for API design and evolution. The patterns are divided into five categories: (1) foundation, (2) responsibility, (3) structure, (4) quality, and (5) evolution. These patterns are an important reference for developing microservice-based applications. There is no specific pattern that helps to determine the number of microservices and their size, that is, the number of operations each must contain <ns0:ref type='bibr'>(Zimmermann et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The size of the microservice, or optimal granularity, is one of the most discussed properties, and there are few patterns, methods, or models to determine how small a microservice should be. In this respect, some authors have addressed this problem and proposed the solutions summarized in table 5. The proposed techniques were classified into manual, semi-automatic, or automatic techniques; manual techniques are methods, procedures, or methodologies performed by the architect or developer, who decomposes the system following a few steps. Automatic techniques use some type of algorithm to generate the decomposition, so the system itself produces the decomposition. Semi-automatic techniques combine a part performed manually with another performed automatically.
Most papers proposed manual procedures to identify the microservice granularity (15 papers); some proposals were automatic (8 papers) and a few were semi-automatic (6 papers). The most used case studies to validate the proposals were Kanban boards and Money Transfer (P6, P20, P28). The papers from 2017 and 2018 are mostly manual methods or methodologies that detail the way to decompose or determine microservices, using DDD, domain engineering, or a specific methodology. Later, the papers from 2019 and 2020 propose semi-automatic and automatic methods that use intelligent algorithms and machine learning, mostly focused on migrations from monoliths to microservices. We can observe a chronological evolution in the proposals. The types of techniques used to define the granularity of the microservices that are part of an application are presented in figure <ns0:ref type='figure'>7</ns0:ref>; semantic similarity, machine learning, and genetic programming were the most important techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.'>RQ2: Metrics to evaluate the microservice granularity.</ns0:head><ns0:p>Software metrics allow us to measure and monitor different aspects and characteristics of a software process and product, and there are metrics at the level of design, implementation, testing, maintenance, and deployment. These metrics allow understanding, controlling, and improving what happens during the development and maintenance of the software, in order to take corrective and preventive actions. Of the methods and models identified, most (59%) used some metrics to determine microservice granularity. We would have expected metrics to play a greater role in automatic methods, to validate granularity in microservice-based applications and to evaluate the decompositions yielded by the methods. We identified metrics for coupling, cohesion, granularity, complexity, performance, use of computational resources, development team, source code, and so on (see Table <ns0:ref type='table'>6</ns0:ref>). We classified them into four groups: about the development team, about the microservice development process, about the system, and about each microservice. Most identified metrics (40) focused on a microservice, and only two address the microservice development process. There is a research gap for metrics to evaluate the full development process of microservice-based applications and their impact on the granularity of microservices.</ns0:p><ns0:p>The most used metrics are related to coupling (14 proposed metrics), followed by performance and cohesion (13 metrics), then computational resource metrics (8 metrics), and complexity and source code metrics (7 metrics) (see figure <ns0:ref type='figure'>8</ns0:ref>). Nine papers used coupling metrics (P11, P12, P13, P14, P15, P16, P21, P22, P24), and seven papers used cohesion metrics (P12, P14, P16, P21, P24, P27, P28), whereas performance metrics were used by five papers (P4, P10, P12, P21, P23); complexity metrics were considered by only two papers (P8, P22), even though complexity is a fundamental characteristic of microservices. More proposals that include complexity metrics are required, as well as metrics related to the microservice development process. The other metrics were used by only one paper each. We found that 11 papers used coupling or cohesion metrics, and 5 papers used both. Only one (P24) used coupling, cohesion, and complexity metrics. The size and number of microservices that compose an application directly affect its maintainability.
Automation of tests, continuous integration, and deployment are essential, especially when many microservices and distributed components must be managed independently. <ns0:ref type='bibr'>Bogner, Wagner &amp; Zimmermann (2017)</ns0:ref> performed a literature review to measure the maintainability of software and identified metrics in four dominant design properties: size, complexity, coupling, and cohesion. For service-based systems, they also analyzed their application to systems based on microservices and presented a maintainability model for services (MM4S), consisting of service properties related to automatically collectible service metrics <ns0:ref type='bibr' target='#b9'>(Bogner, Wagner &amp; Zimmermann, 2017b)</ns0:ref>. The metrics proposed by them can be used or adapted to determine the adequate granularity of the microservices that are going to be part of an application. Considering Bogner, Wagner &amp; Zimmermann (2017), <ns0:ref type='bibr' target='#b11'>Candela et al. (2016)</ns0:ref> and related papers, we detail the following metrics, which can be used or adapted to define the right granularity of the microservices and to evaluate decompositions.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.1.'>Coupling metrics</ns0:head><ns0:p>Coupling measures the degree of dependence of one software component on another. If there is a high degree of coupling, the software component cannot function properly without the other component; furthermore, when we change a software component, we must obligatorily change the other component. For these reasons, when designing microservice-based applications, we should look for a low degree of coupling between microservices. <ns0:ref type='bibr' target='#b38'>Mazlami, Cito &amp; Leitner (2017)</ns0:ref> represent the information in the monolith as an undirected, edge-weighted graph G. Each graph edge has a weight defined by the weight function; this weight function determines how strong the coupling between the two classes connected by the edge is, according to the coupling strategy in use <ns0:ref type='bibr' target='#b38'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>. These coupling strategies can be used as metrics to define the granularity. These metrics are defined as follows:</ns0:p><ns0:p>Dependency weight. <ns0:ref type='bibr' target='#b1'>Ahmadvand &amp; Ibrahim (2016)</ns0:ref> said 'dependency weight indicates the frequency of using the dependency. For example, the dependency weight between a billing and shopping cart is high, because with each call to the former a call is required to the latter. On the other hand, the dependency weight between the billing service and a service managing the metadata of payment methods is low, because the former calls the latter only once a day' <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>.</ns0:p><ns0:p>Logical coupling. <ns0:ref type='bibr' target='#b16'>Gall, Jazayeri &amp; Krajewski (2003)</ns0:ref> coined the term logical coupling as a retrospective measure of implicit coupling based on the revision history of an application's source code <ns0:ref type='bibr' target='#b16'>(Gall, Jazayeri &amp; Krajewski, 2003)</ns0:ref>. <ns0:ref type='bibr' target='#b38'>Mazlami, Cito &amp; Leitner (2017)</ns0:ref> define the value of the logical coupling between two classes (C1, C2) as 1 if they have changed together in a certain commit.
<ns0:p>Semantic coupling. Semantic coupling groups together classes that contain code about the same things, i.e., domain model entities. The semantic coupling strategy can compute a score that indicates how related two files are, in terms of domain concepts or 'things' expressed in code and identifiers <ns0:ref type='bibr' target='#b38'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p><ns0:p>Contributor count and contributor coupling. The contributor coupling strategy aims to incorporate team-based factors into a formal procedure that can be used to cluster class files according to those factors (reduce communication overhead to external teams and maximize internal communication and cohesion inside developer teams). It does so by analyzing the authors of changes on the class files in the version control history of the monolith. The procedure to compute the contributor coupling is applied to all class files. In the graph G representing the original monolith M, the weight on any edge is equal to the contributor coupling between the two classes Ci and Cj that are connected in the graph. The weight is defined as the cardinality of the intersection of the sets of developers that contributed to classes Ci and Cj <ns0:ref type='bibr' target='#b38'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Structural coupling.</ns0:head><ns0:p>Structural coupling consists of the number of classes outside package Pj referenced by classes in the package Pj, divided by the number of packages <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Afferent coupling (Ca).</ns0:head><ns0:p>The number of classes in other packages (services) that depend upon classes within the package (service) itself; as such, it indicates the package's (service's) responsibility <ns0:ref type='bibr' target='#b37'>(Martin, 2002)</ns0:ref> cited by <ns0:ref type='bibr' target='#b31'>(Li et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.2.'>Cohesion metrics</ns0:head><ns0:p>Cohesion measures how strongly the responsibilities grouped inside a single component belong together; microservices should exhibit high cohesion. The cohesion metrics found in the selected papers are defined as follows:</ns0:p></ns0:div> <ns0:div><ns0:head>Relational cohesion (RC).</ns0:head><ns0:p>The average number of internal relationships per class within a package (service), counting references like creating a class instance. Higher numbers of RC indicate higher cohesion of a package (service) <ns0:ref type='bibr' target='#b30'>(Larman, 2012)</ns0:ref> cited by <ns0:ref type='bibr' target='#b31'>(Li et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Cohesion at the domain level (CHD).</ns0:head><ns0:p>The cohesiveness of the interfaces provided by a service at the domain level. The higher the CHD, the more functionally cohesive that service is <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average cohesion at the domain level (Avg. CHD).</ns0:head><ns0:p>The average of all CHD values within the system <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Cohesion at the message level (CHM).</ns0:head><ns0:p>The cohesiveness of the interfaces published by a service at the message level. The higher a service's CHM, the more cohesive the service is, from an external perspective. CHM is the average functional cohesiveness <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
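<ns0:p>As a rough illustration of how the domain-level and message-level cohesion just defined can be operationalized, the sketch below (a simplification of ours, not the exact formulation of Jin et al.) scores a hypothetical service interface by the average pairwise Jaccard similarity of the domain terms and of the message parameter types of its operations.</ns0:p>
```python
from itertools import combinations

def avg_pairwise_jaccard(sets):
    """Average Jaccard similarity over all pairs of sets; 1.0 when there are fewer than 2."""
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 1.0
    jaccard = lambda a, b: len(a & b) / len(a | b) if (a | b) else 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical service interface: operation -> (domain terms, message parameter types).
order_service = {
    "createOrder": ({"create", "order"}, {"OrderRequest", "OrderId"}),
    "cancelOrder": ({"cancel", "order"}, {"OrderId", "CancelResult"}),
    "getOrder":    ({"get", "order"},    {"OrderId", "Order"}),
}

chd_like = avg_pairwise_jaccard([terms for terms, _ in order_service.values()])
chm_like = avg_pairwise_jaccard([msgs for _, msgs in order_service.values()])
print(f"CHD-like score: {chd_like:.2f}, CHM-like score: {chm_like:.2f}")
# Higher scores suggest a more functionally cohesive interface; the system-level
# average (Avg. CHD) is then simply the mean over all services.
```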
<ns0:div><ns0:head>Service interface data cohesion (SIDC).</ns0:head><ns0:p>The cohesion of a given service S with respect to the similarity of the parameter data types of its interface's operations <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Service Interface Usage Cohesion (SIUC).</ns0:head><ns0:p>The cohesion of a given service S based on the invocation behavior of clients using operations from its interface <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Entities composition.</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b12'>Cojocaru, Uta, and Oprescu (2019)</ns0:ref>, 'entities composition assesses whether the entities are equally distributed among the proposed microservices and no duplicates, which might break the cohesion, exist. They define an entity as the class, or action of the service' <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Relation composition.</ns0:head><ns0:p>According to Cojocaru, Uta, and Oprescu (2019), 'relation composition assesses the quantitative variation in published language per relation. It applies the concept of relative assessment to entities shared between the services via their communication paths. The test identifies services communicating much more data than their peers, and thus potential communication bottlenecks' <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>.</ns0:p><ns0:p>Responsibilities composition. <ns0:ref type='bibr' target='#b12'>Cojocaru, Uta, and Oprescu (2019)</ns0:ref> stated that the responsibilities composition 'assesses to what extent the use case responsibilities are equally distributed among the proposed microservices. It uses the coefficient of variation between the number of use case responsibilities of each microservice. Services having relatively more responsibility may imply low cohesion: a service providing multiple actions violates the single responsibility principle' <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>.</ns0:p><ns0:p>Semantic similarity. According to Cojocaru, Uta, and Oprescu (2019), 'semantic similarity uses lexical distance assessment algorithms to flag the services that contain unrelated components or unrelated actions hindering cohesion' <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>. <ns0:ref type='bibr'>Perepletchikov, Ryan, and Frampton (2007)</ns0:ref> 'reviewed categories of cohesion initially proposed for object-oriented software in order to determine their conceptual relevance to service-oriented designs' and proposed a set of metrics for cohesion that can be adapted for microservices <ns0:ref type='bibr'>(Perepletchikov, Ryan &amp; Frampton, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.3.'>Complexity metrics</ns0:head><ns0:p>The complexity of microservices should be low, so that they can be changed within several weeks, rewritten, and improved quickly. If the complexity is high, then the cost of change is higher. Measuring complexity is fundamental for developing microservice-based applications.
The metrics used by the authors of the papers are listed below.</ns0:p></ns0:div> <ns0:div><ns0:head>Function points.</ns0:head><ns0:p>A method for measuring the size of software. A function point count is a measurement of the amount of functionality that the software will provide <ns0:ref type='bibr'>(Totalmetrics.com)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>COSMIC function points.</ns0:head><ns0:p>Focuses on data movements between different layers. One of the benefits of the COSMIC method is that it can estimate the size in the planning phase, based on the user's functional requirements. The four main data group types are: entry, exit, read, and write. The COSMIC function point calculation is aimed at measuring the system at the time of planning. This size calculation can be used for estimating efforts <ns0:ref type='bibr' target='#b68'>(Vural, Koyuncu &amp; Misra, 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Total response for service (TRS).</ns0:head><ns0:p>The sum of all response for operation (RFO) values for all operations of the interface of service S <ns0:ref type='bibr'>(Perepletchikov et al., 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of singleton clusters (NSC).</ns0:head><ns0:p><ns0:ref type='bibr' target='#b40'>Nunes, Santos, and Rito Silva (2019)</ns0:ref> said that 'having more than 2 singleton clusters is considered negative. Considering a final microservice architecture with clear functional boundaries established, it is likely that there are not two services in which their content is a single domain entity' <ns0:ref type='bibr' target='#b40'>(Nunes, Santos &amp; Rito Silva, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum cluster size (MCS).</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b40'>Nunes, Santos, and Rito Silva (2019)</ns0:ref>, 'MCS should not be bigger than half of the size of the system. Even with a cluster size inside this range, there is also a dependency regarding the number of entity instances that are part of the aggregate' <ns0:ref type='bibr' target='#b40'>(Nunes, Santos &amp; Rito Silva, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.4.'>Performance metrics</ns0:head><ns0:p>Performance is a critical point of microservice-based applications. The selected papers used eight performance metrics:</ns0:p><ns0:p>The number of calls or requests. The number of times that a microservice is called.</ns0:p><ns0:p>The number of rejected requests. The number of times that a microservice does not respond or exceeds the time limit.</ns0:p><ns0:p>Response or execution time. The execution time of the invoked service.</ns0:p></ns0:div> <ns0:div><ns0:head>Interaction number (IRN).</ns0:head><ns0:p>The number of calls for methods among all pairs of extracted microservices. The smaller the IRN, the better the quality of the candidate microservices, as a low IRN reflects loose coupling <ns0:ref type='bibr' target='#b53'>(Saidani et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
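<ns0:p>As a rough sketch of how IRN can be collected, the following code (ours, with hypothetical candidate-service names and call records) counts the method calls that cross the boundaries between candidate microservices.</ns0:p>
```python
from collections import Counter

# Hypothetical call log between candidate microservices: one (caller, callee) per observed call.
calls = [
    ("orders", "billing"), ("orders", "billing"), ("orders", "catalog"),
    ("billing", "orders"), ("catalog", "inventory"), ("orders", "orders"),
]

def interaction_number(calls):
    """IRN: total number of calls among all pairs of extracted microservices
    (calls that stay inside one candidate are not counted)."""
    return sum(1 for caller, callee in calls if caller != callee)

def calls_per_pair(calls):
    """Breakdown per unordered pair of candidates, useful to spot chatty boundaries."""
    return Counter(frozenset(pair) for pair in calls if pair[0] != pair[1])

print("IRN =", interaction_number(calls))        # 5
print(calls_per_pair(calls).most_common(1))      # the chattiest pair of candidates
```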
<ns0:div><ns0:head>Number of executions.</ns0:head><ns0:p>The number of test requests sent to the system or microservices.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum request time.</ns0:head><ns0:p>The maximum time for a request (output) made from one microservice to another.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum response time.</ns0:head><ns0:p>The maximum response time is that of a call (input) or request to the system or microservice. It is the time to process a response to another microservice.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of packets sent.</ns0:head><ns0:p>The packets sent to the system or microservice. <ns0:ref type='bibr' target='#b49'>Ren et al. (2018)</ns0:ref> (P29) used package analysis (PA), static structure analysis (SSA), class hierarchy analysis (CHA), static call graph analysis (SCGA), and combined static and dynamic analysis (CSDA) to evaluate migration performance. However, they did not explain the details of the performance analysis test or the metrics they used <ns0:ref type='bibr' target='#b49'>(Ren et al., 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.5.'>Other quality attributes metrics.</ns0:head><ns0:p>Few metrics were directly related to quality attributes. The metrics proposed in the reviewed works are defined as follows:</ns0:p><ns0:p>Cost of quality assurance. It can be calculated by adding up the time spent by testers validating not only the new features but also the non-regression on existing ones, along with the time spent on release management <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Cost of deployment.</ns0:head><ns0:p>The time spent by operational teams to deploy a new release, in man-days; it decreases greatly as teams automate deployment <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>.</ns0:p><ns0:p>Security impact. The security policy applied to requirements or services. Assets and threats identified lead to deployed security mechanisms, which form security policies. <ns0:ref type='bibr' target='#b1'>Ahmadvand and Ibrahim (2016)</ns0:ref> mapped the identified policies to the corresponding functional requirements, mainly based on their access to the system assets. Security impact is a qualitative value (low, medium, high) <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>.</ns0:p><ns0:p>Scalability impact. <ns0:ref type='bibr' target='#b1'>Ahmadvand &amp; Ibrahim (2016)</ns0:ref> define the scalability impact as the required level of scalability (high, medium, low) needed to implement a functional requirement or service. Defining the requirements at design time for a software system to be scalable is a challenging task. The authors think that a requirements engineer should answer a question such as 'What is the anticipated number of simultaneous users for this functionality?' <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.6.'>Computational resources metrics</ns0:head><ns0:p>The computational resources are all the software and hardware elements necessary for the operation of microservice-based applications.
The proposed metrics are listed below.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of memory.</ns0:head><ns0:p>The average memory consumption for each microservice or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of disk.</ns0:head><ns0:p>The average disk consumption for each microservice or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of network.</ns0:head><ns0:p>The average network bandwidth consumption for the entire system; Kb/s used by the system or microservice <ns0:ref type='bibr' target='#b4'>(De Alwis et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average of CPU.</ns0:head><ns0:p>The average CPU consumption by the system or microservice.</ns0:p></ns0:div> <ns0:div><ns0:head>Service composition cost (SCC).</ns0:head><ns0:p><ns0:ref type='bibr' target='#b25'>Homay et al. (2019)</ns0:ref> stated that 'identifying which existing functionalities in the service are consuming more resources is not an easy task. Therefore, we suggest relying on each request that a service provider receives from a service consumer. Because each request is a chain of stats or activities that needs to be satisfied inside of the service provider to generate a related response. The cost-of-service composition for the service s will be equal to the maximum cost of requests (routes)' <ns0:ref type='bibr' target='#b25'>(Homay et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Service decomposition cost (SDC).</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b25'>Homay et al. (2019)</ns0:ref>, 'By refining a service into smaller services, we will make some drawbacks. The SDC is a function that calculates the overhead of refining the service s into smaller pieces' <ns0:ref type='bibr' target='#b25'>(Homay et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.7.'>Team metrics</ns0:head><ns0:p>Each microservice can be developed by a different team, and with different programming languages and database engines. It is important to consider metrics that allow analysis of microservices' granularity and its impacts on the development team. The proposed metrics in the analyzed papers are as follows:</ns0:p><ns0:p>Team size reduction (TSR). Reduced team size translates to reduced communication overhead and thus more productivity, and the team's focus can be directed toward the actual domain problem and service for which it is responsible. TSR is a proxy for this team-oriented quality aspect. Let RM be a microservice recommendation for a monolith M. TSR is computed as the average team size across all microservice candidates in RM divided by the team size of the original monolith M <ns0:ref type='bibr' target='#b38'>(Mazlami, Cito &amp; Leitner, 2017)</ns0:ref>.</ns0:p><ns0:p>Commit count. The number of commits in the code repository made by the developers.</ns0:p><ns0:p>We found very few metrics related to the development team. This can be an interesting topic for future research.</ns0:p></ns0:div>
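<ns0:p>The following sketch (ours, with hypothetical contributor data) shows how TSR could be computed once each candidate microservice has been associated with the set of developers who touched its classes.</ns0:p>
```python
# Hypothetical contributor sets: developers of the original monolith and of the classes
# assigned to each candidate microservice in the recommendation RM.
monolith_contributors = {"ana", "bob", "carla", "dave", "eva", "frank"}
candidate_teams = {
    "orders":  {"ana", "bob"},
    "billing": {"carla", "dave"},
    "catalog": {"eva", "frank", "ana"},
}

def team_size_reduction(candidate_teams, monolith_contributors):
    """TSR: average team size across all microservice candidates divided by the
    team size of the original monolith (lower values mean smaller candidate teams)."""
    avg_candidate_team = sum(len(t) for t in candidate_teams.values()) / len(candidate_teams)
    return avg_candidate_team / len(monolith_contributors)

print(f"TSR = {team_size_reduction(candidate_teams, monolith_contributors):.2f}")
# (2 + 2 + 3) / 3 = 2.33 contributors per candidate vs. 6 for the monolith -> 0.39
```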
<ns0:div><ns0:head n='5.2.8.'>Source code metrics</ns0:head><ns0:p>The source code is one of the most important sources for analyzing certain characteristics of an application. Some authors have used it to identify microservices and define their granularity. The proposed metrics are described below:</ns0:p><ns0:p>Code size in lines of code. The total size of the code in the repository, in terms of lines of code, the microservices' lines of code, or the application's lines of code.</ns0:p><ns0:p>The number of classes per microservice. Helps to understand how large the identified microservice is and to identify if any microservice is too big compared to others. The number of classes should be minimized because a smaller number of classes implies more independent development of the microservice <ns0:ref type='bibr' target='#b60'>(Taibi &amp; Syst, 2019)</ns0:ref>.</ns0:p><ns0:p>The number of duplicated classes. In some cases, two execution traces will have several classes in common. The number of duplicated classes helps one to reason about the different slicing options, considering not only the size of the microservices but also the number of duplications, which will then be reflected in the microservices' development. Duplicated classes should be avoided since duplication adds to the system's size and maintenance <ns0:ref type='bibr' target='#b60'>(Taibi &amp; Syst, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Internal co-change frequency (ICF).</ns0:head><ns0:p>How often entities within a service change together, as recorded in the revision history. A higher ICF means that the entities within this service will be more likely to evolve together. The ICF is the average of all ICFs within the system <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>External co-change frequency (ECF).</ns0:head><ns0:p>How often entities assigned to different services change together, according to the revision history. A lower ECF score means that entity pairs located in different services are expected to evolve more independently. Similarly, ECF is the average ECF value of all services within the system <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>The ratio of ECF to ICF (REI).</ns0:head><ns0:p>The ratio of the co-change frequency across services vs. the co-change frequency within services. The ratio is expected to be less than 1.0 if co-changes happen more often inside a service than across different services. The smaller the ratio is, the less likely co-changes are across services, and the extracted services tend to evolve more independently. Ideally, all co-changes should happen inside the services. REI is calculated as ECF divided by ICF <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Modularity quality measure. The modularity of a component or service can be measured from multiple perspectives, such as structural, conceptual, historical, and dynamic dimensions <ns0:ref type='bibr' target='#b11'>(Candela et al., 2016)</ns0:ref>. They extend the modularity quality (MQ), as defined by <ns0:ref type='bibr' target='#b34'>(Mancoridis et al., 1998)</ns0:ref>, to structural and conceptual dependencies, using structural modularity quality and conceptual modularity quality to assess the modularity of service candidates. Structural modularity quality (SMQ) measures the quality of modularity from a structural perspective. The higher the SMQ, the better modularized the service is. On the other hand, conceptual modularity quality (CMQ) similarly measures modularity quality from a conceptual perspective. The higher the CMQ, the better <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
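<ns0:p>One simplified way to operationalize the co-change idea behind ICF, ECF, and REI is sketched below (raw pair counts rather than the exact frequencies of Jin et al.), given a hypothetical assignment of domain entities to candidate services and a small revision history.</ns0:p>
```python
from itertools import combinations

# Hypothetical entity-to-service assignment and revision history (entities changed per commit).
assignment = {"Order": "orders", "OrderLine": "orders",
              "Invoice": "billing", "Payment": "billing"}
commits = [
    {"Order", "OrderLine"},             # internal co-change (orders)
    {"Order", "Invoice"},               # external co-change (orders <-> billing)
    {"Invoice", "Payment"},             # internal co-change (billing)
    {"Order", "OrderLine", "Payment"},  # mixed
]

def co_change_counts(commits, assignment):
    """Count entity pairs that changed together inside one service (ICF-like)
    and across services (ECF-like); their ratio is an REI-like indicator."""
    internal = external = 0
    for changed in commits:
        for a, b in combinations(changed, 2):
            if assignment[a] == assignment[b]:
                internal += 1
            else:
                external += 1
    return internal, external

internal, external = co_change_counts(commits, assignment)
print("internal:", internal, "external:", external,
      "REI-like ratio:", round(external / internal, 2))
# A ratio well below 1.0 suggests the candidate services can evolve independently.
```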
<ns0:div><ns0:head n='5.2.9.'>Granularity metrics</ns0:head><ns0:p>Measuring granularity is complex. Granularity is related to size, including the number of functionalities or services that the application or microservice will have. It is also related to coupling and cohesion. Being more granular implies that a microservice has no dependencies and can function independently, as an independent and encapsulated piece. Six granularity metrics were identified:</ns0:p><ns0:p>Weighted service interface count (WSIC). WSIC(S) is the number of exposed interface operations of service S, with the default weight set to 1. Alternate weighting methods, which need to be validated empirically, can take into consideration the number and the complexity of the data types of the parameters in each interface, or their granularity (e.g. a complex nested object) <ns0:ref type='bibr' target='#b23'>(Hirzalla, Cleland-Huang &amp; Arsanjani, 2009)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Bogner, Wagner &amp; Zimmermann, 2017a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Component Balance (CB):</ns0:head><ns0:p>The CB is a system-level metric to evaluate the appropriateness of granularity, i.e. whether the number and size uniformity of the components (in this case, services) are in a favorable range for maintainability <ns0:ref type='bibr' target='#b10'>(Bouwers et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Operation number (OPN):</ns0:head><ns0:p>The OPN is used to compute the average number of public operations exposed by an extracted microservice to other candidate microservices. The smaller the OPN, the better <ns0:ref type='bibr' target='#b53'>(Saidani et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of microservices:</ns0:head><ns0:p>The number of microservices that are part of the system or application.</ns0:p></ns0:div> <ns0:div><ns0:head>Lines of code:</ns0:head><ns0:p>The lines of code measure the number of lines of code in the microservice. Additionally, it may consider the total size of the code in the repository.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of nanoentities:</ns0:head><ns0:p><ns0:ref type='bibr' target='#b12'>Cojocaru, Uta, and Oprescu (2019)</ns0:ref> stated that 'the number of nanoentities (attributes or fields of a class) computes the number of nanoentities assigned to each proposed service, storing the result as a floating-point parameterized list. The list's length is equal to the number of services found in the system model specification file' <ns0:ref type='bibr' target='#b12'>(Cojocaru, Uta &amp; Oprescu, 2019)</ns0:ref>.</ns0:p><ns0:p>The fundamentals of microservices suggest that they must have low coupling, high cohesion, and low complexity. Based on the described metrics, a model or method could be defined that uses artificial intelligence to determine the most appropriate dimensioning and size for microservices. Some have already been defined, mainly for migrations from monoliths to microservices.</ns0:p>
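<ns0:p>As an illustration of how the size-oriented granularity metrics defined above can be collected, the sketch below (ours, over a hypothetical system model) computes WSIC per candidate service and an OPN-style average, assuming every exposed operation has the default weight of 1.</ns0:p>
```python
# Hypothetical system model: each candidate microservice with its exposed operations.
services = {
    "orders":  ["createOrder", "cancelOrder", "getOrder"],
    "billing": ["charge", "refund"],
    "catalog": ["listProducts", "getProduct", "searchProducts", "rateProduct"],
}

def wsic(operations, weight=lambda op: 1):
    """Weighted Service Interface Count: sum of the operation weights (default 1 each)."""
    return sum(weight(op) for op in operations)

def average_exposed_operations(services):
    """OPN-style average number of exposed operations per candidate microservice."""
    return sum(len(ops) for ops in services.values()) / len(services)

for name, ops in services.items():
    print(name, "WSIC =", wsic(ops))
print("average operations per microservice =", round(average_exposed_operations(services), 2),
      "| number of microservices =", len(services))
```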
<ns0:p>The number of microservices, their size, and their computational complexity directly affect the use of computational resources and therefore their cost of deployment. This is an interesting topic for future research. In conclusion, some papers used metrics to evaluate the granularity of the microservices, including coupling, cohesion, number of calls, number of requests, and response time, although few methods or techniques use complexity as a metric, even though it seems fundamental for microservices. More research that considers design-level metrics is needed to define the granularity of the microservices that are part of an application, as well as research proposing models, methods, or techniques to determine the most appropriate granularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.4.'>RQ3: Quality attributes to define the microservice granularity.</ns0:head><ns0:p>Quality attributes are essential for today's applications. Availability, performance, automatic scaling, maintainability, security, and fault tolerance are essential features that every application must handle. An architecture based on microservices allows independent management of quality attributes, according to the specific need of each microservice. This is one of the main advantages compared to monolithic architectures. The size and number of microservices that compose an application directly affect its quality attributes. Creating more microservices may affect maintainability because testing costs will increase, even more so if automated testing is not available. Moreover, performance may also be affected by having to integrate and process data from several distributed applications. Clearly, quality attributes are impacted by microservices granularity and should be considered when defining a model, method, or technique to determine granularity (see Fig. <ns0:ref type='figure'>9</ns0:ref>). Surprisingly, 62% of the identified proposals did not consider or report any quality attributes at all. Of those that did, scalability and performance were the most considered (7 papers, 24%), followed by maintainability and availability (2 papers, 7%); and lastly, reliability (fault tolerance), security, functionality, and modularity, with only one paper each. More research is needed that considers quality attributes to define the granularity of the microservices comprising an application. Security and fault tolerance are key attributes that microservice-based applications must handle, yet few works addressed these features (see table <ns0:ref type='table'>7</ns0:ref>). We grouped the software quality attributes into two categories: firstly, runtime characteristics (scalability, performance, reliability, availability, and functionality), which are observable during execution; and secondly, software-as-an-artifact characteristics (maintainability, modularity, reusability), which are not observable during execution <ns0:ref type='bibr' target='#b7'>(Bass, Clemens &amp; Katzman, 1998)</ns0:ref> <ns0:ref type='bibr' target='#b6'>(Astudillo, 2005)</ns0:ref>. Runtime characteristics were the most used ones, having been addressed by 8 papers (P2, P4, P10, P11, P12, P13, P21, and P29); only two papers addressed software artifact characteristics (P3 and P16), and only one paper used both artifact and runtime characteristics (P16).
Therefore, more proposals are required to define microservices granularity considering both runtime and software artifact characteristics.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.1.'>Runtime characteristics</ns0:head><ns0:p>In this section, we detail the runtime quality attributes, the way they were addressed by the papers, and whether the papers used metrics to evaluate their proposals. • Scalability, performance, and reliability (fault tolerance) were used by only one paper (P2).</ns0:p><ns0:p>P2 proposed a re-implementation of otto.de (a real-life case study). They defined the granularity through vertical decomposition and used DevOps, including continuous deployment, to deliver features quickly to customers. Team organization is crucial for success; this organization was based on Conway's Law. Full automation of quality assurance and software deployment allows for early fault and error detection, thus reducing repair times both during development and during operations <ns0:ref type='bibr' target='#b22'>(Hasselbring &amp; Steinacker, 2017)</ns0:ref>. This paper did not propose metrics for evaluation. • Scalability and performance were used by two papers (P10, P29). P10 proposed an automatic decomposition method based on a black-box approach that mines the application access logs using a clustering method to discover URL partitions having similar performance and resource requirements; such partitions were mapped to microservices <ns0:ref type='bibr' target='#b0'>(Abdullah, Iqbal &amp; Erradi, 2019)</ns0:ref>. The metrics used in this paper were performance metrics (response time SLO violations, number of calls, number of rejected requests, and throughput) and computational resource metrics (Avg. CPU, number of virtual machines used, and allocated virtual machines). P29 used the source code and the runtime logs in a semi-automatic method; it used granularity, performance, and source code metrics to evaluate the decompositions. They presented a program analysis-based method to migrate monolith legacy applications to a microservices architecture; this method used a function call graph, a Markov chain model to represent migration characteristics, and a k-means hierarchical clustering algorithm <ns0:ref type='bibr' target='#b49'>(Ren et al., 2018)</ns0:ref>. • Only performance was used by two papers (P4, P13). P4 examined the granularity problem of microservices and explored its effect on the latency of the application. Two approaches for the deployment of microservices were simulated: the first one with microservices in a single container, and the second one with microservices divided into separate containers.</ns0:p><ns0:p>They discussed the findings in the context of Internet of Things (IoT) application architectures <ns0:ref type='bibr' target='#b57'>(Shadija, Rezai &amp; Hill, 2017)</ns0:ref>; that paper is an evaluation or comparison rather than a method to define the microservice granularity, and it used performance metrics (response time and the number of calls). P13 presented three formal coupling strategies and embedded those in a graph-based clustering algorithm: (1) logical coupling, (2) semantic coupling, and (3) contributor coupling.
The coupling strategies rely on meta-information from monolithic code bases to construct a graph representation of the monolith, which is in turn processed by the clustering algorithm to generate recommendations for potential microservice candidates in a refactoring scenario. P13 was the only one that proposed development-team-based metrics; logical coupling, average domain redundancy, contributor coupling, semantic coupling, commit count, contributor count, and lines of code were the metrics used by this paper. • Scalability and security were used by one paper (P11). P11 proposed a methodology consisting of a series of steps and activities that must be carried out to identify the microservices that will be part of the system. It is based on the use cases and the analysis made by the architect in terms of the scalability and security of each use case, as well as the dependencies with the other use cases <ns0:ref type='bibr' target='#b1'>(Ahmadvand &amp; Ibrahim, 2016)</ns0:ref>; this paper used the following metrics: dependency weight, security impact, and scalability impact (qualitative metrics). • Scalability, performance, and availability were addressed by two papers (P12, P21). P12 presented discovery techniques that help identify the appropriate parts of consumer-oriented business systems that could be redesigned as microservices with desired characteristics such as high cohesion, low coupling, high scalability, high availability, and high processing efficiency <ns0:ref type='bibr' target='#b5'>(De Alwis et al., 2018)</ns0:ref>. They proposed microservice discovery algorithms and heuristics. It was an automatic method that used coupling (structural coupling), cohesion (lack of cohesion), computational resources (avg. memory, avg. disk), and performance metrics (number of requests, execution time). P21 was a semi-automatic method, a genetic algorithm with semantic similarity based on DISCO and a non-dominated sorting genetic algorithm-II (NSGA-II) <ns0:ref type='bibr' target='#b4'>(De Alwis et al., 2019)</ns0:ref>. The metrics used by that paper were: structural coupling, lack of cohesion, the average CPU, the average of the network, number of executions, and the number of packets sent.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.2.'>Software as an artifact characteristic</ns0:head><ns0:p>Only two software-as-an-artifact characteristics were used: maintainability and modularity. Maintainability alone was used by one paper (P3), whereas maintainability and modularity were used by P16, which proposed the most complete method. P3 used a balance between the cost of quality assurance and the cost of deployment to define microservices granularity; it was a manual method. The choice of granularity should be based on the balance between the cost of quality assurance and the cost of deployment <ns0:ref type='bibr' target='#b18'>(Gouigoux &amp; Tamzalit, 2017)</ns0:ref>. P16 presented a framework that consists of three major steps: (1) extracting representative execution traces, (2) identifying entities using a search-based functional atom grouping algorithm, and (3) identifying interfaces for service candidates <ns0:ref type='bibr' target='#b27'>(Jin et al., 2019)</ns0:ref>. They also presented a comprehensive measurement system to quantitatively evaluate service candidate quality in terms of functionality, modularity, and evolvability. P16 proposed an automatic method, which used a search-based functional atom grouping algorithm and a non-dominated sorting genetic algorithm-II (NSGA II). The evaluation metrics were coupling (integrating interface number), cohesion at the message level, cohesion at the domain level, structural modularity quality, conceptual modularity quality, internal co-change frequency (ICF), external co-change frequency (ECF), and the ratio of ECF to ICF.</ns0:p></ns0:div>
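<ns0:p>To give a flavor of the graph-based clustering step that several of these approaches share (e.g. P13), the sketch below (ours, not the algorithm of any specific paper) builds a weighted class graph from hypothetical pairwise coupling scores and groups it with a generic community-detection routine, assuming the networkx library is available.</ns0:p>
```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical pairwise coupling weights between classes of a monolith
# (e.g. aggregated logical, semantic, or contributor coupling).
edges = [
    ("Order", "OrderLine", 5), ("Order", "Invoice", 1), ("Invoice", "Payment", 4),
    ("Catalog", "Product", 6), ("Product", "OrderLine", 1),
]

G = nx.Graph()
for a, b, w in edges:
    G.add_edge(a, b, weight=w)

# Generic weighted community detection; each community is one candidate microservice.
candidates = community.greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(candidates, 1):
    print(f"candidate microservice {i}: {sorted(cluster)}")
```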
<ns0:div><ns0:head n='5.3.3.'>Quality attributes and artificial intelligence</ns0:head><ns0:p>In some cases, artificial intelligence techniques are being used to improve the quality attributes of microservices. For example, <ns0:ref type='bibr' target='#b3'>Alipour and Liu (2017)</ns0:ref> proposed two machine learning algorithms and predicted the resource demand of microservice backend systems, as emulated by a Netflix workload reference application. They proposed a microservice architecture that encapsulates monitoring functions of metrics and learning of workload patterns. Then, this service architecture is used to predict the future workload for making decisions about resource provisioning <ns0:ref type='bibr' target='#b3'>(Alipour &amp; Liu, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b46'>Prachitmutita et al. (2018)</ns0:ref> proposed a new self-scaling framework based on the predicted workload, with an artificial neural network, a recurrent neural network, and a resource scaling optimization algorithm used to create an automated system to manage the entire application with Infrastructure-as-a-Service (IaaS) <ns0:ref type='bibr' target='#b46'>(Prachitmutita et al., 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b32'>Ma et al. (2018)</ns0:ref> proposed an approach, called scenario-based microservice retrieval (SMSR), to recommend appropriate microservices for users based on the Behavior-Driven Development (BDD) test scenarios written by the user. The proposed service retrieval algorithm is based on word2vec, an automatic learning method widely used in natural language processing (NLP), to perform service filtering and calculate service similarity <ns0:ref type='bibr' target='#b32'>(Ma et al., 2018)</ns0:ref>. Abdullah, Iqbal and Erradi (2019) proposed a complete automated system for breaking down an application into microservices, implementing microservices using appropriate resources, and automatically scaling microservices to maintain the desired response time <ns0:ref type='bibr' target='#b0'>(Abdullah, Iqbal &amp; Erradi, 2019)</ns0:ref>. Artificial intelligence can help to improve and control different characteristics of microservices, especially those related to improving quality attributes. Some proposals have been made in this regard, but more research is needed.</ns0:p><ns0:p>Finally, we identified the automatic and semi-automatic methods that used metrics and addressed some quality attribute to define the granularity (see table <ns0:ref type='table'>8</ns0:ref>). Only six papers met those conditions (P10, P12, P13, P16, P21, and P29), and these were the most suitable methods to define the granularity of microservices. We also identified semi-automatic methodologies that used metrics to define the granularity; only two papers were found (P14 and P22). There were no automatic methodologies: most were manual methodologies (P11, P17, P18, and P26), and only one was semi-automatic but did not use metrics (P25). P14 was a data flow-driven decomposition algorithm.
In their methodology, first, the use case specification and business logic are analyzed based on the requirements; second, the detailed dataflow diagrams (DFD) at different levels and the corresponding process-datastore version of the DFD (DFD PS) are constructed from the business logic based on the requirement analysis; third, they designed an algorithm to automatically condense the DFD PS into a decomposable DFD, in which the sentences between processes and data stores are combined; last but not least, microservice candidates are identified and extracted automatically from the decomposable DFD <ns0:ref type='bibr' target='#b31'>(Li et al., 2019)</ns0:ref>. The metrics used by P14 were coupling (afferent coupling, efferent coupling, instability) and cohesion (relational cohesion) metrics. P14 did not address any quality attribute directly. P22 proposed a clustering algorithm applied to aggregate domain entities. It used coupling (silhouette score) and complexity (number of singleton clusters, maximum cluster size) metrics. The authors proposed an approach to the migration of monolith applications to a microservice architecture that focused on the impact of the decomposition on the monolith business logic <ns0:ref type='bibr' target='#b40'>(Nunes, Santos &amp; Rito Silva, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Discussion</ns0:head><ns0:p>The use of artificial intelligence techniques to determine the appropriate granularity or to identify the microservices that will be part of an application is a growing trend; this is especially true of machine learning clustering algorithms and genetic algorithms, with an emphasis on semantic similarity to group the microservices that refer to the same entity. Domain engineering and DDD are still among the most used techniques. Migration of software systems implies many architectural decisions that should be systematically evaluated to assess concrete trade-offs and risks <ns0:ref type='bibr' target='#b13'>(Cruz et al., 2019)</ns0:ref>. In these cases, the starting point is a monolithic system that must be decomposed into microservices, and that monolithic system has important data sources that allow for the identification and evaluation of the candidate microservices. These sources are mainly the source code, the use cases, the database, the logs, and the execution traces. It should be noted that the development of microservice-based applications is closely related to agile practices and DevOps, yet none of the input data being considered in the proposed methods correspond to agile artifacts such as user stories, product backlog, iteration planning, and others. Therefore, research work is needed at this point. The migration from monolith to microservices is a topic of much interest and is widely studied. In contrast, the design and development of microservice-based applications from scratch has few related proposals. The proposed methods emphasize artifacts available at run time, development, deployment, or production, which are hardly available when starting a project from scratch at design time. The development of microservice-based applications from scratch resembles component-based development <ns0:ref type='bibr' target='#b64'>(Vera-Rivera &amp; Rojas Morales, 2010)</ns0:ref>, in which microservices are reusable software components.
In (Vera-Rivera, 2018) we characterized the process of developing applications based on microservices, identifying two fundamental parts: first, the development of each microservice, and then the development of applications based on those microservices. The definition of adequate granularity is fundamental to the development of microservice-based applications <ns0:ref type='bibr' target='#b62'>(Vera-Rivera, 2018)</ns0:ref>. The granularity of a monolith is not optimal, but defining one operation per microservice is not optimal either: an application that offers 100 operations should not have 100 microservices, because of the latency, performance, and management issues of such a large distributed system. The optimal granularity is somewhere in between the monolithic application and the one-operation-per-microservice system; it should be defined according to the characteristics of the application, the development team, the non-functional requirements, the available resources, and design, development, and operation trade-offs. The research gaps focus on proposing techniques or methods that allow for the evaluation of granularity and its impact on tests, considering security controls, fault tolerance mechanisms, and DevOps. By managing more microservices or larger microservices, testing can be slower and more tedious. Moreover, the pipelines of continuous integration and deployment would be more complex. Determining the appropriate number of microservices and their impact on continuous deployment is an interesting research topic; few works address these issues. In addition, few papers use as input data or analysis units the artifacts used in agile development, such as user stories, the product backlog, release planning, the Kanban board and its data, to propose agile methods or new practices that allow for the determination or evaluation of the microservices that will be part of the application. None of the proposed works focuses on agile software development. Several interesting works have been proposed, but there are still few specific, actionable proposals; more research is needed to propose design patterns, good practices, and more complete models, methods, or tools that can be generalized to define microservices granularity considering metrics, quality attributes, and trade-offs.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.'>Research trends:</ns0:head><ns0:p>We detail the research trends according to the analyzed papers; they are summarized as follows:</ns0:p><ns0:p>• The most used techniques to define microservice granularity included machine learning clustering, semantic similarity, genetic programming, and domain engineering. • The most used research strategies were validation research and solution proposals. • The most used validation method was the case study, although some studies used experimental evaluations. We summarized the case studies found in the reviewed papers because they are valuable resources with which to validate future research and to compare new methods (see table <ns0:ref type='table'>4</ns0:ref>). The most common case studies were Kanban boards, Money Transfer, JPetStore, and Cargo Tracking, which are either hypothetical or open-source projects. • The use of metrics was evidenced to evaluate the granularity of the microservices comprising an application. Performance and coupling were the most used metrics; they help to identify microservices and their granularity more objectively.
• Migrations from monoliths to microservices have been widely studied. Methods and techniques have been proposed to decompose applications into microservices, with the source code, logs, execution traces, and even use cases used as input data. These methods are used mainly during design and development time. • Scalability and performance were the most addressed quality attributes in the reviewed papers; they are fundamental for microservice-based applications. Finally, the main reason to migrate a monolithic application to microservices is precisely to improve performance and scalability, followed by fault tolerance, maintainability, and modularity.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>Research gaps:</ns0:head><ns0:p>Research gaps allow us to propose new research works and future work; the research gaps we identified are listed as follows:</ns0:p><ns0:p>• Research works that include techniques or methods to evaluate granularity and its impact on tests, while also considering security controls, fault-tolerance mechanisms, and DevOps. • Metrics were grouped into four categories: development team, development process, microservice-based application (system), and microservice itself. Few metrics were found for the development team or the development process; more research is necessary in these groups.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>Threats to validity</ns0:head></ns0:div> <ns0:div><ns0:head n='6.1.'>External validity</ns0:head><ns0:p>We express a threat to external validity regarding the search and selection of primary papers, which may not be representative of the state of the art in the definition of the granularity of microservices. To reduce this risk, we used a systematic and well-defined process with two search strings, so that the papers obtained are representative. To define and select the papers included in our review, each of the authors made their selection and tabulation independently; the papers were then selected by common agreement and group discussion, applying the inclusion and exclusion criteria. In addition, the systematic literature review process we carried out corresponds to the classic process widely used in other reviews, proposed by Kitchenham <ns0:ref type='bibr' target='#b28'>(Kitchenham, 2004)</ns0:ref>. Our study also includes only research papers that have undergone a rigorous peer-review process, which is a well-established requirement for high-quality publications, so the selected papers may be considered representative of the state of the art of microservices granularity definition. For each paper obtained from the query strings, the reason why it was included in or excluded from the review was recorded. We did not include grey literature. By using a systematic method already established and widely used in other reviews, the replicability of our study is guaranteed, and the process was rigorously followed to reduce this threat.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.2.'>Internal validity</ns0:head><ns0:p>In order to reduce researcher bias, a pre-defined protocol was defined (see figure <ns0:ref type='figure'>1</ns0:ref>). The classification criteria for the selected papers were carefully selected and had been defined in other literature reviews; these literature reviews are explained in the related work section. We downloaded the selected papers; they were shared with all authors for review.
The papers were summarized; we detailed the contribution of each one and made the classification and analysis based on the full-text papers. We specified a paper ID, the technique used, the input data, the full paper summary and synthesis, the description of the proposal, the journal or conference where it was published, and observations and comments. We tabulated the papers using the classification criteria explained in section 4, reviewed each selected paper based on the interpretation of its contribution, and then grouped the papers. To reduce selection bias, this process was reviewed by each author independently. The threats to data synthesis and results were mitigated by having a unified classification and description scheme and following a standard protocol in which a systematic process was carried out and externally evaluated. The data extraction process was aligned with our research questions; we also applied the guidelines of a classic systematic literature review, following a research protocol, thus making our research easy to check and replicate.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This systematic literature review identified the main contributions and research gaps regarding the dimensioning and definition of the granularity of the microservices comprising an application. Methods, methodologies, and techniques to determine the granularity of microservices were identified. Microservice granularity research is at a Wild West stage: no standard definition exists, development-operation trade-offs are unclear, there is little notion of continuous granularity improvement, and conceptual reuse is scarce (e.g. few methods seem applicable or replicable in projects other than the first to use them). These gaps in granularity research offer clear options for research on continuous improvement of the development and operation of microservice-based systems. We propose defining microservice granularity first by its size or dimensions, meaning the number of operations (services) exposed by the microservice and the number of microservices that are part of the whole application, and then by its complexity and dependencies. The goal is to have low coupling, low complexity, and high cohesion of the microservices. Defining the most appropriate granularity for microservices can significantly improve performance, maintainability, scalability, network use and consumption, computational resources, and cost, because microservices are mainly deployed in the cloud. As future work, we will propose 'Microservice Backlog', a model and techniques to define and evaluate microservice granularity at design time, using metrics to evaluate the granularity. We want to develop a genetic programming technique and a semantic grouping algorithm to group the user stories of the product backlog into candidate microservices, so that the architect or development team can evaluate the candidate decomposition of the application.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1.
Inclusion and exclusion criteria</ns0:head></ns0:div> <ns0:div><ns0:head>Inclusion criteria Description</ns0:head><ns0:p>Primary research papers that make a specific proposal about the size, granularity, or decomposition of applications to microservices.</ns0:p><ns0:p>This criterion focuses on identifying primary research papers that propose or define the size or granularity of microservices, also we include migrations from monolith to microservices that carry out a proposal to decompose the monolithic application to microservices.</ns0:p><ns0:p>Papers that propose a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>The objective of the review is to identify the models, methods, methodologies, or techniques used to define the microservice granularity.</ns0:p><ns0:p>Migrations that include a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>We include migrations from monolithic applications to microservices that reason about the definition of microservice granularity, those migrations that focus on other aspects are not included.</ns0:p><ns0:p>Papers published in journals and conference proceedings in the field of software architecture, software engineering and computer science.</ns0:p><ns0:p>We focus on research papers published in international journals and conferences only in software architecture, software engineering, and computer science. We include only peer-reviewed papers. We did not include gray literature.</ns0:p></ns0:div> <ns0:div><ns0:head>Exclusion criteria Description</ns0:head><ns0:p>Tutorial, example, experience, and opinion articles. We do not include tutorials, examples, experiences, and opinion articles, because they do not correspond to primary research papers, or they do not carry out a new contribution in the definition of microservices granularity.</ns0:p><ns0:p>Survey and literature review. We exclude survey papers, and literature reviews because they are secondary research papers that list the contributions of other authors.</ns0:p><ns0:p>Use of microservices in other areas. The use of microservices architecture in other areas is evident and fundamental, for this review they were excluded because they do not directly address the problem of defining the microservice granularity.</ns0:p><ns0:p>Papers that do not include a methodology, model, technique, or method to define granularity, size, or dimension of microservices.</ns0:p><ns0:p>Articles related to the microservice architecture, which do not make a specific proposal on the definition of microservices granularity are excluded.</ns0:p><ns0:p>Papers which propose a specific method, technique or model for SOA, web services or mobile services.</ns0:p><ns0:p>The fundamentals of SOA, web services, and mobile services are different from the fundamentals of microservices architecture, so specific proposals in these topics are not included.</ns0:p><ns0:p>Literature only in the form of abstracts, blogs, or presentations.</ns0:p><ns0:p>We used full-text articles, excluding those that are only available in abstract, blog, or presentation form (not peerreviewed).</ns0:p><ns0:p>Articles not written in English or Spanish.</ns0:p><ns0:p>Only we include papers written in English or Spanish in other languages are excluded. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>selection criteria and procedures.5. Synthesis of the extracting data.1. 
Define the research questions. The research questions (RQs) covered in this systematic literature review were: RQ1: Which approaches have been proposed to define microservice granularity? RQ2: Which metrics are used to evaluate microservice granularity? RQ3: Which quality attributes are addressed when researching microservice granularity?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>&#61623;</ns0:head><ns0:label /><ns0:figDesc>in each paper: metrics, stage of the development process, technique used, and quality attributes studied or analyzed; namely: &#61623; Metrics used: Which metrics are used to define the granularity of microservices? &#61623; Development process phases: Phases of the development process on which the work focuses. Research strategies: Includes solution proposal, validation research, experience paper, opinion paper, philosophical paper, and evaluation research.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>shared a dataset composed of 20 opensource projects using specific microservice architecture patterns; and (Marquez &amp; Astudillo, 2018) shared a dataset of open source microservice-based projects when investigating actual use of architectural patterns.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>sorting genetic algorithm-II (NSGAII). That paper presented four microservice patterns, namely object association, exclusive containment, inclusive containment, and subtyping for 'greenfield' (new) development of software while demonstrating the value of the patterns for 'brownfield' (evolving) developments by identifying prospective microservices (De Alwis et PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:2:0:NEW 3 Aug 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>&#61623;</ns0:head><ns0:label /><ns0:figDesc>Few methods have been proposed to define the most adequate microservices granularity at testing or deployment time. &#61623; More research is required that uses agile development artifacts as inputs, (i.e. user stories, product backlog, release planning, Kanban boards, and their data), to propose new agile practices to define or assess microservices' granularity. None of the proposal identified in this survey focused on agile software development. PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:10:54842:2:0:NEW 3 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,527.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,349.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,252.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,333.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,525.00,351.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,525.00,369.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,229.87,525.00,324.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,178.87,525.00,354.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,339.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>1 Table 3 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Selected papers, related works on the definition of granularity of microservices. 2 Their Design Trade-offs: A self-adaptive Roadmap(Hassan &amp; Bahsoon, 2016) . Conference paper P2 Microservice Architectures for Scalability, Agility and Reliability in E-Commerce<ns0:ref type='bibr' target='#b22'>(Hasselbring &amp; Steinacker, 2017)</ns0:ref>. Microservices: A Dataflow-Driven Approach (Chen,Li &amp; Li, 2017).A Dataflow-driven Approach to Identifying Microservices from Monolithic Applications<ns0:ref type='bibr' target='#b31'>(Li et al., 2019)</ns0:ref>.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ID.</ns0:cell><ns0:cell>Paper</ns0:cell><ns0:cell>Year</ns0:cell><ns0:cell>Type</ns0:cell></ns0:row><ns0:row><ns0:cell>P1</ns0:cell><ns0:cell cols='3'>Microservices and Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P3 P4</ns0:cell><ns0:cell>From Monolith to Microservices: Lessons Learned on an Industrial Migration to a Web Oriented Architecture (Gouigoux &amp; Tamzalit, 2017). Microservices: Granularity vs. Performance (Shadija, Rezai &amp; Hill, 2017).</ns0:cell><ns0:cell /><ns0:cell>Conference paper Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P5 P6 P7 P8 P9 P10 P11 P12 P13 P14</ns0:cell><ns0:cell cols='3'>Microservice Ambients: An Architectural Meta-Modelling Approach for Microservice Granularity (Hassan, Ali &amp; Bahsoon, 2017). Microservices Identification Through Interface Analysis (Baresi, Garriga &amp; De Renzis, 2017). Partitioning Microservices: A domain engineering approach (Jos&#233;lyne et al., 2018). A Case Study on Measuring the Size of Microservices (Vural, Koyuncu &amp; Misra, 2018) Identifying Microservices Using Functional Decomposition (Tyszberowicz et al., 2018). Unsupervised Learning Approach for Web Application Auto-decomposition into Microservices (Abdullah, Iqbal &amp; Erradi, 2019). Requirements Reconciliation for Scalable and Secure Microservice (De)composition (Ahmadvand &amp; Ibrahim, 2016). Function-Splitting Heuristics for Discovery of Microservices in Enterprise Systems (De Alwis et al., 2018). 
Extraction of Microservices from Monolithic Software Architectures (Mazlami, Cito &amp; Leitner, 2017). From Monolith to Conference paper Conference paper Conference paper Conference paper Conference paper Conference paper Journal paper Conference paper Conference paper Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P15</ns0:cell><ns0:cell>From Monolithic Systems to Microservices: A Decomposition Framework</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Based on Process Mining (Taibi &amp; Syst, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P16</ns0:cell><ns0:cell>Service Candidate Identification from Monolithic Systems Based on</ns0:cell><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Execution Traces (Jin et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P17</ns0:cell><ns0:cell>The ENTICE Approach to Decompose Monolithic Services into</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Microservices (Kecskemeti, Marosi &amp; Kertesz, 2016).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Towards a Methodology to Form Microservices from Monolithic Ones</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Kecskemeti, Kertesz &amp; Marosi, 2017).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P18</ns0:cell><ns0:cell>Refactoring Orchestrated Web Services into Microservices Using</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Decomposition Pattern (Tusjunt &amp; Vatanawood, 2018).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P19</ns0:cell><ns0:cell>A logical architecture design method for microservices architectures (Santos</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P20</ns0:cell><ns0:cell>A New Decomposition Method for Designing Microservices (Al-Debagy &amp;</ns0:cell><ns0:cell /><ns0:cell>Journal paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Martinek, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P21</ns0:cell><ns0:cell>Business Object Centric Microservices Patterns (De Alwis et al., 2019).</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell>P22</ns0:cell><ns0:cell>From a Monolith to a Microservices Architecture: An Approach Based on</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Transactional Contexts (Nunes, Santos &amp; Rito Silva, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P23</ns0:cell><ns0:cell>Granularity Cost Analysis for Function Block as a Service (Homay et al.,</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P24</ns0:cell><ns0:cell>MicroValid: A Validation Framework for Automatically Decomposed</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Microservices (Cojocaru, Uta &amp; Oprescu, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P25</ns0:cell><ns0:cell>Migration of Software Components 
to Microservices: Matching and</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Synthesis (Christoforou, Odysseos &amp; Andreou, 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P26</ns0:cell><ns0:cell>Microservice Decomposition via Static and Dynamic Analysis of the</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Monolith (Krause et al., 2020).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P27</ns0:cell><ns0:cell>Towards Automated Microservices Extraction Using Multi-objective</ns0:cell><ns0:cell /><ns0:cell>Conference paper</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Evolutionary Search (Saidani et al., 2019).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>P28</ns0:cell><ns0:cell>Extracting Microservices' Candidates from Monolithic Applications:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Interface Analysis and Evaluation Metrics Approach (Al-Debagy &amp; Martinek,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>2020).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:2:0:NEW 3 Aug 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='1'>IEEE Xplore: https://ieeexplore.ieee.org/ 2 ACM Digital Library: https://dl.acm.org/ 3 Scopus: https://www.scopus.com/search/form.uri?display=basic PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:2:0:NEW 3 Aug 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54842:2:0:NEW 3 Aug 2021)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"3 August 2021 Dear Editors and Reviewers PeerJ Computer Science Thank you for the comments and for the opportunity to address them in the paper. We send you the answer to the reviewers' comments and my corrected manuscript. We address the issues reported by the reviewers. I appreciate the opportunity to present my work at this prestigious journal, the suggestions and corrections are very important to improve our research, we hope that the paper meets the expectations and can be published. Please, let us know about your final decision and additional comments. Thanks. Best regard, Dr. Fredy Humberto Vera Rivera Docente Ingeniería de Sistemas e Informática Universidad Francisco de Paula Santander. Correo Electrónico: fredyhumbertovera@ufps.edu.co, freve9@hotmail.com. Phone: +57 301-6079412 City: Cúcuta – Colombia REVIEWER 3 • Line 216: 'this is the first study focus specifically...' => 'this is the first study that focuses specifically...' • Line 221: '...introduced by.' => please remove the period. • Use of references as subjects of sentences, particularly in Section 5.2. For example: ◦ Line 491: '(Bogner, Wagner & Zimmermann, 2017b) performed...' => 'Bogner, Wagner, and Zimmermann (2017b) performed...' ◦ Line 514: '(Ahmadvand & Ibrahim, 2016) said... '=> 'Ahmadvand and Ibrahim (2016) said...' ◦ Line 649: 'According to (Cojocaru, Uta & Oprescu, 2019),' => 'According to Cojocaru, Uta, and Oprescu (2019)' • Please read that section through and fix that problem, since it happened so many times. • Line 720: there is a broken sentence: 'It is the' Answer: We review the sentence (Line 216) and check the grammar of the entire article. We remove the period '...introduced by.' (line 221) We correct the citations as the reviewer requested, the section 5.3 (line 458), and we review all paper and fix the problem. We delete the broken sentence: 'It is the' (line 735). "
Here is a paper. Please give your review comments after reading it.
228
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations.</ns0:p><ns0:p>Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n=74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n=2478), we found that image features generated from the Generalized Pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations.</ns0:p><ns0:p>Here, we show the result of a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In taxonomic classification of forensically important flies from the Sarcophagidae and the Calliphoridae family (n=74), we found the GUIDE random forests model was able to completely classify samples from 15 different species correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n=2478), we found that image features generated from the Generalized Pseudo-Zernike moments and the Krawtchouk moments only enabled the GUIDE kernel model to achieve modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model that has mean sensitivity of 85%, mean specificity of 61%, and mean accuracy of 70%. 
We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variations are not overly complex. For more complicated and heterogeneous patterns of biological variations such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating prediction result using them with carefully selected expert features may still produce reasonably good prediction models.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Image analysis, the extraction of information from digital pictures by quantitative means, is a powerful way to study biological variation without directly interacting with the physical object that is imaged. After suitable image processing steps such as zooming or reduction, denoising, and segmentation, the pattern of shape variation in an image, which is represented by a matrix of pixel values, can be extracted using suitable feature extraction methods <ns0:ref type='bibr' target='#b23'>(Gonzalez and Woods, 2002)</ns0:ref>. To be practically useful, such methods need to be invariant to translation, rotation, and scaling.</ns0:p></ns0:div> <ns0:div><ns0:p>Moment invariants, which are abstract representations of shape that satisfy the three properties of translation, rotation and scale invariance, were first proposed by <ns0:ref type='bibr' target='#b26'>Hu (1962)</ns0:ref>. These moments can serve as image features for statistical (machine learning) algorithms. Subsequently, <ns0:ref type='bibr' target='#b75'>Teague (1980)</ns0:ref> showed that continuous orthogonal moments based on orthogonal polynomials such as the Legendre polynomials (see <ns0:ref type='bibr' target='#b72'>Szegö (1975)</ns0:ref>) and the Zernike polynomials <ns0:ref type='bibr' target='#b81'>(Zernike, 1934)</ns0:ref> enable approximate image reconstruction. Examples of other continuous orthogonal moments include the pseudo-Zernike moments <ns0:ref type='bibr' target='#b76'>(Teh and Chin, 1988)</ns0:ref>, the Gegenbauer moments <ns0:ref type='bibr' target='#b43'>(Liao et al., 2002)</ns0:ref>, and the generalized pseudo-Zernike moments (GPZM; <ns0:ref type='bibr' target='#b78'>Xia et al. (2007)</ns0:ref>).</ns0:p><ns0:p>Discrete orthogonal moments based on the Chebyshev polynomials (see <ns0:ref type='bibr' target='#b72'>Szegö (1975)</ns0:ref>) were introduced to overcome the problem of computational complexity of continuous orthogonal moments, and allow exact image reconstruction <ns0:ref type='bibr' target='#b80'>(Yap et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b51'>Mukundan et al., 2001)</ns0:ref>. An important member of this class of discrete orthogonal moments is the Krawtchouk moments (KM), which are unique for being able to extract local features in images <ns0:ref type='bibr' target='#b79'>(Yap et al., 2003)</ns0:ref>. Other members include the Hahn moments <ns0:ref type='bibr' target='#b82'>(Zhou et al., 2005)</ns0:ref>, dual Hahn moments <ns0:ref type='bibr' target='#b83'>(Zhu et al., 2007)</ns0:ref> and the Racah moments <ns0:ref type='bibr' target='#b83'>(Zhu et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In applications, orthogonal moments are widely used for non-trivial image analysis tasks, such as the identification of written alphabets in different languages (e.g.
<ns0:ref type='bibr' target='#b44'>Liao and Pawlak (1995)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>Bailey and Srinath (1996)</ns0:ref>). In biology, they have been used in the analysis of complex biological images, for tasks like classification of cellular subtypes <ns0:ref type='bibr' target='#b59'>(Ryabchykov et al., 2016)</ns0:ref>, bacteria strains <ns0:ref type='bibr' target='#b12'>(Bayraktar et al., 2006)</ns0:ref>, ophthalmic pathologies <ns0:ref type='bibr' target='#b2'>(Adapa et al., 2020)</ns0:ref>, cancer cell phenotypes <ns0:ref type='bibr' target='#b6'>(Alizadeh et al., 2016)</ns0:ref>, breast cancer phenotypes <ns0:ref type='bibr' target='#b73'>(Tahmasbi et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b52'>Narv&#225;ez and Romero, 2012;</ns0:ref><ns0:ref type='bibr' target='#b61'>Saki et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b21'>Cordeiro et al., 2016)</ns0:ref>, fingerprint identification <ns0:ref type='bibr' target='#b27'>(Kaur and Pannu, 2019)</ns0:ref>, and facial recognition <ns0:ref type='bibr' target='#b3'>(Akhmedova and Liao, 2019)</ns0:ref>.</ns0:p><ns0:p>Presently, the ease of acquiring image data from biology and medicine has created the possibility of mimicking human expert classification decisions using a purely data-driven approach via machine learning models. While state-of-the-art deep learning algorithms <ns0:ref type='bibr' target='#b36'>(LeCun et al., 2015)</ns0:ref>, which use image pixel data directly from images, are currently in vogue for image-based machine learning applications, they are not suitable for initial exploratory work where data are limited. In addition, technical and infrastructural know-how to properly execute and interpret results from deep learning algorithms is a substantial barrier for the diffusion of deep learning to many areas in biology and medicine.</ns0:p><ns0:p>Might the method of orthogonal moments become increasingly redundant in biological image analysis against a background of unrelenting shift towards deep learning methods? To better understand this situation, we performed a case study to assess the usefulness of KM and GPZM as image features in classification problems involving biological images. In this paper, we address two classification problems in biology using images of varying degree of complexity. The first problem concerns fly species identification using patterns of wing venation. Specifically, we aim to contrast the quality of classifying fly species using KM features extracted from wing image data compared to using landmark data from standard geometric morphometric approach. The second problem concerns breast mass classification using information from digital mammograms. Along with several expert features, we explore how global features extracted from GPZM, and local features extracted from KM help improve classification of benign and malignant breast masses. Here, we consider variations in wing venation patterns to be relatively simple compared to variations in breast mass patterns, which are highly heterogeneous <ns0:ref type='bibr' target='#b4'>(Aleskandarany et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In the following subsections, we provide the background of the two problems.</ns0:p></ns0:div> <ns0:div><ns0:head>Problem 1: Fly wing venation patterns for species identification</ns0:head><ns0:p>Generally, identifying a biological specimen down to the species level with certainty requires a certain level of taxonomic expertise. 
The taxonomist examines morphological characteristics of the specimen physically, and applies expert judgement to classify the specimen. This process is often slow and expensive.</ns0:p><ns0:p>Additionally, taxonomists may also be increasingly hard to find in the future, as the number of permanent positions stagnate or shrink as a consequence of lack of funding and training at the tertiary level <ns0:ref type='bibr' target='#b16'>(Britz et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Traditional morphometric analysis <ns0:ref type='bibr' target='#b50'>(Marcus, 1990)</ns0:ref>, which mainly captures size variation, or geometric morphometric analysis <ns0:ref type='bibr' target='#b15'>(Bookstein, 1991;</ns0:ref><ns0:ref type='bibr'>Adams et al., 2013)</ns0:ref>, which captures shape variation, are possible quantitative methods that potentially allow a data-driven approach to species identification. Landmarkbased geometric morphometrics relies on using homologous landmarks, which can be unambiguously identified on an image. However, depending on the organism of interest, it is possible that few or no homologous landmarks are available, despite the fact that biological shape variation is apparent (e.g. cellular shape, claw shape) to the human observer.</ns0:p><ns0:p>Species identification by analysis of wing venation patterns often leads to correct identification at the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>species or even subpopulation level because the main source of variation in wing venation patterns is evolutionary divergence between taxa, with only rare and incomplete secondary convergence <ns0:ref type='bibr' target='#b56'>(Perrard et al., 2014)</ns0:ref>. Currently, there is persistent interest in applying geometric morphometric analysis of wing venation patterns as a basis for identifying forensically important flies (e.g. <ns0:ref type='bibr' target='#b70'>Sontigun et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b69'>Sontigun et al. (2019)</ns0:ref>). Recently, <ns0:ref type='bibr' target='#b29'>Khang et al. (2021)</ns0:ref> provided proof-of-concept that species identity of forensically important flies predicted using wing venation geometric morphometric data with random forests are highly concordant with those inferred from DNA sequence data. Since analysis of whole wing image is likely to yield higher resolution data, we hypothesize that this will yield improved species prediction performance compared to using geometric morphometric landmark data which is relatively low resolution. Indeed, <ns0:ref type='bibr' target='#b49'>Macleod et al. (2018)</ns0:ref> reported encouraging results from an image analysis of fly wing venation patterns using pixel brightness as features. However, their method requires the use of undamaged wings and a standardized protocol to minimize imaging artefacts arising from slide preparation (e.g. bubbles, lighting variation). 
Therefore, capturing information in image pixel data as translation, rotation and scale-invariant features may improve usability of images without the need to apply a rigid imaging protocol.</ns0:p><ns0:p>Here, we do not consider comparison against Elliptic Fourier Analysis ( <ns0:ref type='bibr' target='#b35'>Kuhl and Giardina, 1982)</ns0:ref> another shape analysis method, since it is used for shapes that are closed contours, which wing venation patterns are not.</ns0:p></ns0:div> <ns0:div><ns0:head>Problem 2: Breast mass classification</ns0:head><ns0:p>The classification of breast masses based solely on digital mammograms is a challenging problem, owing to the heterogeneous morphology of breast masses <ns0:ref type='bibr' target='#b4'>(Aleskandarany et al., 2018)</ns0:ref>. Several researchers who used orthogonal moments to construct image features for the classification of benign and malignant masses reported encouraging findings <ns0:ref type='bibr' target='#b73'>(Tahmasbi et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b52'>Narv&#225;ez and Romero, 2012)</ns0:ref> . Current state-of-the-art deep learning approach to image analysis of breast cancer mammograms gave highly optimistic results. <ns0:ref type='bibr' target='#b62'>Shen et al. (2019)</ns0:ref> reported sensitivity of 86.7%, and specificity of 96.1% in the classification of benign and malignant breast masses, using 2478 images in the CBIS-DDSM database (training set size = 1903; validation set size = 199; test set size =376; <ns0:ref type='bibr' target='#b20'>(Clark et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lee et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b37'>Lee et al., , 2017))</ns0:ref>). Nevertheless, the opacity and plasticity of powerful black-box methods such as deep learning pose challenges to its formal adoption in medical practice <ns0:ref type='bibr' target='#b53'>(Nicholson Price, 2018)</ns0:ref>. In the end, combining the complementary strengths of features derived from human expert judgement and those from statistical learning models seems to be the most convincing approach <ns0:ref type='bibr' target='#b22'>(Gennatas et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Here, we hypothesize that integrating the result of statistical learning outcome from image analysis using orthogonal moments with relevant expert features may ameliorate performance deficiencies based solely on image analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Fly wing images</ns0:head><ns0:p>For the problem of fly species identification using images of wing venation patterns, we used species from two forensically important fly families: Sarcophagidae, and Calliphoridae <ns0:ref type='bibr' target='#b7'>(Amendt et al., 2011)</ns0:ref>. Images of wings of male specimens that are of sufficiently good quality for image analysis, and their associated geometric morphometric data from 19 landmarks (Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>) were taken from <ns0:ref type='bibr' target='#b28'>Khang et al. 
(2020)</ns0:ref>.</ns0:p><ns0:p>The samples were taken from male flies from the Calliphoridae family (seven species), namely </ns0:p></ns0:div> <ns0:div><ns0:head>Breast Cancer Images and Associated Expert Features</ns0:head><ns0:p>The DDSM (Digital Database for Screening Mammography) database <ns0:ref type='bibr' target='#b24'>(Heath et al., 1998</ns0:ref><ns0:ref type='bibr' target='#b25'>(Heath et al., , 2000) )</ns0:ref> <ns0:ref type='table'>2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science 2620 scanned film mammograms, the quality of annotations in the images varied. Examples of errors include wrongly annotated images and lesion outlines that do not form precise mass boundary <ns0:ref type='bibr' target='#b37'>(Lee et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b68'>Song et al., 2009)</ns0:ref>. Inclusion of such poor quality images in the training phase of a statistical learning model can weaken model generalizability.</ns0:p><ns0:p>To overcome this problem, a curated subset of images in the DDSM database, known as the CBIS-DDSM (Curated Breast Imaging subset of DDSM) collection <ns0:ref type='bibr' target='#b20'>(Clark et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lee et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b37'>Lee et al., , 2017) )</ns0:ref> was created. Images in this database consist of selected mammograms that have been segmented using an automated segmentation algorithm. The segmented images were evaluated by comparing outlines of mass lesion images with hand-drawn outlines made by a trained radiologist. The CBIS-DDSM collection comprises scanned filmed mammography from 1566 participants. A patient could have more than one type of lesions (e.g. mass, calcification) in a single mammogram. The images were decompressed and converted to the DICOM format containing updated region of interest (ROI) segmentation, bounding boxes, and pathologic diagnosis. Moreover, a total of 3568 focused images are also available in this database to cater to studies that do not require the use of full mammogram images but only a focused region of abnormalities. We used 1151 images from the CBIS-DDSM collection, of which 48% (556/1151) were of benign class, and 52% (595/1151) were of malignant class. The images were downloaded from the CBIS-DDSM database in the PNG file format.</ns0:p><ns0:p>Each mammogram image is further associated with five expert features: (i) BI-RADS assessment;</ns0:p><ns0:p>(ii) mass shape; (iii) mass margin; (iv) breast density; (v) subtlety rating. The Breast Imaging-Reporting and Data System (BI-RADS) provides a standard for reporting breast examination results based on mammography, ultrasound, and magnetic resonance imaging data. First published in 1992 (American College of Radiology, 1992), BI-RADS has become a standard communication tool for mammography reports globally <ns0:ref type='bibr' target='#b11'>(Balleyguier et al., 2007)</ns0:ref>, and is now in its fifth edition <ns0:ref type='bibr' target='#b64'>(Sickles et al., 2013)</ns0:ref>. 
By standardizing the reporting of mammography results, BI-RADS facilitates communication among radiologists and clinicians and aids the training and education of junior radiologists in developing countries <ns0:ref type='bibr' target='#b40'>(Lehman et al., 2001)</ns0:ref>.</ns0:p><ns0:p>There are 7 categories in the BI-RADS assessment that can be assigned based on evaluation of the lexicon descriptors or the biopsy findings of a lesion <ns0:ref type='bibr' target='#b64'>(Sickles et al., 2013)</ns0:ref>. Category 0 indicates that materials are insufficient for evaluation and additional imaging evaluation or prior mammograms for comparison are required. Category 1 is given when no anomalies found. Category 2 is given when there is evidence of benign tumours such as skin calcifications, metallic foreign bodies, fat-containing lesions and involuting calcified fibroadenomas. If radiologists are unsure of the lesion categorization, a BI-RADS category of 3 is given, and a follow-up over a certain interval of time is done to determine stability of the lesion. The risk of malignancy in this category is considered to be at most 2%. Category of 4 is assigned <ns0:ref type='table'>2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>when malignant tumours are suspected. Three subcategories are possible: a, b, and c. The subcategory (a) reflects a subjective probability of malignancy between 2% to 10%. Subcategory (b) reflects a subjective probability of 10% to 50% for malignancy, while subcategory (c) reflects a subjective probability of malignancy ranging from 50% to 95%. Category 5 reflects a strong belief (probability of 95% or more) that a lesion is malignant. When a lesion placed in this category is contradicted by a benign biopsy report, a surgical consultation may still be advised. Finally, a BI-RADS category of 6 is given when a lesion receives confirmation of malignancy from the biopsy result.</ns0:p><ns0:p>A breast mass has two important aspects -shape and margin. BI-RADS lexical descriptors of mass shape include oval, irregular, lobulated, etc. The mass margin describes the shape of the edges of a mass, such as being circumscribed, ill-defined, or spiculated. Breast density <ns0:ref type='bibr' target='#b64'>(Sickles et al., 2013</ns0:ref>) is a categorical variable with four categories. Category 1 describes breast composition that is almost entirely fatty. Category 2 indicates the presence of scattered fibroglandular densities. Category 3 indicates breast that is heterogeneously dense. Category 4 indicates breast that is extremely dense, which lowers sensitivity of mammography. Finally, the subtlety rating, which is not part of the BI-RADS standards, is an ordinal variable on a scale of 1 to 5 representing the difficulty in viewing the abnormality in a mammogram <ns0:ref type='bibr' target='#b37'>(Lee et al., 2017)</ns0:ref>. The scale ranges from 1 for 'subtle' to 5 for 'obvious'.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Processing</ns0:head></ns0:div> <ns0:div><ns0:head>Fly Wing Images</ns0:head><ns0:p>For the fly images, the images could not be used directly because of the presence of non-biological variation such as damaged wing membranes, lighting variation, and presence of bubbles in the slides. We processed the images by converting them into binary images using Pinetools (https://pinetools.com/threshold-image), with a focus on retaining the pattern of venation on the wing. 
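As an illustration of this thresholding step, a minimal offline R sketch (using the png package rather than the Pinetools web tool that was actually used) might look as follows; the file name and the cut-off value are hypothetical placeholders.

```r
## Convert a grayscale wing image to a binary image by simple thresholding.
## "wing.png" and the cut-off of 0.5 are illustrative placeholders only.
library(png)

img <- readPNG("wing.png")                    # pixel intensities in [0, 1]
if (length(dim(img)) == 3) {                  # collapse RGB(A) channels to grayscale
  img <- (img[, , 1] + img[, , 2] + img[, , 3]) / 3
}
cutoff <- 0.5                                 # tune per image so the venation pattern is retained
binary <- ifelse(img > cutoff, 1, 0)          # pixels above the cut-off become white, the rest black

writePNG(binary, "wing_binary.png")
```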
Subsequently, we denoised the binary images manually. All images were cropped to a uniform size of 724 pixels × 254 pixels. The final images were stored in PNG format.</ns0:p></ns0:div> <ns0:div><ns0:head>Breast Cancer Images</ns0:head><ns0:p>Regions of interest associated with the CBIS-DDSM breast cancer images were resized to a uniform size of 300 pixels × 300 pixels, and normalized using the EBImage R package (Version 3.0.3; <ns0:ref type='bibr' target='#b55'>Pau et al. (2010)</ns0:ref>). Subsequently, we enhanced the contrast in the ROI of images using the histogram equalization method <ns0:ref type='bibr' target='#b23'>(Gonzalez and Woods, 2002)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature extraction</ns0:head></ns0:div> <ns0:div><ns0:head>Fly wing images</ns0:head><ns0:p>To use geometric morphometric data from the wing images, raw coordinate data from the 19 landmarks were processed using Generalized Procrustes Analysis in the geomorph R package (Version 3.3.2; Adams and Otarola-Castillo (<ns0:ref type='formula'>2013</ns0:ref>)) to produce the translation, rotation, and scale-invariant Procrustes coordinates.</ns0:p><ns0:p>To remove the effect of allometry, we used the residuals produced from linear regression of the Procrustes coordinates against the logarithm (base 10) of centroid size <ns0:ref type='bibr' target='#b67'>(Sidlauskas et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Klingenberg (2016)</ns0:ref>).</ns0:p><ns0:p>To remove correlation between the Procrustes coordinates, we applied R-mode principal component analysis (PCA), and kept the first 15 principal components that cumulatively explain 98.7% of the total variation in the data.</ns0:p><ns0:p>For image analysis, since the fly wing images are rectangular, we used higher order moments to ensure that the reconstruction captured image details distal from the image centroid (Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>). We found that Krawtchouk moment invariants (see Appendix) of order 200 were appropriate for extracting image features from the binary images of the fly wings. Thus, 40000 moment invariant features were obtained. Since the range of values for these features was generally large, we scaled these features using the Z-score, and then applied Q-mode PCA to reduce the dimension of the feature space. The first 60 principal components accounting for 92.6% of total variance were used as features for downstream statistical learning work.</ns0:p></ns0:div> <ns0:div><ns0:head>Breast cancer images</ns0:head><ns0:p>For the breast cancer images, we used KM of order 163. For GPZM, we used order 126, and set α = 0.</ns0:p><ns0:p>The choice of α value was guided by results in <ns0:ref type='bibr' target='#b78'>Xia et al. (2007)</ns0:ref>, which showed that reconstruction error tended to be relatively smaller for α = 0 compared to larger values of α, under the assumption of the presence of low-level noise in the images.</ns0:p><ns0:p>GPZM and KM moments from order 1 to order 300 were computed for the images. For each order, we reconstructed the images and calculated the mean squared error (MSE) for each image.
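To make the moment computation and the reconstruction-error criterion concrete, the following minimal R sketch implements the weighted Krawtchouk polynomials from the Appendix definitions, computes the moment matrix, reconstructs the image, and reports the MSE. It is a numerically naive illustration on a small toy image with p = 0.5 (an assumed parameter value); the high orders used in this study would require a numerically stable recurrence or a dedicated implementation such as the IM package.

```r
## Weighted Krawtchouk moments, reconstruction and reconstruction MSE (toy example).
poch <- function(a, k) if (k == 0) 1 else prod(a + 0:(k - 1))   # Pochhammer symbol (a)_k

# Krawtchouk polynomial k_n(x; p, Nn) as a terminating hypergeometric series
kraw <- function(n, x, p, Nn) {
  sum(sapply(0:n, function(k)
    poch(-n, k) * poch(-x, k) / poch(-Nn, k) / factorial(k) * (1 / p)^k))
}

# Matrix of weighted (orthonormal) Krawtchouk polynomials: rows n = 0..(ord-1), columns x = 0..(N-1)
kraw_matrix <- function(ord, N, p = 0.5) {
  Nn  <- N - 1
  w   <- dbinom(0:Nn, Nn, p)                                     # binomial weight omega(x)
  rho <- sapply(0:(ord - 1), function(n)                         # squared norm of k_n under omega
    (-1)^n * ((1 - p) / p)^n * factorial(n) / poch(-Nn, n))
  K <- outer(0:(ord - 1), 0:Nn, Vectorize(function(n, x) kraw(n, x, p, Nn)))
  K * sqrt(outer(1 / rho, w))                                    # apply weights and normalization
}

set.seed(1)
A  <- matrix(runif(32 * 32), 32, 32)      # toy "image"; real images would be read from PNG files
K1 <- kraw_matrix(20, nrow(A))
K2 <- kraw_matrix(20, ncol(A))

Q     <- K1 %*% A %*% t(K2)               # Krawtchouk moment matrix, Q = K1 A K2^T
A_hat <- t(K1) %*% Q %*% K2               # reconstruction from the retained moments
mse   <- mean((A - A_hat)^2)              # reconstruction error used to compare orders
mse
```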
The MSE is obtained by averaging the squared pixel-wise differences between the reconstructed image and the original image over all pixels of the image.</ns0:p><ns0:p>For KM, order 163 was used as it produced the lowest MSE, with the mean and standard deviation of the reconstruction MSE across images being 0.0128 and 0.0030, respectively. For GPZM, the order producing the lowest MSE ranged from 11 to 238 across the 8 images, with a mean MSE of 0.0028 and a standard deviation of 0.0044. Order 126 was the mean of the orders producing the lowest MSE among the 8 images. The mean and standard deviation of the reconstruction MSE with order 126 were 0.0040 and 0.0060, respectively. Plots of the MSE for orders 1 to 300 for each of the 8 images are given in Supplemental File 1.</ns0:p><ns0:p>The top 1% of features with the largest magnitude of t-statistic values were selected as the feature vector. We then applied Q-mode PCA on these selected features, and used the first k principal components that explained about 95% of the total variance.</ns0:p></ns0:div> <ns0:div><ns0:head>Study design and analysis</ns0:head></ns0:div> <ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>For Problem 2, consider an n × p input data matrix that has been subjected to Q-mode PCA. This produces W^T, the p × n matrix of principal component loadings. With p > n, as is the case with features extracted using orthogonal moments, the matrix of principal component scores V is of dimension n × n. A partial principal component scores matrix V_β accounting for β proportion of total variation in the training samples is the n × k submatrix obtained from V by taking the first k columns of V. Applying Fisher's linear discriminant analysis using V_β, we then obtain the k × (s − 1) matrix of weights A for the s − 1 linear discriminants, where s is the number of classes.</ns0:p><ns0:p>Given an n_test × p matrix of test samples X_test, we first center the test samples, X_test,centered = X_test − (µ_train, . . . , µ_train)^T, where µ_train^T is the 1 × p vector of means of the p variables in the training set. Then, we map the test samples into the principal component space of the training samples using the matrix operation V_test = X_test,centered W^T. Thereafter, we obtain the partial principal component scores matrix V_test,β, which is of dimension n_test × k, and finally map the test samples into the linear discriminant space of the training samples using the matrix product V_test,β A^T, which is of dimension n_test × (s − 1). We used the latter as input for training.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistical learning model</ns0:head><ns0:p>For classification, we used a kernel discriminant model in the GUIDE (Generalized, Unbiased, Interaction Detection and Estimation) classification and regression tree program <ns0:ref type='bibr' target='#b45'>(Loh, 2009;</ns0:ref><ns0:ref type='bibr' target='#b47'>Loh, 2014)</ns0:ref>.
The kernel method is a non-parametric method that estimates a Gaussian kernel density <ns0:ref type='bibr'>(Silverman, 1986)</ns0:ref> for each class in a node, and uses the estimated densities in a maximum likelihood framework for classification.</ns0:p><ns0:p>The tree complexity parameter k-SE in GUIDE was set at the default value of 0.5, with the number of cross-validated trees set at 10.</ns0:p><ns0:p>For Problem 1, it was not feasible to split the samples into another test set for assessing generalization error, because the average sample size per class was already small (about 5). Hence, we applied the random forests ensemble classifier (2001 trees) and obtained the out-of-bag error estimate for generalization error.</ns0:p><ns0:p>For the CBIS-DDSM data set in Problem 2, we randomly chose 70% of the data set for training, and the remainder 30% for testing. A total of 10 such instances were made to study variation in performance metrics. Subsequently, the predicted probability of malignancy (P(mal)) was used as a feature together with subsets of the expert features to build another machine learning model. For this, we considered four models: (I) expert features with BI-RADS assessment and P(mal) from image analysis;</ns0:p><ns0:p>(II) expert featureswith BI-RADS assessment, without P(mal); (III) expert features with BI-RADS assessment replaced by P(mal) from image analysis; (IV) expert features without BI-RADS assessment.</ns0:p><ns0:p>To understand the relative contribution of the set of variables in the four models, we computed variable importance scores for each variable in Models I to IV using GUIDE, following the method in <ns0:ref type='bibr' target='#b46'>Loh (2012)</ns0:ref>.</ns0:p><ns0:p>We ranked the importance scores of each variable in the four models (1 being most important) and then reported the mean and standard deviation of the ranks.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>We used standard metrics for the evaluation of classifier performance. Accuracy was defined as the probability that the predicted class was the same as the true class. For multi-class prediction in the fly Manuscript to be reviewed</ns0:p><ns0:p>Computer Science images, we did not use sensitivity or specificity, as contextually no particular species is of any special interest. Hence, sensitivity and specificity were considered only in the breast cancer classification. There, sensitivity is defined as the probability of predicting the malignant class, given that a sample is malignant.</ns0:p><ns0:p>Specificity is defined as the probability of predicting the benign class, given that a sample is benign.</ns0:p><ns0:p>In the case of the fly images, we used the Bayesian posterior mean of accuracy with uniform prior (see Appendix), and reported the 95% Bayesian credible interval <ns0:ref type='bibr' target='#b17'>(Brown et al., 2001)</ns0:ref>. For the breast cancer images, we reported the mean of accuracy, sensitivity, and specificity from the 10 random instances, along with their associated standard error estimate (sample standard deviation / &#8730; 10).</ns0:p></ns0:div> <ns0:div><ns0:head>Software and computation</ns0:head><ns0:p>For image analysis, we used R version 3.6.1 (R Core Team, 2018) to perform the computations and run the IM R package <ns0:ref type='bibr' target='#b58'>(Rajwa et al., 2013)</ns0:ref>. 
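The projection described under Data analysis above (Q-mode PCA on the training features, Fisher's linear discriminant analysis on the leading components, and mapping of the centered test samples into the training discriminant space) can be sketched with base R and the MASS package as follows. The objects train_x, test_x and train_y are hypothetical placeholders for the selected moment features and class labels, the prior t-statistic screening step is omitted, and this is an illustration rather than the authors' GUIDE workflow.

```r
## Sketch: PCA on training features, Fisher's LDA, and projection of test samples.
library(MASS)

pca <- prcomp(train_x, center = TRUE)                  # Q-mode PCA on the training set
prop_var <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
k <- which(prop_var >= 0.95)[1]                        # components explaining ~95% of variance

V_beta   <- pca$x[, 1:k, drop = FALSE]                 # partial PC scores of the training samples
lda_fit  <- lda(V_beta, grouping = train_y)            # Fisher's linear discriminant analysis
train_ld <- V_beta %*% lda_fit$scaling                 # training samples in discriminant space

# Center the test samples with the training means, then map into PC and discriminant space
test_centered <- sweep(test_x, 2, colMeans(train_x))
V_test_beta   <- test_centered %*% pca$rotation[, 1:k, drop = FALSE]
test_ld       <- V_test_beta %*% lda_fit$scaling       # used as classifier input features
```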
Data processing and analyses of the CBIS-DDSM samples and fly wing images were done using a 22 CPU core, 23 GB RAM server running on Ubuntu 16.04.4 LTS at the Data Intensive Computing Centre, University of Malaya, Malaysia. For classification using decision tree with kernel model and random forests, we used the GUIDE program (Version 35.2; <ns0:ref type='bibr' target='#b45'>Loh (2009</ns0:ref><ns0:ref type='bibr' target='#b47'>Loh ( , 2014))</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Quality of Moment Feature Representation of Images</ns0:head><ns0:p>For the fly wings, images reconstructed from KM of order 200 approximated the binary images very well (Fig. <ns0:ref type='figure' target='#fig_5'>2C</ns0:ref>). Similarly, images reconstructed from KM (order 163) and GPZM (&#945; = 0) for the breast images also approximated of the ROI well. An example showing a malignant mass is given in Fig. <ns0:ref type='figure'>3</ns0:ref>. For additional examples, see Supplemental File 2. </ns0:p></ns0:div> <ns0:div><ns0:head>Classification of benign and malignant breast masses 323</ns0:head><ns0:p>The mean classification accuracy based solely on image data was about 57% &#177; 1%, with mean sensitivity 324 about 70% &#177; 1%, and mean specificity of 43% &#177; 2%. Baseline prediction accuracy using majority class 325 was 52%. Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> shows, for a particular training-testing instance, the estimated bivariate Gaussian Manuscript to be reviewed</ns0:p><ns0:p>Computer Science samples superimposed on the plot.</ns0:p><ns0:p>When expert features were used together with P(mal) as a feature, we observed substantial increase of mean accuracy (from 57% to between 68% and 75%; Table <ns0:ref type='table'>1</ns0:ref>), mean sensitivity (from 70% to between 80% and 97%), and mean specificity (from 43% to between 51% and 61%, excluding Model IV). Using the set of expert features without P(mal) (Model II) produced the best mean accuracy (75% &#177; 1%) and best mean sensitivity (97% &#177; 0%). Manuscript to be reviewed</ns0:p><ns0:p>Computer Science sensitivity and model specificity by decreasing the former but increasing the latter.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Accuracy Sensitivity Specificity</ns0:head><ns0:formula xml:id='formula_0'>I 70 &#177; 1 85 &#177; 1 61 &#177; 2 II 75 &#177; 1 97 &#177; 0 51 &#177; 1 III 68 &#177; 2 80 &#177; 1 54 &#177; 3 IV 69 &#177; 1 95 &#177; 1 40 &#177; 3</ns0:formula><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Mean accuracy, sensitivity, and specificity (&#177; standard error) of Models I to IV, in percentages.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows summary statistics of feature importance (mean &#177; standard deviation) for each of the four models. Across all four models, BI-RADS assessment (where used) and mass margin were consistently ranked as the two most important features, and the order of importance (in decreasing importance) for mass shape, subtlety, and breast density was also consistent. Where P(mal) was used in Model I, on average it ranked third in order of importance; in Model III, on average it ranked second, after mass margin. 
This suggests that P(mal), which summarizes abstract information from image data, provides useful information to the statistical learning model when used together with the expert features.</ns0:p></ns0:div> <ns0:div><ns0:head>Model P(mal)</ns0:head><ns0:p>Assessment Mass margins Mass shape Subtlety Breast density I 2.8 &#177; 0.6 1.5 &#177; 0.5 1.7 &#177; 0.6 4.0 &#177; 0.0 5.0 &#177; 0.0 6.0 &#177; 0.0</ns0:p><ns0:formula xml:id='formula_1'>II NA 1.6 &#177; 0.5 1.4 &#177; 0.5 3.0 &#177; 0.0 4.0 &#177;0.0 5.0 &#177; 0.0 III 1.9 &#177; 0.3 NA 1.1 &#177; 0.3 3.0 &#177;0.0 4.0 &#177; 0.0 5.0 &#177; 0.0 IV NA NA 1.0 &#177; 0.0 2.0 &#177;0.0 3.0 &#177; 0.0 4.0 &#177; 0.0 Table 2.</ns0:formula><ns0:p>Variable (feature) importance of the predicted probability of malignancy (P(mal)) and the five expert features for Models I to IV. The most important variable is ranked 1, with larger ranks indicating less importance. Abbreviation: NA = not available.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Currently, geometric morphometrics is the standard method for the analysis of wing shape variation in entomology <ns0:ref type='bibr' target='#b74'>(Tatsuta et al., 2018)</ns0:ref>, of which species identification is only one possible application.</ns0:p><ns0:p>Nevertheless, for routine species identification, direct image analysis can also be practical and produce more accurate prediction results. Indeed, we showed that images containing artefacts could still be useful for species identification, as such artefacts can be removed in the binary images via manual denoising.</ns0:p><ns0:p>Generating Krawtchouk moment invariants of images is fast, making them useful as sample features for statistical learning models in the initial evaluation of the difficulty of a biological image classification problem. Indeed, in situations where only a few landmarks can be reliably identified, we suggest that image analysis using orthogonal moments in general may be a feasible alternative to addressing the species identification problem.</ns0:p><ns0:p>Given the encouraging results of applying Krawtchouk moment invariants to image analysis of fly wing svenation patterns in the present study, we propose that image-based identification of other insects where substantial species-specific variation is present in the wing organs, such as dragonflies <ns0:ref type='bibr' target='#b31'>(Kiyoshi and Hikida, 2012)</ns0:ref> and mosquitoes <ns0:ref type='bibr' target='#b48'>(Lorenz et al., 2017)</ns0:ref>, may be also be fruitful.</ns0:p><ns0:p>For classification of breast masses using KM and GPZM, the result was less satisfactory. A potential source of error may be noise generated in some ROI images that were originally smaller (e.g. 159 pixels &#215; 95 pixels) when they were rescaled to 300 pixels &#215; 300 pixels. Several studies that used orthogonal moments in breast mass classification reported apparently optimistic results, but their study design should be considered carefully. For example, <ns0:ref type='bibr' target='#b73'>Tahmasbi et al. (2011)</ns0:ref> reported classification accuracy of 96%, sensitivity of 100%, and specificity of 95%. They used a different set of breast cancer images (n = 121) from the much smaller Mini-MIAS database <ns0:ref type='bibr' target='#b71'>(Suckling et al., 1994)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science manual segmentation in their work to accentuate mass boundaries in the ROIs, the authors inadvertently injected expert knowledge into the study. 
They extracted features using Zernike moments <ns0:ref type='bibr' target='#b30'>(Khotanzad and Hong, 1990)</ns0:ref> and applied artificial neural network as the classifier. <ns0:ref type='bibr' target='#b52'>Narv&#225;ez and Romero (2012)</ns0:ref> used KM and Zernike moments to extract features from images in the DDSM database for the classification of breast masses. They reported test accuracy of about 90% with KM features. However, the size of the test samples was small (n = 100; half benign, half malignant). It was also unclear from their study design whether the selected training samples (n = 300) and test samples were randomized.</ns0:p><ns0:p>The United States of America national performance benchmark on screening mammography by radiologists recently established a mean sensitivity 86.9% (95% confidence interval -[86.3%, 87.6%]) and a mean specificity of 88.9% (95% confidence interval -[88.8%, 88.9%]), using a sample of 359 radiologists who examined about 1.7 million digital mammograms <ns0:ref type='bibr' target='#b39'>(Lehman et al., 2017)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Through the analysis of fly wing images, we showed that orthogonal moments such as the Krawtchouk moments are effective features for summarising meaningful patterns in relatively simple biological images.</ns0:p><ns0:p>Statistical learning models that used Krawtchouk moment invariants gave completely accurate prediction of all 15 fly species used, beating similar models that use geometric morphometric data by a wide margin.</ns0:p><ns0:p>On the other hand, the efficacy of orthogonal moments-based features for summarising patterns of variation that are more heterogeneous and less well-defined in complex biological images appears modest.</ns0:p><ns0:p>Through analysis of the CBIS-DDSM breast mammograms, we found statistical learning models that use orthogonal moments produced classification performance that was far below those achieved by trained radiologists. Nevertheless, when output of the predictive model in the form of predicted probability of malignancy was used as a feature to summarize image evidence for the malignancy class, we found its variable importance score surpassed those of expert features associated with mammograms (e.g. mass shape, breast density, and subtlety rating). We also found the predicted probability of malignancy to interact with the important BI-RADS assessment for malignancy expert feature, leading to prediction performance that is optimal in the sense of having the smallest discrepancy between sensitivity and specificity.</ns0:p><ns0:p>To summarize, we believe orthogonal moments are still feasible as image features in the analysis of biological images. They should be adequate for handling species prediction problems on the basis of the shape of specific anatomies. The ease of applying them means that orthgonal moments are ideal for estimating a lower bound of prediction performance. On the other hand, expert features that accompany more complex biological images are probably necessary to offset the modest performance of statistical learning models that use orthogonal moments for class prediction. </ns0:p></ns0:div> <ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head></ns0:div> <ns0:div><ns0:head>APPENDIX Orthogonal moments</ns0:head><ns0:p>In this section, we provide sufficient mathematical background for the appreciation of the use of orthogonal moments as feature extractors of images. 
For further advanced details, we refer readers to <ns0:ref type='bibr' target='#b72'>Szegö (1975)</ns0:ref>. Definition 2. <ns0:ref type='bibr' target='#b54'>(Oberhettinger, 1964)</ns0:ref> The hypergeometric function 2F1(a, b; c; z) is a special function defined as the power series</ns0:p><ns0:formula xml:id='formula_2'>{}_2F_1(a, b; c; z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!},</ns0:formula></ns0:div> <ns0:div><ns0:head>Krawtchouk polynomials and moments</ns0:head><ns0:p>The Krawtchouk polynomials <ns0:ref type='bibr'>(Krawtchouk, 1929a,b)</ns0:ref> are discrete orthogonal polynomials associated with a binomial probability weight function. The Krawtchouk polynomial of order n is denoted by k_n(x; p, N − 1), and can be conveniently represented as a hypergeometric function</ns0:p><ns0:formula xml:id='formula_3'>k_n(x; p, N-1) = {}_2F_1\left(-n, -x; -N+1; \frac{1}{p}\right),</ns0:formula><ns0:p>where n, x = 0, 1, 2, . . . , N − 1, N > 1, 0 < p < 1. The weighted Krawtchouk polynomial <ns0:ref type='bibr' target='#b79'>(Yap et al., 2003)</ns0:ref> k̄_n(x; p, N − 1) is given by k̄_n(x; p, N − 1) = k_n(x; p, N − 1) √(ω(x; p, N − 1)/ρ(x; p, N − 1)), where ω(x; p, N − 1) is the binomial probability mass function</ns0:p><ns0:formula xml:id='formula_4'>\omega(x; p, N-1) = \binom{N-1}{x} p^x (1-p)^{N-1-x},</ns0:formula><ns0:p>and (1 − p)^{N−1}/ρ(x; p, N − 1) = ω(n; p, N − 1). For brevity, we will write k̄_n(x; p, N − 1) as k̄_n(x).</ns0:p><ns0:p>Let the Krawtchouk moments matrix of order l be an l × l square matrix Q. The (n, m) element of Q, denoted as Q_nm, is related to the image intensity function f(x, y) on the two-dimensional discrete domain through</ns0:p><ns0:formula xml:id='formula_5'>Q_{nm} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \bar{k}_n(x)\, \bar{k}_m(y)\, f(x, y).</ns0:formula><ns0:p>Consider an image of dimension N × M. If we denote K_1 and K_2 as the l × N and l × M matrices of weighted Krawtchouk polynomials, respectively, with A as the N × M matrix of image intensity values, then</ns0:p><ns0:formula xml:id='formula_6'>Q = K_1 A K_2^T.</ns0:formula><ns0:p>The orthogonality property of the Krawtchouk polynomials implies that the product of an l × N matrix of weighted Krawtchouk polynomials with its transpose is the l × l identity matrix, so the image can be recovered from its moments, that is,</ns0:p><ns0:formula xml:id='formula_7'>A = K_1^T Q K_2.</ns0:formula><ns0:p>The pseudo-Zernike polynomials <ns0:ref type='bibr' target='#b14'>(Bhatia and Wolf, 1954)</ns0:ref> are polynomials in two variables that form a complete orthogonal set for the interior of the unit circle.
The pseudo-Zernike polynomials of order n and repetition m is denoted by V nm (r, &#952; ), and expressed in polar coordinate form as</ns0:p><ns0:formula xml:id='formula_8'>V nm (r, &#952; ) = R nm (r)e im&#952; , (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>where i is the complex number &#8730; &#8722;1, and R nm (r) is the radial polynomial defined as </ns0:p><ns0:p>The image can be approximately reconstructed using the inverse transform formula <ns0:ref type='bibr' target='#b75'>(Teague, 1980)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Bayesian estimator of classification accuracy</ns0:head><ns0:p>Let X i , i = 1, 2, . . . , s be number of correctly predicted samples for the ith species (up to s species). These are the diagonal entries of the s &#215; s classification matrix. For a statistical learning model, assume that it has constant probability &#960; of correctly predicting the species identity of a sample (i.e. accuracy). Then, X i is binomially distributed with number of trials equal to the number of i-th species in the sample (n i ) and success probability &#960;. By Bayes' Theorem, the posterior distribution of &#960;, given X 1 , X 2 , . . . , X s , is f (&#960;|X 1 , X 2 , . . . , X s ) = P(X 1 , X 2 , . . . , X s |&#960;) f (&#960;) &#215; , where f (&#960;) is the prior distribution of &#960;. Using the conservative uniform prior f (&#960;) = 1, 0 &lt; &#960; &lt; 1, and assuming that P(X 1 , X 2 , . . . , X s |&#960;) = &#8719; s i=1 P(X i |&#960;), it can be shown that f (&#960;|X 1 , X 2 , . . . , X s ) has a beta distribution with shape parameters &#945; = &#8721; s i=1 X i + 1, and &#946; = N &#8722; &#8721; s i=1 X i + 1, where N is the total sample size. Thus, the Bayesian posterior mean estimate of &#960; is given by &#960; = &#8721; s i=1 X i + 1 N + 2 .</ns0:p><ns0:p>The lower and the upper end of the 95% Bayesian credible interval of &#960; are computed as the 2.5th and the 97.5th percentile of the beta distribution with shape parameters &#945; = &#8721; s i=1 X i + 1, and &#946; = N &#8722; &#8721; s i=1 X i + 1, respectively.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Chrysomya megacephala (Fabricius, 1794) (n=5), Chrysomya nigripes Aubertin, 1932 (n=5), Chrysomya pinguis (Walker, 1858) (n=5), Chrysomya rufifacies (Macquart, 1842) (n=5), Chrysomya villeneuvi Patton, 1922 (n=5), Lucilia cuprina (Wiedemann, 1830) (n=4) and Lucilia porphyrina (Walker, 1856) (n=5). From the Sarcophagidae family (eight species), we had Boettcherisca javanica Lopes, 1961 (n=5), Boettcherisca karnyi (Hardy, 1927) (n=5), Boettcherisca peregrina (Robineau-Desvoidy, 1830) (n=5), Sarcophaga ruficornis (Fabricius, 1794) (n=5), Sarcophaga dux Thompson, 1869 (n=5), Parasarcophaga albiceps (Meigen, 1826) (n=5), Parasarcophaga misera (Walker, 1849) (n=5) and Sarcophaga princeps Wiedemann, 1830 (n=5). In total, 74 specimens from 15 different species were used.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>is a public repository of breast cancer mammograms. 
Although this database contains a large collection of 3/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The position of landmarks (gray circles) on a sample wing image.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.57,206.24' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. An example of fly wing image data taken from C. nigripes. (A) Raw image; (B) Binary image after manual denoising; (C) Reconstructed image using Krawtchouk moment invariants of order 200.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. (A) An example of the raw mammogram image showing malignant spiculated breast mass (index no. P 01461). (B) Image after thresholding showing the tumor (white). (C) Region of interest centered on the mass (390 pixels &#215; 385 pixels) before (D) Enhancement. (E) Reconstruction using KM; (F) Reconstruction using GPZM.</ns0:figDesc><ns0:graphic coords='11,141.73,346.21,413.60,232.65' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>326</ns0:head><ns0:label /><ns0:figDesc>kernel densities in the space of the first linear discriminants derived from KM and GPZM, with test327 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Example of contour plots of estimated bivariate Gaussian kernel densities for benign (light blue to dark blue tones) and malignant (yellow to red tones) training data (CBIS-DDSM data with seed 261) in the space of linear discriminants (first) based on Krawtchouk moments and generalized pseudo-Zernike moments. Squares and crosses indicate benign and malignant test samples, respectively.</ns0:figDesc><ns0:graphic coords='13,141.73,149.89,413.60,413.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Jia</ns0:head><ns0:label /><ns0:figDesc>Yin Goh was supported by a research assistantship RU Grant from the Faculty of Science, University of Malaya, Malaysia (Grant number: GPF029B-2018). We thank Dr. C.S. Liew and K.G. Ng from the Data Intensive Computing Centre, University of Malaya for technical support.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>/ 18 PeerJ</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021) Manuscript to be reviewed Computer Science for the theory of orthgonal polynomials, Yap et al. (2003) for Krawtchouk moments, Xia et al. 
(2007) for generalized Pseudo-Zernike moments, and Shu et al. (2007) for a general introduction to using orthogonal moments for image analysis.Mathematical preliminariesDefinition 1.<ns0:ref type='bibr' target='#b72'>(Szeg&#246;, 1975)</ns0:ref> Let p n (x) be a polynomial in x of order n. For the interval a &#8804; x &#8804; b, if w(x) is a weight function in x, and &#948; nm is the Kronecker delta which is equal to 1 when n = m, and 0 when n = m, then p n (x) is said to be an orthogonal polynomial associated with the weight function w(x) if it satisfies the condition b a p n (x)p m (x)w(x)dx = &#948; nm .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>where a, b, c are real numbers, with |z| &lt; 1. The notation (a) k denotes the Pochhammer symbol for the rising factorial (a) k = a(a + 1)(a + 2)...(a + k &#8722; 1), with (a) 0 = 1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>x) k m (x) = &#948; nm 13/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>j (2n + 1 &#8722; j)! j!(n &#8722; |m| &#8722; j)!(n + |m| + 1 &#8722; j)! r n&#8722; j ,(2)where n = 0, 1, 2, . . . , &#8734;, and |m| &#8804; n.<ns0:ref type='bibr' target='#b78'>Xia et al. (2007)</ns0:ref> proposed a generalization of R nm (r)(Xia et al.n &#8722; |m| &#8722; j)!(n + |m| + 1 &#8722; j)! r n&#8722; j ,(3)where &#945; &gt; &#8722;1, with R 0 nm (r) = R nm (r). The weighted generalized radial polynomial is given by454 R&#945; nm (r) = R &#945; nm (r) (2n + &#945; + 2)(&#945; + 1 + n &#8722; |m|) 2|m|+1 ) 2&#960;(n &#8722; |m| + 1) 2|m|+1 (1 &#8722; r) &#945;/2 ,leading to the generalized pseudo-Zernike polynomials455 V &#945; nm (r, &#952; ) = R&#945; nm (r)e im&#952; .The generalized pseudo-Zernike moments (GPZM) of order n and repetition m are defined as nm (r, &#952; )] * f (r, &#952; )rdrd&#952; , where * denotes complex conjugate. The orthogonality property of the pseudo&#952; )[ V &#945; kl (r, &#952; )] * rdrd&#952; = &#948; nk &#948; ml .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>1 , X 2 , . . . , X s |&#960;) f (&#960;)d&#960; &#8722;1</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>, of which about 55% (n = 67) are benign cases, and 45% (n = 54) are malignant cases. As a result, the number of images available for testing (n &#8776; 36, with benign and malignant cases being approximately equal) was limited. By introducing</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:0:NEW 19 Jun 2021)</ns0:cell><ns0:cell>11/18</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59010:1:1:NEW 5 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>. In comparison, deep learning algorithms produced impressive prediction performance that was on par with, or surpassed expert performance. 
Using 2478 images in the CBIS-DDSM database (training set size = 1903; validation set size = 199; test set size =376), Shen et al. (2019) reported sensitivity of 86.7%, and specificity of 96.1% in the classification of malignant and benign masses. However, the apparent optimism in deep learning as the last word in medical image analysis was recently questioned.<ns0:ref type='bibr' target='#b77'>Wang et al. (2020)</ns0:ref> reported that deep learning test accuracy dropped substantially when test samples with distribution of patterns of variation that differed from that of the validation samples' were used to challenge the trained deep learning model. It seems that allowing image information to interact with expert features may produce more robust models when we attempt to classify biologically complicated images such as mammograms.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
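As a concrete companion to the "Bayesian estimator of classification accuracy" section of the paper above, the following is a minimal Python sketch (numpy and scipy assumed; the confusion matrix shown is a hypothetical example, not data from the paper) of the posterior mean and 95% credible interval under the uniform prior.

import numpy as np
from scipy.stats import beta

def bayesian_accuracy(confusion, level=0.95):
    """Posterior mean and credible interval of the accuracy pi under a uniform prior.

    confusion : s x s classification matrix whose diagonal entries are the X_i.
    """
    correct = np.trace(confusion)            # sum_i X_i
    total = confusion.sum()                  # N, total sample size
    a = correct + 1                          # alpha = sum X_i + 1
    b = total - correct + 1                  # beta  = N - sum X_i + 1
    mean = (correct + 1) / (total + 2)       # posterior mean (sum X_i + 1) / (N + 2)
    lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return mean, (lo, hi)

# Hypothetical two-class example: 40 + 30 correct predictions out of 80 samples
conf = np.array([[40, 5],
                 [5, 30]])
print(bayesian_accuracy(conf))

The same interval is what the rebuttal below refers to when it mentions reporting a 95% Bayesian credible interval alongside accuracy for the fly-wing problem.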
"Rebuttal letter for manuscript #CS-2021:03:59010 19 June 2021 Dear editor, We thank the reviewers for their time and comments. From the comments, we believe that our work may have been misunderstood. We have modified the title slightly (addition in italics) as “On the classification of simple and complex biological images using Krawtchouk moments and Generalized Pseudo-Zernike moments: a case study with fly wing images and breast cancer mammograms”. In doing so, we wish to emphasise that this paper primarily discusses the applicability of orthogonal moments (using the Krawthchouk moments and the Generalized Pseudo-Zernike moments) in producing good classifications in biological problems, when they are used to extract features from relatively simple (represented using fly wing images), and relatively complex biological images (represented using breast cancer mammograms). It is not the intention of our work to evaluate a battery of potential feature extraction methods and potential classifier models in the analysis of these two kinds of biological images. The following are our replies to the comments by all three reviewers. Reply to Reviewer comments: Reviewer 1 (R1) 1. R1: Validity of the findings “The authors have produced some interesting results. However, I found the following lacking in the paper. 1. The performance has not compared with existing methods. 2. More evaluation methods may be used for the comparison 3. Additional papers related to breast cancer paper can enhance the quality of the paper: Sitaula, C., Aryal, S. Fusion of whole and part features for the classification of histopathological image of breast tissue. Health Inf Sci Syst 8, 38 (2020). https://doi.org/10.1007/s13755-020-00131-7” Authors: Regarding Item 1, we pointed out that while orthogonal moments may not be competitive compared to deep learning methods when used to analyse complex medical images (p.12, lines 381-393), they are nevertheless adequate for relatively simple biological images, such as fly wing images, which we show to be useful for fly species identification. For the fly wing image analysis problem, in pages 2-3 (line 83-115), we did describe a comparison of the result of classification using image features extracted from orthogonal moments with those using standard geometric morphometric data. There is no comparison in the literature for fly wing image analysis because published real data sets in this field are rare. 1 Additionally, the data set that we used is only very recently published (2020; https://doi.org/10.5061/dryad.95x69p8hf). Regarding Item 2, we are unsure what is meant by “more evaluation methods may be used”. If the reviewer means picking an optimal machine learning algorithm from a set of candidate algorithms, this is unnecessary in the present context as variation in choice of standard machine learning models has a much smaller effect on prediction performance compared to variation in choice of the more upstream feature extraction methods. In any case, the focus is on the effectiveness of orthogonal moments as the feature extraction method for which application of a suitable downstream machine learning method may yield meaningful results. On Item 3, the suggested paper deals with prediction of breast cancer state based on histopathological images, which is entirely different from the use of predicting breast cancer state using mammograms. 2. R1: Comments for the Author “The reviewer found lacking novelty in the paper. 
Specifically, the paper just utilizes wellestablished features without any novelty.” Authors: We do not claim novelty of methodology with respect to the use of orthogonal moments. However, our present work is novel in the sense that it re-examines how well orthogonal moments are able to produce good classification of benign and malignant states, using the CBIS-DDSM benchmark mammogram dataset. In doing so, we discover that previously reported performance (based on small test data set) using orthogonal moments are likely overly optimistic (pages 11 – 12; lines 367-379). We subsequently demonstrate that by considering the prediction output from mammogram image analysis (probability of malignancy) as a feature, using it in conjunction with the five expert features associated with the mammogram data (Models I to IV – pages 9-11; lines 322-348) enables reasonable prediction performance to be obtained. Concurrently, we also highlight the value of using orthogonal moments for the classification of fly species based on their fly wings (simpler biological images), for which we report encouraging results. We believe these results help researchers to keep orthogonal moments as an option when trying to do classification work with biological images, particularly when the problem studied does not yet have large sample size, and biological patterns in images are relatively simple. 3. R1: “Also, the performance comparison is insufficient. Rather than focussing on multiple datasets, I suggest working on only one with a deeper research understanding.” We disagree that the analysis of a single dataset using any machine learning methods, no matter how sophisticated, provides any “deeper research understanding”. In fact, it is through analyses of multiple data sets that biases and limitations of particular feature extraction methods or machine learning methods become eventually clarified. 2 Reviewer 2 (R2) 1. R2: Basic reporting - English should be improved. There still have some unclear or ambiguous parts. - Literature review should be re-organized to group some similar papers into one paragraph. - Quality of the figures should be improved. Authors: We would appreciate if the reviewer can point to us specific sections of the manuscript where the quality of English language usage is doubtful. We reread our manuscript, and probably due to author cognitive bias, are unable to pinpoint the problematic parts. Item 2 comes across us as vague to us and gives little information about how such change can make a difference to the main message that we want to bring out in the present work. On Item 3, we agree that the quality of some of the figures does not yet satisfying PeerJ production level standards. However, they are generally intelligible and do not impede current understanding of the contents at the review stage. 2. R2: Experimental design “ The authors indeed had two case studies on fly wing images and breast cancer mammograms. Why did they use the title as 'a case study'? Also, the same in the whole text.” Authors: These two examples are not separate case studies, they are considered jointly in the manuscript to contrast the efficacy of orthogonal moments in extracting useful features that enable good prediction performance, in the case where its performance in relatively simple biological images (fly wings) is contrasted with that of relatively complex biological images (breast mammograms). It is entirely appropriate to consider these two data sets within the framework of a case study. 3. 
R2: “The two case studies are also big questions. Why did they use these two datasets since they are not relevant to each other?” Authors: The focus of our work is to evaluate how well features extracted using orthogonal moments perform as far as the classification of biological images is concerned. These two data sets serve to illustrate the following two points: (i) the applicability of orthogonal moments in the classification of relatively simple biological images, as shown in the example of fly species prediction problem using fly wing images; (ii) the limitation in prediction performance when using orthogonal moments in the classification of complex biological images, such as prediction of benign and malignant states using mammograms. 3 By sharing our findings in the present work, we hope researchers would keep considering orthogonal moments as an option when doing classification work with biological images, especially in current times when researchers have a tendency to use deep learning methods to treat all image classification problems. Deep learning methods are unnecessary in the case of relatively simple biological images (as shown using the fly wing images), and not implementable when a problem has not yet have sufficiently large training sample size, or is still in an exploratory stage. 4. R2: “DCOM format is at 2D or 3D?” Authors: DICOM is a communications protocol and a file format. It is a useful format that stores medical images data (e.g. mammogram, ultrasound, MRI images, etc.), together with information about the patient in a single file. For mammograms, the data is 2D. There is no need to handle the raw data in DICOM format as the owners of the CBIS-DDSM database have already converted the DICOM data files to PNG file format. To make this clearer, we added the following sentence on page 4 (end of line 165). “The images were downloaded from the CBIS-DDSM database in the PNG file format.” 5. R2: “How did the authors deal with hyperparameter optimization of the models?” Authors: For the hyperparameters of GUIDE decision trees, the standard k=0.5 parameter produces trees that are neither too complex nor too simplistic. The kernel discriminant model applied at the partitioned data spaces at the tree terminals is essentially data-optimised because it is based on the maximum-likelihood method. The default GUIDE random forest hyperparameters with user-specified 2001 trees produces reasonably good results. Generally, there is a broad range of hyperparameter values which can be used, and they should produce more or less similar prediction performance. We refer the reviewer to page 7 (line 267-272). “For classification, we used a kernel discriminant model in the GUIDE (Generalized, Unbiased, Interaction Detection and Estimation) classification and regression tree program (Loh, 2009; Loh, 2014). The kernel method is a non-parametric method that estimates a Gaussian kernel density (Silverman, 1986) for each class in a node, and uses the estimated densities in a maximum likelihood framework for classification. The tree complexity parameter k-SE in GUIDE was set at the default value of 0.5, with the number of crossvalidated trees set at 10.” 6. R2: Measurement metrics (i.e., accuracy, sensitivity, specificity, ...) have been used in previous biomedical studies such as PMID: 33816830, PMID: 33735760, and PMID: 33260643. Therefore, the authors are suggested to refer to more works in this description. 
4 Authors: For the breast cancer mammogram data, there are only two classes, in approximately equal proportions. So the use of accuracy is appropriate, as is sensitivity and specificity. Sensitivity and specificity are not useful metrics in the fly image problem, as there are 15 classes. For this problem, only the accuracy metric was used, and to account for estimation variance probabilistically, a 95% Bayesian credible interval was provided to this metric. 7. R2: Source codes should be provided for replicating the methods. Authors: We already provided the source codes in the initial submission as supplemental files. This is corroborated by comments from Reviewer 3. 8. R2: Validity of the findings “- In Figure 4, the text is not displayed clearly. - Besides training, the authors should have some validation data. - ROC curves and AUC values should be reported in binary classification. - The authors should compare the predictive performance to previous studies on the same problem/data.” Authors: Item 1: The texts in the figure are class labels. They are “not clear” because they overlap substantially. We have tried adjusting the font size but find the present one is the best. Item 2: We refer to page 7 (lines 273-277): “For Problem 1, it was not feasible to split the samples into another test set for assessing generalization error, because the average sample size per class was already small (about 5). Hence, we applied the random forests ensemble classifier (2001 trees) and obtained the outof-bag error estimate for generalization error. For the CBIS-DDSM data set in Problem 2, we randomly chose 70% of the data set for training, and the remainder 30% for testing.” It was not feasible to split the data into a test set in Problem 1, as the sample sizes of each species are already small. For this reason, the random forests model with out-of-bag error estimate for generalization error was used. This is a reasonable substitute for estimating prediction error when sample size is not large. For Problem 2, 30% of data was used for testing. Item 3: We understand that ROC curve is used so often in evaluation of model performance that, somehow, it is expected to be presented in every binary classification problem. However, we believe that ROC curve should only be used when it can be interpreted in the context of the problem where it is used. In Problem 2 which involves breast cancer mammograms, the ROC curve and the resultant AUC metric are unsuitable because the BI-RADS feature is 5 considered in the analysis (Model I and Model II (page 11; Table 1, Table 2). Jiang & Metz (2010) show that the BI-RADS feature is inappropriate for construction of ROC analysis as the encodings used in the BI-RADS feature have context-specific meaning in radiology. In addition, the problem of whether AUC of the ROC curve means anything to the subject matter experts (clinicians) in a radiology setting is also well-documented by Halligan et al. (2015). We quote from them: “Sensitivity and specificity are familiar concepts to clinicians, who are used to interpreting the results of diagnostic tests in these terms. In contrast, ROC AUC means little to clinicians (especially non-radiologists), patients, or health care providers. While a test whose AUC is 0.9 is considered “better” than one of 0.8, what does this mean for patients and what is clinically important? It is well established that diagnostic tests are understood best when presented in terms of gains and losses to individual patients [11]. 
AUC lacks clinical interpretability because it does not reflect this. Clinicians are uninterested in performance across all thresholds - they focus on clinically relevant thresholds. However, because AUC measures performance over all thresholds, it includes both those clinically relevant and clinically illogical. Moreover, different tests can have identical AUC but different performance at clinically important thresholds.” References: 1.Halligan, S., Altman, D.G. & Mallett, S. (2015). Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: A discussion and proposal for an alternative approach. European Radiology, 25: 932-939. 2. Jiang, Y. & Metz, C.E. (2010). BI-RADS data should not be used to estimate ROC curves. Radiology, 256: 29-31. Item 4: For the breast cancer mammograms, such comparisons are pointed out in page 11-12 (lines 364-392). For the fly wing images, the present application of orthogonal moments to analysis of fly wing images is novel because the data set used is very recent (2020; https://doi.org/10.5061/dryad.95x69p8hf). We actually reported a comparison of the result of classification using image features extracted from orthogonal moments with those using geometric morphometric data (page 2, line 83-115; page 8, line 311-322). 6 Reviewer 3 (R3) 1.R3 “Basic reporting The manuscript is clear and professional language is used. Literature references and field background is sufficient. Raw data is shared, results are relevant to the hypotheses. Experimental design The study is within the aims and scope of the journal. Research question is well defined, relevant and meaningful. The research aims to fill the gap of predicting between fly species by their wing patterns and between benign or malignant masses in in mammograms. The study provides a model with a broad application area. The model is described with sufficient detail to replicate. Validity of the findings The limitations are clearly stated in the discussion. All data were provided, statistically sound and controlled. Benefit to the literature is stated, conslusions are linked to the scientific question at hand and limited to results.” Authors: We thank the reviewer for positive opinions about our work. 7 "
Here is a paper. Please give your review comments after reading it.
229
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Betweenness-centrality is a popular measure in network analysis that aims to describe the importance of nodes in a graph. It accounts for the fraction of shortest paths passing through that node and is a key measure in many applications including community detection and network dismantling. The computation of betweenness-centrality for each node in a graph requires an excessive amount of computing power, especially for large graphs. On the other hand, in many applications, the main interest lies in finding the top-k most important nodes in the graph. Therefore, several approximation algorithms were proposed to solve the problem faster. Some recent approaches propose to use shallow graph convolutional networks to approximate the top-k nodes with the highest betweenness-centrality scores.</ns0:p><ns0:p>This work presents a deep graph convolutional neural network that outputs a rank score for each node in a given graph. With careful optimization and regularization tricks, including an extended version of DropEdge which is named Progressive-DropEdge, the system achieves better results than the current approaches. Experiments on both real-world and synthetic datasets show that the presented algorithm is an order of magnitude faster in inference and requires several times fewer resources and time to train.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Conducting network analysis has been a prominent topic in research, with applications spanning from community detection in social networks <ns0:ref type='bibr' target='#b3'>(Behera et al., 2020b</ns0:ref><ns0:ref type='bibr' target='#b5'>(Behera et al., , 2016))</ns0:ref>, to detecting critical nodes <ns0:ref type='bibr' target='#b4'>(Behera et al., 2019)</ns0:ref>, to hidden link prediction <ns0:ref type='bibr' target='#b18'>(Liu et al., 2013)</ns0:ref>. One of the more fundamental metrics for determining the importance of each graph node for network analysis is betweenness-centrality (BC). BC aims to measure the importance of nodes in the graph in terms of connectivity to other nodes via the shortest paths <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016)</ns0:ref>. It plays a big role in understanding the influence of nodes in a graph and, as an example, can be used to discover an important member, like a famous influencer or the set of the most reputable users in a network <ns0:ref type='bibr' target='#b4'>(Behera et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Computing the measure can be very resource-demanding especially for the large graphs. The fastest algorithm for computing exact betweenness-centrality in a given graph is the Brandes algorithm <ns0:ref type='bibr' target='#b8'>(Brandes, 2001)</ns0:ref> As O(|V ||E|) can grow very fast with the increase in the network size, several approximation algorithms based on sampling have been proposed <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Riondato and Kornaropoulos, 2014a)</ns0:ref>.</ns0:p><ns0:p>However, along with the growth in the size of the graph, we face tantamount increases in the execution time and proportional decreases in the accuracy of the prediction. 
In many applications, the computation of the betweenness-centrality needs to be fast enough to handle dynamic changes in the graph <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Although several distributed computing algorithms exist for calculating betweenness-centrality <ns0:ref type='bibr' target='#b21'>(Naik et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b2'>Behera et al., 2020a</ns0:ref><ns0:ref type='bibr' target='#b4'>Behera et al., , 2019))</ns0:ref>, where the authors propose approaches to compute BC of a network using map-reduce in a distributed environment, this work focuses on a single machine algorithm.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:1:1:NEW 8 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The model works on a single machine and requires only a single GPU for training and can perform predictions on relatively big networks without the need for many machines or excessive computational power.</ns0:p><ns0:p>In fields such as social network analysis and network dismantling it is at times far more important to compute the relative importance of the nodes in the graph rather than obtain the exact scores of betweenness-centrality <ns0:ref type='bibr' target='#b15'>(Holme et al., 2002)</ns0:ref>. Several recent works like <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b20'>(Maurya et al., 2019)</ns0:ref> have proposed to reformulate the problem into a learning-to-rank problem with the aim to learn a function that would map the nodes in the input graph to relative ranking BC scores. So, instead of computing the exact scores, the task is changed into finding the correct order of the nodes with respect to their betweenness-centrality.</ns0:p><ns0:p>Instead of using approximation techniques like sampling, recent approaches have proposed to train a graph convolutional neural network on synthetic small graphs that would learn to rank nodes based on their BC and would be able to generalize on bigger real-world graphs <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In general, it is hard to avoid over-fitting and over-smoothing when training deep graph convolutional neural networks <ns0:ref type='bibr' target='#b32'>(Rong et al., 2020)</ns0:ref>. In order to generalize better (over-fitting) on small datasets and avoid obtaining uninformative representations for each node (over-smoothing) in deep models, Rong et al. &#8226; Second, deeper graph convolutional networks are shown to be able to have fewer parameters and be more efficient than more shallow alternatives leading to state-of-the-art results while being by an order of magnitude faster.</ns0:p><ns0:p>&#8226; Finally, the presented training procedure converges faster and requires fewer resources which enables training on a single GPU machine.</ns0:p><ns0:p>The approach is named ABCDE: Approximating Betweenness-Centrality ranking with progressive-DropEdge.</ns0:p><ns0:p>The source code is available on GitHub: https://github.com/MartinXPN/abcde. 
To reproduce the reported results one can run:</ns0:p><ns0:p>$ docker run martin97/abcde:latest</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Betweenness Centrality</ns0:head><ns0:p>The best-known algorithm for computing exact betweenness-centrality values is the Brandes algorithm <ns0:ref type='bibr' target='#b8'>(Brandes, 2001)</ns0:ref> for weighted ones, where |V | denotes the number of nodes and |E| denotes the number of edges in the graph. To enable approximate BC computation for large graphs several approximation algorithms were proposed which use only a small subset of edges in the graph. <ns0:ref type='bibr' target='#b29'>Riondato and Kornaropoulos (2014a)</ns0:ref> introduce the Vapnik-Chervonenskis (VC) dimension to compute the sample size that would be sufficient to obtain guaranteed approximations for the BC values of each node <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. If V max denotes the maximum number of nodes on any shortest path, &#955; denotes the maximum additive error that the approximations should match, and &#948; is the probability of the guarantees holding, then the number of samples required to compute the BC score would be c &#955; 2 (&#8970;log(V max &#8722; 2)&#8971; + 1 + log 1 &#948; ). Manuscript to be reviewed</ns0:p><ns0:p>Computer Science ). Yet both approaches require a second run of the algorithm to identify top-k nodes with the highest betweenness-centrality scores. <ns0:ref type='bibr' target='#b17'>Kourtellis et al. (2012)</ns0:ref> introduces another metric that is correlated with high betweenness-centrality values and computes that metric instead, to identify nodes with high BC scores. <ns0:ref type='bibr' target='#b7'>Borassi and Natale (2019)</ns0:ref> propose an efficient way of computing BC for top-k nodes, which allows bigger confidence intervals for nodes with well-separated betweenness-centrality values. <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b20'>Maurya et al. (2019)</ns0:ref> propose a shallow graph convolutional network approach for approximating the ranking based on the betweenness-centrality of nodes in the graph. They treat the problem as a learning-to-rank problem and approximate the ranking of vertices based on their betweenness-centrality.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Deep Graph Convolutional Networks</ns0:head><ns0:p>Graph Convolutional Networks (GCNs) have recently gained a lot of attention and have become the de facto methods for learning graph representations <ns0:ref type='bibr' target='#b33'>(Wu et al., 2019)</ns0:ref>. They are widely used in many graph representation tasks. Yet, different studies have different findings regarding the expressive power of GCNs as the network depth increases. <ns0:ref type='bibr' target='#b22'>Oono and Suzuki (2020)</ns0:ref> claims that they do not improve, or sometimes worsen their predictive performance as the number of layers in the network and the non-linearities grow.</ns0:p><ns0:p>On the other hand, <ns0:ref type='bibr' target='#b32'>Rong et al. (2020)</ns0:ref> claims that removing random edges from the graph during training acts as a regularisation for deep GCNs and helps to combat over-fitting (loss of generalization power on small datasets) and over-smoothing (isolation of output representations from the input features with the increase in network depth). 
They empirically show that this trick, called DropEdge, improves the performance on several both deep and shallow GCNs.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PRELIMINARIES</ns0:head><ns0:p>Let G = (V, E) denote a network where each node has a representation Betweenness-centrality accounts for the significance of individual nodes based on the fraction of shortest paths that pass through them <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016)</ns0:ref>. Normalized betweenness-centrality for node w is defined as:</ns0:p><ns0:formula xml:id='formula_0'>X v &#8712; R c for v &#8712; V ,</ns0:formula><ns0:formula xml:id='formula_1'>b(w) = 1 |V |(|V | &#8722; 1) &#8721; u =w =v &#963; uv (w) &#963; uv<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where |V | denotes the number of nodes in the network, &#963; uv denotes the number of shortest paths from u to v, and &#963; uv (w) the number of shortest paths from u to v that pass through w.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>METHOD</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Input features</ns0:head><ns0:p>For the input, the model only needs the structure of the graph G represented as a sparse adjacency matrix, and the degree d v for each vertex v &#8712; V . In comparison to this method, <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> uses two additional features for each vertex, which were calculated based on the neighborhoods with radii of sizes one and two for each node. Yet, in this approach, having only the degree of the vertex and the network structure itself is sufficient to approximate the betweenness-centrality ranking for each node. So, the initial feature vector X v &#8712; R c for vertex v is only a single number -the degree of the vertex, which is enriched in deeper layers of the model.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Output and loss function</ns0:head><ns0:p>For each node v in the graph G, the model predicts the relative BC ranking score, meaning that for each input X v the model only outputs a single value which represents the predicted ranking score y v &#8712; R. As the output is the relative ranking score, the loss function is chosen to be a pairwise ranking loss follow</ns0:p></ns0:div> <ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_6'>2021:04:60210:1:1:NEW 8 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the approach proposed by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. To compute the pairwise ranking loss, 5|V | node pairs (i, j) are randomly sampled, following <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref> binary cross-entropy between the true order and the predicted order of those pairs is computed. So, having the two ground truth betweenness-centrality values b i and b j for i and j pair, and their relative rank y i and y j , the loss of a single pair would be:</ns0:p><ns0:formula xml:id='formula_2'>C i, j = &#8722;&#963; (b i &#8722; b j ) &#8226; log &#963; (y i &#8722; y j ) &#8722; (1 &#8722; &#963; (b i &#8722; b j )) &#8226; log(1 &#8722; &#963; (y i &#8722; y j )) (2)</ns0:formula><ns0:p>where &#963; is the sigmoid function defined as 1/(1 + e &#8722;x ). 
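A minimal PyTorch-style sketch of this pairwise loss is given below; the helper name and tensor shapes are illustrative assumptions rather than the authors' implementation, and binary cross-entropy with logits reproduces the expression above when the target is set to σ(b_i − b_j).

import torch
import torch.nn.functional as F

def pairwise_ranking_loss(b, y, pairs):
    """Pairwise BC ranking loss over sampled node pairs.

    b     : ground-truth betweenness values, shape (|V|,)
    y     : predicted ranking scores,        shape (|V|,)
    pairs : LongTensor of sampled pairs (i, j), shape (P, 2) with P = 5|V|
    """
    i, j = pairs[:, 0], pairs[:, 1]
    target = torch.sigmoid(b[i] - b[j])   # sigma(b_i - b_j), a soft label in (0, 1)
    logits = y[i] - y[j]                  # passed through sigma inside the loss
    # -target * log(sigma(logits)) - (1 - target) * log(1 - sigma(logits)), summed over pairs
    return F.binary_cross_entropy_with_logits(logits, target, reduction='sum')

# Hypothetical usage: sample 5|V| random pairs for a graph with num_nodes vertices
# pairs = torch.randint(0, num_nodes, (5 * num_nodes, 2))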
The total loss would be the sum of cross entropy losses for those pairs:</ns0:p><ns0:formula xml:id='formula_3'>L = &#8721; i, j&#8712;5|V | C i, j<ns0:label>(3)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='4.3'>Evaluation Metrics</ns0:head><ns0:p>As the baseline proposed by Fan et al. ( <ns0:ref type='formula'>2019</ns0:ref>) is adopted, the evaluation strategy is also the same. There are several metrics presented in the baseline. Kendall tau score is a metric that computes the number of concordant and discordant pairs in two ranking lists and is defined as:</ns0:p><ns0:formula xml:id='formula_4'>K(l 1 , l 2 ) = 2(&#945; &#8722; &#946; ) n &#8226; (n &#8722; 1) (4)</ns0:formula><ns0:p>where l 1 is the first list, l 2 is the second list, &#945; is the number of concordant pairs, &#946; is the number of discordant pairs, and n is the total number of elements. The range of the metric is [&#8722;1; 1] where 1 means that two ranking lists are in total agreement and &#8722;1 means that the two lists are in total disagreement.</ns0:p><ns0:p>Top-k% accuracy is defined as the percentage of overlap between the top-k% nodes in the predictions and the top-k% nodes in the ground truth list:</ns0:p><ns0:formula xml:id='formula_5'>Top-k% = {predicted-top-k%} &#8745; {true-top-k%} &#8968;|V | &#215; k%&#8969;<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In these experiments, top-1%, top-5%, and top-10% accuracies as well as the Kendall tau score are reported.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Training Data</ns0:head><ns0:p>The training data is generated similar to <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. Random graphs are sampled from the powerlaw distribution during training. The exact betweenness-centrality scores are computed for those graphs and are treated as the ground truth. As their sizes are small, the computation of the exact betweenness-centrality score is not computationally demanding. To avoid over-fitting on those graphs they are regenerated every 10 epochs. Each training graph is reused 8 times on average during a single training epoch.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Model architecture</ns0:head><ns0:p>The model architecture is a deep graph convolutional network which consists of a stack of GCN layers and MaxPooling operations presented in Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>. A GCN operation for a node v which has a neighborhood N(v) is defined as:</ns0:p><ns0:formula xml:id='formula_6'>H v = W &#8226; &#8721; u&#8712;N(v) 1 &#8730; d v + 1 &#8226; &#8730; d u + 1 h u (6)</ns0:formula><ns0:p>where h u is the input vector representation of the node u, d v and d u are the degrees of the vertices v and u accordingly, H v is the output vector representation of the node v, and W is a learnable matrix of weights.</ns0:p><ns0:p>The model takes the input representation X v of vertex v and maps it to an intermediate vector representation which is followed by several blocks of GCNs with different feature sizes, followed by MaxPooling operations which reduce the extracted features in the block to a single number for each Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>vertex. Each GCN block is followed by a transition block which is a fully connected single layer that maps the sizes of the previous GCN block to the current one.</ns0:p><ns0:p>For every GCN block, a different amount of random edge drops is applied which is called Progressive-DropEdge. 
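The following is one possible sketch of this per-block edge dropping for a sparse COO edge_index (as used in PyTorch Geometric); the drop schedule and degree threshold follow the implementation details given below, while the exact eligibility rule and function name are illustrative assumptions rather than the authors' code.

import torch

def progressive_drop_edge(edge_index, degrees, p, min_degree=5):
    """Randomly remove a fraction p of edges before a GCN block.

    edge_index : LongTensor of shape (2, |E|) in COO format
    degrees    : per-node degrees, shape (|V|,)
    p          : drop probability for this block (larger for earlier blocks)
    Only edges whose endpoints both have degree > min_degree are eligible,
    so low-degree nodes are never isolated.
    """
    src, dst = edge_index[0], edge_index[1]
    eligible = (degrees[src] > min_degree) & (degrees[dst] > min_degree)
    drop = torch.rand(edge_index.size(1)) < p
    keep = ~(drop & eligible)
    return edge_index[:, keep]

# One probability per GCN block, decreasing towards the output
block_drop_p = [0.3, 0.3, 0.2, 0.2, 0.1, 0.1]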
In these experiments the model best scales when the probability of dropping an edge is higher in the initial GCN blocks, while slowly decreasing the probability as the layers approach the output. That helps the model to focus on more details and have a better, fine-grained ranking score prediction. To avoid having isolated nodes only the edges of vertices with degrees higher than 5 are dropped. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.6'>Implementation Details</ns0:head><ns0:p>The MLPs and transition blocks follow the {Linear &#8594; LayerNorm &#8594; PReLU &#8594; Dropout} structure, while GCN blocks follow the {GCNConv &#8594; PReLU &#8594; LayerNorm &#8594; Dropout} structure. The initial MLP that maps the input X v to an intermediate representation has a size of 16. There are 6 blocks of GCNs in total. The number of GCNConvs in the blocks are {4, 4, 6, 6, 8, 8}, while their sizes are {48, 48, 32, 32, 24, 24}. The Progressive-DropEdge for each block is applied with probabilities {0.3, 0.3, 0.2, 0.2, 0.1, 0.1}. Gradients are clipped after the value 0.3.</ns0:p><ns0:p>For training and validation, random graphs from the powerlaw distribution are sampled using the NetworkX library <ns0:ref type='bibr' target='#b14'>(Hagberg et al., 2008)</ns0:ref>, having nodes from 4000 to 5000 with a fixed number of edges to add (m = 4), and the probability of creating a triangle after adding an edge (p = 0.05) (following Fan The training is stopped whenever Kendall Tau on the validation set does not improve for 5 consecutive epochs. Adam optimizer <ns0:ref type='bibr' target='#b16'>(Kingma and Ba, 2014)</ns0:ref> is used with an initial learning rate of 0.01 and the learning rate is divided by 2 if the validation Kendall score does not increase for two consecutive epochs.</ns0:p><ns0:p>The GCN training is implemented in Pytorch <ns0:ref type='bibr'>(Paszke et al., 2019)</ns0:ref> and Pytorch Geometric <ns0:ref type='bibr' target='#b12'>(Fey and Lenssen, 2019)</ns0:ref> libraries. All the weights are initialized with their default initializers. The ground truth betweenness-centrality values for training graphs are calculated with python-igraph library <ns0:ref type='bibr' target='#b10'>(Csardi and Nepusz, 2006)</ns0:ref>. Training and validation results were tracked with Aim <ns0:ref type='bibr' target='#b1'>(Arakelyan, 2020)</ns0:ref> and Weights and Biases <ns0:ref type='bibr' target='#b6'>(Biewald, 2020)</ns0:ref> libraries.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.7'>Complexity Analysis</ns0:head><ns0:p>The training time complexity is intractable to estimate robustly as it largely depends on the number of training steps, the network size, and the implementation of the operations used within the network. In The inference time complexity is proportional to the operations required for a single forward pass.</ns0:p><ns0:p>For most graphs in practice, including all graphs used in this work, all the vertices in a graph can be propagated in a single minibatch, so the complexity of inference becomes O(L &#8226; f &#8226; (|V | + |E|)). Further analysis of this model empirically demonstrates that L &#8226; f is a relatively small constant compared to other approaches and the speed of this approach outperforms others by an order of magnitude.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>EVALUATION AND RESULTS</ns0:head><ns0:p>The approach is evaluated on both real-world and synthetic graphs. Both of those are present in the benchmark provided by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. 
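As a brief aside before the results, the two reported quantities, the Kendall tau score of Eq. (4) and the top-k% overlap of Eq. (5), can be computed from predicted and ground-truth scores as in the following sketch (numpy and scipy assumed; the random arrays are stand-ins for real model output).

import numpy as np
from scipy.stats import kendalltau

def topk_accuracy(pred, true, k_percent):
    """Overlap between the predicted and true top-k% node sets (Eq. 5)."""
    k = int(np.ceil(len(true) * k_percent / 100))
    top_pred = set(np.argsort(pred)[::-1][:k])
    top_true = set(np.argsort(true)[::-1][:k])
    return len(top_pred & top_true) / k

rng = np.random.default_rng(0)
true = rng.random(1000)                          # stand-in for exact BC values
pred = true + 0.05 * rng.standard_normal(1000)   # stand-in for predicted ranking scores
tau, _ = kendalltau(pred, true)                  # concordant vs. discordant pairs (Eq. 4)
print(tau, topk_accuracy(pred, true, 1))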
The synthetic networks are generated from powerlaw distribution with a fixed number of edges to add (m = 4), and the probability of creating a triangle after adding an edge (p = 0.05), while the real-world graphs are taken from AlGhamdi et al. ( <ns0:ref type='formula'>2017</ns0:ref>) and represent 5 big graphs taken from real-world applications. The real-world graphs with their description and parameters are presented in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The ground truth betweenness-centralities for the real-world graphs are provided by AlGhamdi et al.</ns0:p><ns0:p>(2017), which are computed by the parallel implementation of Brandes algorithm on a 96 000-core supercomputer. The ground truth scores for the synthetic networks are provided by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> and are computed using the graph-tool <ns0:ref type='bibr' target='#b26'>(Peixoto, 2014)</ns0:ref> library.</ns0:p><ns0:p>The presented approach is compared to several baseline models. The performance of those models are adopted from the benchmark provided by Fan et al. ( <ns0:ref type='formula'>2019</ns0:ref>):</ns0:p><ns0:p>&#8226; ABRA (Riondato and Upfal, 2018): Samples pairs of nodes until the desired accuracy is reached.</ns0:p><ns0:p>Where the error tolerance &#955; was set to 0.01 and the probability &#948; was set to 0.1.</ns0:p><ns0:p>&#8226; RK <ns0:ref type='bibr' target='#b30'>(Riondato and Kornaropoulos, 2014b)</ns0:ref>: The number of pairs of nodes is determined by the diameter of the network. Where the error tolerance and the probability were set similar to ABRA.</ns0:p><ns0:p>&#8226; k-BC <ns0:ref type='bibr' target='#b27'>(Pfeffer and Carley, 2012)</ns0:ref>: Does only k steps of Brandes algorithm <ns0:ref type='bibr' target='#b8'>(Brandes, 2001)</ns0:ref> which was set to 20% of the diameter of the network.</ns0:p><ns0:p>&#8226; KADABRA <ns0:ref type='bibr' target='#b7'>(Borassi and Natale, 2019)</ns0:ref>: Uses bidirectional BFS to sample the shortest paths. The variant where it computest the top-k% nodes with the highest betweenness-centrality was used. The error tolerance and probability were set to be the same as ABRA and RK.</ns0:p><ns0:p>&#8226; Node2Vec <ns0:ref type='bibr' target='#b13'>(Grover and Leskovec, 2016)</ns0:ref>: Uses a biased random walk to aggregate information from the neighbors. The vector representations of each node were then mapped with a trained MLP to ranking scores.</ns0:p></ns0:div> <ns0:div><ns0:head>6/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:1:1:NEW 8 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; DrBC <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>: Shallow graph convolutional network that outputs a ranking score for each node by propagating through the neighbors with a walk length of 5.</ns0:p><ns0:p>For a fair comparison, the presented model was run on a CPU machine with 80 cores and 512GB memory to match the results reported by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. Please note that due to several optimizations and smaller model size, the training takes around 30 minutes on a single 12GB NVIDIA 1080Ti GPU machine with only 4vCPUs and 12GB RAM compared to 4.5 hours reported by <ns0:ref type='bibr' target='#b11'>Fan et al. 
(2019)</ns0:ref> which used an 80-core machine with 512GB RAM, and 8 16GB Tesla V100 GPUs. For the inference, the ABCDE model does not need the 512GB memory, it only utilizes a small portion of it. Yet, the machine is used for a fair comparison. The inference is run on a CPU to be fairly compared to all the other techniques reported, yet using a GPU for inference can increase the speed substantially. <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. It was not feasible to calculate the results marked with NA. The bold results indicate the best performance for a given metric.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head><ns0:p>Results on real-world networks presented in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> demonstrate that the ABCDE model outperforms all the other approaches for the ranking score Kendall-tau and is especially good for large graphs. For the Top-1%, Top-5%, and Top-10% accuracy scores, ABCDE outperforms other approaches on some datasets, while shows close-to-top performance on others. The presented algorithm is the fastest among all the baselines and outperforms others by an order of magnitude.</ns0:p><ns0:p>Comparison of the ABCDE model with the previous GCN approach DrBC, demonstrated in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>,</ns0:p><ns0:p>shows that the presented deep model is more accurate and can achieve better results even though it has fewer trainable parameters and requires less time to train.</ns0:p></ns0:div> <ns0:div><ns0:head>7/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:1:1:NEW 8 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. The bold results indicate the best performance for a given metric.</ns0:p><ns0:p>For each scale, the mean and standard deviation over 30 tests are reported.</ns0:p><ns0:p>The results on synthetic datasets demonstrated in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> show that ABRA performs well on identifying Top-1% nodes in the graph with the highest betweenness-centrality score, even though requiring a longer time to run. On all the other metrics including Top-5%, Top-10%, and Kendall tau scores ABCDE approach outperforms all the others. ABCDE is substantially faster than others on large graphs and for the small graphs, it has comparable performance to DrBC.</ns0:p><ns0:p>It is important to note that the presented model has only around 70 000 trainable parameters and requires around 30 minutes to converge during training as opposed to DrBC which has around 120 000 trainable parameters and requires around 4.5 hours to converge.</ns0:p><ns0:p>More GCN layers in the model enable the process to explore wider neighborhoods for each vertex in the graph during inference. <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>stage, therefore helping the network explore a wider spectrum of neighbors. That helps the network have better performance even though the structure is way simpler.</ns0:p><ns0:p>To be able to have a deep network with many graph-convolutional blocks, progressive DropEdge along with skip connections is used. Each GCN block gets only part of the graph where a certain number of edges are removed randomly. 
Initial layers get fewer edges, while layers closer to the final output MLP get more context of the graph which helps the model explore the graph better.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>ABLATION STUDIES</ns0:head><ns0:p>To demonstrate the contribution of each part of the ABCDE approach, each part is evaluated in ablation studies. Parts of the approach are removed to demonstrate the performance changes on the real-world datasets. As a lot of real-world graphs are very large, the final ABCDE approach is chosen to be the one leadingto the best performance on the large networks.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The over-fitting behavior of the proposed approach is also studied in details. As demonstrated in the Figure <ns0:ref type='figure' target='#fig_9'>2</ns0:ref>, the model without drop-edge over-fits faster than the two others which have constant and progressive dropouts set on the input network edges. The ABCDE model over-fits less and has more stable validation loss compared to both the constant drop-edge model and no drop-edge model.</ns0:p><ns0:p>Unlike the experiments done by <ns0:ref type='bibr' target='#b32'>Rong et al. (2020)</ns0:ref>, there is no over-smoothing noticed in ABCDE </ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this paper, a deep graph convolutional network was presented to approximate betweenness-centrality ranking scores for each node in a given graph. The author demonstrated that the number of parameters of the network can be reduced, while not compromising the predictive power of the network. The approach achieves better convergence and faster training on smaller machines compared to the previous approaches.</ns0:p><ns0:p>A novel way was proposed to add regularisation to the network through progressively dropping random edges in each graph convolutional block, which was called Progressive-DropEdge. The results suggest that deep graph convolutional networks are capable of learning informative representations of graphs and can approximate the ranking score for betweenness-centrality while preserving good generalizability for real-world graphs. The time comparison demonstrates that this approach is significantly faster than alternatives.</ns0:p><ns0:p>Several future directions can be examined, including case studies on specific applications (e.g. urban planning, social networks), and extensions of the approach for directed and weighted graphs. One more interesting direction is to approximate other centrality measures in big networks.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>which has O(|V ||E|) time complexity for unweighted networks and O(|V ||E| + |V | 2 log |V |) for weighted ones, where |V | denotes the number of nodes and |E| denotes the number of edges in the graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>Figure 1. Introducing Progressive-DropEdge in the training procedure improves the performance of the model, especially on larger real-world networks. This paper focuses on the benchmark of ranking based on betweenness-centrality proposed by Fan et al. (2019) as they include various real-world and synthetic datasets and detailed comparisons with other approximation algorithms. 
The main contributions are threefold: &#8226; First, Progressive-DropEdge is introduced in the training procedure which acts as regularization and improves the performance on large networks.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>which has O(|V ||E|) time complexity for unweighted graphs and O(|V ||E| + |V | 2 log |V |)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>2014a) with smaller sample sizes. Borassi and Natale (2019) propose a balanced bidirectional breadth-first search (BFS) which reduces the time for each sample from O(|E|) to O(|E| 1 2 +O(1)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>where c denotes the dimensionality of the representation, d v denotes the degree of the vertex v, |V | denotes the number of nodes and |E| denotes the number of edges in the graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. ABCDE model architecture. Each Transition block is a set of {Linear &#8594; LayerNorm &#8594; PRelu &#8594; Dropout} layers, while each GCN is a set of {GCNConv &#8594; PReLU &#8594; LayerNorm &#8594; Dropout}.symbol is the concatenation operation. Each MaxPooling operation extracts the maximum value from the given GCN block.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2019)). For each training epoch, 160 graphs are sampled, while during validation 240 graphs are used for stability. The batch size is set to 16 graphs per step and the training lasts for at most 50 epochs.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>generic terms, the time complexity can be expressed as O(S(F + B)) where S is the number of training 5/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:1:1:NEW 8 Jul 2021) Manuscript to be reviewed Computer Science steps which can be expressed by the number of epochs times the number of minibatches within the epoch, F and B are the operations required for a single forward and backward pass of a minibatch respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>F</ns0:head><ns0:label /><ns0:figDesc>and B are proportional to the number of layers in the deep network L, and the number of nodes and edges in the graph. GCN operation is O( f &#8226; (|V | + |E|)), where f is the size of the feature vector for each node. The overall time complexity would be proportional to O(S &#8226; L &#8226; f &#8226; (|V | + |E|))). In this approach, the training procedure converges in about 30 minutes and then the network can be reused for an arbitrarily constructed input graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The left plot represents the training losses of No DropEdge, DropEdge= 0.2, and ABCDE models. The right plot represents the validation losses of those models.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>. Summary</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Network com-Youtube 1 134 890 2 987 624 |V | |E|</ns0:cell><ns0:cell>D 5.27</ns0:cell><ns0:cell cols='2'>Diameter Description 20 A video-sharing web site that includes a social network. 
Nodes are users and edges</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>are friendships</ns0:cell></ns0:row><ns0:row><ns0:cell>Amazon</ns0:cell><ns0:cell>2 146 057 5 743 146</ns0:cell><ns0:cell>5.35</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>A product network created by crawling the Amazon online store. Nodes represent</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>products and edges link commonly co-purchased products</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>4 000 148 8 649 011</ns0:cell><ns0:cell>4.32</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>An authorship network extracted from the DBLP computer science bibliography.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Nodes are authors and publications. Each edge connects an author to one of his</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>publications</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell cols='2'>3 764 117 16 511 741 8.77</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>A citation network of U.S. patents. Nodes are patents and edges represent citations.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>In our experiments, we regard it as an undirected network</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell cols='2'>3 997 962 34 681 189 17.35</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>A social network where nodes are LiveJournal users and edges are their friendships</ns0:cell></ns0:row></ns0:table><ns0:note>of real-world datasets. Where |V | is the number of nodes, |E| is the number of edges, and D is the average degree of the graph. Adapted from</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Top-k% accuracy, Kendall tau distance, (&#215;0.01), and running time on large real-world networks adapted from</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>ABRA</ns0:cell><ns0:cell>RK</ns0:cell><ns0:cell cols='4'>KADABRA Node2Vec DrBC ABCDE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-1%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>95.7 95.7 95.7</ns0:cell><ns0:cell>76.0</ns0:cell><ns0:cell>57.5</ns0:cell><ns0:cell>12.3</ns0:cell><ns0:cell>73.6</ns0:cell><ns0:cell>77.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>86.0</ns0:cell><ns0:cell>47.6</ns0:cell><ns0:cell>16.7</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>92.0 92.0 92.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>35.2</ns0:cell><ns0:cell>11.5</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>79.8 79.8 79.8</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>37.0</ns0:cell><ns0:cell>74.4 74.4 74.4</ns0:cell><ns0:cell>23.4</ns0:cell><ns0:cell>0.04</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>50.2</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>60.0</ns0:cell><ns0:cell>54.2*</ns0:cell><ns0:cell>31.9</ns0:cell><ns0:cell>3.9</ns0:cell><ns0:cell>67.2</ns0:cell><ns0:cell>70.9 70.9 70.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-5%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>91.2 91.2 
91.2</ns0:cell><ns0:cell>75.8</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>18.9</ns0:cell><ns0:cell>66.7</ns0:cell><ns0:cell>75.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>58.0</ns0:cell><ns0:cell>59.4</ns0:cell><ns0:cell>56.0</ns0:cell><ns0:cell>23.2</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>88.0 88.0 88.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>45.5</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>20.2</ns0:cell><ns0:cell>72.0</ns0:cell><ns0:cell>73.7 73.7 73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>42.4</ns0:cell><ns0:cell>68.2 68.2 68.2</ns0:cell><ns0:cell>25.1</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>57.5</ns0:cell><ns0:cell>58.3</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>56.9</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>39.5</ns0:cell><ns0:cell>10.35</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>75.7 75.7 75.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-10%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>89.5</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>44.6</ns0:cell><ns0:cell>23.6</ns0:cell><ns0:cell>69.5</ns0:cell><ns0:cell>77.6</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>60.3</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>56.7</ns0:cell><ns0:cell>26.6</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>85.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>50.4</ns0:cell><ns0:cell>27.7</ns0:cell><ns0:cell>72.5</ns0:cell><ns0:cell>76.3</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>50.9</ns0:cell><ns0:cell>53.5</ns0:cell><ns0:cell>21.6</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>64.1</ns0:cell><ns0:cell>64.9 64.9 64.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>63.6</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>47.6</ns0:cell><ns0:cell>15.4</ns0:cell><ns0:cell>74.8</ns0:cell><ns0:cell>78.0 78.0 78.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Kendall tau</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>56.2</ns0:cell><ns0:cell>13.9</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>46.2</ns0:cell><ns0:cell>57.3</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>16.3</ns0:cell><ns0:cell>9.7</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>44.7</ns0:cell><ns0:cell>69.3</ns0:cell><ns0:cell>77.7 77.7 77.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>14.3</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>49.5</ns0:cell><ns0:cell>71.9</ns0:cell><ns0:cell>73.7 73.7 73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>17.3</ns0:cell><ns0:cell>15.3</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>4.0</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>73.5 73.5 73.5</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>22.8</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>35.1</ns0:cell><ns0:cell>71.3</ns0:cell><ns0:cell>71.8 71.8 71.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Time/s</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>com-youtube 72898.7 
125651.2</ns0:cell><ns0:cell>116.1</ns0:cell><ns0:cell>4729.8</ns0:cell><ns0:cell>402.9</ns0:cell><ns0:cell>26.7 26.7 26.7</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell cols='2'>5402.3 149680.6</ns0:cell><ns0:cell>244.7</ns0:cell><ns0:cell>10679.0</ns0:cell><ns0:cell>449.8</ns0:cell><ns0:cell>63.5 63.5 63.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>11591.5</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>398.1</ns0:cell><ns0:cell>17446.9</ns0:cell><ns0:cell>566.7</ns0:cell><ns0:cell>104.9 104.9 104.9</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell cols='2'>10704.6 252028.5</ns0:cell><ns0:cell>568.0</ns0:cell><ns0:cell>11729.1</ns0:cell><ns0:cell>744.1</ns0:cell><ns0:cell>163.9 163.9 163.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>34309.6</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>612.9</ns0:cell><ns0:cell>18253.6</ns0:cell><ns0:cell>2274.2</ns0:cell><ns0:cell>271.0 271.0 271.0</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison of Top-k% accuracy, Kendall-tau, and running time on large real-world networks with the baseline DrBC model. Results are taken from<ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. The bold results indicate the best performance for a given metric.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Top-k% accuracy, Kendall tau, and execution time in seconds on synthetic graphs of different scales adapted from</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Top-k% accuracy, and Kendall tau distance, (&#215;0.01) on large real-world networks showing the ablation study for different parts of the ABCDE model. The bold results indicate the best performance for a given metric.From the experiments demonstrated in Table5, it can be observed that each part's contribution differs for different graph types. ABCDE with no DropEdge outperforms the proposed approach on the com-youtube and amazon graphs which are relatively small networks. Constant DropEdge of 0.2 outperforms all the rest on the Dblp graph which is larger than com-youtube and amazon but smaller than cit-Patents and com-lj. ABCDE with Progressive-DropEdge and skip connections is the best for the largest two graphs, namely cit-Patents and com-lj. 
Removing skip connections from the model drops the performance significantly in all the cases.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>No DropEdge DropEdge= 0.2 No skip connections ABCDE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-1%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>78.5 78.5 78.5</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>66.5</ns0:cell><ns0:cell>77.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>91.0</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>92.0 92.0 92.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>79.3</ns0:cell><ns0:cell>80.2 80.2 80.2</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>79.8</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>47.4</ns0:cell><ns0:cell>47.1</ns0:cell><ns0:cell>37.6</ns0:cell><ns0:cell>50.2 50.2 50.2</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>69.0</ns0:cell><ns0:cell>69.1</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell>70.9 70.9 70.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-5%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>76.2 76.2 76.2</ns0:cell><ns0:cell>75.1</ns0:cell><ns0:cell>65.2</ns0:cell><ns0:cell>75.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>88.1 88.1 88.1</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>88.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>72.3</ns0:cell><ns0:cell>74.2 74.2 74.2</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell>55.9</ns0:cell><ns0:cell>52.1</ns0:cell><ns0:cell>58.3 58.3 58.3</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>62.8</ns0:cell><ns0:cell>75.7 75.7 75.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-10%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>78.1 78.1 78.1</ns0:cell><ns0:cell>77.1</ns0:cell><ns0:cell>67.5</ns0:cell><ns0:cell>77.6</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>86.1 86.1 86.1</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>77.6</ns0:cell><ns0:cell>85.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>75.0</ns0:cell><ns0:cell>77.0 77.0 77.0</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>76.3</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>63.4</ns0:cell><ns0:cell>63.0</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>64.9 64.9 64.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>78.2 78.2 78.2</ns0:cell><ns0:cell>77.9</ns0:cell><ns0:cell>69.1</ns0:cell><ns0:cell>78.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Kendall tau</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>77.3</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>70.9</ns0:cell><ns0:cell>77.7 77.7 77.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>73.5</ns0:cell><ns0:cell>73.9 73.9 73.9</ns0:cell><ns0:cell>73.9 73.9 
73.9</ns0:cell><ns0:cell>73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>71.1</ns0:cell><ns0:cell>73.5 73.5 73.5</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>70.9</ns0:cell><ns0:cell>65.8</ns0:cell><ns0:cell>71.8 71.8 71.8</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Dear Editors, We thank the reviewers for their great and informative comments. We have made several changes and updated the manuscript to address their concerns. In particular, the writing style and the motivation were updated to improve the quality of the paper. Several references suggested by the reviewers were added to address the recent research in the area. The ablation studies were updated as well to perform the analysis requested. We have reached out to the editing services of PeerJ at editorial.support@peerj.com where they reviewed the writing and told us that the English language is good enough and they don’t see the need in purchasing their services. We still did many improvements to the writing style to make the manuscript better. As requested by the editor, the whole narrative of the manuscript was edited to be either passive voice or include phrases like “the author” or “I”. Reviewer 1 Basic reporting The given paper uses deep graph convolutional neural network to approximate top-k nodes with highest betweenness-centrality score by estimating the rank score for each node in the graph. The proposed method obtains results which is an order of magnitude faster in inference hence requiring lesser computational resources. The literature review is structured with professional English language with proper figures and tables. However minor concerns can be shown as follows: 1. Full form of BFS in Line 84 should be mentioned. 2. Accuracy and time can be presented in the form of a plot for easier interpretation of results. 1. We have added the full form of BFS in the manuscript. 2. As the number of different datasets and different methods is too large, we found it neater to present it in a table format. Experimental design The research question is well designed with rigorous benchmarking across different networks and algorithms in the literature. The methods are described in detail along with the code provided for the replication of results. Minor concerns are given as: 1. In line 154-155, authors have taken 160 graphs for training and 240 graphs for validation purpose. Generally in machine and deep learning literature, validation samples are lesser than training. Can the authors provide the rationale for this design? 2. Progressive drop edge probabilities are taken from 0.3 to 0.1. However Rong et. al (2020) have taken the probabilities from 0.5-0.8 mostly. Can the authors provide the rationale behind this design parameter? 1. As the graphs are synthetic, that allows generating an arbitrary number of training/validation graphs. We’ve noticed during our experiments that fewer training graphs are enough for the model to learn good betweenness-centrality estimates while having more validation graphs allowed for a more stable evaluation of the current model. Please note that both training and validation graphs are regenerated every 10 epochs. 2. As Rong et. al (2020) performed analysis on graph neural networks for graph-level classification tasks, it allowed for more aggressive pruning. As our task is to estimate betweenness-centrality scores for each vertex, the pruning-regularization had to be milder. We’ve tried setting the DropEdge probability to 0.8 in our experiments but that led to worse results. That’s why our final choice was to keep the range of probabilities for dropping edges from 0.3 to 0.1. Validity of the findings The results provide equivalent amount of accuracy as compared to the benchmarking algorithms for top 1%, 5% and 10% scores as shown in Table 2. 
However the Kendall tau coefficient is better than the other algorithms as shown in Table 2 and 3. Additionally this algorithm also takes lesser time as compared to other benchmarking algorithms as shown in Table 2 and 3. As shown in Table 4, the algorithm scales well with the increase in the number of nodes. However, there are certain concerns about the variant of the graph convolutional network used in this paper about the result in Table 5. 1. No DropEdge shows better results when finding top 10% nodes, however ABCDE performs much better for top 1%. Can authors explain this phenomena? Is it dependant upon the nature of the graph used for validation?. 2. How does the results change with variation in DropEdge probability as per the analysis performed in Rong et. al? 3. The authors should provide plots regarding the training and validation loss performance of the for no DropEdge condition, 0.2 DropEdge and ABCDE algorithm similar to the one presented in Rong et. al. 4. The analysis on oversmoothing, for ABCDE and DropEdge algorithms should be present comparing with different DropEdge probabilities as provided in Rong et. al. 1. Although “No DropEdge” performs better on 3 out of 5 tasks according to Top-10% accuracy, the difference is small. In fact, if we take the average of the differences in Top-10% accuracies along five datasets, ABCDE is slightly better (0.32 percentage points). The average difference is much larger for Top-1% accuracy (1.92 points). We do not know why the effect on Top-1% is stronger than on Top-10%. 2. We are uncertain about the meaning of this comment because we did not find a graph/table in the work of Rong et. al with varying probabilities of DropEdge and the performance associated with it. We would kindly ask you to elaborate on the question. 3. We added the plots in the ablation studies. Thanks for pointing it out. 4. As we use skip-connections in ABCDE for each block, we do not notice any over-smoothing. This way the model avoids converging to very similar activations in deep layers. Reviewer 2 Basic reporting 1. Improve citation writing, for examples Line 26-29, 35. 2. The English language should be improved to ensure that an international audience can clearly understand your text. Some examples where the language could be improved include lines 31-32, 38-41, 55-57 etc. In general, the English language must be improved. 1. We improved the citations in the whole article. Thanks for pointing it out. 2. We tried to improve the English language across the whole article. Experimental design 1. Line 106-107, explain in more detail the relationship of each variable in the sentence. 2. In line 112: “the sparse adjacency matrix of edges”, what is the meaning? 3. In Chapter 4: Method, sub bab 4.1 and 4.2 explain in more detail the relationship of each variable with graph notation, not just citing it. Likewise in 4.6, 1. We added the notations and their explanations to have more details. 2. We fixed the bogus sentence 3. We added more details for most of the variables to explain the relationship details Validity of the findings Readability of the results of this paper is strongly influenced by the above improvements. Comments for the author 1. Abstract: Does not explain or does not appear to be related to the title 2. Line 25: importance node …. There is an error, node is not metric 3. In general, this paper has low readability. 1. We changed the abstract to relate more to the title. 2. We fixed the error. 3. 
We tried to improve the language across the whole paper and made major changes across the whole manuscript. Reviewer 3 Basic reporting The authors have proposed a model for calculating approximation betweenness centrality by dropping edges progressively. The research work seems to be useful in the area of social network analysis. The authors have proposed graph convolutional networks to approximate the betweenness centrality in a large-scale complex networks. 1. The novelty of the paper is justified. 2. Motivation and application of the research work is unclear. An additional paragraph in the introduction section may be included to describe motivation. 3. What is the significance betweenness centrality in social network application? Please explain 4. Recent papers related to social network analysis especially in the area of centrality analysis, community detection should include in the paper to improve the quality. The following references are suggested to cite in the paper. a. Behera, R. K., Naik, D., Rath, S. K., & Dharavath, R. (2019). Genetic algorithm-based community detection in large-scale social networks. Neural Computing and Applications, 1-17. b. Kumari, A., Behera, R. K., Sahoo, K. S., Nayyar, A., Kumar Luhach, A., & Prakash Sahoo, S. (2020). Supervised link prediction using structured‐based feature extraction in social networks. Concurrency and Computation: Practice and Experience, e5839.. c. Kumar Behera, R., Kumar Rath, S., Misra, S., Damaševičius, R., & Maskeliūnas, R. (2019). Distributed centrality analysis of social network data using MapReduce. Algorithms, 12(8), 161. d. Behera, R. K., Naik, D., Sahoo, B., & Rath, S. K. (2016, October). Centrality approach for community detection in large scale network. In Proceedings of the 9th Annual ACM India Conference (pp. 115-124). Experimental design 1. What is the significance betweenness centrality in social network application? Please explain 2. What are the parameters you have set for GPU processing? Please explain it at the experimental setup section. 3. Is the proposed approach including any notion of distributing computing? How the work is different from the following paper. a. Naik, D., Behera, R. K., Ramesh, D., & Rath, S. K. (2020). Map-reduce-based centrality detection in social networks: An algorithmic approach. Arabian Journal for Science and Engineering, 45, 10199-10222. b. Behera, R. K., Naik, D., Ramesh, D., & Rath, S. K. (2020). Mr-ibc: Mapreducebased incremental betweenness centrality in large-scale complex networks. Social Network Analysis and Mining, 10(1), 1-13 4. How your research finding is different from other works in the area of centrality analysis? 5. Which tools or package you have used to generate synthetic networks? Explain all the parameters that you have set to generate the synthetic network. Validity of the findings 1. The authors have experimented the proposed model using several real-world and synthetic networks. 2. The results obtained is justified. 3. Conclusion supports the proposed work clearly Comments for the author The authors have proposed a model for calculating approximation betweenness centrality by dropping edges progressively. The research work seems to be useful in the area of social network analysis. Author has proposed graph convolutional networks to approximate the betweenness centrality in large scale complex network. 1. The novelty of the paper is justified. 2. Motivation and application of the research work is unclear. 
An additional paragraph in the introduction section may be included to describe motivation. 3. What is the significance betweenness centrality in social network application? Please explain 4. What are the parameters you have set for GPU processing? Please explain it at the experimental setup section. 5. Is the proposed approach including any notion of distributing computing? How the work is different from the following paper. a. Naik, D., Behera, R. K., Ramesh, D., & Rath, S. K. (2020). Map-reduce-based centrality detection in social networks: An algorithmic approach. Arabian Journal for Science and Engineering, 45, 10199-10222. b. Behera, R. K., Naik, D., Ramesh, D., & Rath, S. K. (2020). Mr-ibc: Mapreduce-based incremental betweenness centrality in large-scale complex networks. Social Network Analysis and Mining, 10(1), 1-13 6. How your research finding is different from other works in the area of centrality analysis? 7. Which tools or package you have used to generate synthetic networks? Explain all the parameters that you have set to generate the synthetic network. 8. Recent paper related to social network analysis especially in the area of centrality analysis, community detection should include in the paper to improve the quality. The following references are suggested to cite in the paper. a. Behera, R. K., Naik, D., Rath, S. K., & Dharavath, R. (2019). Genetic algorithm-based community detection in large-scale social networks. Neural Computing and Applications, 1-17. b. Kumari, A., Behera, R. K., Sahoo, K. S., Nayyar, A., Kumar Luhach, A., & Prakash Sahoo, S. (2020). Supervised link prediction using structured‐based feature extraction in social network. Concurrency and Computation: Practice and Experience, e5839.. c. Kumar Behera, R., Kumar Rath, S., Misra, S., Damaševičius, R., & Maskeliūnas, R. (2019). Distributed centrality analysis of social network data using MapReduce. Algorithms, 12(8), 161. d. Behera, R. K., Naik, D., Sahoo, B., & Rath, S. K. (2016, October). Centrality approach for community detection in large scale network. In Proceedings of the 9th Annual ACM India Conference (pp. 115-124). 9. The conclusion part supports the research work clearly. 2. We included an additional paragraph in the introduction to explain the motivation in more details 3. We also added a paragraph in the introduction to address the importance of betweenness-centrality 4. We have not set any specific parameters for the GPU processing. The GPU is used by the PyTorch library as it is. We have not modified any configurations or set any limitations. 5-6. We added a paragraph in the paper explaining the differences and similarities to the approaches listed 7. We added the method of generating the synthetic graphs and the library with all the parameters in the section “4.6 Implementation Details”. 8. We’ve added several of the suggested references to the paper to improve the quality of the manuscript. "
Here is a paper. Please give your review comments after reading it.
230
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Betweenness-centrality is a popular measure in network analysis that aims to describe the importance of nodes in a graph. It accounts for the fraction of shortest paths passing through that node and is a key measure in many applications including community detection and network dismantling. The computation of betweenness-centrality for each node in a graph requires an excessive amount of computing power, especially for large graphs. On the other hand, in many applications, the main interest lies in finding the top-k most important nodes in the graph. Therefore, several approximation algorithms were proposed to solve the problem faster. Some recent approaches propose to use shallow graph convolutional networks to approximate the top-k nodes with the highest betweenness-centrality scores.</ns0:p><ns0:p>This work presents a deep graph convolutional neural network that outputs a rank score for each node in a given graph. With careful optimization and regularization tricks, including an extended version of DropEdge which is named Progressive-DropEdge, the system achieves better results than the current approaches. Experiments on both real-world and synthetic datasets show that the presented algorithm is an order of magnitude faster in inference and requires several times fewer resources and time to train.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Conducting network analysis has been a prominent topic in research, with applications spanning from community detection in social networks <ns0:ref type='bibr' target='#b3'>(Behera et al., 2020b</ns0:ref><ns0:ref type='bibr' target='#b5'>(Behera et al., , 2016))</ns0:ref>, to detecting critical nodes <ns0:ref type='bibr' target='#b4'>(Behera et al., 2019)</ns0:ref>, to hidden link prediction <ns0:ref type='bibr' target='#b18'>(Liu et al., 2013)</ns0:ref>. One of the more fundamental metrics for determining the importance of each graph node for network analysis is betweenness-centrality (BC). BC aims to measure the importance of nodes in the graph in terms of connectivity to other nodes via the shortest paths <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016)</ns0:ref>. It plays a big role in understanding the influence of nodes in a graph and, as an example, can be used to discover an important member, like a famous influencer or the set of the most reputable users in a network <ns0:ref type='bibr' target='#b4'>(Behera et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Computing the measure can be very resource-demanding especially for the large graphs. The fastest algorithm for computing exact betweenness-centrality in a given graph is the Brandes algorithm <ns0:ref type='bibr' target='#b9'>(Brandes, 2001)</ns0:ref> As O(|V ||E|) can grow very fast with the increase in the network size, several approximation algorithms based on sampling have been proposed <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Riondato and Kornaropoulos, 2014a)</ns0:ref>.</ns0:p><ns0:p>However, along with the growth in the size of the graph, we face tantamount increases in the execution time and proportional decreases in the accuracy of the prediction. 
In many applications, the computation of betweenness-centrality needs to be fast enough to handle dynamic changes in the graph <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Although several distributed computing algorithms exist for calculating betweenness-centrality <ns0:ref type='bibr' target='#b21'>(Naik et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b2'>Behera et al., 2020a</ns0:ref><ns0:ref type='bibr' target='#b4'>, 2019)</ns0:ref>, where the authors propose approaches to compute the BC of a network using map-reduce in a distributed environment, this work focuses on a single-machine algorithm.</ns0:p><ns0:p>The model works on a single machine, requires only a single GPU for training, and can perform predictions on relatively big networks without the need for many machines or excessive computational power.</ns0:p><ns0:p>In fields such as social network analysis and network dismantling, it is at times far more important to compute the relative importance of the nodes in the graph than to obtain the exact betweenness-centrality scores <ns0:ref type='bibr' target='#b15'>(Holme et al., 2002)</ns0:ref>. Several recent works such as <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b20'>(Maurya et al., 2019)</ns0:ref> have proposed to reformulate the problem as a learning-to-rank problem, with the aim of learning a function that maps the nodes of the input graph to relative ranking BC scores. So, instead of computing the exact scores, the task becomes finding the correct order of the nodes with respect to their betweenness-centrality.</ns0:p><ns0:p>Instead of using approximation techniques like sampling, recent approaches have proposed to train a graph convolutional neural network on small synthetic graphs that learns to rank nodes based on their BC and is able to generalize to bigger real-world graphs <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In general, it is hard to avoid over-fitting and over-smoothing when training deep graph convolutional neural networks <ns0:ref type='bibr' target='#b33'>(Rong et al., 2020)</ns0:ref>. In order to generalize better (over-fitting) on small datasets and avoid obtaining uninformative representations for each node (over-smoothing) in deep models, Rong et al. (2020) propose DropEdge, which randomly removes edges from the graph during training. Introducing Progressive-DropEdge in the training procedure improves the performance of the model, especially on larger real-world networks. This paper focuses on the benchmark of ranking based on betweenness-centrality proposed by Fan et al. (2019), as it includes various real-world and synthetic datasets and detailed comparisons with other approximation algorithms.</ns0:p><ns0:p>The main contributions are threefold:</ns0:p><ns0:p>&#8226; First, Progressive-DropEdge is introduced in the training procedure, which acts as regularization and improves the performance on large networks.</ns0:p><ns0:p>&#8226; Second, deeper graph convolutional networks are shown to be able to have fewer parameters and be more efficient than shallower alternatives, leading to state-of-the-art results while being an order of magnitude faster.</ns0:p><ns0:p>&#8226; Finally, the presented training procedure converges faster and requires fewer resources, which enables training on a single GPU machine.</ns0:p><ns0:p>The approach is named ABCDE: Approximating Betweenness-Centrality ranking with progressive-DropEdge.</ns0:p><ns0:p>The source code is available on GitHub: https://github.com/MartinXPN/abcde.
To reproduce the reported results, one can run:</ns0:p><ns0:p>$ docker run martin97/abcde:latest</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Betweenness Centrality</ns0:head><ns0:p>The best-known algorithm for computing exact betweenness-centrality values is the Brandes algorithm <ns0:ref type='bibr' target='#b9'>(Brandes, 2001)</ns0:ref>, which has O(|V ||E|) time complexity for unweighted graphs and O(|V ||E| + |V | 2 log |V |) for weighted ones, where |V | denotes the number of nodes and |E| denotes the number of edges in the graph. To enable approximate BC computation for large graphs, several approximation algorithms were proposed which use only a small subset of edges in the graph. <ns0:ref type='bibr' target='#b29'>Riondato and Kornaropoulos (2014a)</ns0:ref> introduce the Vapnik-Chervonenkis (VC) dimension to compute the sample size that would be sufficient to obtain guaranteed approximations for the BC values of each node <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. If V max denotes the maximum number of nodes on any shortest path, &#955; denotes the maximum additive error that the approximations should match, and &#948; is the probability of the guarantees holding, then the number of samples required to compute the BC score would be (c/&#955; 2 )(&#8970;log(V max &#8722; 2)&#8971; + 1 + log(1/&#948;)). Riondato and Upfal (2018) achieve the same guarantees as Riondato and Kornaropoulos (2014a) with smaller sample sizes, and <ns0:ref type='bibr' target='#b8'>Borassi and Natale (2019)</ns0:ref> propose a balanced bidirectional breadth-first search (BFS) which reduces the time for each sample from O(|E|) to O(|E| 1/2+O(1) ). Yet both approaches require a second run of the algorithm to identify the top-k nodes with the highest betweenness-centrality scores. <ns0:ref type='bibr' target='#b17'>Kourtellis et al. (2012)</ns0:ref> introduce another metric that is correlated with high betweenness-centrality values and compute that metric instead to identify nodes with high BC scores. <ns0:ref type='bibr' target='#b8'>Borassi and Natale (2019)</ns0:ref> propose an efficient way of computing BC for the top-k nodes, which allows bigger confidence intervals for nodes with well-separated betweenness-centrality values. <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b20'>Maurya et al. (2019)</ns0:ref> propose shallow graph convolutional network approaches for approximating the ranking based on the betweenness-centrality of nodes in the graph. They treat the problem as a learning-to-rank problem and approximate the ranking of vertices based on their betweenness-centrality.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Deep Graph Convolutional Networks</ns0:head><ns0:p>Graph Convolutional Networks (GCNs) have recently gained a lot of attention and have become the de facto methods for learning graph representations <ns0:ref type='bibr' target='#b34'>(Wu et al., 2019)</ns0:ref>. They are widely used in many graph representation tasks. Yet, different studies have different findings regarding the expressive power of GCNs as the network depth increases. <ns0:ref type='bibr' target='#b22'>Oono and Suzuki (2020)</ns0:ref> claim that GCNs do not improve, or sometimes even worsen, their predictive performance as the number of layers in the network and the non-linearities grow. On the other hand, <ns0:ref type='bibr' target='#b33'>Rong et al. (2020)</ns0:ref> claim that removing random edges from the graph during training acts as a regularisation for deep GCNs and helps to combat over-fitting (loss of generalization power on small datasets) and over-smoothing (isolation of output representations from the input features with the increase in network depth).
They empirically show that this trick, called DropEdge, improves the performance on several both deep and shallow GCNs.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PRELIMINARIES</ns0:head><ns0:p>Let G = (V, E) denote a network where each node has a representation Betweenness-centrality accounts for the significance of individual nodes based on the fraction of shortest paths that pass through them <ns0:ref type='bibr' target='#b19'>(Mahmoody et al., 2016)</ns0:ref>. Normalized betweenness-centrality for node w is defined as:</ns0:p><ns0:formula xml:id='formula_0'>X v &#8712; R c for v &#8712; V ,</ns0:formula><ns0:formula xml:id='formula_1'>b(w) = 1 |V |(|V | &#8722; 1) &#8721; u =w =v &#963; uv (w) &#963; uv<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where |V | denotes the number of nodes in the network, &#963; uv denotes the number of shortest paths from u to v, and &#963; uv (w) the number of shortest paths from u to v that pass through w.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>METHOD</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Input features</ns0:head><ns0:p>For the input, the model only needs the structure of the graph G represented as a sparse adjacency matrix, and the degree d v for each vertex v &#8712; V . In comparison to this method, <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> uses two additional features for each vertex, which were calculated based on the neighborhoods with radii of sizes one and two for each node. Yet, in this approach, having only the degree of the vertex and the network structure itself is sufficient to approximate the betweenness-centrality ranking for each node. So, the initial feature vector X v &#8712; R c for vertex v is only a single number -the degree of the vertex, which is enriched in deeper layers of the model.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Output and loss function</ns0:head><ns0:p>For each node v in the graph G, the model predicts the relative BC ranking score, meaning that for each input X v the model only outputs a single value which represents the predicted ranking score y v &#8712; R. As the output is the relative ranking score, the loss function is chosen to be a pairwise ranking loss follow</ns0:p></ns0:div> <ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:2:0:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the approach proposed by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. To compute the pairwise ranking loss, 5|V | node pairs (i, j) are randomly sampled, following <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref> binary cross-entropy between the true order and the predicted order of those pairs is computed. So, having the two ground truth betweenness-centrality values b i and b j for i and j pair, and their relative rank y i and y j , the loss of a single pair would be:</ns0:p><ns0:formula xml:id='formula_2'>C i, j = &#8722;&#963; (b i &#8722; b j ) &#8226; log &#963; (y i &#8722; y j ) &#8722; (1 &#8722; &#963; (b i &#8722; b j )) &#8226; log(1 &#8722; &#963; (y i &#8722; y j )) (2)</ns0:formula><ns0:p>where &#963; is the sigmoid function defined as 1/(1 + e &#8722;x ). The total loss would be the sum of cross entropy losses for those pairs:</ns0:p><ns0:formula xml:id='formula_3'>L = &#8721; i, j&#8712;5|V | C i, j<ns0:label>(3)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='4.3'>Evaluation Metrics</ns0:head><ns0:p>As the baseline proposed by Fan et al. 
( <ns0:ref type='formula'>2019</ns0:ref>) is adopted, the evaluation strategy is also the same. There are several metrics presented in the baseline. Kendall tau score is a metric that computes the number of concordant and discordant pairs in two ranking lists and is defined as:</ns0:p><ns0:formula xml:id='formula_4'>K(l 1 , l 2 ) = 2(&#945; &#8722; &#946; ) n &#8226; (n &#8722; 1) (4)</ns0:formula><ns0:p>where l 1 is the first list, l 2 is the second list, &#945; is the number of concordant pairs, &#946; is the number of discordant pairs, and n is the total number of elements. The range of the metric is [&#8722;1; 1] where 1 means that two ranking lists are in total agreement and &#8722;1 means that the two lists are in total disagreement.</ns0:p><ns0:p>Top-k% accuracy is defined as the percentage of overlap between the top-k% nodes in the predictions and the top-k% nodes in the ground truth list:</ns0:p><ns0:formula xml:id='formula_5'>Top-k% = {predicted-top-k%} &#8745; {true-top-k%} &#8968;|V | &#215; k%&#8969;<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In these experiments, top-1%, top-5%, and top-10% accuracies as well as the Kendall tau score are reported.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Training Data</ns0:head><ns0:p>The training data is generated similar to <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. Random graphs are sampled from the powerlaw distribution during training. The exact betweenness-centrality scores are computed for those graphs and are treated as the ground truth. As their sizes are small, the computation of the exact betweenness-centrality score is not computationally demanding. To avoid over-fitting on those graphs they are regenerated every 10 epochs. Each training graph is reused 8 times on average during a single training epoch.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Model architecture</ns0:head><ns0:p>The model architecture is a deep graph convolutional network which consists of a stack of GCN layers and MaxPooling operations presented in Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>. A GCN operation for a node v which has a neighborhood N(v) is defined as:</ns0:p><ns0:formula xml:id='formula_6'>H v = W &#8226; &#8721; u&#8712;N(v) 1 &#8730; d v + 1 &#8226; &#8730; d u + 1 h u (6)</ns0:formula><ns0:p>where h u is the input vector representation of the node u, d v and d u are the degrees of the vertices v and u accordingly, H v is the output vector representation of the node v, and W is a learnable matrix of weights.</ns0:p><ns0:p>The model takes the input representation X v of vertex v and maps it to an intermediate vector representation which is followed by several blocks of GCNs with different feature sizes, followed by MaxPooling operations which reduce the extracted features in the block to a single number for each Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>vertex. Each GCN block is followed by a transition block which is a fully connected single layer that maps the sizes of the previous GCN block to the current one.</ns0:p><ns0:p>For every GCN block, a different amount of random edge drops is applied which is called Progressive-DropEdge. In these experiments the model best scales when the probability of dropping an edge is higher in the initial GCN blocks, while slowly decreasing the probability as the layers approach the output. That helps the model to focus on more details and have a better, fine-grained ranking score prediction. 
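To make the schedule concrete, the following is a minimal sketch of how a progressively decreasing DropEdge probability per GCN block could be applied to an edge_index tensor of shape [2, num_edges]. It is an illustrative simplification rather than the released implementation: the function name and the surrounding loop are hypothetical, the two directions of an undirected edge are dropped independently here, and the degree-based restriction described next is omitted.

import torch

# Per-block edge-drop probabilities, higher for early blocks and lower near the output
# (the schedule reported in the Implementation Details: 0.3, 0.3, 0.2, 0.2, 0.1, 0.1).
DROP_SCHEDULE = [0.3, 0.3, 0.2, 0.2, 0.1, 0.1]

def progressive_drop_edge(edge_index: torch.Tensor, p: float) -> torch.Tensor:
    # Keep each edge independently with probability (1 - p); applied at training time only.
    keep_mask = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep_mask]

# Sketch of use inside a hypothetical forward pass:
# for block, p in zip(gcn_blocks, DROP_SCHEDULE):
#     edges = progressive_drop_edge(edge_index, p) if training else edge_index
#     x = block(x, edges)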
To avoid having isolated nodes only the edges of vertices with degrees higher than 5 are dropped. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.6'>Implementation Details</ns0:head><ns0:p>The MLPs and transition blocks follow the {Linear &#8594; LayerNorm &#8594; PReLU &#8594; Dropout} structure, while GCN blocks follow the {GCNConv &#8594; PReLU &#8594; LayerNorm &#8594; Dropout} structure. The initial MLP that maps the input X v to an intermediate representation has a size of 16. There are 6 blocks of GCNs in total. The number of GCNConvs in the blocks are {4, 4, 6, 6, 8, 8}, while their sizes are {48, 48, 32, 32, 24, 24}. The Progressive-DropEdge for each block is applied with probabilities {0.3, 0.3, 0.2, 0.2, 0.1, 0.1}. Gradients are clipped after the value 0.3.</ns0:p><ns0:p>For training and validation, random graphs from the powerlaw distribution are sampled using the NetworkX library <ns0:ref type='bibr' target='#b14'>(Hagberg et al., 2008)</ns0:ref>, having nodes from 4000 to 5000 with a fixed number of edges to add (m = 4), and the probability of creating a triangle after adding an edge (p = 0.05) (following Fan The training is stopped whenever Kendall Tau on the validation set does not improve for 5 consecutive epochs. Adam optimizer <ns0:ref type='bibr' target='#b16'>(Kingma and Ba, 2014)</ns0:ref> is used with an initial learning rate of 0.01 and the learning rate is divided by 2 if the validation Kendall score does not increase for two consecutive epochs.</ns0:p><ns0:p>The GCN training is implemented in Pytorch <ns0:ref type='bibr'>(Paszke et al., 2019)</ns0:ref> and Pytorch Geometric <ns0:ref type='bibr' target='#b12'>(Fey and Lenssen, 2019)</ns0:ref> libraries. All the weights are initialized with their default initializers. The ground truth betweenness-centrality values for training graphs are calculated with python-igraph library <ns0:ref type='bibr' target='#b10'>(Csardi and Nepusz, 2006)</ns0:ref>. Training and validation results were tracked with Aim <ns0:ref type='bibr' target='#b1'>(Arakelyan, 2020)</ns0:ref> and Weights and Biases <ns0:ref type='bibr' target='#b6'>(Biewald, 2020)</ns0:ref> libraries.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.7'>Complexity Analysis</ns0:head><ns0:p>The training time complexity is intractable to estimate robustly as it largely depends on the number of training steps, the network size, and the implementation of the operations used within the network. In The inference time complexity is proportional to the operations required for a single forward pass.</ns0:p><ns0:p>For most graphs in practice, including all graphs used in this work, all the vertices in a graph can be propagated in a single minibatch, so the complexity of inference becomes O(L &#8226; f &#8226; (|V | + |E|)). Further analysis of this model empirically demonstrates that L &#8226; f is a relatively small constant compared to other approaches and the speed of this approach outperforms others by an order of magnitude.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>EVALUATION AND RESULTS</ns0:head><ns0:p>The approach is evaluated on both real-world and synthetic graphs. Both of those are present in the benchmark provided by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. The synthetic networks are generated from powerlaw distribution with a fixed number of edges to add (m = 4), and the probability of creating a triangle after adding an edge (p = 0.05), while the real-world graphs are taken from AlGhamdi et al. 
( <ns0:ref type='formula'>2017</ns0:ref>) and represent 5 big graphs taken from real-world applications. The real-world graphs with their description and parameters are presented in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The ground truth betweenness-centralities for the real-world graphs are provided by AlGhamdi et al.</ns0:p><ns0:p>(2017), which are computed by the parallel implementation of Brandes algorithm on a 96 000-core supercomputer. The ground truth scores for the synthetic networks are provided by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> and are computed using the graph-tool <ns0:ref type='bibr' target='#b26'>(Peixoto, 2014)</ns0:ref> library.</ns0:p><ns0:p>The presented approach is compared to several baseline models. The performance of those models are adopted from the benchmark provided by Fan et al. ( <ns0:ref type='formula'>2019</ns0:ref>):</ns0:p><ns0:p>&#8226; ABRA (Riondato and Upfal, 2018): Samples pairs of nodes until the desired accuracy is reached.</ns0:p><ns0:p>Where the error tolerance &#955; was set to 0.01 and the probability &#948; was set to 0.1.</ns0:p><ns0:p>&#8226; RK <ns0:ref type='bibr' target='#b30'>(Riondato and Kornaropoulos, 2014b)</ns0:ref>: The number of pairs of nodes is determined by the diameter of the network. Where the error tolerance and the probability were set similar to ABRA.</ns0:p><ns0:p>&#8226; k-BC <ns0:ref type='bibr' target='#b27'>(Pfeffer and Carley, 2012)</ns0:ref>: Does only k steps of Brandes algorithm <ns0:ref type='bibr' target='#b9'>(Brandes, 2001)</ns0:ref> which was set to 20% of the diameter of the network.</ns0:p><ns0:p>&#8226; KADABRA <ns0:ref type='bibr' target='#b8'>(Borassi and Natale, 2019)</ns0:ref>: Uses bidirectional BFS to sample the shortest paths. The variant where it computest the top-k% nodes with the highest betweenness-centrality was used. The error tolerance and probability were set to be the same as ABRA and RK.</ns0:p><ns0:p>&#8226; Node2Vec <ns0:ref type='bibr' target='#b13'>(Grover and Leskovec, 2016)</ns0:ref>: Uses a biased random walk to aggregate information from the neighbors. The vector representations of each node were then mapped with a trained MLP to ranking scores.</ns0:p></ns0:div> <ns0:div><ns0:head>6/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:2:0:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; DrBC <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>: Shallow graph convolutional network that outputs a ranking score for each node by propagating through the neighbors with a walk length of 5.</ns0:p><ns0:p>For a fair comparison, the presented model was run on a CPU machine with 80 cores and 512GB memory to match the results reported by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref>. Please note that due to several optimizations and smaller model size, the training takes around 30 minutes on a single 12GB NVIDIA 1080Ti GPU machine with only 4vCPUs and 12GB RAM compared to 4.5 hours reported by <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> which used an 80-core machine with 512GB RAM, and 8 16GB Tesla V100 GPUs. For the inference, the ABCDE model does not need the 512GB memory, it only utilizes a small portion of it. Yet, the machine is used for a fair comparison. 
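Before turning to the numbers, note that the two accuracy measures reported below, the Top-k% overlap and the Kendall tau score defined in the Evaluation Metrics section, can be computed with standard tooling. The following is a minimal illustrative sketch assuming NumPy and SciPy are available; it is not an excerpt of the benchmarked implementations:

import numpy as np
from scipy.stats import kendalltau

def top_k_percent_accuracy(pred_scores, true_scores, k=1):
    # Overlap between the predicted and the ground-truth top-k% node sets (Equation 5).
    n = len(true_scores)
    top = max(1, int(np.ceil(n * k / 100)))
    pred_top = set(np.argsort(pred_scores)[::-1][:top])
    true_top = set(np.argsort(true_scores)[::-1][:top])
    return len(pred_top & true_top) / top

def kendall_tau_score(pred_scores, true_scores):
    # Rank correlation in [-1, 1] between the predicted and ground-truth orderings.
    tau, _p_value = kendalltau(pred_scores, true_scores)
    return tau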
The inference is run on a CPU to be fairly compared to all the other techniques reported, yet using a GPU for inference can increase the speed substantially. <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. It was not feasible to calculate the results marked with NA. The bold results indicate the best performance for a given metric.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head><ns0:p>Results on real-world networks presented in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> demonstrate that the ABCDE model outperforms all the other approaches for the ranking score Kendall-tau and is especially good for large graphs. For the Top-1%, Top-5%, and Top-10% accuracy scores, ABCDE outperforms other approaches on some datasets, while shows close-to-top performance on others. The presented algorithm is the fastest among all the baselines and outperforms others by an order of magnitude.</ns0:p><ns0:p>Comparison of the ABCDE model with the previous GCN approach DrBC, demonstrated in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>,</ns0:p><ns0:p>shows that the presented deep model is more accurate and can achieve better results even though it has fewer trainable parameters and requires less time to train.</ns0:p></ns0:div> <ns0:div><ns0:head>7/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:2:0:NEW 30 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. The bold results indicate the best performance for a given metric.</ns0:p><ns0:p>For each scale, the mean and standard deviation over 30 tests are reported.</ns0:p><ns0:p>The results on synthetic datasets demonstrated in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> show that ABRA performs well on identifying Top-1% nodes in the graph with the highest betweenness-centrality score, even though requiring a longer time to run. On all the other metrics including Top-5%, Top-10%, and Kendall tau scores ABCDE approach outperforms all the others. ABCDE is substantially faster than others on large graphs and for the small graphs, it has comparable performance to DrBC.</ns0:p><ns0:p>It is important to note that the presented model has only around 70 000 trainable parameters and requires around 30 minutes to converge during training as opposed to DrBC which has around 120 000 trainable parameters and requires around 4.5 hours to converge.</ns0:p><ns0:p>More GCN layers in the model enable the process to explore wider neighborhoods for each vertex in the graph during inference. <ns0:ref type='bibr' target='#b11'>Fan et al. (2019)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>stage, therefore helping the network explore a wider spectrum of neighbors. That helps the network have better performance even though the structure is way simpler.</ns0:p><ns0:p>To be able to have a deep network with many graph-convolutional blocks, progressive DropEdge along with skip connections is used. Each GCN block gets only part of the graph where a certain number of edges are removed randomly. Initial layers get fewer edges, while layers closer to the final output MLP get more context of the graph which helps the model explore the graph better.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>ABLATION STUDIES</ns0:head><ns0:p>To demonstrate the contribution of each part of the ABCDE approach, each part is evaluated in ablation studies. 
Parts of the approach are removed to demonstrate the performance changes on the real-world datasets. As a lot of real-world graphs are very large, the final ABCDE approach is chosen to be the one leading to the best performance on the large networks.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The over-fitting behavior of the proposed approach is also studied in details. As demonstrated in the Unlike the experiments done by <ns0:ref type='bibr' target='#b33'>Rong et al. (2020)</ns0:ref>, there is no over-smoothing noticed in ABCDE as the model employs skip-connections for each block. That helps it avoid converging to very similar activations in deep layers.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this paper, a deep graph convolutional network was presented to approximate betweenness-centrality ranking scores for each node in a given graph. The author demonstrated that the number of parameters of the network can be reduced, while not compromising the predictive power of the network. The approach achieves better convergence and faster training on smaller machines compared to the previous approaches.</ns0:p><ns0:p>A novel way was proposed to add regularisation to the network through progressively dropping random edges in each graph convolutional block, which was called Progressive-DropEdge. The results suggest that deep graph convolutional networks are capable of learning informative representations of graphs and can approximate the ranking score for betweenness-centrality while preserving good generalizability for real-world graphs. The time comparison demonstrates that this approach is significantly faster than alternatives.</ns0:p><ns0:p>Several future directions can be examined, including case studies on specific applications (e.g. urban planning, social networks), and extensions of the approach for directed and weighted graphs. One more interesting direction is to approximate other centrality measures in big networks.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>which has O(|V ||E|) time complexity for unweighted networks and O(|V ||E| + |V | 2 log |V |) for weighted ones, where |V | denotes the number of nodes and |E| denotes the number of edges in the graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>Figure 1. Introducing Progressive-DropEdge in the training procedure improves the performance of the model, especially on larger real-world networks. This paper focuses on the benchmark of ranking based on betweenness-centrality proposed by Fan et al. (2019) as they include various real-world and synthetic datasets and detailed comparisons with other approximation algorithms. The main contributions are threefold: &#8226; First, Progressive-DropEdge is introduced in the training procedure which acts as regularization and improves the performance on large networks.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>which has O(|V ||E|) time complexity for unweighted graphs and O(|V ||E| + |V | 2 log |V |)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>2014a) with smaller sample sizes. 
Borassi and Natale (2019) propose a balanced bidirectional breadth-first search (BFS) which reduces the time for each sample from O(|E|) to O(|E| 1 2 +O(1)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>where c denotes the dimensionality of the representation, d v denotes the degree of the vertex v, |V | denotes the number of nodes and |E| denotes the number of edges in the graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. ABCDE model architecture. Each Transition block is a set of {Linear &#8594; LayerNorm &#8594; PRelu &#8594; Dropout} layers, while each GCN is a set of {GCNConv &#8594; PReLU &#8594; LayerNorm &#8594; Dropout}.symbol is the concatenation operation. Each MaxPooling operation extracts the maximum value from the given GCN block.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2019)). For each training epoch, 160 graphs are sampled, while during validation 240 graphs are used for stability. The batch size is set to 16 graphs per step and the training lasts for at most 50 epochs.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>generic terms, the time complexity can be expressed as O(S(F + B)) where S is the number of training 5/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:2:0:NEW 30 Jul 2021) Manuscript to be reviewed Computer Science steps which can be expressed by the number of epochs times the number of minibatches within the epoch, F and B are the operations required for a single forward and backward pass of a minibatch respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>F</ns0:head><ns0:label /><ns0:figDesc>and B are proportional to the number of layers in the deep network L, and the number of nodes and edges in the graph. GCN operation is O( f &#8226; (|V | + |E|)), where f is the size of the feature vector for each node. The overall time complexity would be proportional to O(S &#8226; L &#8226; f &#8226; (|V | + |E|))). In this approach, the training procedure converges in about 30 minutes and then the network can be reused for an arbitrarily constructed input graph.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 2 ,Figure 2 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figure2, the model without drop-edge over-fits faster than the models with a constant 0.2 DropEdge probability and the ABCDE model with progressive DropEdge. The ABCDE model over-fits less and has more stable validation loss compared to both the constant drop-edge models (0.2 and 0.8) and no dropedge model. When the probability of dropping random edges from the input graph increases too much,</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>. Summary</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Network com-Youtube 1 134 890 2 987 624 |V | |E|</ns0:cell><ns0:cell>D 5.27</ns0:cell><ns0:cell cols='2'>Diameter Description 20 A video-sharing web site that includes a social network. Nodes are users and edges</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>are friendships</ns0:cell></ns0:row><ns0:row><ns0:cell>Amazon</ns0:cell><ns0:cell>2 146 057 5 743 146</ns0:cell><ns0:cell>5.35</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>A product network created by crawling the Amazon online store. 
Nodes represent</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>products and edges link commonly co-purchased products</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>4 000 148 8 649 011</ns0:cell><ns0:cell>4.32</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>An authorship network extracted from the DBLP computer science bibliography.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Nodes are authors and publications. Each edge connects an author to one of his</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>publications</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell cols='2'>3 764 117 16 511 741 8.77</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>A citation network of U.S. patents. Nodes are patents and edges represent citations.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>In our experiments, we regard it as an undirected network</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell cols='2'>3 997 962 34 681 189 17.35</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>A social network where nodes are LiveJournal users and edges are their friendships</ns0:cell></ns0:row></ns0:table><ns0:note>of real-world datasets. Where |V | is the number of nodes, |E| is the number of edges, and D is the average degree of the graph. Adapted from</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Top-k% accuracy, Kendall tau distance, (&#215;0.01), and running time on large real-world networks adapted from</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>ABRA</ns0:cell><ns0:cell>RK</ns0:cell><ns0:cell cols='4'>KADABRA Node2Vec DrBC ABCDE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-1%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>95.7 95.7 95.7</ns0:cell><ns0:cell>76.0</ns0:cell><ns0:cell>57.5</ns0:cell><ns0:cell>12.3</ns0:cell><ns0:cell>73.6</ns0:cell><ns0:cell>77.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>86.0</ns0:cell><ns0:cell>47.6</ns0:cell><ns0:cell>16.7</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>92.0 92.0 92.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>35.2</ns0:cell><ns0:cell>11.5</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>79.8 79.8 79.8</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>37.0</ns0:cell><ns0:cell>74.4 74.4 74.4</ns0:cell><ns0:cell>23.4</ns0:cell><ns0:cell>0.04</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>50.2</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>60.0</ns0:cell><ns0:cell>54.2*</ns0:cell><ns0:cell>31.9</ns0:cell><ns0:cell>3.9</ns0:cell><ns0:cell>67.2</ns0:cell><ns0:cell>70.9 70.9 70.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-5%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>91.2 91.2 91.2</ns0:cell><ns0:cell>75.8</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>18.9</ns0:cell><ns0:cell>66.7</ns0:cell><ns0:cell>75.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>58.0</ns0:cell><ns0:cell>59.4</ns0:cell><ns0:cell>56.0</ns0:cell><ns0:cell>23.2</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>88.0 88.0 
88.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>45.5</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>20.2</ns0:cell><ns0:cell>72.0</ns0:cell><ns0:cell>73.7 73.7 73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>42.4</ns0:cell><ns0:cell>68.2 68.2 68.2</ns0:cell><ns0:cell>25.1</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>57.5</ns0:cell><ns0:cell>58.3</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>56.9</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>39.5</ns0:cell><ns0:cell>10.35</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>75.7 75.7 75.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Top-10%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>89.5</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>44.6</ns0:cell><ns0:cell>23.6</ns0:cell><ns0:cell>69.5</ns0:cell><ns0:cell>77.6</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>60.3</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>56.7</ns0:cell><ns0:cell>26.6</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>85.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>100.0 100.0 100.0</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>50.4</ns0:cell><ns0:cell>27.7</ns0:cell><ns0:cell>72.5</ns0:cell><ns0:cell>76.3</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>50.9</ns0:cell><ns0:cell>53.5</ns0:cell><ns0:cell>21.6</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>64.1</ns0:cell><ns0:cell>64.9 64.9 64.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>63.6</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>47.6</ns0:cell><ns0:cell>15.4</ns0:cell><ns0:cell>74.8</ns0:cell><ns0:cell>78.0 78.0 78.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Kendall tau</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>56.2</ns0:cell><ns0:cell>13.9</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>46.2</ns0:cell><ns0:cell>57.3</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>16.3</ns0:cell><ns0:cell>9.7</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>44.7</ns0:cell><ns0:cell>69.3</ns0:cell><ns0:cell>77.7 77.7 77.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>14.3</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>49.5</ns0:cell><ns0:cell>71.9</ns0:cell><ns0:cell>73.7 73.7 73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>17.3</ns0:cell><ns0:cell>15.3</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>4.0</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>73.5 73.5 73.5</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>22.8</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>35.1</ns0:cell><ns0:cell>71.3</ns0:cell><ns0:cell>71.8 71.8 71.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Time/s</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>com-youtube 72898.7 125651.2</ns0:cell><ns0:cell>116.1</ns0:cell><ns0:cell>4729.8</ns0:cell><ns0:cell>402.9</ns0:cell><ns0:cell>26.7 26.7 26.7</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell cols='2'>5402.3 149680.6</ns0:cell><ns0:cell>244.7</ns0:cell><ns0:cell>10679.0</ns0:cell><ns0:cell>449.8</ns0:cell><ns0:cell>63.5 63.5 
63.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>11591.5</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>398.1</ns0:cell><ns0:cell>17446.9</ns0:cell><ns0:cell>566.7</ns0:cell><ns0:cell>104.9 104.9 104.9</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell cols='2'>10704.6 252028.5</ns0:cell><ns0:cell>568.0</ns0:cell><ns0:cell>11729.1</ns0:cell><ns0:cell>744.1</ns0:cell><ns0:cell>163.9 163.9 163.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>34309.6</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>612.9</ns0:cell><ns0:cell>18253.6</ns0:cell><ns0:cell>2274.2</ns0:cell><ns0:cell>271.0 271.0 271.0</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison of Top-k% accuracy, Kendall-tau, and running time on large real-world networks with the baseline DrBC model. Results are taken from<ns0:ref type='bibr' target='#b11'>(Fan et al., 2019)</ns0:ref>. The bold results indicate the best performance for a given metric.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Top-k% accuracy, Kendall tau, and execution time in seconds on synthetic graphs of different scales adapted from</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>used only 5 neighbor aggregations which limit the information aggregated especially for big graphs. We use a deeper network with more neighbor aggregations on each</ns0:figDesc><ns0:table /><ns0:note>8/12PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60210:2:0:NEW 30 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Top-k% accuracy, and Kendall tau distance, (&#215;0.01) on large real-world networks showing the ablation study for different parts of the ABCDE model. The bold results indicate the best performance for a given metric.From the experiments demonstrated in Table5, it can be observed that each part's contribution differs for different graph types. ABCDE with no DropEdge outperforms the proposed approach on the com-youtube and amazon graphs which are relatively small networks. Constant DropEdge of 0.2 outperforms all the rest on the Dblp graph which is larger than com-youtube and amazon but smaller than cit-Patents and com-lj. ABCDE with Progressive-DropEdge and skip connections is the best for the largest two graphs, namely cit-Patents and com-lj. 
Removing skip connections from the model drops the performance significantly in all the cases.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>No DropEdge DropEdge= 0.2 No skip connections ABCDE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-1%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>78.5 78.5 78.5</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>66.5</ns0:cell><ns0:cell>77.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>91.0</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>92.0 92.0 92.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>79.3</ns0:cell><ns0:cell>80.2 80.2 80.2</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>79.8</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>47.4</ns0:cell><ns0:cell>47.1</ns0:cell><ns0:cell>37.6</ns0:cell><ns0:cell>50.2 50.2 50.2</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>69.0</ns0:cell><ns0:cell>69.1</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell>70.9 70.9 70.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-5%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>76.2 76.2 76.2</ns0:cell><ns0:cell>75.1</ns0:cell><ns0:cell>65.2</ns0:cell><ns0:cell>75.1</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>88.1 88.1 88.1</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>88.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>72.3</ns0:cell><ns0:cell>74.2 74.2 74.2</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell>55.9</ns0:cell><ns0:cell>52.1</ns0:cell><ns0:cell>58.3 58.3 58.3</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>62.8</ns0:cell><ns0:cell>75.7 75.7 75.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Top-10%</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>78.1 78.1 78.1</ns0:cell><ns0:cell>77.1</ns0:cell><ns0:cell>67.5</ns0:cell><ns0:cell>77.6</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>86.1 86.1 86.1</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>77.6</ns0:cell><ns0:cell>85.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>75.0</ns0:cell><ns0:cell>77.0 77.0 77.0</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>76.3</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>63.4</ns0:cell><ns0:cell>63.0</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>64.9 64.9 64.9</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>78.2 78.2 78.2</ns0:cell><ns0:cell>77.9</ns0:cell><ns0:cell>69.1</ns0:cell><ns0:cell>78.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Kendall tau</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>com-youtube</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell>59.8 59.8 59.8</ns0:cell></ns0:row><ns0:row><ns0:cell>amazon</ns0:cell><ns0:cell>77.3</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>70.9</ns0:cell><ns0:cell>77.7 77.7 77.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Dblp</ns0:cell><ns0:cell>73.5</ns0:cell><ns0:cell>73.9 73.9 73.9</ns0:cell><ns0:cell>73.9 73.9 
73.9</ns0:cell><ns0:cell>73.7</ns0:cell></ns0:row><ns0:row><ns0:cell>cit-Patents</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>71.1</ns0:cell><ns0:cell>73.5 73.5 73.5</ns0:cell></ns0:row><ns0:row><ns0:cell>com-lj</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>70.9</ns0:cell><ns0:cell>65.8</ns0:cell><ns0:cell>71.8 71.8 71.8</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
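To make the block structure and the Progressive-DropEdge regularisation described above concrete, here is a minimal sketch, assuming PyTorch and PyTorch Geometric (its GCNConv layer) are available. The drop-edge schedule that grows with block depth, the helper and class names, and the plain concatenation of block outputs are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

def drop_edges(edge_index, p, training):
    # Randomly remove a fraction p of the edges (columns of edge_index) during training.
    if not training or p <= 0.0:
        return edge_index
    keep = torch.rand(edge_index.size(1), device=edge_index.device) >= p
    return edge_index[:, keep]

class GCNBlock(nn.Module):
    # One block: GCNConv -> PReLU -> LayerNorm -> Dropout, wrapped in a skip connection.
    def __init__(self, dim, dropout=0.3):
        super().__init__()
        self.conv = GCNConv(dim, dim)
        self.act = nn.PReLU()
        self.norm = nn.LayerNorm(dim)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, edge_index, drop_edge_p):
        edge_index = drop_edges(edge_index, drop_edge_p, self.training)
        h = self.drop(self.norm(self.act(self.conv(x, edge_index))))
        return x + h  # the skip connection keeps deep activations from collapsing

class TinyABCDE(nn.Module):
    # A few stacked blocks; block outputs are concatenated and mapped to one score per node.
    def __init__(self, in_dim, hidden=64, n_blocks=4):
        super().__init__()
        self.inp = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList([GCNBlock(hidden) for _ in range(n_blocks)])
        self.out = nn.Linear(hidden * n_blocks, 1)

    def forward(self, x, edge_index):
        h = self.inp(x)
        collected = []
        for i, block in enumerate(self.blocks):
            p = 0.1 * (i + 1) / len(self.blocks)  # assumed schedule: deeper blocks drop more edges
            h = block(h, edge_index, p)
            collected.append(h)
        return self.out(torch.cat(collected, dim=-1)).squeeze(-1)

Reading "progressive" as a probability that grows with the training epoch rather than with block depth would be an equally plausible interpretation; only the general DropEdge-plus-skip-connection pattern is taken from the paper.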
"Dear Editors, We thank the reviewers for their great and informative comments. We have made several changes and updated the manuscript to address their concerns. In particular, Figure2 was updated with the 0.8 DropEdge model added to the plot. Reviewer 1 Basic reporting No Comments Experimental design #2 - The author has claimed that the results are worse with 0.8 DropEdge probability. Can the authors provide the results for the same preferably in terms of the loss curve in Figure 2, along with the plots for No DropEdge, 0.2 probability DropEdge and ABCDE algorithm. #2 We’ve added the model with a DropEdge probability of 0.8 to the plot. Validity of the findings #1. The explanation provided is a bit weak, as the result does not seem very generalized. One possible explanation might be due to the distribution of the betweenness centrality index of the nodes are different due to the nature of the graphs involved. Hence that causes a bit of a difference in the accuracy between top 1 and top 10%. Can the authors please investigate this? #2.Figure 3 in Rong et. al has provided the training performance with two different values of DropEdge probability. Even if the results are worse (with higher DropEdge probability), the authors should provide the plots. #1 We did some analysis of the 5 graphs presented. Yet, we did not find any evidence on why there is a difference in top-1% and top-10% accuracies for ABCDE and no-DropEdge models. Some of the analyses are included in the graphs below. We would like to note that ABCDE is still better than no-DropEdge when taking into account the difference in the performance between no-DropEdge and ABCDE models for both top-1% and top-10%. ● On top-1% (ABCDE - noDropEdge) = = (77.1 - 78.5) + (92.0 - 86.2) + (79.8 - 79.3) + (50.2 - 47.4) - (70.9 - 69.0) = 5.8 ● On top-10% (ABCDE - noDropEdge) = = (77.6 - 78.1) + (85.6 - 86.1) + (76.3 - 75.0) + (64.9 - 63.4) + (78.0 - 78.2) = 1.6 Both numbers are positive which means that ABCDE is overall a bit better. We’ve analyzed both betweenness-centrality and degree centrality and found little to no significant difference between (dblp, cit-patents) and (com-youtube, amazon, com-lj). We looked at the normalized betweenness-centrality distribution scores and they were very similar in all the graphs. The degree centrality does not vary much for the graphs either. The plots demonstrate the degree centralities for top-10% and top-1% highest betweenness-centrality scores nodes. #2 Thanks for pointing it out. We’ve added the model with a DropEdge probability of 0.8 to the plot. Additional comments No Comments "
Here is a paper. Please give your review comments after reading it.
231
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The acoustic space in a given environment is filled with footprints arising from three processes: biophony, geophony and anthrophony. Bioacoustic research using passive acoustic sensors can result in thousands of recordings. An important component of processing these recordings is to automate signal detection. In this paper, we describe a new spectrogram-based approach for extracting individual audio events. Spectrogrambased audio event detection (AED) relies on separating the spectrogram into background (i.e., noise) and foreground (i.e., signal) classes using a threshold such as a global threshold, a per-band threshold, or one given by a classifier. These methods are either too sensitive to noise, designed for an individual species, or require prior training data. Our goal is to develop an algorithm that is not sensitive to noise, does not need any prior training data and works with any type of audio event. To do this, we propose: (1) a spectrogram filtering method, the Flattened Local Trimmed Range (FLTR) method, which models the spectrogram as a mixture of stationary and non-stationary energy processes and mitigates the effect of the stationary processes, and (2) an unsupervised algorithm that uses the filter to detect audio events. We measured the performance of the algorithm using a set of six thoroughly validated audio recordings and obtained a sensitivity of 94% and a positive predictive value of 89%. These sensitivity and positive predictive values are very high, given that the validated recordings are diverse and obtained from field conditions. The algorithm was then used to extract audio events in three datasets.</ns0:p><ns0:p>Features of these audio events were plotted and showed the unique aspects of the three acoustic communities.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Spectrogram-based acoustic event detection (AED) relies on separating the spectrogram into background (i.e., noise) and foreground (i.e., signal) classes using a threshold such as a global threshold, a per-band threshold, or one given by a classifier. These methods are either too sensitive to noise, designed for an individual species, or require prior training data. Our goal is to develop an algorithm that is not sensitive to noise, does not need any prior training data and works with any type of acoustic event. To do this, we propose: (1) a spectrogram filtering method, the Flattened Local Trimmed Range (FLTR) method, which models the spectrogram as a mixture of stationary and non-stationary energy processes and mitigates the effect of the stationary processes, and (2) an unsupervised algorithm that uses the filter to detect acoustic events.</ns0:p><ns0:p>We measured the performance of the algorithm using a set of six thoroughly validated audio recordings and obtained a sensitivity of 94% and a positive predictive value of 89%. These sensitivity and positive predictive values are very high, given that the validated recordings were collected in the field from sites with very different environment conditions. The algorithm was then used to extract acoustic events from three datasets. Features of these acoustic events were plotted and showed the unique aspects of the three acoustic communities.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The acoustic space in a given environment is filled with footprints of activity. 
These footprints arise as events in the acoustic space from three processes: biophony, or the sound species make (e.g., calls, stridulation); geophony, or the sound made by different earth processes (e.g., rain, wind); and anthrophony, or the sounds that arise from human activity (e.g., automobile or airplane traffic) <ns0:ref type='bibr' target='#b10'>(Krause, 2008)</ns0:ref>. The field of Soundscape Ecology is tasked with understanding and measuring the relation between these processes and their acoustic footprints, as well as the total composition of this acoustic space <ns0:ref type='bibr' target='#b14'>(Pijanowski et al., 2011)</ns0:ref>. Acoustic environment research depends more and more on data acquired through passive sensors <ns0:ref type='bibr' target='#b3'>(Blumstein et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Aide et al., 2013)</ns0:ref> because recorders can acquire more data than is possible manually <ns0:ref type='bibr' target='#b13'>(Parker III, 1991;</ns0:ref><ns0:ref type='bibr' target='#b7'>Catchpole and Slater, 2003;</ns0:ref><ns0:ref type='bibr' target='#b19'>Remsen, 1994)</ns0:ref>, and these data provide better results than traditional methods <ns0:ref type='bibr' target='#b8'>(Celis-Murillo et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Marques et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Currently, most soundscape analysis focus on computing indices for each recording in a given dataset <ns0:ref type='bibr' target='#b24'>(Towsey et al., 2014)</ns0:ref>, or on plotting and aggregating the raw acoustic energy <ns0:ref type='bibr' target='#b9'>(Gage and Axel, 2014)</ns0:ref>.</ns0:p><ns0:p>An alternative approach is to use each individual acoustic event as the base data and aggregate features computed from these events, but up until now, it has been difficult to accurately extract individual acoustic events from recordings.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We define an acoustic event as a perceptual difference in the audio signal that is indicative of some activity. While being a subjective definition, this perceptual difference can be reflected in transformations of the signal (e.g. a dark spot in a recording's spectrogram).</ns0:p><ns0:p>Normally, to use individual acoustic events as base data, a manual acoustic event extraction is performed <ns0:ref type='bibr' target='#b0'>(Acevedo et al., 2009)</ns0:ref>. This is usually done as a first step to build species classifiers, and can be made very accurately. By using an audio visualization and annotation tool, an expert is able to draw a boundary around an acoustic event; however, this method is very time-consuming, is specific to a set of acoustic events and it is not easily scalable for large datasets (e.g. &gt; 1000 minutes of recorded audio), thus an automated detection method could be very useful.</ns0:p><ns0:p>Acoustic event detection (AED) has been used as a first step to build species classifiers for whales, birds and amphibians <ns0:ref type='bibr' target='#b16'>(Popescu et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Neal et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Aide et al., 2013)</ns0:ref>. Most AED approaches rely on using some sort of thresholding to binarize the spectrogram into background (i.e., noise) and foreground (i.e., signal) classes. 
Foreground spectrogram cells satisfying some contiguity constraint are then joined into a single acoustic event. Some methods use a global threshold <ns0:ref type='bibr' target='#b16'>(Popescu et al., 2013)</ns0:ref>, or a per-band threshold <ns0:ref type='bibr' target='#b4'>(Brandes et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b1'>Aide et al., 2013)</ns0:ref>, while others train a species-specific classifier to perform the thresholding <ns0:ref type='bibr' target='#b12'>(Neal et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b6'>Briggs et al., 2009)</ns0:ref>. In <ns0:ref type='bibr' target='#b23'>Towsey et al. (2012)</ns0:ref> the authors reduce the noise in the spectrogram by using a 2D Wiener filter and removing modal intensity on each frequency band before applying a global threshold, but the threshold and parameters used in the AED tended to be species specific. Rather than using a threshold approach, <ns0:ref type='bibr' target='#b5'>Briggs et al. (2012)</ns0:ref> trained a classifier to label each cell in the spectrogram as sound or noise. These methods are either too sensitive to noise, are specialized to specific species, require prior training data or require prior knowledge from the user. What is needed is an algorithm that works for any recording, is not targeted to a specific type of acoustic event, does not need any prior training data, is not sensitive to noise, is fast and requires as little user intervention as possible.</ns0:p><ns0:p>In this article we propose a spectrogram filtering method, the Flattened Local Trimmed Range (FLTR) method, and an unsupervised algorithm that uses this filter for detecting acoustic events. This method filters the spectrogram by modeling it as a mixture of stationary and non-stationary energy processes, and mitigates the effect of the stationary processes. The detection algorithm applies FLTR to the spectrogram and proceeds to threshold it globally. Afterward, each contiguous region above the threshold line is considered an individual acoustic event.</ns0:p><ns0:p>We are interested in detecting automatically all acoustic events in a set of recordings. As such, this method tries to remove all specificity by design. Because of this, this method can work as a form of data reduction. As a first step, this transforms the acoustic data into a set of events that can later feed further analysis.</ns0:p><ns0:p>The presentation of the article follows the workflow in Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>: given a set of recordings, we compute the spectrogram for each one, then the FLTR is computed, a global threshold is applied and, finally, we proceed to extract the acoustic events. These acoustic events are compared with manually labeled acoustic events to determine the precision and accuracy of the automated process. We then applied the AED methodology to recordings from three different sites. Features of the events were calculated and plotted to determine unique aspects of each site. Finally, events within a region of high acoustic activity were sampled to determine the sources of these sounds.</ns0:p></ns0:div> <ns0:div><ns0:head>THEORY Audio Spectrogram</ns0:head><ns0:p>The spectrogram of an audio recording separates the power in the signal into frequency components in a short time window along a much longer time dimension (Fig. <ns0:ref type='figure' target='#fig_1'>2A</ns0:ref>). 
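As a concrete illustration of this time-frequency representation, a magnitude spectrogram in decibels can be computed from an STFT. The following is a minimal sketch, assuming NumPy and SciPy are available, using the Hann window of 512 samples with 256-sample overlap reported later in the Methodology; the function name is illustrative only.

import numpy as np
from scipy.signal import stft

def spectrogram_db(samples, sample_rate):
    # Magnitude of the STFT in decibels; rows index frequency bins, columns index time frames.
    freqs, times, Z = stft(samples, fs=sample_rate, window='hann', nperseg=512, noverlap=256)
    return 20.0 * np.log10(np.abs(Z) + 1e-12)  # small offset avoids log(0)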
The spectrogram is defined as the magnitude component of a Short Time Fourier Transform (STFT) on the audio data and it can be viewed as a time-frequency representation of the magnitude of the acoustic energy. This energy gets spread over distinct frequency bins as it changes over time. Thus, providing a way of analyzing audio not just as a linear sequence of samples, but as an evolving distribution, where each acoustic event is rendered as high energy magnitudes in both time and frequency.</ns0:p><ns0:p>We represent a given spectrogram as a function S(t, f ), where 0 &#8804; t &lt; &#964; and 0 &#8804; t &lt; &#951; are the time and frequency coordinates, bounded by &#964;, the number of audio frames given by the STFT and &#951;, the number of frequency bins in the transform.</ns0:p></ns0:div> <ns0:div><ns0:head>2/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>. Flow diagram showing the workflow followed in this article. Recordings are filtered using FLTR, then thresholded. Contiguous cells form each acoustic event. In step 1, we validate the extracted events. In step 2, we compute features for each event and plot them. Acoustic events from one of the plotted regions are then sampled and cataloged.</ns0:p></ns0:div> <ns0:div><ns0:head>The Flattened Local Trimmed Range</ns0:head><ns0:p>The first step in detecting acoustic events in the spectrogram requires creating the flattened local trimmed range (FLTR) image. Once we have the spectrogram, creating the FLTR requires two steps: 1) flattening the spectrogram and 2) computing the local trimmed range (Fig. <ns0:ref type='figure' target='#fig_1'>2B-C</ns0:ref>). This image is produced by modeling the spectrogram as a sum of different energetic processes, along with some assumptions on the distributions of the acoustic events, and a proposed solution that takes advantage of the model to separate the energetic processes.</ns0:p></ns0:div> <ns0:div><ns0:head>Modeling the Spectrogram</ns0:head><ns0:p>We model the spectrogram S db (t, f ) as a sum of different energetic processes:</ns0:p><ns0:formula xml:id='formula_0'>S db (t, f ) = b( f ) + &#949;(t, f ) + n &#8721; i=1 R i (t, f )I i (t, f ) ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where b( f ) is a frequency-dependent process that is taken as constant in time, while &#949;(t, f ) is a process that is stationary in time and frequency with 0-mean, 0-median, and some scale parameter, and R i (t, f ) is a set of non-zero-mean localized energy processes that are bounded by their support functions I i (t, f ),</ns0:p><ns0:p>for 1 &#8804; i &#8804; n. An interpretation for these energetic processes is that b( f ) corresponds to a frequencydependent near-constant noise, &#949;(t, f ) corresponds to a global noise process with a symmetric distribution and the R i (t, f ) are our acoustic events, of which there are n.</ns0:p><ns0:p>In this model, we assume that the set of localized energy processes has four properties:</ns0:p><ns0:p>A1 The localized energy processes are mutually exclusive and are not adjacent. That is, no two localized energy processes share in the same (t, f ) coordinate, nor do they have adjacent coordinates. 
Thus, &#8704;1 &#8804; i, j &#8804; n, i = j, 0 &#8804; t &lt; &#964;, 0 &#8804; f &lt; &#951;, we have: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_1'>I i (t, f ) I j (t, f ) = 0 I i (t + 1, f ) I j (t, f ) = 0 I i (t &#8722; 1, f ) I j (t, f ) = 0 I i (t, f + 1) I j (t, f ) = 0 I i (t, f &#8722; 1) I j (t, f ) = 0.</ns0:formula><ns0:p>Computer Science This can be done without loss of generality. If two such localized processes existed, we can just 116 consider their union as one.</ns0:p></ns0:div> <ns0:div><ns0:head>117</ns0:head><ns0:p>A2 The regions of localized energy processes dominate the energy distribution in the spectrogram on each given band. That is, &#8704;0 &#8804; t 1 ,t 2 &lt; &#964;, 0 &#8804; f &lt; &#951;, we have:</ns0:p><ns0:formula xml:id='formula_2'>&#949;(t 1 , f ) + b( f ) &#8804; &#949;(t 2 , f ) + b( f ) + n &#8721; i=1 I i (t 2 , f ) R i (t 2 , f ).</ns0:formula><ns0:p>A3 The proportion of samples within a localized energy processes in a given frequency band, denoted as &#961;( f ), is less than half the samples in the entire frequency band. That is, &#8704;0 &#8804; f &lt; &#951;, we have:</ns0:p><ns0:formula xml:id='formula_3'>&#961;( f ) = 1 &#964; &#964; &#8721; t=0 n &#8721; i=1 I i (t, f ) &lt; .5</ns0:formula><ns0:p>A4 Each localized energy process dominates the energy distribution in its surrounding region, when 118 accounting for frequency band-dependent effects. That is, for every (t 1 , f 1 ) point that falls inside 119 a localized energy process (&#8704;1 Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>&#8804; i &#8804; n, 0 &#8804; t 1 &lt; &#964;, 0 &#8804; f 1 &lt; &#951;) where I i (t 1 , f 1 ) = 1)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>region-dependent time-based radius r i,1 and a frequency-based radius r i,2 , such that for every other</ns0:p><ns0:formula xml:id='formula_5'>(t 2 , f 2 ) point around the vicinity (&#8704;(t 2 , f 2 ) &#8712; [t 1 &#8722; r i,1 ,t 1 + r i,1 ] &#215; [ f 1 &#8722; r i,2 , f 1 + r i,2</ns0:formula><ns0:p>]), we have:</ns0:p><ns0:formula xml:id='formula_6'>&#949;(t 2 , f 2 ) &#8804; &#949;(t 1 , f 1 ) + n &#8721; i=1 I i (t 1 , f 1 ) R i (t 1 , f 1 ).</ns0:formula><ns0:p>We want to extract the R i components or, more specifically, their support functions I i (t, f ), from the spectrogram. If we are able to estimate b( f ) reliably for a given spectrogram, we can then compute</ns0:p><ns0:formula xml:id='formula_7'>&#348;(t, f ) = S db (t, f ) &#8722; b( f )</ns0:formula><ns0:p>, a spectrogram that is corrected for frequency intensity variations. Once that is done, we compute local statistics to estimate the I i (t, f ) regions and, thus, segregate the spectrogram into the localized energy processes R i (t, f ) and an &#949;(t, f ) background process.</ns0:p></ns0:div> <ns0:div><ns0:head>Flattening -Estimating b(f)</ns0:head><ns0:p>Other than A2, A3 and A4, we do not hold any assumptions for &#949;(t, f ) or R i (t, f ). In particular we do not presume to know their distributions. Thus, it is difficult to formulate a model to compute a Maximum</ns0:p><ns0:formula xml:id='formula_8'>A-Posteriori Estimator of b( f )|S db (t, f ). 
Even so, the frequency sample means &#181;( f ) = 1 &#964; &#8721; &#964;&#8722;1 t=0 S db (t, f )</ns0:formula><ns0:p>of a given spectrogram do not give a good estimate on b( f ) since they get mixed with the sum of non-zero expectations of any intersecting region :</ns0:p><ns0:formula xml:id='formula_9'>&#181;( f ) = 1 &#964; &#964;&#8722;1 &#8721; t=0 S db (t, f ) = 1 &#964; &#964;&#8722;1 &#8721; t=0 b( f ) + &#949;(t, f ) + n &#8721; i=1 R i (t, f )I i (t, f ) = b( f ) + 1 &#964; &#964;&#8722;1 &#8721; t=0 n &#8721; i=1 R i (t, f )I i (t, f ) .</ns0:formula><ns0:p>Since &#949;(t, f ) is a stationary 0-mean process, we do not need to worry about it as it will eventually cancel itself out, but the localized energy process regions do not cancel out. Since our goal is to separate these regions from the rest of the spectrogram in a general manner, if an estimate of b( f ) is to be useful, it should not depend on the particular values within these regions.</ns0:p><ns0:p>While using the mean does not prove to be useful, we can use the frequency sample medians, along with A2 and A3 to remove any frequency-dependent time-constant bands from the spectrogram. We formalize this with the following theorem:</ns0:p><ns0:p>Theorem 1. Let 0 &#8804; f &#8804; &#951; be a frequency band in the spectrogram, with a proportion of localized energy processes given as</ns0:p><ns0:formula xml:id='formula_10'>&#961;( f ) = 1 &#964; &#8721; &#964; t=0 &#8721; n i=0 I i (t, f</ns0:formula><ns0:p>), and a median m( f ). Assume A2 and that &#961;( f ) &lt; .5, then m( f ) depends only in the &#949; process and does not depend on any of the localized energy processes R i (t, f ).</ns0:p><ns0:p>Proof. &#961;( f ) is the proportion of energy samples in a given frequency band f that participate in a localized energy process. Then, &#961;( f ) &lt; .5 implies that less than 50% of the energy samples do so. This means that a 1 &#8722; &#961;( f ) &gt; .5 proportion of the samples in band f are described by the equation</ns0:p><ns0:formula xml:id='formula_11'>S db (t, f ) = b( f ) + &#949;(t, f ).</ns0:formula><ns0:p>A2 implies that the lower half of the population is within this 1&#8722;&#961;( f ) proportion, along with the frequency band median m( f ). Thus m( f ) does not depend on the localized energy processes R i (t, f ) Thus, assuming A2 and A3, m( f ) gives an estimator whose bias is limited by the range of the &#949; process and is completely unaffected by the R i processes. Furthermore, as &#961;( f ) approaches 0, m( f )</ns0:p><ns0:formula xml:id='formula_12'>approaches b( f ).</ns0:formula><ns0:p>We use the term band flattening to refer to the process of subtracting the b( f ) component from</ns0:p><ns0:formula xml:id='formula_13'>S db (t, f ). Thus we call &#348;(t, f ) = S db (t, f ) &#8722; m( f ) the band flattened spectrogram estimate of S db (t, f ).</ns0:formula><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2B</ns0:ref> shows the output from this flattening procedure. As can be seen, this procedure removes any frequency-dependent time-constant bands in the spectrogram.</ns0:p></ns0:div> <ns0:div><ns0:head>5/17</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_14'>Estimating I i (t, f )</ns0:formula><ns0:p>We can use the band flattened spectrogram &#348;(t, f ) to further estimate the I i (t, f ) regions, since:</ns0:p><ns0:formula xml:id='formula_15'>&#348;(t, f ) &#8776; S db (t, f ) &#8722; b( f ) = &#949;(t, f ) + n &#8721; i=1 R i (t, f )I i (t, f ) .</ns0:formula><ns0:p>We do this by computing the local &#945;-trimmed range Ra &#945; { &#348;}. That is, given some 0 &#8804; &#945; &lt; 50, and some r &gt; 0, for each (t, f ) pair, we compute:</ns0:p><ns0:formula xml:id='formula_16'>Ra &#945; { &#348;}(t, f ) = P 100&#8722;&#945; ( &#348;ne r (t, f ) ) &#8722; P &#945; ( &#348;ne r (t, f ) ),</ns0:formula><ns0:p>where P &#945; (&#8226;) is the &#945; percentile statistic, and &#348;ne r (t, f ) is the band flattened spectrogram, with its domain restricted to a square neighborhood of range r (in time and frequency) around the point (t, f ).</ns0:p><ns0:p>Assuming A4, the estimator would give small values for neighborhoods without localized energy processes, but would peak around the borders of any such process. This statistic could then be thresholded to compute estimates of these borders and an estimate of the support functions I i (t, f ). Figure <ns0:ref type='figure' target='#fig_1'>2C</ns0:ref> shows the local trimmed range of a flattened spectrogram image. As can be seen, areas with acoustic events have a higher local trimmed range, while empty areas have a lower one.</ns0:p></ns0:div> <ns0:div><ns0:head>Thresholding</ns0:head><ns0:p>There are many methods that can be used to threshold the resulting FLTR image <ns0:ref type='bibr' target='#b21'>(Sezgin and Sankur, 2004)</ns0:ref>. Of these, we use the entropy-based method developed by Yen et. al. <ns0:ref type='bibr' target='#b30'>(Yen et al., 1995)</ns0:ref>. This method works on the distribution of the values, it defines an entropic correlation TC(t) of foreground and background classes as:</ns0:p><ns0:formula xml:id='formula_17'>TC(t) = &#8722; log t &#8721; v=m f (v) F(t) 2 &#8722; log M &#8721; v=t f (v) 1 &#8722; F(t) 2 ,</ns0:formula><ns0:p>where m and M are the minimum and maximum values of the FLTR spectrogram, and f (&#8226;) and F(&#8226;) are the Probability Density Function (PDF) and Cumulative Density Function (CDF) of these values. The PDF and CDF, in this case, are approximated with a histogram. The Yen threshold is then the value t that maximizes this entropy correlation. That is:</ns0:p><ns0:formula xml:id='formula_18'>t = arg max v&#8712;[m,M] TC(v).</ns0:formula><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2D</ns0:ref> shows a thresholded FLTR image of a spectrogram. Adjacent (t, f ) coordinates whose value is greater than the threshold t are then considered as the border of one acoustic event. The region enclosed by such borders, including the borders, are then the acoustic events detected within the spectrogram.</ns0:p></ns0:div> <ns0:div><ns0:head>DATA AND METHODOLOGY Data</ns0:head><ns0:p>To test the FLTR algorithm we used two datasets collected and stored by the ARBIMON system (Sieve Analytics, 2015; <ns0:ref type='bibr' target='#b1'>Aide et al., 2013)</ns0:ref>. 
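Before turning to the recordings, the filtering and thresholding steps described above can be summarised in code. The following is a minimal sketch, assuming SciPy and scikit-image are available; their percentile filter, Yen threshold and connected-component labelling stand in for the band flattening, local α-trimmed range and event extraction, and frequency bands are taken to be the rows of the spectrogram array.

import numpy as np
from scipy.ndimage import percentile_filter, label, find_objects
from skimage.filters import threshold_yen

def fltr(S_db, window=21, alpha=5.0):
    # Flattening: subtract the per-band median m(f); frequency bands are rows here.
    S_flat = S_db - np.median(S_db, axis=1, keepdims=True)
    # Local alpha-trimmed range: P_(100-alpha) minus P_alpha over a window x window neighbourhood.
    hi = percentile_filter(S_flat, 100 - alpha, size=window)
    lo = percentile_filter(S_flat, alpha, size=window)
    return hi - lo

def detect_events(S_db):
    F = fltr(S_db)
    mask = F > threshold_yen(F)          # global entropy-based threshold on the FLTR image
    labels, n_events = label(mask)       # adjacent above-threshold cells form one event
    return labels, find_objects(labels)  # per-event bounding slices in frequency and time

With the 21 × 21 window and α = 5 used in the Methodology below, this approximates the statistic that is thresholded in Figure 2D.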
The recordings were captured using passive audio recording equipment from different locations as part of a long-term audio monitoring network.</ns0:p><ns0:p>The first dataset, the validation dataset, consisted of 2051 manually labeled acoustic events from six audio recordings (every acoustic event was labeled) <ns0:ref type='bibr' target='#b28'>(Vega, 2016d)</ns0:ref>. This set includes a recording from Lagoa do Peri, Brazil; one recording from the Arakaeri Communal Reserve, Per&#250;; one from El Yunque, Puerto Rico; and three underwater recordings from Mona Island, Puerto Rico.</ns0:p><ns0:p>The second dataset, the sites dataset, was a set of 240 recordings from the Amarakaeri Communal <ns0:ref type='bibr'>(Vega, 2016a,b,c)</ns0:ref>. Each set consisted of 10 one-minute recordings per hour, for all 24 hours, sampled uniformly from larger datasets from each site.</ns0:p></ns0:div> <ns0:div><ns0:head>6/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Methodology</ns0:head><ns0:p>We divided our methodology into two main steps. In the first step, FLTR validation, we extracted acoustic events from the recordings in the first dataset, which we then validated against the manually labeled acoustic events. In the second step, site acoustic event visualization, we extracted the acoustic events from the second dataset, computed feature vectors for each event and plotted them. The recording spectrograms were computed using a Hann window function of 512 audio samples and an overlap of 256 samples.</ns0:p></ns0:div> <ns0:div><ns0:head>FLTR Validation</ns0:head><ns0:p>We used the FLTR algorithm with a 21 &#215;21 window and &#945; = 5 to extract the acoustic events and compared them with the manually labeled acoustic events.</ns0:p><ns0:p>For the validation, we used two comparison methods, the first is based on a basic intersection test between the automatically detected and the manually labeled events' bounds, and the second one is based on an overlap percent. For each manual label and detection event pair, we defined the computed overlap percent as the ratio of the area of their intersection and the area of their union:</ns0:p><ns0:formula xml:id='formula_19'>O (L, D) = A (L &#8745; D) A (L &#8746; D)</ns0:formula><ns0:p>,</ns0:p><ns0:p>where L is a manually labeled event, D is a automatically detected event area, and</ns0:p><ns0:formula xml:id='formula_20'>A (L &#8745; D) and A (L &#8746; D)</ns0:formula><ns0:p>are the area of the intersection and the union of their respective bounds.</ns0:p><ns0:p>On the first comparison method, for each acoustic event whose bounds intersected the bounds of a manually labeled acoustic event, we registered it as a detected acoustic event. For each detected event whose bounds did not intersect any manually labeled acoustic events, we registered it as detected, but without an acoustic event. On the other hand, the manually labeled acoustic events that did not intersect any detected acoustic event were registered as undetected acoustic events.</ns0:p><ns0:p>The second method followed a similar path as the first, but it requires an overlap percent of at least 25%. For each acoustic event whose overlap percent with a manually labeled acoustic event was greater than or equal to 25%, we registered it as a detected acoustic event. 
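A short sketch of the two comparison tests follows, assuming each event is reduced to a rectangular time-frequency bound (t0, t1, f0, f1); the function names are illustrative only.

def overlap_percent(L, D):
    # O(L, D) = area(L intersect D) / area(L union D) for bounds given as (t0, t1, f0, f1).
    t0, t1 = max(L[0], D[0]), min(L[1], D[1])
    f0, f1 = max(L[2], D[2]), min(L[3], D[3])
    inter = max(0.0, t1 - t0) * max(0.0, f1 - f0)
    union = (L[1] - L[0]) * (L[3] - L[2]) + (D[1] - D[0]) * (D[3] - D[2]) - inter
    return inter / union if union > 0 else 0.0

def detected(label_bounds, detections, min_overlap=0.25):
    # Second test: overlap percent of at least 25%; the first test amounts to any overlap above zero.
    return any(overlap_percent(label_bounds, d) >= min_overlap for d in detections)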
For each detected event that did not have any manually labeled acoustic events with an overlap percent of at least 25%, we registered it as detected, but without an acoustic event. On the other hand, the manually labeled acoustic events that did not have an overlap percent of at least 25% with any detected acoustic event were registered as undetected acoustic events.</ns0:p><ns0:p>These data were used to create a confusion matrix to compute the FLTR algorithm's sensitivity and positive predictive value for each method. The sensitivity is computed as the ratio of the number of manually labeled acoustic events that were automatically detected (true positives) over the total count of manually labeled acoustic events (true positives and false negatives).</ns0:p></ns0:div> <ns0:div><ns0:head>Sensitivity =</ns0:head><ns0:p>True Positives True Positives + False Negatives .</ns0:p><ns0:p>This measurement reflects the percent of detected acoustic events that were present in the recording.</ns0:p><ns0:p>The positive predictive value is computed as the ratio of the number of manually labeled acoustic events that were automatically detected (true positives) over the total count of detected acoustic events (true positives and false positives).</ns0:p></ns0:div> <ns0:div><ns0:head>Positive Predictive Value =</ns0:head><ns0:p>True Positives True Positives + False Positives .</ns0:p><ns0:p>This measurement reflects the percent of real acoustic events among the set of detected acoustic events.</ns0:p></ns0:div> <ns0:div><ns0:head>FLTR Application</ns0:head><ns0:p>As with the FLTR validation step, we used the FLTR algorithm with a 21 &#215; 21 window and &#945; = 5 to extract the acoustic events in each of the recording samples in the second dataset, which we then converted into feature vectors.</ns0:p><ns0:p>The variables computed for each extracted acoustic event R i were:</ns0:p><ns0:p>tod Time of Day. Hour in which the recording from this acoustic event was taken.</ns0:p></ns0:div> <ns0:div><ns0:head>7/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science bw Bandwidth. Length of the acoustic event in Hertz. That is, given</ns0:p><ns0:formula xml:id='formula_21'>F i = { f |&#8707;t, I i (t, f ) = 1}, the bandwidth is defined as: bw i = max(F i ) &#8722; min(F i ).</ns0:formula><ns0:p>(2)</ns0:p><ns0:p>dur Duration. Length of the acoustic event in seconds. That is, given T i = {t|&#8707; f , I i (t, f ) = 1}, the duration is defined as:</ns0:p><ns0:formula xml:id='formula_22'>dur i = max(T i ) &#8722; min(T i ).<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>y max Dominant Frequency. The frequency at which the acoustic event attains its maximum power:</ns0:p><ns0:formula xml:id='formula_23'>y max i = arg max f max t I i (t, f ) &#348;(t, f ) .<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>cov Coverage Ratio. The ratio between the area covered by the detected acoustic event and the area detected by the bounds:</ns0:p><ns0:formula xml:id='formula_24'>cov i = &#8721; t, f I i (t, f ) dur i bw i .<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Using these features, we generate a log-density plot matrix for all pairwise combinations of the features for each site. 
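The per-event features defined above can be computed directly from the labelled mask of a detection. The sketch below assumes the flattened spectrogram and label image come from the earlier extraction step (rows are frequency bins, columns are STFT frames), with hz_per_bin and sec_per_frame as illustrative conversion factors; tod is simply the hour of the source recording and is omitted.

import numpy as np

def event_features(S_flat, labels, event_id, hz_per_bin, sec_per_frame):
    mask = labels == event_id
    freq_bins, frames = np.nonzero(mask)                     # cells belonging to the event
    n_f = freq_bins.max() - freq_bins.min() + 1              # bounding-box height in bins
    n_t = frames.max() - frames.min() + 1                    # bounding-box width in frames
    bw = (n_f - 1) * hz_per_bin                              # bandwidth in Hz, eq. (2)
    dur = (n_t - 1) * sec_per_frame                          # duration in seconds, eq. (3)
    y_max = freq_bins[np.argmax(S_flat[mask])] * hz_per_bin  # dominant frequency, eq. (4)
    cov = mask.sum() / float(n_f * n_t)                      # coverage of the bounding box, eq. (5), in cells
    return {'bw': bw, 'dur': dur, 'y_max': y_max, 'cov': cov}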
The plots in the diagonal are log histograms of the column feature.</ns0:p><ns0:p>We measured the information content of each feature pair by computing their joint entropy H:</ns0:p><ns0:formula xml:id='formula_25'>H = 1 N 1 N 2 N 1 &#8721; i=1 N 2 &#8721; j=1 h i, j log 2 h i, j ,</ns0:formula><ns0:p>where log 2 is the base 2 logarithm, h i, j is the number of events in the (i, j) th bin of the joint histogram, and N 1 and N 2 are the number of bins of each variable in the histogram. A higher value of H means a higher information content.</ns0:p><ns0:p>We also focused our attention on areas with a high and medium count of detected acoustic events (log of detected events greater than 6 and 3.5, respectively). As an example, we selected an area of interest in the feature space of the Sabana Seca dataset (i.e., a visual cluster in the bw vs. y max plot). We sampled 50 detected acoustic events from the area and categorized them manually.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>FLTR Validation</ns0:head><ns0:p>Under the simple intersection test, out of 2051 manually labeled acoustic events, 1922 were detected (true positives), and 129 were not (Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>). Of the 2167 detected acoustic events, 1922 were associated with manually labeled acoustic events (true positives), and 245 were not (false negatives). This resulted in a sensitivity of 94% and a positive predictive value of 89%. Notice that the algorithm only produces detection events. We do not provide a result for true negatives as any arbitrary number of true negative examples could be made, thus skewing the data.</ns0:p><ns0:p>Under the overlap percentage test, out of 2051 manually labeled acoustic events, 1744 were detected (true positives), and 307 were not (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>). Of the 2167 detected acoustic events, 1744 were associated with manually labeled acoustic events (true positives), and 423 were not (false negatives). This resulted in a sensitivity of 85% and a positive predictive value of 80%.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> shows the spectrograms of a sample of six acoustic events from the validation step, along with their FLTR images. Figures 3A-3D are manually labeled acoustic events that were detected. Figures <ns0:ref type='figure' target='#fig_3'>3E-3H</ns0:ref> are manually labeled acoustic events that were not detected. Figures 3I-3L are detected acoustic events that were not labeled. The FLTR images show how the surrounding background gets filtered and the acoustic event stands above it. Figure <ns0:ref type='figure' target='#fig_3'>3F</ns0:ref> shows lower than threshold FLTR values, possibly due to the spectrogram flattening. FLTR values in Fig. <ns0:ref type='figure' target='#fig_3'>3H</ns0:ref> is too low to cross the threshold as well and Fig. <ns0:ref type='figure' target='#fig_3'>3I</ns0:ref> shows a detection of some low-frequency short lived audio noise.</ns0:p></ns0:div> <ns0:div><ns0:head>9/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>FLTR Application</ns0:head><ns0:p>The feature pairs with highest joint entropy in the plots of the Per&#250; recordings are cov vs. y max, y max vs. tod and cov vs. tod with values of 9.37, 9.25 and 8.81 respectively (Fig. 
<ns0:ref type='figure' target='#fig_5'>4</ns0:ref>). The cov vs. y max plot shows three areas of high event detection count at 2.4 &#8722; 4.6 kHz with 84 &#8722; 100% coverage, 7.0 &#8722; 7.4 kHz with 88 &#8722; 100% coverage and 8.4 &#8722; 8.8 kHz with 84 &#8722; 100% coverage. Three areas of medium event detection count can be found at 12.8 &#8722; 14.9 kHz with 68 &#8722; 90% coverage, 18.5 &#8722; 20.4 kHz with 59 &#8722; 90% coverage and one spanning the whole 90 &#8722; 100% coverage band. The y max vs. tod plot shows two areas of high event detection count from 5pm to 5am at 2.9 &#8722; 5.9 kHz and from 7pm to 4am at 9.6 &#8722; 10 kHz.</ns0:p><ns0:p>Three areas of medium event detection count can be found at from 6pm to 5am at 17.6 &#8722; 20.9 kHz, from 5pm to 5am at 15.7 &#8722; 10.5 kHz and one spanning the entire day at 7.5 &#8722; 10.5 kHz. The cov vs. tod plot shows one area of high event detection count from 6pm to 5am with 84 &#8722; 100% coverage and another area of medium event detection count throughout the entire day with with 60 &#8722; 84% coverage from 6pm to 5am and with 77 &#8722; 100% coverage from 6am to 5pm.</ns0:p><ns0:p>The feature pairs with highest joint entropy in the plots of the El Verde recordings are cov vs. y max, y max vs. tod and cov vs. tod with values of 9.06, 9.03 and 8.84 respectively (Fig. <ns0:ref type='figure'>5</ns0:ref>). The cov vs.</ns0:p><ns0:p>y max plot shows two areas of high event detection count at 3.3 &#8722; 1.9 kHz with 67 &#8722; 100% coverage and 1.4 &#8722; 0.8 kHz with 82 &#8722; 100% coverage. Three areas of medium event detection count can be found at Manuscript to be reviewed</ns0:p><ns0:p>Computer Science from 6pm to 7am at 1.9 &#8722; 2.9 kHz, from 7pm to 6am at 1.0 &#8722; 1.4 kHz and one spanning the entire day at 0 &#8722; 0.5 kHz. Two areas of medium event detection count span the whole day, one at 3.5 &#8722; 5.1 kHz and another one at 6.1 &#8722; 9.9 kHz. Vertical bands of medium event detection count areas can be found at 12am&#8722;5am, 7am, 10am&#8722;12pm, 3pm and 8pm. The cov vs. tod plot shows an area of medium event detection count spanning the entire day with 46 &#8722; 100% coverage, changing to 75 &#8722; 100% coverage from 1pm to 5pm. There seems to be a downward trending density line on the upper left corner of the plot in y max vs. bw.</ns0:p><ns0:p>The feature pairs with highest joint entropy in the plots of the Sabana Seca recordings are cov vs.</ns0:p><ns0:p>y max, y max vs. tod and cov vs. tod with values of 8.71, 8.78 and 8.36 respectively (Fig. <ns0:ref type='figure'>6</ns0:ref>). The cov vs. y max plot shows four areas of high event detection count at 0 &#8722; 0.2 kHz with 83 &#8722; 100% coverage, 1.3 &#8722; 1.8 kHz with 87 &#8722; 100% coverage, 4.6 &#8722; 5.5 kHz with 79 &#8722; 100% coverage and 7.0 &#8722; 7.9 kHz with 74 &#8722; 100% coverage. Areas of medium event detection count can be found surrounding the areas of high event detection count. The y max vs. tod plot shows four areas of high event detection count from 3pm to 7am at 6.8 &#8722; 8.1 kHz, from 4pm to 7am at 4.5 &#8722; 5.5 kHz, from 5pm to 11pm at 1.3 &#8722; 4.1 kHz and from 8am to 9pm at 0 &#8722; 0.4 kHz. Two areas of medium event detection count can be found from 1am to 7am at 1.4 &#8722; 4.4 kHz and from 8am to 2pm at 3.7 &#8722; 9.4 kHz span the whole day, one at 3.5 &#8722; 5.1 kHz and another one at 6.1 &#8722; 9.9 kHz. The cov vs. 
tod plot shows an area of medium event detection count Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Sampled Areas of Interest</ns0:head><ns0:p>In the Sabana Seca bw vs. y max plot, there are five high event detection count areas (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). We focus on the region with bandwidth between 700 Hz and 1900 Hz, and dominant frequency between 2650 Hz and 3550 Hz.</ns0:p><ns0:p>The 50 sampled acoustic events from the area of interest were arranged into six groups (Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>).</ns0:p><ns0:p>The majority of the events (40 out of 50 events) were the second note, 'qui', of the call of the frog Eleutherodactylus coqui. The second largest group is composed of the chirp sound of an Leptodactylus albilabiris (5 events). The third group is two unknown, similar, calls. The fourth group is an event of an</ns0:p><ns0:p>Leptodactylus albilabiris chirp with an almost overlapping Eleutherodactylus coqui's 'qui' tone. The last two groups are the call of an unknown insect, and an acoustic event arising from a person speaking on a radio station from an interference picked up by the recorder.</ns0:p></ns0:div> <ns0:div><ns0:head>13/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>FLTR Algorithm</ns0:head><ns0:p>The FLTR algorithm, in essence, functions as a noise filter for the recording. It takes the spectrogram from a noisy field recording (Fig. <ns0:ref type='figure' target='#fig_1'>2A</ns0:ref>), and outputs an image where, in theory, the only variation is due to localized acoustic energy (Fig. <ns0:ref type='figure' target='#fig_1'>2C</ns0:ref>). The model imposed on a spectrogram is also very generic in the sense that no particular species or sound is modeled, but rather it models the different sources of acoustic energy. It is reasonable to think that any spectrogram is composed of these three components without a loss of generality: (1) a frequency-based noise level, (2) some diffuse energy jitter, and (3) specific, localized events.</ns0:p><ns0:p>The end product of the flattening step is a spectrogram with no frequency-based components (Fig. <ns0:ref type='figure' target='#fig_1'>2B</ns0:ref>). By using the frequency band medians we are able to stay ignorant of the nature of the short-term dynamics in the spectrogram while being able to remove any long-term nuisance effects, such as (constant)</ns0:p><ns0:p>background noise or a specific audio sensor's frequency response. Thus, we end up with a spectrogram that is akin to a flat landscape with two components: (1) a roughness element (i.e. grass leaves) in the landscape and (2) a series of mounds, each corresponding to a given acoustic event. Due to this roughness element, a global threshold at this stage is ineffective. The local trimmed range however is able to enhance the contrast between the flat terrain and the mounds (Fig. <ns0:ref type='figure' target='#fig_1'>2C</ns0:ref>), enough to detect the mounds by using a simple global threshold (Fig. <ns0:ref type='figure' target='#fig_1'>2C</ns0:ref>). By using a Yen threshold, we maximize the entropy of both the background and foreground classes (Fig. <ns0:ref type='figure' target='#fig_1'>2D</ns0:ref>). 
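For completeness, the entropic-correlation criterion TC(t) given in the Theory section can be evaluated directly from a histogram of the FLTR values. A minimal NumPy sketch follows, with the boundary bin handled in the standard way; in practice a library implementation such as the one used earlier gives the same threshold.

import numpy as np

def yen_threshold(values, nbins=256):
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()          # approximate PDF f(v)
    P = np.cumsum(p)               # approximate CDF F(t)
    P2 = np.cumsum(p ** 2)         # running sum of squared PDF values
    tc = np.full(nbins, -np.inf)
    for t in range(1, nbins - 1):
        fg, bg = P[t], 1.0 - P[t]
        if fg <= 0.0 or bg <= 0.0:
            continue
        tc[t] = -np.log(P2[t] / fg ** 2) - np.log((P2[-1] - P2[t]) / bg ** 2)
    return float(edges[int(np.argmax(tc)) + 1])  # edge of the bin maximising TC(t)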
In the end, the FLTR algorithm has the advantage of not trying to guess the distribution of the acoustic energy within the spectrogram; rather, it exploits robust statistics that work for any distribution to separate these three modeled components.</ns0:p><ns0:p>From the simple intersection test, a sensitivity of 94% assures us that most of the acoustic events in a given recording will be extracted by the FLTR segmentation algorithm, while a positive predictive value of 89% assures us that if something is detected in the recording, it is most likely an acoustic event. From the coverage percentage test, the corresponding figures are a sensitivity of 85% and a positive predictive value of 80%. Thus, even under the stricter criterion, the algorithm can confidently extract acoustic events from a given set of recordings.</ns0:p><ns0:p>These performance statistics are obtained from a dataset of only six recordings. Because of this small number of recordings, biases that may occur due to correlations between acoustic events within a recording cannot be addressed. We tried to reduce this bias by selecting recordings from very different environments. This limitation arises from the fact that manually annotating the acoustic events in a recording is very time consuming. Thus, a limiting factor on the sample size in the validation dataset is that the number of manually labeled acoustic events averages about 340 per recording (i.e. for six recordings, we have 2051 manually labeled acoustic events in total).</ns0:p><ns0:p>The 21 × 21 window parameter was selected in an ad-hoc manner and corresponds to a square neighborhood of a maximum of 10 spectrogram cells around the central cell in the local trimmed range computation step. This allows us to compare each value in the spectrogram to a local neighborhood of about 122 ms and 1.8 kHz in size (assuming a sampling rate of 44100 Hz, or 500 ms and 410 Hz for a sampling rate of 10000 Hz). The α = 5 percentile was also selected ad-hoc. It corresponds to the local trimmed range computing the difference between the 95% and 5% percentiles. This allows the window to contain at least 5% of its content as low-valued outliers and another 5% as high-valued outliers. While these parameter values provided good results, it is not known how optimal they are, nor how the sensitivity and positive predictive value would change if other parameter values were used.</ns0:p><ns0:p>A3 implies that a recording needs to have unsaturated frequency bands (ρ(f) < 0.5) for the method to work efficiently; that is, Theorem 1 holds for frequency bands without an intense constant chorus. However, as ρ(f) approaches 1, the frequency band median approaches the median of the aggregate localized energy processes on the frequency band. Thus, at least 50% of the values, most notably the highest values in the aggregate localized energy processes, will always be above b(f).
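To make the role of the ρ(f) < 0.5 condition concrete, the short synthetic experiment below tracks the band median as the fraction of frames occupied by localized energy grows. It is an illustration only, with made-up levels (a −60 dB noise floor and −20 dB events), not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 10_000
noise_db, event_db = -60.0, -20.0          # hypothetical levels in dB

for rho in (0.1, 0.4, 0.6, 0.9):           # fraction of frames carrying an event
    band = np.full(n_frames, noise_db) + rng.normal(0, 1, n_frames)
    has_event = rng.random(n_frames) < rho
    band[has_event] = event_db + rng.normal(0, 1, has_event.sum())

    # b(f) is estimated by the band median; it tracks the noise floor only
    # while fewer than half of the frames contain localized energy.
    print(f"rho = {rho:.1f}  band median = {np.median(band):6.1f} dB")
```

With ρ at 0.1 or 0.4 the median stays near the −60 dB noise floor, whereas at 0.6 or 0.9 it jumps to the −20 dB event level, which is exactly the breakdown of the b(f) estimate discussed above.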
Depending on factors such as their size and remaining intensity, such high-valued cells could very well still be detected. The degradation of the detection algorithm in relation to these assumptions, however, is still to be studied.</ns0:p></ns0:div> <ns0:div><ns0:head>FLTR Application</ns0:head><ns0:p>In all three sites, the plots with the most joint entropy were cov vs. y max, y max vs. tod and cov vs. tod. cov measures how well a detection fits within a bounding box and can serve as a measure of the complexity of the event. That this value changes across tod and y max implies that the complexity of the detected events changes across these features. The high joint entropy of y max vs. tod is expected, since each variable amounts to a location estimate (tod in time, y max in frequency) and is thus subject to the randomness of when (in time) and where (in frequency) an acoustic event occurs. Interestingly, rather than being uniformly distributed, the acoustic events are non-randomly distributed, presumably reflecting the variation in acoustic activity throughout the day. These tod vs. y max plots present a temporal soundscape, providing insights into how the acoustic space is partitioned in time and frequency in each ecosystem. For example, in the Sabana Seca and El Verde plots there is a clear difference in acoustic event density between day time and night time. This correlates with the activity of amphibian vocalizations during the night in these sites <ns0:ref type='bibr' target='#b20'>(Ríos-López and Villanueva-Rivera, 2013;</ns0:ref><ns0:ref type='bibr' target='#b29'>Villanueva-Rivera, 2014</ns0:ref>).</ns0:p><ns0:p>An interesting artifact is the downward trending density line that appears in the upper left corner of the y max vs. bw plot in Figure <ns0:ref type='figure'>5</ns0:ref>. A least squares fit to this line gives the equation −0.93X + 21.5 kHz, which is close to the maximum frequency of the recordings from the El Verde site (these recordings have a sampling rate of 44100 Hz). This artifact seems to arise because the upper boundary of a detected event and bw are constrained to be below this maximum frequency and, for low bw values, y max tends to be close to the upper boundary of the detected event.</ns0:p><ns0:p>Another useful application of the FLTR methodology is to sample specific regions of activity to determine the source of the sounds. In the sampled area of interest from Sabana Seca, approximately 80% of the sampled events were a single note of a call of E. coqui (40 events), and around 10% were the chirp of L. albilabris (5 events). This demonstrates how the methodology can be used to annotate regions of peak activity in a soundscape.</ns0:p><ns0:p>Using the confusion matrix in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> as a ruler, we can estimate that around 94% of the acoustic events were detected and that they make up around 89% of the total number of detections.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The FLTR algorithm is built on a simple sound spectrogram model. Using robust statistics, it is able to exploit this model without assuming any specific energy distributions.
Coupled with the Yen threshold, we are able to extract acoustic events from recordings with high levels of sensitivity and precision.</ns0:p><ns0:p>Using this algorithm, we are able to explore the acoustic environment using acoustic events as base data. This provides us with an excellent vantage point where any feature computed from these acoustic events can be explored, for example the time of day vs. dominant frequency distribution of an acoustic environment (i.e. a temporal soundscape) down to the level of the individual acoustic events composing it.</ns0:p><ns0:p>As a tool, the FLTR algorithm, or any improvements thereof, have the potential of shifting the paradigm from using recordings to acoustic events as base data for ecological acoustic research.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. FLTR steps. (A) An audio spectrogram. Color scale is in dB. (B) A band flattened spectrogram. (C) Local Trimmed Range (20 &#215; 20 window). (D) Thresholded image.</ns0:figDesc><ns0:graphic coords='5,183.09,63.78,330.86,394.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Sample of the intersections of 6 acoustic events along with their FLTR image. (A-D) are manually labeled and detected, (E-H) are manually labeled but not detected, and (I-L) are detected but not manually labeled.</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.58,363.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>7. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>&#8722; 9.8 kHz with 59 &#8722; 98% coverage, 20.1 &#8722; 20.8 kHz with 65 &#8722; 98% coverage and one spanning the whole 98 &#8722; 100% coverage band. The y max vs. tod plot shows three areas of high event detection count 10/17 PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Amarakaeri, Per&#250;, log density plot matrix of acoustic events extracted from 240 sample recordings. Variables shown are time of day (tod), bandwidth (bw), duration (dur), dominant frequency (y max) and coverage (cov). Note high H values for cov vs. y max (9.37), y max vs. tod (9.25) and cov vs. tod (8.81).</ns0:figDesc><ns0:graphic coords='12,141.73,63.77,413.59,378.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. El Verde, Puerto Rico, log density plot matrix of acoustic events extracted from 240 sample recordings. Variables shown are time of day (tod), bandwidth (bw), duration (dur), dominant frequency (y max) and coverage (cov). Note high H n values for tod (0.98), cov (0.84) and y max (0.83).</ns0:figDesc><ns0:graphic coords='13,141.73,63.77,413.59,378.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. (A) Bandwidth (bw) vs. Dominant Frequency (y max) log density plot of matrix of acoustic events extracted from Sabana Seca. Rectangles mark different high event count regions. An arrow marks the Area of Interest. 
(B) Closeup on the area of interest at 700 Hz &#8804; bw &#8804; 1900 Hz, 2650 Hz &#8804; y max &#8804; 3550 Hz.</ns0:figDesc><ns0:graphic coords='15,141.73,63.78,413.57,184.91' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,183.09,63.78,330.86,218.95' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,141.73,63.77,413.59,378.18' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>3/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Reserve in Per&#250; (from June 2014 to February 2015), 240 recordings from El Verde area in El Yunque National Forest, Puerto Rico (from March 2008 to July 2014), and 240 recordings from a wetland in Sabana Seca, Puerto Rico (from March 2008 to August 2014)</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Confusion matrix based on the FLTR results of the six recordings from the validation dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>acoustic event no acoustic event total</ns0:cell></ns0:row><ns0:row><ns0:cell>detected</ns0:cell><ns0:cell>1922</ns0:cell><ns0:cell cols='2'>245 2167</ns0:cell></ns0:row><ns0:row><ns0:cell>not detected</ns0:cell><ns0:cell>129</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>129</ns0:cell></ns0:row><ns0:row><ns0:cell>total</ns0:cell><ns0:cell>2051</ns0:cell><ns0:cell cols='2'>245 2296</ns0:cell></ns0:row><ns0:row><ns0:cell>Sensitivity</ns0:cell><ns0:cell /><ns0:cell>1922/2051</ns0:cell><ns0:cell>94%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Positive Predictive Value</ns0:cell><ns0:cell>1922/2167</ns0:cell><ns0:cell>89%</ns0:cell></ns0:row></ns0:table><ns0:note>Notice that the algorithm only produces detection events. We do not provide a result for true negatives as any arbitrary number of true negative examples could be made, thus skewing the data.8/17PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Confusion matrix based on the FLTR results of the six recordings from the validation dataset.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary of the 50 acoustic events sampled from the area of interest.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2016:01:8561:1:1:NEW 9 May 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Giovany Vega Viera Review Rebuttal Letter Below, we address (in bold) the comments and observations made by the reviewers of the manuscript “Audio segmentation using flattened local trimmed range for ecological acoustic space analysis”. Reviewer 1 (Anonymous) Basic reporting My only 'Basic Reporting' criticism given the guidelines is that the data has not been made clearly available to readers, which seems to go against PeerJ guidelines. The subsection 'Data' in 'DATA AND METHODOLOGY' presents no indication of where others can find the data sets used. The data are published in figshare. In order for it to be properly referenced, I have cited the datasets used for the article in the “Data” subsection, along with bibliographic entries for each dataset. Experimental design No comments Validity of the findings No comments Comments for the author I found this to be a well written and interesting paper where the authors propose a new way to screen for generic 'audio events' in acoustic recordings. Sometimes I felt that words were somewhat arbitrarily used, and provide further comments on this below. While screening “recordings” for “audio events” is an interesting question, and one likely to become more and more important as recordings become more and more common, it is at odds with the perhaps more usual task found in the literature (that I am most familiar with), where one is trying to identify specific signals (e.g. a coqui sound, a bird song, a whale sound) in recordings. Therefore, I believe that the authors must bring forward that distinction and in particular must find a precise way to define what is an “acoustic event”. If that definition is unclear, how can we judge a given algorithm is useful or efficient? One needs to unambiguously define what are the “audio events” (i.e. the signal) one needs to detect in order to quantify true positives and false positives, etc. This must be done from the start, around line 43 or before when the term is first used. A definition of an acoustic event has been provided in the article. Acoustic event is now used throughout all the document, rather than using both acoustic and audio event in the previous version. When one reads the abstract sentence “Our goal is to develop an algorithm that is not sensitive to noise, does not need any prior training data and works with any type of audio event.” or the introduction sentence, even more optimistic, “What is needed is an algorithm that works for any recording, is not targeted to a specific type of audio event, does not need any prior training data, is not sensitive to noise, is fast and requires as little user intervention as possible., one has to wonder, what is the catch? One usually says there are no free lunches in statistics. So what is the price to pay? And the price you pay is specificity. When you choose to detect “audio events” without clearly defining them a priori, then you actually lose the ability of doing many of the analysis that you might do when you target a specific signal. This is fine if is intended, but that must be explicitly stated so, and eventually discuss clearly the notion that this is a precursor step in other more elaborate and fine­tuned procedures. We agree and the non­specificity of the method was stated explicitly in the Introduction. We also add that the method’s intent is to work as a data reduction procedure (i.e. it selects the interesting parts in the recordings). 
This is to me an important point: I thought that all of these discussions regarding the distribution of variables when events were detected is not really possible unless we know the distribution of the variables available for the entire duration of the recordings was available. It would mean very different things to have many events at 18:00 hours if most of the recordings were at 18:00 hours versus if few recordings were at 18:00s! (see e.g. discussions around line 292­294). The data in the sites dataset was sampled so that it is evenly distributed over the whole day. We have added a sentence in the “Data” subsection of “Data and Methodology” to clarify this point. More detailed/specific comments that might be used to improve the paper follow below. Line 10 – 11 – I would say that “thousands of recordings” is wording to be avoided. Recording is a meaningless unit. It should be easy to replace by something that is unambiguous to readers. Same on line 48, “recordings” again used as a unit. This is not a sensible unit. Please check the entire text for the use of the word, as sometimes it makes sense, other not. In particular when you describe the data, again “recordings” is not a useful description. I know you are thinking about Arbimon, but this must be general. How long is a recording say? Arbimon recordings are usually one minute long. we have verified the usage of phrases with the word “recording”. In cases where recording is used as a unit of audio we have either rewritten it as minutes of recorded audio or qualified it as one­minute recordings. In other cases, such as when recording is used as a bearer of audio data, it has been left as is. Line 46­47 – the “easily draw a boundary around any audio event” is strictly not true, especially at low signal to noise ratios, or when a given frequency band is saturated and so many overlapping events occur continuously for a long time period. Please reword. You’re correct. we have reworded it as “draw a boundary around an acoustic event”. While I understand the purpose of the last sentence in the introduction, I believe it includes several bits of information that should belong in the methods. It makes sense to end the discussion with laying the paper that lies ahead, but one should avoid technical details like “2051 manually labelled audio events” or “20 recordings”. One gets confused also because in the text you say this is the workflow of the article, the figure states it’s the workflow of the AED methodology. These can be naturally closely related, but are not one and the same. We have removed technical details such as the number of manually labeled audio events or the number of recordings on each step. We have also rewritten the caption in Figure 1 so it reflects better the workflow in the article. Figure 2 legend – to me Yen threshold means nothing, and legends should be self­explanatory. I suggest removing as it’s mostly diversionary here, this is in the text after line 153. We have removed the “Yen threshold” parenthesis. Line 89 – here you refer fig 2C, but fig 2B was not mentioned yet. It would flow better if the description matched the figure order. We have moved the figure reference to the next sentence, so that it can also include fig 2B. Property A1. – what is tau? And eta? I mean, I know, but rigorously it would be useful to define these. We have included a formal definition for the usage of S(t, f), which defines tau and eta as the bounds of t and f respectively. 
Line 118 – I believe the notation needs tweaking, as S_db(t,f) has no I which is what it needs to be summed over? Corrected. summation index was t, not i. Line 103­109 – I think you cannot extend this to infinity. Would the algorithm not break apart if an entire recording had a given frequency band saturated? Discuss, or change wording, please. Depends really on the nature of the recording. If what gets recorded is essentially stationary (pretty much the same noises most of the time), then it wouldn’t necessarily break. However, as soon as any of the assumptions change then it would definitely degrade. If a band gets saturated, then the b(f) estimation part would degrade, since some of the saturation (the part that’s less than the band median) would be confused with a constant noise source and removed. This is discussed in the Discussion section. Line 124 – “valuation”? This has been removed, and the sentence has been rewritten. Line 132 – Surely this is over a given time frame in practice? What is the time frame considered? Will you discuss the sensitivity to different time frames considered? In practice, the timeframe is the whole recording. In theory it should be enough time to guarantee that epsilon is stationary. Line 133 There’s something missing as like an “at least” before “1­\rho(f) proportion”, right? \rho(f) is the proportion of samples that have a localized energy process within them and1­\rho(f) exactly is the opposite. We have added the interpretation of \rho(f) so it is clearer. Lines 141­142, I find this confusing… would it not be clearer if you delete just ”by estimating it as” and replace by “:” We moved the equation to the next sentence and removed the “by estimating it as” part. Just before line 146 – he “r>0” is this in time, frequency, both? We have specified that it is in time and frequency. Line 146 – Equations should read just as text. Therefore, the “Where” must be “where” and not indented, and there’s no dot after the previous equation. Same in line 201. Check remaining instances. We have revised all equations and added the necessary punctuations. Line 148 – reword the arbitrary “the estimator should have a small response”. What are you referring to? What is an estimator response? The response of the estimator is just the values that the estimator produces for the given regions. The phrase has been reworded to “the estimator would give small values”. Line 155 – I understand what you mean, but clarify “image values”, since there’s strictly no images here. The phrase “image values” has been changed to “spectrogram values”. Line 156 – “Th” should be “T”? Th and T are not the same variable, T is an independent variable in the entropic correlation TC(T), while Th is the value that maximizes it (i.e. the threshold). This subsection has been reworded to make the distinction more explicit, also the variable names have changed to t being the independent variable in TC(t) and t­hat being the selected threshold. Line 157 – the contiguous here refers to both time and frequency? Yes, we have rephrased it as “adjacent (t, f) coordinates” to make it clearer. Line 168 – not sure why you need the descriptor “the sites dataset”? The first data set was also collected on sites… While the first dataset is collected from different sites, the purpose is to validate the method and thus, the site distinction is not needed. We named the second dataset the sites dataset because the recordings are processed site­wise. Line 175 – as I said, here’s a good example. 
You set to find “events” here. What is an event? Is this a circular definition to some extent, since events are sounds you are able to classify as events??? In particular, how these relate e.g. to the 3 types of sound you mention in the first paragraph of the introduction? An acoustic event is, basically, anything that stands out from an audio recording. The sources of these events are those three types of sounds. Defining what “stands out” is the tricky part, since our most basic of reference is the ability to detect changes in either the raw audio or any other transform we apply to it. Line 179 – 21 by 21 and alpha=5. These are fundamental details. Are these values optimal in any way? Why? What is the sensitivity of the method to changing them? What are recommendations for users? We have included a paragraph about the parameter choice in the discussion. Essentially they are ad­hoc parameters that follow certain structure. Line 188 and 191 (2 times) – I think the word “automatically” needs to be added for clarity, e.g. “were detected over the total count” is “were automatically detected over the total count” Added the word “automatically” to these phrases. Line 187­192 – I’d introduce the wording true and false positives here, and note explicitly this is a typical confusion 2 by 2 matrix but one of the cells is absent (i.e. there are no true negatives). The true and false positive wording has been included, along with formulas for sensitivity and positive predictive value. Line after 200 – you need a more rigorous wording. “To measure the degree of separation of each variable on the audio event density”… there is no such characteristic as “variable separation”… what do you mean exactly? We have rewritten this, emphasizing the information content of the variable. Line 201 i­th should be i^{th} (superscript) th Corrected i­th to i​ Line 203 – “The 2­variable marginal distributions” reword to bivariate? Changed all instances of “2­variable marginal” to “bivariate” in the text Line 205 – why this one chosen? Just as an example? If so say so. Yes, it is an example of what can be done with the method. we have said so explicitly now. Line 208 – words missing or plural vs singular mistake T​ he text has been thoroughly edited to remove these mistakes. Table 1 legend – last sentence – you say “arbitrary number of false negative examples”. Don’t you mean “true negatives”? Corrected the term to “true negative”. Figure 3 legend – need to label the columns, since each column is a different type of image, right? We labeled the columns with the respective image type. Figure 4,5,6 – you need at the very least to state explicitly that dark is less and white is more, but a scale would be helpful! A color bar has been added to the side of the plot matrix. Figure 6 – plot y_max by tod. What is the reason for the weird vertical bands? It’s hard for me to imagine what would create these tod “discontinuities”? The “discontinuities” are produced by aggregating the recordings into 24­1 hour bins. Line 228 – so does “cov”, in fact with a larger H value than _max. Why do you not mention it “​ Cov” Is now mentioned in the results. Line 236­237 – explanation? Is this a feature or an artefact? If you raised the question you can’t leave it unanswered. This has been added to the discussion. Line 243 – seems inconsistent to me to do this “separation” (and I do not like the word as I said above) only visually in 2d but with an H statistic in 1d, why this choice? 
Separation has been replaced with information content using joint entropy Line 247 and figure 7 – 6? Which? I can’t see this. Please mark them all in figure 7. All the areas have been marked with a rectangle. The Area of Interest is pointed to by an arrow. Figure 7 legend – “Close­up on the first area”… why the first. The word “first” has now been removed. Line 251 and several others – all latin names must be italicized, including in references. All species names have been italicized. Line 262­264 – Discuss problems with intense chorus on a given noise band. In recordings with an intense chorus, rho(f) would be >= .5, and thus the theorem would not be true. Regardless, energy peaks within the chorus would still be preserved. Depending on their size, they could appear in the flattened spectrogram, and still be possibly detected. Added a paragraph with this reasoning in the discussion. Line 281 –use of word recordings here is inconsistent (cf. with say line 168­171), you say 6, these are presumably what you referred as dataset? The six recording referred to in line 281 are the recordings from the validation dataset as part of the discussion of the FLTR validation results. The recordings referred to in lines 168­171 are the recordings from the sites dataset, which is used in the FLTR application step. Line 282 – remove the two instances of the word “any”, not useful.. but this needs added clarification, not sure what is meant here? The two instances of “any” have been removed. The paragraph is saying that although our validation dataset is small, some statistical bias in the results is to be expected, but that we used recordings from different environments in the test to reduce this bias. Lines 300­303 – I’d like to see comments regarding whether these have also been missed sometimes or not? As this was a sample of the detections from a region of interest in an application of the algorithm, we only focused on cataloguing the detected events. However we can estimate the missed events with the computed sensitivity and positive predictive value. We added a paragraph indicating this. Line 334 – “University Press” “university” was capitalized. Line 344­ incomplete ref? A new reference is included. Line 347 – “Conference on…”­ incomplete Conference name was included before the “Conference on” text. Corrected the sentence. Line 355 – “pages”??? Not sure what the error is. Revised references to include complete information. Line 366 – species name needs italics Eleutherodactylus ​ written in italics. Reviewer 2 (Michael Towsey) Basic reporting The English language is good. The mathematical description has not helped the presentation. It took time to understand and was not adding much more than could have been said quickly in words. In particular, symbols are used which are not defined ­ e.g. eta, n, tau. The numeral '1' is used for the indicator function but it is not stated as an indicator function and is doubly confusing because bold or blackboard­bold font is not used. The symbols were defined, capital I is now used instead of 1 for the indicator (support) function. The mathematical description was revised and simplified a bit. T in the equation just above line 154 is not defined. I had to go on­line to see what it was. It is the threshold which is later referred using the letter 'Th'. All this suggests that not a lot of care was put into proof reading. T is the variable used in the entropic correlation TC(T), and Th is the value maximizing the correlation. 
we have changed the variables to t as the variable for TC(t), v as the index in the sums in TC(t), and t­hat for the threshold. we have also made explicit the relation between TC(t) and t­hat. The images are just of sufficient resolution to support the text. Figures 4, 5 and 6 are puzzling in that the images at top and bottom of the left column have different time scales. Also it appears from the top left image that more events happen at night than in the day. The reader needs a lot more help to understand these figures. The second time scale has been removed, now all tod figures have same time scale. Also, additional explanation is included. Experimental design The method for subtracting base­line value is valid although it is based on an important assumption. The signal model is not that different from an additive noise model. The authors state that they do not make an assumption about the distribution of the noise, symbol epsilon in text, and this is a nice feature. (Although later the use of 5 percentile tails to establish cutoffs implies that something like gaussian noise is assumed). The nicest feature of the method is that used to calculate the Range estimator and the use of entropic correlation. I am not aware of this being done elsewhere and is the main interesting result of the paper. The authors report their accuracy based on overlap of observed rectangle with predicted rectangle. This criterion is far too liberal because even a slight overlap can lead to a correct prediction for the wrong reasons. I would suggest that at least a 50% overlap is required which is indeed the case in some of their images. We added a second comparison test which is measured as the proportion of the area of the intersection over the area of the union. In this test, a hit is decided when this ratio is greater than .25 Validity of the findings The authors imply that there are few thresholds or critical parameters that must be tuned for their system. However the use of the 21x21 window size is surely important and must have been determined by trial and error. We agree that the selection of the window size was not addressed, we now include an acoustic interpretation to the window size in the Methodology / FLTR Validation section. If an event was too large in area, this window would leave 'holes' in the event due to the way they calculate the range estimator. In general I felt the authors were too uncritical of their method. This method is basically finding the borders of an acoustic event. While this does leave holes for big events it is just a matter of filling the interior of the event. Text has been added in the Theory section (Thresholding subsection) to include this detail. The important assumption of rho(f) < 0.5 is hidden in extensive and not very helpful mathematics. It is a reasonable assumption ­ other methods have to make similar assumptions but the authors claim some superiority for their method. The mathematics try to formalize a reason on why the method should work. The theorem and its proof have been simplified a bit. And also, the requirement for rho(f) < 0.5 has been restated as an extra assumption (A3 is now A4, and this is A3). Comments for the author This could be a nice paper. The method of using the Range estimator is very nice and something I am sure others will emulate when the paper is published. However the paper is inadequate in three respects: 1) The authors have been too uncritical in promoting the advantages of their method. 
The document was reviewed and comments towards this issue were made. 2) The estimates of accuracy are based on a very easy success criterion. A second (more vigorous) success criterion was added. 3) There is no comparison with another method. I totally agree that a fixed threshold technique is not useful but there are better techniques to compare their method with. We did not analyze results for other methods that only detect acoustic events. "
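As an aside on the stricter criterion mentioned in this response, the intersection-over-union test reduces to a few lines of arithmetic on time-frequency bounding boxes. The sketch below is a generic illustration with hypothetical box tuples, not code from the manuscript.

```python
def iou(box_a, box_b):
    """Intersection over union of two time-frequency boxes (t0, t1, f0, f1)."""
    t0, t1 = max(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    f0, f1 = max(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, t1 - t0) * max(0.0, f1 - f0)
    area_a = (box_a[1] - box_a[0]) * (box_a[3] - box_a[2])
    area_b = (box_b[1] - box_b[0]) * (box_b[3] - box_b[2])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under the added test, a detection counts as a hit when IoU exceeds 0.25.
manual    = (1.20, 1.55, 2650.0, 3550.0)   # seconds, Hz (hypothetical)
detection = (1.25, 1.60, 2500.0, 3400.0)
print(iou(manual, detection) > 0.25)       # True
```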
Here is a paper. Please give your review comments after reading it.
232
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Mobile edge computing (MEC) is introduced as part of edge computing paradigm, that exploit cloud computing resources, at a nearer premises to service users. Cloud service users often search for cloud service providers to meet their computational demands. Due to the lack of previous experience between cloud service providers and users, users hold several doubts related to their data security and privacy, job completion and processing performance efficiency of service providers. This paper presents an integrated three-tier trust management framework that evaluates cloud service providers in three main domains; tier I-evaluates service provider compliance to the agreed upon service level agreement, tier II-computes the processing performance of a service provider based on its number of successful processes, and tier III-measures the violations committed by a service provider, per computational interval, during its processing in the MEC network. The three-tier evaluation is performed during phase I computation. In phase II, a service provider total trust value and status are gained through the integration of the three tiers using the developed overall trust fuzzy inference system (FIS). Simulation results of phase I, shows service provider trust value in terms of service level agreement compliance, processing performance and violations' measurement independently. This disseminates service provider's points of failure, which enables a service provider to enhance its future performance for the evaluated domains. Phase II results, show the overall trust value and status per service provider after integrating the three tiers using overall trust FIS. The proposed model, is distinguished among other models by evaluating different parameters for a service provider.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Cloud computing (CC) provides a variety of computing resources such as processing capabilities, storage, servers, for multiple cloud users over the network <ns0:ref type='bibr'>(Monir et al.2015)</ns0:ref>. Such resources are physically located at large data centers which are far away from users' proximity. This causes high data transfer delays between service users and cloud resources, resulting in an increased network latency, while preventing real time applications like vehicular networks from being processed in a timely manner <ns0:ref type='bibr' target='#b28'>(Roman, Lopez &amp; Mambo et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>(Mach &amp; Becvar et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Shi et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>(Taleb et al. 2017)</ns0:ref>. Mobile edge computing had emerged as part of the cloud computing paradigm, in an attempt to be nearer to user premises, under the coverage of radio access networks (RAN), <ns0:ref type='bibr' target='#b0'>(Ahmed &amp; Ahmed et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>(Aslanpour &amp; Toosi, et al. 2021)</ns0:ref>.</ns0:p><ns0:p>In MEC, service execution such as computation and storage, are transferred from the cloud network to the mobile base stations located at the network edge <ns0:ref type='bibr' target='#b46'>(Wang et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Hu, Patel &amp; Sabella et al.2015)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>(Leppanen et al. 
2019)</ns0:ref>; <ns0:ref type='bibr'>(Nunna et al.2015)</ns0:ref>. This provided low network latency, scalability and utilization of resources, which in return, minimizes computational and network overhead during data offloading for computational purposes. On the other hand, it enabled real time and data sensitive applications, such as smart health care systems, to be efficiently executed within their time limitation <ns0:ref type='bibr'>(Shangguang et al.2019)</ns0:ref>; <ns0:ref type='bibr'>(Chen et al.2016)</ns0:ref>; <ns0:ref type='bibr' target='#b36'>(Shi &amp; Dustdar et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b8'>(Corcoran &amp; Datta et al. 2016)</ns0:ref>. MEC allowed more service providers and users to connect to the network, benefiting from the processing and storage capabilities, which became more accessible to them (SHI, <ns0:ref type='bibr' target='#b38'>SUN &amp; CAO et al. 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>(MAOY, YOU &amp; ZHANG et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Rani et al.2021)</ns0:ref>. This had relatively increased the transactions rate and number of participants connected to the MEC paradigm.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Problem Statement</ns0:head><ns0:p>However, due to the large number of communicating entities, several security and trust issues arise in such a vulnerable environment <ns0:ref type='bibr' target='#b44'>(Tang &amp; Alazab et al. 2017)</ns0:ref>. These security threats such as; fake service users, malicious service providers or denial of service attack <ns0:ref type='bibr'>(Jhaveri et al.2018)</ns0:ref>. Trust issues arise when service users route their private data for computational purposes, to unknown remote service providers, were they lose control on it <ns0:ref type='bibr'>(Sheikh et al.2012)</ns0:ref>. Due to lack of previous experience between service users and providers, service users hold several doubts like:</ns0:p><ns0:p>&#61623; their data security, confidentiality and privacy <ns0:ref type='bibr'>(Deepa et al.2020)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>(Ranaweera, Jurcut &amp; Liyanage et al. 2021</ns0:ref>) &#61623; unknown service provider's processing performance efficiency and trust degree, &#61623; no guarantee that the selected service provider would abide to the agreed upon service level agreement (SLA) terms, &#61623; no recording of historical violations committed by a service provider, and the type of it. This may give a chance for malicious service providers to re-do their incorrect actions again, knowing that they are untraced by any authorized entity. A service level agreement acts as a contract, signed between a service provider and user that states the agreed upon processing conditions. However, there isn't a standard format for an SLA, which obscures its legal judgment. On the other hand, trust data extraction is a very difficult task due to the large number of transactions follow, in which a huge amount of data is generated like transaction type, terms, cost and service users' ratings. Service user ratings, could be untrusted, biased, irrelevant or difficult to filter. Therefore, service users demand guidance of service providers' trust degree prior their selection. Several works had been introduced in literature that evaluated service providers in terms of processing performance, processing quality, response time or SLA compliance degree. However, to the best of our knowledge, till date none of the previous works covered all these major parameters together. 
Some of these works faced challenges such as depending on service users' feedback opinion, lack of trust results update, unclear service provider assessment criteria.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Motivation</ns0:head><ns0:p>It is essential to build a trust evaluation scheme to evaluate service providers' performance in the MEC environment for four main reasons;</ns0:p><ns0:p>1-Service users will be aware service provider's trust degree prior their interaction. 2-Service providers will understand that their actions are being monitored and recorded in a historical database. This motivates them to enhance their processing performance capabilities and limits any malicious actions to happen. 3-The trust evaluation scheme allows a service provider to know its faulty points to improve them, <ns0:ref type='bibr' target='#b2'>(Asghar et al. 2020</ns0:ref>). 4-Service providers with good trust value will attract more service users, which increases their profits. A standard and universal trust evaluation model for MEC entities, would greatly contribute in distinguishing trustworthy service providers among others in the MEC network and their offered services. This would avoid attacks such as, malicious or fake service providers, and collusion attacks. Building trusted relationships would secure future interactions in the MEC paradigm. Consequently, service users' confidence and reliability on the MEC services will increase, leading to higher transactions rate <ns0:ref type='bibr'>(Chong et al.2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.3'>Paper Contribution</ns0:head><ns0:p>Trust is defined as the level of service user confidence towards a service provider for fulfilling its computational requirements as expected <ns0:ref type='bibr' target='#b4'>(Chahal &amp; Singh et al .2015)</ns0:ref>; <ns0:ref type='bibr' target='#b30'>(Ruan, Durresi &amp; Alfantoukh et al. 2016)</ns0:ref>; <ns0:ref type='bibr'>(Ruan &amp; Durresi et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>(Ruan &amp; Durresi et al. 2017)</ns0:ref>. A trust management system builds trusted relationships between the participating entities, by assessing each service provider provisioned services and making trust level results available to service users when requested. Therefore, service providers' trust history should be captured, to avoid trust computation prior each new interaction, which saves time and yields to users' awareness of service provider's past interactions.</ns0:p><ns0:p>To address the above limitations, this research introduces the need of a unified trust management framework that evaluates service providers' provisioned services in the MEC network considering various parameters. Trust evaluation is performed in a centralized manner, by a fully trusted third party known as cloud service manager (CSM), Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to grantee trust results credibility <ns0:ref type='bibr'>(Felix &amp; Ricardo et al. 2012)</ns0:ref>; <ns0:ref type='bibr'>(Hatzivasilis et al.2020)</ns0:ref>. This also promotes for a secure and successful transactions between service users and providers in the MEC environment <ns0:ref type='bibr'>(Rathee et al.2017)</ns0:ref>.</ns0:p><ns0:p>On the other hand, fuzzy logic concept is used to address a situation of partial truth or uncertainty of values. 
This is an ideal choice, when evaluating the trustworthiness of a service provider, were the trust result is a dynamic variable depending on several measured parameters computed per computational interval <ns0:ref type='bibr'>(Tariq et al.2020)</ns0:ref>. This paper presents an integrated three-tier trust management framework using fuzzy logic. The main contributions of this paper are: Phase I: constitutes of Three-Tiers 1-Evaluation of service provider SLA compliance degree in tier I, 2-Computation of service provider processing performance in tier II, 3-Measurement of service provider violations in tier III, 4-each tier trust evaluation is performed independently per transaction in a batch processing manner. Phase II: Three-Tiers Integration 5-a Matlab based overall trust fuzzy inference system (FIS) was developed to integrate the evaluated results of tiers I, II and II, in order to gain an overall trust value and status for a service provider.</ns0:p><ns0:p>In the proposed framework, an SLA trust value is evaluated using four parameters (execution time, storage, cost and maintenance), to have a standard format, which allows it for a consistent judgment <ns0:ref type='bibr' target='#b35'>(Sheikh, Sebestian &amp; Max et al. 2011)</ns0:ref>. On the other hand, the processing performance of a service provider is measured by computing the number of successful processes verses the total number of accepted jobs, while gaining the failure ratio. The violations measurement is gained by maintaining the type and number of malicious actions committed per service provider. The main aim of tier III, is to monitor any wrong actions performed by a service provider. The three-tiers evaluation is performed using the proposed mathematical equations and algorithms. The output results of each tier of the three-tiers are inserted as an input to the buildup overall trust FIS, which provides a total trust value and status for a service provider per computational interval. This gives a full representation of a service provider abidance to the SLA contract, processing performance and violations committed during its service provisioning in the MEC paradigm. This paper is organized as follows; section 2, introduces the literature review and related work. Section 3, presents the integrated three-tier trust management framework; Phase I. The three-tiers integration using fuzzy logic is detailed in section 4. Section 5, shows the simulation results. Finally, the conclusion and future work are discussed in section 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Literature Review and Related Work</ns0:head><ns0:p>Many researches had developed various trust evaluation schemes to assess service providers' provisioned services and behavior in different edge computing (EC) paradigms. This is in an attempt to improve service providers' quality of service (QoS) and decrease losses emerging due to malicious actions performed over the network. This in return, will increase service users' trust and dependency EC resources <ns0:ref type='bibr'>(Jhaveri et al.2018)</ns0:ref>. Table <ns0:ref type='table'>'</ns0:ref>1' discusses some of the related work.</ns0:p><ns0:p>The above mentioned attempts measured trust considering different parameters, yet there isn't a unified service provider trust evaluation framework that integrates all major attributes together. 
Such main attributes are; SLA compliance degree, processing performance level and violations measurement of service providers' provisioned services in the MEC network.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Proposed Integrated Three-Tier Trust Management Framework</ns0:head><ns0:p>The proposed framework aims to measure service provider trust value, considering various attributes. The model is built up of two phases, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>1'. Phase I, constitutes of three main tiers, tier I evaluates service provider's SLA compliance degree, tier II computes the processing performance value of a service provider, while tier III measures the violations committed by a service provider during its processing in the MEC network. Phase II, integrates the results of the three tiers, to gain an overall trust value and status of a service provider using fuzzy logic concept.</ns0:p><ns0:p>.1 Acting Protocol Entities: &#61623; Service User ' ': j th service user, is a user requesting a certain job to meet its computational needs. is &#119878;&#119880; &#119895; &#119878;&#119880; &#119895; represented by two attributes {service user unique id and name}. &#61623; Service Provider ' ': i th service provider, could be an ordinary provider or an organization supplying &#119878;&#119903; &#119894; computational services to users.</ns0:p><ns0:p>is represented by three attributes {service provider unique id, service &#119878;&#119903; &#119894; provider name and offered service type}.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2021:02:58033:1:2:NEW 29 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61623; Cloud Broker: acts as an intermediate entity to match between a service user seeking a suitable service provider.</ns0:p><ns0:p>A cloud broker is considered as a semi-trusted entity. &#61623; Cloud Service Manager (CSM): is regarded as a fully trusted authorized party in the MEC network. CSM is responsible to perform, regulate and audit trust computation process for service providers in the MEC environment <ns0:ref type='bibr'>(Felix &amp; Ricardo et al. 2012)</ns0:ref>. CSM can exchange computed trust values of service providers within its coverage range with other CSM, in case requested. CSM also provides secure storage of trust computed results of service providers. &#61623; Network Provider: is responsible for registering a service user, provider and cloud broker to the MEC network.</ns0:p><ns0:p>It also handles network communications between all of the above entities.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>List of Assumptions:</ns0:head><ns0:p>The proposed trust framework considers the below assumptions: &#61623; Trust computation is handled by a fully trusted third party like CSM. &#61623; In case service provider1 sends part or all of user's required task to service provider2 for processing, known as process migration, service provider1 is totally responsible for user's data security. Service provider1 should also inform the user of this attempt, and offer the relative guarantee to ensure service user's data security, privacy and integrity. &#61623; A service provider could own one or more platform that offers one or more different service type. 
• Each service type of a service provider is evaluated independently, even if it is offered by the same service provider.</ns0:p><ns0:p>• There are three main job types requested over the MEC network: 1-processing, 2-storage, 3-both of them, referred to as Job_type {Job1, Job2, Job3} respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Phase I: Proposed Three-Tier Trust Evaluation Framework</ns0:head><ns0:p>Phase I consists of three tiers of trust evaluation: Tier I-service level agreement evaluation, Tier II-processing performance evaluation, and Tier III-violations measurement. In each tier, several parameters are evaluated to obtain the tier trust value, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>Tier I trust results are obtained from the service user's rating of the SLA upon process completion. Tier II is computed by evaluating the processing performance of a service provider, whereas Tier III provides a measurement of the violations and warnings received by a service provider. The three-tier computation is performed per 'n' computational interval, as described in the subsections below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Tier I -Service Level Agreement Evaluation</ns0:head><ns0:p>A service level agreement is an agreement placed between a service provider and a user, which states the job type requested by the service user and its agreed upon computational conditions. Assume that the requested job is one of the previously mentioned three job types, denoted 'a', and that the total number of requested jobs received by the i-th service provider is referred to as 'A'. The main conditions mentioned in an SLA should be standard across all SLAs, maintained by both parties and eligible for judgment if needed (Chong et al. 2013). Assume that all SLAs contain four major conditions: the computational cost (SC_ia), the required computational storage capacity in GB/TB (SS_ia), the computational maintenance duration in hr/min (SM_ia), and the agreed computational execution time in hr/min (SE_ia) (Monir, AbdelKader &amp; <ns0:ref type='bibr' target='#b22'>EI-Horbaty et al. 2019)</ns0:ref>. Upon job completion, a service user performs a compulsory rating process, rating the service provider's compliance with the four agreed upon conditions according to its own job execution experience. Assume that each of the above four major SLA components is rated as 'r', as shown in Table '2'.</ns0:p><ns0:p>The values in Table '2' could be adjusted according to one's own perspective. The ratings of computational cost, storage, maintenance and execution time (SC_ia_R, SS_ia_R, SM_ia_R, SE_ia_R, respectively) reflect the user's degree of satisfaction or dissatisfaction and the compliance of the service provider with the agreed upon SLA conditions. Assume that the total number of rated SLAs equals the total number of accepted processes 'A' of the i-th service provider.
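To make the rating step concrete, the snippet below builds one hypothetical rated SLA for a job 'a' handled by provider 'i'. The 0-5 per-component scale is an assumption implied by the 0-20 range of the aggregated value, and the aggregation itself is the one formalized in equations (1) and (2) that follow.

```python
# Hypothetical user rating of one completed job 'a' handled by provider 'i'.
# Each of the four agreed SLA components -- cost (SC), storage (SS),
# maintenance (SM) and execution time (SE) -- receives a rating r; the 0-5
# scale used here is an assumption implied by the 0-20 range of the total.
rated_sla = {"SC_R": 5, "SS_R": 4, "SM_R": 5, "SE_R": 3}

# Summing the four component ratings gives the per-job SLA value of
# equation (1) below; averaging it over all A rated jobs gives equation (2).
sla_value = sum(rated_sla.values())   # 17 out of a possible 20
print(sla_value)
```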
Let the computed rated SLA value be SLA_ia_R_value, where it ranges between '0' and '20', as given by equation (1):</ns0:p><ns0:formula>SLA_ia_R_value = SC_ia_R + SS_ia_R + SM_ia_R + SE_ia_R (1)</ns0:formula><ns0:p>The average SLA value is computed per service provider by the CSM, as given in equation (2), which guarantees the credibility of the SLA evaluation results:</ns0:p><ns0:formula>Average SLA value of the i-th service provider = (1/A) Σ_(a=1..A) SLA_ia_R_value (2)</ns0:formula><ns0:p>Note that each requested job is given a separate SLA, even if it is requested by the same service user and performed by the same service provider, since each job could have different computational requirements.</ns0:p><ns0:p>On the other hand, a certain threshold value is set for each dissatisfaction-rated component. The dissatisfaction rate is computed per i-th service provider for each component per 'n' computational interval. In case the dissatisfaction rate of any component exceeds the predefined threshold for that component, a warning is issued for the i-th service provider, as shown in Table '2'. This alerts service providers to any dissatisfactory results obtained for the SLA components, so that they can enhance their processing capabilities for those components.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2.'>Tier II -Processing Performance Computation</ns0:head><ns0:p>The processing performance of the i-th service provider, P_i, refers to the value of successful processes accomplished by the service provider in the 'n' computational interval. Assume that process 'a' ends either as successful or as incomplete. A complete process implies that the service provider abided by all four processing conditions and is referred to as a process success, PS_i. On the other hand, an incomplete process could be the result of one of the three states below:</ns0:p><ns0:p>1-the service provider started the job processing but did not complete it within the agreed conditions, 2-the service provider did not start the job processing although it accepted the job, 3-the service provider started the job processing, is proceeding within the agreed upon conditions and has not exceeded the agreed execution time (SE_ia); however, the service user wishes to terminate the job processing transaction.</ns0:p><ns0:p>States 1 and 2 are regarded as incomplete job processing, known as processing incompliance, PI_i, by a service provider. State 3 is referred to as the user termination case, denoted UT_i.
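A minimal sketch of how these outcomes could be tallied for one provider over a computational interval is shown below. The outcome labels and variable names are illustrative only; the ratios they feed are the ones defined in equations (3) to (6) of the following paragraphs.

```python
from collections import Counter

# Illustrative per-interval outcome log for provider i. The labels follow the
# three states above: 'success' (PS), 'incompliance' (states 1 and 2, PI) and
# 'user_termination' (state 3, UT).
outcomes = ["success", "success", "incompliance", "success", "user_termination"]

tally = Counter(outcomes)
A = len(outcomes)                        # total number of accepted processes

APS  = tally["success"] / A              # equation (3)
API  = tally["incompliance"] / A         # equation (4)
AUTR = tally["user_termination"] / A     # equation (5)
P_i  = APS                               # equation (6): processing performance

print(APS, API, AUTR, P_i)               # 0.6 0.2 0.2 0.6
```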
3.3.2. Tier II - Processing Performance Computation
The processing performance of the i-th service provider, $P_i$, refers to the proportion of successful processes accomplished by the provider in the 'n' computational interval. Each accepted process 'a' ends either successfully or incompletely. A successful process implies that the provider abided by all four processing conditions and is counted in the process success count $PS_i$. An incomplete process results from one of three states:
1- the service provider started the job but did not complete it within the agreed conditions,
2- the service provider accepted the job but never started processing it,
3- the service provider started the job, is proceeding within the agreed conditions, and has not exceeded the agreed execution time $SE_{ia}$, but the service user wishes to terminate the transaction.
States 1 and 2 are counted as incomplete job processing, referred to as the processing incompliance $PI_i$ of the provider. State 3 is the user termination case, referred to as $UT_i$. Tier II evaluates the processing performance of the i-th service provider in terms of:
- Average processing success ratio $APS_i$: the number of successful processes $PS_i$ divided by the total number of accepted processes A, as in equation (3):
$APS_i = \frac{PS_i}{A}$   (3)
- Average processing incompliance ratio $API_i$: the number of accepted processes that the provider failed to perform or complete, $PI_i$ (states 1 and 2), divided by A, as in equation (4):
$API_i = \frac{PI_i}{A}$   (4)
- Average user termination ratio $AUTR_i$: the number of transactions terminated by a dissatisfied service user for any reason, $UTR_i$ (state 3), divided by A, as in equation (5):
$AUTR_i = \frac{UTR_i}{A}$   (5)
The processing performance of the i-th service provider is then given by equation (6):
$P_i = APS_i$, where $0 \le P_i \le 1$ for an 'n = 1' computational interval.   (6)
Equation (6) expresses the processing performance degree of a service provider in the MEC environment. A predefined threshold is set for both the processing incompliance ratio and the user termination ratio; if either value exceeds its threshold within the 'n' computational interval, the corresponding warning of Table 3 is issued to alert the i-th service provider to such incidents.
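The Tier II ratios of equations (3) to (6) and the Table 3 warnings can be sketched in the same style; the 0.2 threshold and the function name are illustrative assumptions.

```python
def tier2_processing_performance(total_accepted, successful, incompliant,
                                 user_terminated, threshold=0.2):
    """Tier II ratios (equations (3)-(6)) and Table 3 warnings W5/W6."""
    A = total_accepted
    aps = successful / A          # Equation (3): APS_i
    api = incompliant / A         # Equation (4): API_i (states 1 and 2)
    autr = user_terminated / A    # Equation (5): AUTR_i (state 3)
    p_i = aps                     # Equation (6): P_i, 0 <= P_i <= 1 for n = 1

    warnings = []
    if api > threshold:
        warnings.append("W5_Sr_i")    # processing incompliance warning
    if autr > threshold:
        warnings.append("W6_Sr_i")    # user termination warning
    return p_i, api, autr, warnings
```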
3.3.3 Tier III: Violations Measurement
Tier III proposes two algorithms: 1- a complaint-and-evidence algorithm and 2- a violations measurement algorithm. Data privacy leakage incidents (Talal & Quan et al. 2015) caused by service providers are monitored through the complaint-and-evidence algorithm described in section 'A'. The violations measurement algorithm presented in section 'B' counts the warnings received by a service provider per 'n' computational interval to obtain its Tier III trust value.

A) Data Privacy Leakage Complaint Algorithm
A service provider is responsible for the privacy of service user data while the user is provisioning the provider's services or applications, and must install the necessary data protection mechanisms. However, a service user may discover that its private data has been routed onward by a service provider without prior permission, which violates the user's data privacy (Deepa et al. 2020); (Talal & Quan et al. 2013). A data privacy leakage is a security threat that can lead to cyber-attacks, social issues, or robbery of the user by different means (Javed et al. 2021); (Iwendi et al. 2020); (Vasani & Chudasama et al. 2018). To monitor such actions, the service user sends a complaint message $M_{DPR}$ to the CSM against the suspected service provider, including evidence of the incident. The complaint message $M_{DPR}$ should include:
1- a screenshot of the service user's data appearing on a platform unknown to the user,
2- evidence of the user's ownership of the data (for example, a previous email sent from the user to the respective service provider containing this data),
3- the transaction SLA, including $Sr_i$ and $SU_j$.
The conditions stated in the complaint message $M_{DPR}$ are investigated by the CSM for verification. If the CSM investigation confirms the complaint against the i-th service provider, the event is recorded as incident 1 and an alarm message 'incident1_$Sr_i$' is sent to the accused provider, as shown in Figure 3. Incident 1 is counted and stored by the CSM.
If a data privacy leakage complaint is later submitted by a different service user against the same provider, the CSM investigates the case in the same way. If the complaint is confirmed, the CSM treats the accumulated incidents as a data privacy leakage attack and penalizes the accused provider with warning no. 7, 'W7_$Sr_i$'. This warning degrades the trust value of the accused service provider, as shown in Table 4, and discourages the recurrence of such attacks by malicious providers in the future.
The data privacy leakage complaint algorithm presented in Figure 3 traces and counts the data leakage actions committed by the i-th service provider with different users in the 'n' computational interval. These incidents and warnings are stored per provider in a historical database maintained by the CSM.

B) Violations Measurement Computation
The violations measurement algorithm counts the warnings imposed on the i-th service provider in Tiers I and II (during SLA evaluation and processing performance computation) and in the case of a data privacy leakage incident. The relationship between the warnings of the three-tier framework is shown in Figure 4. Each warning type is given a number from 1 to 7, as shown in Table 4.
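A minimal sketch of the CSM-side complaint handling described above (the complain-and-evidence algorithm of Figure 3) follows. Evidence verification is abstracted into a boolean flag, and the in-memory incident store is an assumption; the paper stores incidents in the CSM's historical database.

```python
from collections import defaultdict

# provider id -> set of users with verified privacy-leakage complaints
verified_incidents = defaultdict(set)

def handle_privacy_complaint(provider_id, user_id, evidence_verified):
    """Return the alarm or warning the CSM would issue, if any."""
    if not evidence_verified:
        return None                               # complaint rejected by CSM
    verified_incidents[provider_id].add(user_id)
    if len(verified_incidents[provider_id]) == 1:
        return f"incident1_{provider_id}"         # first verified incident
    # Verified complaints from different users against the same provider
    # are treated as a data privacy leakage attack: issue warning W7.
    return f"W7_{provider_id}"
```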
A decreasing factor 'θ' is assigned to each warning imposed on the i-th service provider; its value depends on the number and reason of the warning. Note that 'θ' is a tunable variable that can be assigned different values according to one's own perspective. Let the total warning value of the i-th service provider computed in Tier III be $V_i$, calculated by equation (7):

$V_i = \sum \{\theta_{W1}, \theta_{W2}, \theta_{W3}, \dots, \theta_{W7}\}$, summed over the warnings actually received, where $0 \le V_i \le 1$.   (7)

Given that $V_i$ is the total warning value, the violations measurement $TV_i$ is computed by

$TV_i = 1 - V_i$   (8)

The violations measurement of the i-th service provider is computed in Tier III by the CSM, as shown in Figure 5. In this algorithm, the CSM checks whether the i-th service provider has received any warnings (W1_$Sr_i$, ..., W7_$Sr_i$). If so, it derives the corresponding 'θ' values, computes $V_i$ by equation (7), and obtains $TV_i$ by equation (8). The violations measurement algorithm of Figure 5 therefore inspects the warnings received by the i-th service provider, computes its total warning value $V_i$, and deducts it from 1 to obtain its violations measurement $TV_i$. A service provider that received no warnings obtains the full violations measurement value, $TV_i = 1$.
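Equations (7) and (8) reduce to a few lines once the Table 4 decreasing factors are written down; the sketch below assumes each warning code appears at most once per computational interval.

```python
# Decreasing factors of Table 4.
THETA = {"W1": 0.1, "W2": 0.1, "W3": 0.1, "W4": 0.1,   # Tier I (SLA)
         "W5": 0.2, "W6": 0.1,                         # Tier II (processing)
         "W7": 0.3}                                    # Tier III (privacy)

def tier3_violations(warning_codes):
    """Equations (7)-(8): warning_codes like ["W2", "W5"] for interval n."""
    v_i = sum(THETA[w] for w in warning_codes)          # Equation (7)
    v_i = min(v_i, 1.0)                                 # keep 0 <= V_i <= 1
    return 1.0 - v_i                                    # Equation (8): TV_i
```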
3.4 Summary
The proposed three-tier protocol of Phase I maintains the trust value of the i-th service provider with respect to three main attributes: SLA compliance degree, processing performance level, and violations measurement, in an attempt to optimize the credibility of the trust results. In Phase II, the three tier results are integrated to obtain an overall trust value for the i-th service provider, completing the framework and disseminating the provider's processing performance and previous interactions in the MEC network.

4. Phase II: Three-Tier Integration using Fuzzy Logic
The overall trust value of the i-th service provider is an aggregate of the computed end results of the three tiers above (SLA evaluation, processing performance computation, and violations measurement). Hence, the overall trust value is computed using the developed fuzzy inference system (FIS) named 'Overall_Trust', as shown in Figure 1 and described in sections 4.1 and 4.2.

4.1 Using Fuzzy Logic for Trust Computation in Mobile Edge Computing
Fuzzy logic is a form of artificial intelligence in which input parameters are given to a fuzzy inference system as unclear or uncertain information, denoting the partial truth of a parameter (Nagarajan et al. 2017). This contrasts with Boolean logic, which admits only the discrete values 0 or 1. Input values range between 0 and 1 and are placed into membership functions that distinguish each range of values, a step known as fuzzification (Sule et al. 2017). If-then-else rules are entered into the fuzzy inference rule base editor of the Matlab program in order to map each range of input values to a specific output decision. The output is converted into a crisp value through defuzzification. Fuzzy logic offers flexibility, fast response time, low cost, and logical reasoning; for these reasons it was chosen to compute the overall trust value and status of service providers in this framework, since MEC is a highly dynamic environment. Trust computation for service providers helps service users during their provider selection and balances offered services against cost.

4.2 Integration of Tiers I, II and III Results using Fuzzy Logic
Fuzzy logic is used to integrate the end results of Tiers I, II, and III of Phase I in order to obtain an overall trust value for the i-th service provider during the Phase II computation, as shown in Figure 1. The fuzzy logic toolbox in Matlab was used to develop the 'Overall_Trust' system. A trapezoid-shaped curve represents each of the three fuzzy inputs (SLA evaluation, processing performance computation, and violations measurement) and their fuzzy membership functions, as shown in Figure 6. Membership function input values range over [0, 100] and are converted into fuzzy linguistic input variables forming three fuzzy sets (low, medium, high) during fuzzification, as presented in Table 5. Centroid defuzzification is performed to obtain a crisp overall trust value for the i-th service provider. A triangular-shaped curve represents the output membership functions (low, medium, high, excellent), as shown in Figure 7; the provider's overall trust value belongs to one of these four fuzzy output sets. Twenty-one inference rules were added to the Mamdani inference system to compute the overall trust status of the i-th service provider from its SLA evaluation, processing performance computation, and violations measurement. Each rule uses the AND logical operator to relate input and output variables, as given in Table 6 and Figure 8. The fuzzification of the three tier values thus forms the overall trust fuzzy inference system, which computes the i-th service provider's overall trust in the MEC network, as presented in Figure 9.
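The paper builds the 'Overall_Trust' Mamdani system with Matlab's Fuzzy Logic Toolbox (trapezoidal low/medium/high input sets on [0, 100], triangular low/medium/high/excellent output sets, twenty-one AND rules, centroid defuzzification). The sketch below reproduces that structure with the open-source scikit-fuzzy package instead of Matlab; the membership breakpoints and the six rules shown are illustrative assumptions rather than the paper's rule base.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

universe = np.arange(0, 101, 1)
sla = ctrl.Antecedent(universe, 'sla')            # Tier I result (%)
perf = ctrl.Antecedent(universe, 'performance')   # Tier II result (%)
viol = ctrl.Antecedent(universe, 'violations')    # Tier III result (%)
trust = ctrl.Consequent(universe, 'trust')        # overall trust (%)

# Trapezoidal input sets low/medium/high (breakpoints are assumptions).
for var in (sla, perf, viol):
    var['low'] = fuzz.trapmf(var.universe, [0, 0, 25, 45])
    var['medium'] = fuzz.trapmf(var.universe, [35, 45, 60, 70])
    var['high'] = fuzz.trapmf(var.universe, [60, 75, 100, 100])

# Triangular output sets low/medium/high/excellent, centroid defuzzification.
trust['low'] = fuzz.trimf(trust.universe, [0, 0, 40])
trust['medium'] = fuzz.trimf(trust.universe, [30, 50, 70])
trust['high'] = fuzz.trimf(trust.universe, [60, 75, 90])
trust['excellent'] = fuzz.trimf(trust.universe, [80, 100, 100])
trust.defuzzify_method = 'centroid'

# Six representative rules (the paper uses twenty-one AND rules).
rules = [
    ctrl.Rule(sla['high'] & perf['high'] & viol['high'], trust['excellent']),
    ctrl.Rule(sla['medium'] & perf['high'] & viol['high'], trust['high']),
    ctrl.Rule(sla['high'] & perf['medium'] & viol['high'], trust['high']),
    ctrl.Rule(sla['medium'] & perf['medium'], trust['medium']),
    ctrl.Rule(viol['medium'], trust['medium']),
    ctrl.Rule(sla['low'] | perf['low'] | viol['low'], trust['low']),
]

overall_trust = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
overall_trust.input['sla'] = 82
overall_trust.input['performance'] = 76
overall_trust.input['violations'] = 90
overall_trust.compute()
print(round(overall_trust.output['trust'], 1))   # crisp overall trust value
```

Because the six rules cover every low/medium/high combination of the three inputs, any crisp input triple produces a defuzzified trust value.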
Figure 10 shows a smooth surface when the processing performance and SLA evaluations are plotted against the overall trust value.

4.3 Phase II: Summary
The main aim of Phase II is to provide an overall trust value for a service provider in the MEC network. This is done by combining the value of the provider's successful interactions with its violations measurement, while also taking the service users' opinion of the SLA into account. Through the fuzzy logic concept, the end results of the three tiers are integrated to produce the i-th service provider's overall trust value and status.

5 Simulation Results and Discussion
The integrated three-tier trust management framework was simulated using Matlab R2017a. Simulation results are shown in section 5.1, a comparison with previous protocols is presented in section 5.2, and section 5.3 summarizes the achievements of the proposed framework.

5.1 Integrated Three-Tier Trust Management Framework Simulation Results
A Matlab-based simulation was developed to show the Phase I and Phase II results of the integrated three-tier trust management framework. Random number generation was used for the input values of the SLA rating components, the number of received processes (successful, failed, and user-terminated), and the privacy leakage incidents. The overall trust fuzzy inference system was developed using the Matlab fuzzy logic designer tool. Each simulated service provider was forced into a specific range of values in order to validate the developed equations and the twenty-one fuzzy inference rules under various conditions. The overall trust FIS contains three input membership functions, each presenting the computed result of one tier (SLA, processing performance, and violations measurement). System setup:
1. Five different service provider case studies are assumed.
2. The initial trust value of each service provider is 0.
3. The three-tier computation is performed for an n = 1 computational interval.
4. All service providers receive the same number of job requests of a single job type, {Job3}.
5. The hardware configuration is a Core i7 PC with 6 GB RAM and a 1 TB hard disk.
The Phase I three-tier simulation results for the five service providers are shown in Table 7; the results are reported per service provider upon completion of the 'n' computational interval. Phase I presents detailed computational results and points of enhancement for each service provider per tier, and the warning methodology helps a service provider identify its points of weakness. The Phase II simulation results are shown in Table 8. The overall trust FIS computes the trust value and status of each service provider for one of the predefined job types; for a service user seeking the processing of Job3, the overall trust value and status of the available service providers are shown in Table 8.
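To tie the sketches together, the following loose end-to-end run mimics the spirit of the Matlab simulation: random ratings and process outcomes are drawn for one provider, the three tiers are computed with the helper functions above, and the results are fed to the scikit-fuzzy 'overall trust' system. The counts, seed, and scaling are assumptions for illustration only; a verified W7 from the complaint handler would simply be appended to the warning list.

```python
import random

random.seed(7)

# One provider, 20 accepted Job3 requests with random component ratings.
slas = [RatedSLA("Job3", *(random.randint(0, 5) for _ in range(4)))
        for _ in range(20)]
sla_avg, w_sla = tier1_sla_evaluation(slas)

successful = random.randint(12, 20)
incompliant = random.randint(0, 20 - successful)
terminated = 20 - successful - incompliant
p_i, api, autr, w_proc = tier2_processing_performance(
    20, successful, incompliant, terminated)

# A verified privacy complaint would contribute a "W7" code here as well.
tv_i = tier3_violations([w[:2] for w in (w_sla + w_proc)])

overall_trust.input['sla'] = sla_avg            # Tier I result (%)
overall_trust.input['performance'] = p_i * 100  # Tier II result (%)
overall_trust.input['violations'] = tv_i * 100  # Tier III result (%)
overall_trust.compute()
print(f"Overall trust of Sr_i: {overall_trust.output['trust']:.1f}")
```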
A service user can also inspect the three-tier results together with the overall trust value and status of the five service providers before making its service selection, as shown in Figure 11. Figure 11 gives a full representation of the processing performance of each of the five simulated service providers in the MEC network.

5.2 Comparison with Previous Protocols
Table 9 compares the parameters evaluated by the proposed three-tier trust management framework with the parameters measured by previous protocols in cloud computing and mobile edge computing. As shown in Table 9, the proposed framework measures a wider range of service performance parameters for service providers than previous works, together with a warning and history-capturing methodology.

5.3 The Integrated Three-Tier Trust Management Framework Achievements
The proposed integrated three-tier trust management framework achieves the following:
- Service providers are assessed by their behavior during their interactions on the MEC network for their declared job type, not by the quantity of hardware or software resources they possess or the job types they offer.
- Service providers become aware of their faulty points, together with the degree of improvement needed per component.
- The per-tier results support future prediction of a service provider's performance competency.
- Malicious service providers and their points of attack are detected through the violations measurement, the warnings mechanism, and the low-trust fuzzy membership function.
- Trust is evaluated with minimal human interaction, which maximizes the credibility of the trust results.
- Trust is computed dynamically for service providers, per 'n' computational interval, with a history-capturing mechanism.
- The framework helps in cost estimation; for example, a high service cost can be justified if the provider's trust value is high in the three evaluated components. It also guides service users towards credible and trustworthy service providers, so users gain confidence and increase their dependency on the MEC network services.
- Trust evaluation has low computational time complexity, mainly owing to 1- the simplicity of the model equations, 2- the history capturing, which makes trust updates easy, and 3- the developed fuzzy inference system. The model can therefore be used for a large number of service providers in the MEC network in a time-efficient manner.
- Computational storage is saved, as there is no data redundancy during trust computation.

6. Conclusion and Future Work
Finding credible service providers in a vast, ambiguous environment such as mobile edge computing has been a very hard task for service users. The proposed integrated three-tier trust management framework measures the trust value of a service provider with respect to three main attributes: SLA compliance, processing performance, and violations measurement, in Tiers I, II, and III of Phase I. Tiers I, II, and III consist of different evaluated parameters, which eliminates the opportunity for false ratings and collusion attacks by service users.
The three-tier protocol also reveals each service provider's points of weakness or strength per tier. The three-tier results are aggregated into an overall trust value and status for a service provider using the overall fuzzy inference system developed in Phase II. Trust evaluation is performed in a history-capturing manner to ease the trust updating process.

From a service user perspective, the three-tier protocol is also beneficial because service users may have different preferences even when requesting the same job type. For example, a user with a time-critical operation may look for a service provider that abides by the SLA in terms of time compliance rather than one with the highest processing performance quality. This helps service users and organizations select credible service providers that fulfil their computational demands.

The simulation results of the integrated three-tier framework show the trust value of a service provider in terms of SLA compliance, processing performance, and violation occurrence. The proposed warning protocol exposes each service provider's points of weakness, which supports its improvement. Finally, the integration of the three tiers reflects the overall trust value and status of a service provider in the MEC network.

In the future, we plan to measure a service user trust value, which could act as a factor multiplied by the user's rating: the higher this factor, the more likely the rating is to be true, which increases the reliability of the computed trust results of service providers. Service user trust can also serve as a filtering mechanism for falsified ratings. In addition, the transaction cost should be considered as a weight during trust computation, since high-cost transactions are rarely reported as fake. The active working time of a service provider should also be measured against its total subscription lifetime in the MEC network, as this reflects dedicated, operational service providers.

Table 1. Related work (reference, domain, approach, limitations):
- (limitation of the table's preceding entry) Trust evaluation results were totally dependent upon service users' feedback opinion, which may lead to less reliable trust results.
- (Ma & Li et al. 2018), EC: Trust was measured by evaluating the deployed data security and privacy mechanisms in terms of resource identity, performance, and quality of service. Limitation: trust updating and sharing were not addressed, which weakens the trust evaluation efficiency of the model.
- (Deng et al. 2020), MEC: A reputation-based trust evaluation and management model for service providers was introduced that measures trust in terms of identity verification, deployed hardware capabilities (CPU, memory, disk, online time), and behavior. Limitation: trust results were derived from service consumers' ratings of previous interactions, and such ratings may not be trustworthy enough.
- (Ruan, Durresi & Uslu et al. 2018), MEC: The service provider's trustworthiness is measured according to its performance per transaction with a service user.
A degree of confidence measure is associated with each transaction, reflecting the user's expectation of the service provider's future behavior. Limitation: the model depends on users' ratings, whose differing perspectives may negatively affect trust evaluation accuracy; monitoring and comparing such ratings across user-provider relationships is time consuming and may produce redundant data.
- (Khan, Chan & Chua et al. 2018), CC: Service providers' quality of service was evaluated in terms of service availability, response time, and throughput. Fuzzy rules were used to predict the future behavior of a cloud service provider, and the model helped service users estimate service cost.
- (Akhtar et al. 2014), CC: Service provider performance was evaluated in terms of infrastructure (response time and resource utilization with respect to the number of users) and application performance (response time to a user, volume of linked data, and processing migration). The evaluation was computed using fuzzy logic, and the results indicate the service provider's performance level.

Fig. 6. FIS input variables membership functions.
Figure 1. FIS.

Table 2. SLA components rating & warnings.
Variable | Dissatisfaction rate | Satisfaction rate | Warning issued when dissatisfaction rate > threshold
SC_ia_R | 0 ≤ r ≤ 2 | 3 ≤ r ≤ 5 | W1_Sr_i
SS_ia_R | 0 ≤ r ≤ 2 | 3 ≤ r ≤ 5 | W2_Sr_i
SM_ia_R | 0 ≤ r ≤ 2 | 3 ≤ r ≤ 5 | W3_Sr_i
SE_ia_R | 0 ≤ r ≤ 2 | 3 ≤ r ≤ 5 | W4_Sr_i

Table 3. Tier II imposed warnings.
Tier II variable | Warning issued
API_i > threshold | W5_Sr_i
AUTR_i > threshold | W6_Sr_i

Table 4. Warning number, reason and 'θ' decreasing factor.
Warning | Reason | θ decreasing factor | Tier
W1_Sr_i | Exceeded cost dissatisfaction threshold | θW1 = 0.1 | I - SLA
W2_Sr_i | Exceeded storage dissatisfaction threshold | θW2 = 0.1 | I - SLA
W3_Sr_i | Exceeded maintenance dissatisfaction threshold | θW3 = 0.1 | I - SLA
W4_Sr_i | Exceeded agreed computational execution time dissatisfaction threshold | θW4 = 0.1 | I - SLA
W5_Sr_i | Exceeded processing incompliance threshold | θW5 = 0.2 | II - Processing performance
W6_Sr_i | Exceeded user termination ratio threshold | θW6 = 0.1 | II - Processing performance
W7_Sr_i | Data privacy leakage incident | θW7 = 0.3 | III - Violations measurement
"
"Dear Respected Editor in Chief, Esteemed Reviewers, Hope this mail finds you well and safe. First of all I would like to deeply thank you for your valuable and informative comments that show the great effort and time exerted in my research paper. I would like to also thank you so much for sending me the valuable and recent works, which I really benefited from. I had accommodated all your requests as below. Reviewer 1 (Anonymous) 1. Even though the motivations of the current work are clearly discussed, the contributions of the paper are not discussed. What are the main contributions of the current work? -Section 1.3 was added to discuss the paper contributions. 2. The literature review section can be summarized as a table. -Done 4. The main contribution of any paper is the proposed work. This section is very small in this paper. It has to be elaborated with discussion on the novelty and more detailed discussion on the proposed work. -Section was extended to include more details. 5. The limitations and the future scope of the current work can be discussed in conclusion. Done Reviewer 2 (Rutvij Jhaveri) Basic reporting Many grammatical errors and typos are found (all the way from Abstract to Conclusion). -Corrected Experimental design Comparison of the proposed work with existing work can be useful. -Done 1. The flow of the introduction should be.. 1-introduce the problem at high level, 2-discuss about some of the existing solutions, 3-identify the gap or scope of improvement, and 4-then discuss in order to address the identified gaps what is the methodology we are using after that 5-we have to list out the contributions -Done 2. Literature review needs to be revised majorly and extended by including some latest articles. Comparison of different works can be summarized in a table. -Done 3. Section 3 contains very less contents. Can it be merged with Section 4 along with Section 5? -Sections 3 & 4 were merged. 4. Figures quality are poor. -Done 5. Comparison of the proposed work with existing work can be useful. -Done in Section 5.2. 6. Conclusion should be revised to write the concluded facts and should not reflect abstract. -Done Reviewer 3 (Anonymous) - Please highlight the contribution clearly in the introduction -Accommodated in section 1.3 - this paper lacks in Novelty of the proposed approach. The author should highlight the contribution clearly in the introduction and provide a comparison note with existing studies. -Accommodated in sections 1.3 (paper contribution) & section 5.2 (comparison with previous work). - Some Paragraphs in the paper can be merged and some long paragraphs can be split into two. -Done - The quality of the figures can be improved more. Figures should be eye-catching. It will enhance the interest of the reader. -Done - Figure 11, only the graph area should be added to the paper. remove grey borders. and the same for others. -Done - The background of figure 'Figure 12 Service Providers Overall Trust results' should be white with font color black. -Done Experimental design - What are the computational resources reported in the state of the art for the same purpose? -Computational resources aren’t always mentioned in the previous works. Please kindly note that evaluation criteria used does not require heavy computational resources. Validity of the findings - What are the evaluations used for the verification of results? -We simulated different service providers, using random number generation, to validate different trust states of service providers. 
We used logical reasoning to interpret the trust results of the five different service providers. - Clearly highlight the terms used in the algorithm and explain them in the text. Done I hope that my revised paper will meet your esteemed expectations. I look forward my hearing from your respectful journal. Thanks so much for your time and great efforts. "
Here is a paper. Please give your review comments after reading it.
233
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Mobile edge computing (MEC) is introduced as part of edge computing paradigm, that exploit cloud computing resources, at a nearer premises to service users. Cloud service users often search for cloud service providers to meet their computational demands. Due to the lack of previous experience between cloud service providers and users, users hold several doubts related to their data security and privacy, job completion and processing performance efficiency of service providers. This paper presents an integrated three-tier trust management framework that evaluates cloud service providers in three main domains; tier I-evaluates service provider compliance to the agreed upon service level agreement, tier II-computes the processing performance of a service provider based on its number of successful processes, and tier III-measures the violations committed by a service provider, per computational interval, during its processing in the MEC network. The three-tier evaluation is performed during phase I computation. In phase II, a service provider total trust value and status are gained through the integration of the three tiers using the developed overall trust fuzzy inference system (FIS). Simulation results of phase I, shows service provider trust value in terms of service level agreement compliance, processing performance and violations' measurement independently. This disseminates service provider's points of failure, which enables a service provider to enhance its future performance for the evaluated domains. Phase II results, show the overall trust value and status per service provider after integrating the three tiers using overall trust FIS. The proposed model, is distinguished among other models by evaluating different parameters for a service provider.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Cloud computing (CC) provides a variety of computing resources such as processing capabilities, storage, servers, for multiple cloud users over the network <ns0:ref type='bibr'>(Monir et al.2015)</ns0:ref>. Such resources are physically located at large data centers which are far away from users' proximity. This causes high data transfer delays between service users and cloud resources, resulting in an increased network latency, while preventing real time applications like vehicular networks from being processed in a timely manner <ns0:ref type='bibr' target='#b29'>(Roman, Lopez &amp; Mambo et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b20'>(Mach &amp; Becvar et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Shi et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b43'>(Taleb et al. 2017)</ns0:ref>. Mobile edge computing had emerged as part of the cloud computing paradigm, in an attempt to be nearer to user premises, under the coverage of radio access networks (RAN), <ns0:ref type='bibr' target='#b1'>(Ahmed &amp; Ahmed et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>(Aslanpour &amp; Toosi, et al. 2021)</ns0:ref>.</ns0:p><ns0:p>In MEC, service execution such as computation and storage, are transferred from the cloud network to the mobile base stations located at the network edge <ns0:ref type='bibr' target='#b47'>(Wang et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Hu, Patel &amp; Sabella et al.2015)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>(Leppanen et al. 
2019)</ns0:ref>; <ns0:ref type='bibr'>(Nunna et al.2015)</ns0:ref>. This provided low network latency, scalability and utilization of resources, which in return, minimizes computational and network overhead during data offloading for computational purposes. On the other hand, it enabled real time and data sensitive applications, such as smart health care systems, to be efficiently executed within their time limitation <ns0:ref type='bibr'>(Shangguang et al.2019)</ns0:ref>; <ns0:ref type='bibr'>(Chen et al.2016)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>(Shi &amp; Dustdar et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>(Corcoran &amp; Datta et al. 2016)</ns0:ref>. MEC allowed more service providers and users to connect to the network, benefiting from the processing and storage capabilities, which became more accessible to them (SHI, <ns0:ref type='bibr' target='#b39'>SUN &amp; CAO et al. 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>(MAOY, YOU &amp; ZHANG et al. 2017)</ns0:ref>; <ns0:ref type='bibr'>(Rani et al.2021)</ns0:ref>. This had relatively increased the transactions rate and number of participants connected to the MEC paradigm.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Problem Statement</ns0:head><ns0:p>However, due to the large number of communicating entities, several security and trust issues arise in such a vulnerable environment <ns0:ref type='bibr' target='#b45'>(Tang &amp; Alazab et al. 2017)</ns0:ref>. These security threats such as; fake service users, malicious service providers or denial of service attack <ns0:ref type='bibr'>(Jhaveri et al.2018)</ns0:ref>. Trust issues arise when service users route their private data for computational purposes, to unknown remote service providers, were they lose control on it <ns0:ref type='bibr'>(Sheikh et al.2012)</ns0:ref>. Due to lack of previous experience between service users and providers, service users hold several doubts like:</ns0:p><ns0:p>&#61623; their data security, confidentiality and privacy <ns0:ref type='bibr'>(Deepa et al.2020)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>(Ranaweera, Jurcut &amp; Liyanage et al. 2021</ns0:ref>) &#61623; unknown service provider's processing performance efficiency and trust degree, &#61623; no guarantee that the selected service provider would abide to the agreed upon service level agreement (SLA) terms, &#61623; no recording of historical violations committed by a service provider, and the type of it. This may give a chance for malicious service providers to re-do their incorrect actions again, knowing that they are untraced by any authorized entity. A service level agreement acts as a contract, signed between a service provider and user that states the agreed upon processing conditions. However, there isn't a standard format for an SLA, which obscures its legal judgment. On the other hand, trust data extraction is a very difficult task due to the large number of transactions follow, in which a huge amount of data is generated like transaction type, terms, cost and service users' ratings. Service user ratings, could be untrusted, biased, irrelevant or difficult to filter. Therefore, service users demand guidance of service providers' trust degree prior their selection. Several works had been introduced in literature that evaluated service providers in terms of processing performance, processing quality, response time or SLA compliance degree. However, to the best of our knowledge, till date none of the previous works covered all these major parameters together. 
Some of these works faced challenges such as depending on service users' feedback opinion, lack of trust results update, unclear service provider assessment criteria.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Motivation</ns0:head><ns0:p>It is essential to build a trust evaluation scheme to evaluate service providers' performance in the MEC environment for four main reasons;</ns0:p><ns0:p>1-Service users will be aware service provider's trust degree prior their interaction. 2-Service providers will understand that their actions are being monitored and recorded in a historical database. This motivates them to enhance their processing performance capabilities and limits any malicious actions to happen. 3-The trust evaluation scheme allows a service provider to know its faulty points to improve them, <ns0:ref type='bibr' target='#b3'>(Asghar et al. 2020</ns0:ref>). 4-Service providers with good trust value will attract more service users, which increases their profits. A standard and universal trust evaluation model for MEC entities, would greatly contribute in distinguishing trustworthy service providers among others in the MEC network and their offered services. This would avoid attacks such as, malicious or fake service providers, and collusion attacks. Building trusted relationships would secure future interactions in the MEC paradigm. Consequently, service users' confidence and reliability on the MEC services will increase, leading to higher transactions rate <ns0:ref type='bibr'>(Chong et al.2013</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='1.3'>Paper Contribution</ns0:head><ns0:p>Trust is defined as the level of service user confidence towards a service provider for fulfilling its computational requirements as expected <ns0:ref type='bibr' target='#b5'>(Chahal &amp; Singh et al .2015)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>(Ruan, Durresi &amp; Alfantoukh et al. 2016)</ns0:ref>; <ns0:ref type='bibr'>(Ruan &amp; Durresi et al. 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>(Ruan &amp; Durresi et al. 2017)</ns0:ref>. A trust management system builds trusted relationships between the participating entities, by assessing each service provider provisioned services and making trust level results available to service users when requested. Therefore, service providers' trust history should be captured, to avoid trust computation prior each new interaction, which saves time and yields to users' awareness of service provider's past interactions.</ns0:p><ns0:p>To address the above limitations, this research introduces the need of a unified trust management framework that evaluates service providers' provisioned services in the MEC network considering various parameters. Trust evaluation is performed in a centralized manner, by a fully trusted third party known as cloud service manager (CSM), Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to grantee trust results credibility <ns0:ref type='bibr'>(Felix &amp; Ricardo et al. 2012)</ns0:ref>; <ns0:ref type='bibr'>(Hatzivasilis et al.2020)</ns0:ref>. This also promotes for a secure and successful transactions between service users and providers in the MEC environment <ns0:ref type='bibr'>(Rathee et al.2017)</ns0:ref>.</ns0:p><ns0:p>On the other hand, fuzzy logic concept is used to address a situation of partial truth or uncertainty of values <ns0:ref type='bibr' target='#b0'>(AbdelKader, Naik &amp; Nayak et al. 2011)</ns0:ref>. 
This is an ideal choice, when evaluating the trustworthiness of a service provider, were the trust result is a dynamic variable depending on several measured parameters computed per computational interval <ns0:ref type='bibr'>(Tariq et al.2020)</ns0:ref>. This paper presents an integrated three-tier trust management framework using fuzzy logic. The main contributions of this paper are: Phase I: constitutes of Three-Tiers 1-Evaluation of service provider SLA compliance degree in tier I, 2-Computation of service provider processing performance in tier II, 3-Measurement of service provider violations in tier III, 4-each tier trust evaluation is performed independently per transaction in a batch processing manner. Phase II: Three-Tiers Integration 5-a Matlab based overall trust fuzzy inference system (FIS) was developed to integrate the evaluated results of tiers I, II and II, in order to gain an overall trust value and status for a service provider.</ns0:p><ns0:p>In the proposed framework, an SLA trust value is evaluated using four parameters (execution time, storage, cost and maintenance), to have a standard format, which allows it for a consistent judgment <ns0:ref type='bibr' target='#b36'>(Sheikh, Sebestian &amp; Max et al. 2011)</ns0:ref>. On the other hand, the processing performance of a service provider is measured by computing the number of successful processes verses the total number of accepted jobs, while gaining the failure ratio. The violations measurement is gained by maintaining the type and number of malicious actions committed per service provider. The main aim of tier III, is to monitor any wrong actions performed by a service provider. The three-tiers evaluation is performed using the proposed mathematical equations and algorithms. The output results of each tier of the three-tiers are inserted as an input to the buildup overall trust FIS, which provides a total trust value and status for a service provider per computational interval. This gives a full representation of a service provider abidance to the SLA contract, processing performance and violations committed during its service provisioning in the MEC paradigm. This paper is organized as follows; section 2, introduces the literature review and related work. Section 3, presents the integrated three-tier trust management framework; Phase I. The three-tiers integration using fuzzy logic is detailed in section 4. Section 5, shows the simulation results. Finally, the conclusion and future work are discussed in section 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Literature Review and Related Work</ns0:head><ns0:p>Many researches had developed various trust evaluation schemes to assess service providers' provisioned services and behavior in different edge computing (EC) paradigms. This is in an attempt to improve service providers' quality of service (QoS) and decrease losses emerging due to malicious actions performed over the network. This in return, will increase service users' trust and dependency EC resources <ns0:ref type='bibr'>(Jhaveri et al.2018)</ns0:ref>. Table <ns0:ref type='table'>'</ns0:ref>1' discusses some of the related work.</ns0:p><ns0:p>The above mentioned attempts measured trust considering different parameters, yet there isn't a unified service provider trust evaluation framework that integrates all major attributes together. 
Such main attributes are; SLA compliance degree, processing performance level and violations measurement of service providers' provisioned services in the MEC network.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Proposed Integrated Three-Tier Trust Management Framework</ns0:head><ns0:p>The proposed framework aims to measure service provider trust value, considering various attributes. The model is built up of two phases, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>1'. Phase I, constitutes of three main tiers, tier I evaluates service provider's SLA compliance degree, tier II computes the processing performance value of a service provider, while tier III measures the violations committed by a service provider during its processing in the MEC network. Phase II, integrates the results of the three tiers, to gain an overall trust value and status of a service provider using fuzzy logic concept.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Acting Protocol Entities:</ns0:head><ns0:p>&#61623; Service User ' ': j th service user, is a user requesting a certain job to meet its computational needs. is &#119878;&#119880; &#119895; &#119878;&#119880; &#119895; represented by two attributes {service user unique id and name}. &#61623; Service Provider ' ': i th service provider, could be an ordinary provider or an organization supplying &#119878;&#119903; &#119894; computational services to users.</ns0:p><ns0:p>is represented by three attributes {service provider unique id, service &#119878;&#119903; &#119894; provider name and offered service type}. &#61623; Cloud Broker: acts as an intermediate entity to match between a service user seeking a suitable service provider.</ns0:p><ns0:p>A cloud broker is considered as a semi-trusted entity.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61623; Cloud Service Manager (CSM): is regarded as a fully trusted authorized party in the MEC network. CSM is responsible to perform, regulate and audit trust computation process for service providers in the MEC environment <ns0:ref type='bibr'>(Felix &amp; Ricardo et al. 2012)</ns0:ref>. CSM can exchange computed trust values of service providers within its coverage range with other CSM, in case requested. CSM also provides secure storage of trust computed results of service providers. &#61623; Network Provider: is responsible for registering a service user, provider and cloud broker to the MEC network.</ns0:p><ns0:p>It also handles network communications between all of the above entities.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>List of Assumptions:</ns0:head><ns0:p>The proposed trust framework considers the below assumptions: &#61623; Trust computation is handled by a fully trusted third party like CSM. &#61623; In case service provider1 sends part or all of user's required task to service provider2 for processing, known as process migration, service provider1 is totally responsible for user's data security. Service provider1 should also inform the user of this attempt, and offer the relative guarantee to ensure service user's data security, privacy and integrity. &#61623; A service provider could own one or more platform that offers one or more different service type. 
&#61623; Each service type of a service provider is evaluated independently regardless it's the same service provider.</ns0:p><ns0:p>&#61623; There are three main jobs requested over the MEC network; 1-processing, 2-storage, 3-both of them, known as Job_type {Job1, Job2, Job3} respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Phase I: Proposed Three-Tier Trust Evaluation Framework</ns0:head><ns0:p>Phase I, constitutes of three tiers trust evaluation; Tier I-Service level agreement evaluation, Tier II-Processing performance evaluation and Tier III-Violations measurement. In each tier, several parameters are evaluated to gain the tier trust value, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>Tier I trust results are gained by service user rating of SLA, upon process completion. Tier II is computed by evaluating the processing performance of a service provider. Whereas, Tier III provides a violations measurement and warnings received by a service provider. The three-tier computation is performed per 'n' computational interval as described in the below subsections.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Tier I -Service Level Agreement Evaluation</ns0:head><ns0:p>A service level agreement, is an agreement placed between a service provider and user, which states the job type requested by the service user and it's agreed upon computational conditions. Assume that the requested job is one of the previously mentioned three job types; known as 'a', and the total number of requested jobs received by i th service provider, is referred to as 'A'. The main conditions mentioned in an SLA should be standard for all SLAs, maintained by both parties and eligible for judgment if needed <ns0:ref type='bibr'>(Chong et al.2013)</ns0:ref>. Assume that all SLAs contain four major conditions; computational cost , required computational storage capacity GB/TB ( ), computational</ns0:p><ns0:formula xml:id='formula_0'>(S&#119862; &#119894;&#119886; ) &#119878;&#119878; &#119894;&#119886;</ns0:formula><ns0:p>maintenance duration hr/min , and agreed computational execution time hr/min be , <ns0:ref type='bibr'>(Monir,</ns0:ref> (&#119878;&#119872; &#119894;&#119886; ) (S&#119864; &#119894;&#119886; )</ns0:p><ns0:p>AbdelKader &amp; <ns0:ref type='bibr' target='#b23'>EI-Horbaty et al. 2019)</ns0:ref>. Upon job completion, a service user performs a compulsory rating process, to rate the service provider compliance to the four agreed upon conditions according to its own job execution experience. Assume that each of the above four major SLA components are rated as 'r', shown in Table <ns0:ref type='table'>'</ns0:ref>2'.</ns0:p><ns0:p>The above mentioned values could be adjusted according to own perspective. Such ratings of computational cost, storage, maintenance and execution time ( _R, _R, _R, _R)</ns0:p><ns0:p>&#119878;&#119862; &#119894;&#119886; &#119878;&#119878; &#119894;&#119886; &#119878;&#119872; &#119894;&#119886; &#119878;&#119864; &#119894;&#119886; respectively, reflects user degree of satisfaction/dissatisfaction and compliance of a service provider against the agreed upon SLA conditions. Assume that the total number of rated SLAs, equals the total number of accepted processes 'A' by i th service provider. 
Let the computed rated SLA value be , were it ranges between '0' and '20',</ns0:p></ns0:div> <ns0:div><ns0:head>&#119930;&#119923;&#119912; &#119946;&#119938; _&#119929;_&#119959;&#119938;&#119949;&#119958;&#119942;</ns0:head><ns0:p>given by equation ( <ns0:ref type='formula'>1</ns0:ref>), is as follows:</ns0:p><ns0:formula xml:id='formula_1'>_value = ( + + + )*5 (1)</ns0:formula><ns0:p>&#119930;&#119923;&#119912; &#119946;&#119938; _&#119929; &#119930;&#119914; &#119946;&#119938; _&#119929; &#119930;&#119930; &#119946;&#119938; _&#119929; &#119930;&#119924; &#119946;&#119938; _&#119929; &#119930;&#119916; &#119946;&#119938; _&#119929; Equation ( <ns0:ref type='formula'>1</ns0:ref>) is multiplied by 5, to get the result in percentage form. Thus, the average computed SLAs value, ' ', is calculated by;</ns0:p><ns0:formula xml:id='formula_2'>&#119930;&#119923;&#119912; &#119938;&#119959;&#119942;&#119955;&#119938;&#119944;&#119942; = (2)</ns0:formula><ns0:p>&#119930;&#119923;&#119912; &#119938;&#119959;&#119942;&#119955;&#119938;&#119944;&#119942; &#8721; &#119912; &#119938; = &#120783; &#119930;&#119923;&#119912; &#119946;&#119938; _&#119929;_&#119959;&#119938;&#119949;&#119958;&#119942; &#119912; SLAs average is computed per service provider as given in equation ( <ns0:ref type='formula'>2</ns0:ref>), by the CSM. This guarantees SLA evaluation results credibility. Noting that, each requested job is given a separate SLA, even if it's requested by the Manuscript to be reviewed Computer Science same service user, and performed by the same service provider. However, each job could have different computational requirements.</ns0:p><ns0:p>On the other hand, a certain threshold value is set for each dissatisfaction rated component. The dissatisfaction rate is computed per i th service provider for each component per 'n' computational interval. In case the dissatisfaction rate of any component had exceeded the predefined threshold for this component, a warning is issued for i th service provider as shown in Table <ns0:ref type='table'>'</ns0:ref>2'. This alerts service providers for any dissatisfactory results gained for the SLA components in order to enhance their processing capabilities in this component.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2.'>Tier II -Processing Performance Computation</ns0:head><ns0:p>The processing performance of i th service provider ' ', refers to the value of successful processes accomplished &#119875; &#119894; by a service provider in 'n' computational interval. Assume process 'a' ends, either as successful or incomplete. A complete process implies that the service provider had abided to all the four processing conditions and is referred to as process success, ' '. On the other hand, an incomplete process could be a result of one of the below three states;</ns0:p><ns0:p>&#119875;&#119878; &#119894; 1-a service provider had started the job processing but didn't complete it within its agreed conditions, 2-a service provider didn't start the job processing though accepted the job, 3-a service provider had started the job processing and is proceeding within its agreed upon conditions, and didn't exceed its process execution time ). However, the service user wishes to terminate the job processing (&#119878;&#119864; &#119894;&#119886; transaction. States 1 and 2, are recommended as incomplete job processing, known as processing incompliance ' ' by a &#119875;&#119868; &#119894; service provider. State 3 is referred to as user termination case, referred as ' '. 
Tier II evaluates the processing &#119880;&#119879; &#119894; performance of i th service provider, in terms of: &#61623; Average processing success ratio ' ': is considered as the number of successful processes implemented &#119860;&#119875;&#119878; &#119894; &#119875;&#119878; &#119894; by i th service provider, divided by the total number of accepted processes 'A', as shown in equation ( <ns0:ref type='formula'>3</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_3'>= (3) &#119912;&#119927;&#119930; &#119946; &#119927;&#119930; &#119946; &#119912; &#61623; Average processing incompliance ratio '</ns0:formula><ns0:p>': is the number of accepted processes by a service provider but &#119860;&#119875;&#119868; &#119894; failed to perform or complete them ' ', is computed by equation ( <ns0:ref type='formula'>4</ns0:ref> ': where a service user feels dissatisfied for any reason, and decides to &#119860;&#119880;&#119879;&#119877; &#119894; terminate its computational transaction, as represented in state no.3. This is known as user termination ratio, ' ', and computed by equation ( <ns0:ref type='formula'>5</ns0:ref>);</ns0:p><ns0:formula xml:id='formula_4'>&#119880;&#119879;&#119877; &#119894; = (5) &#61664; State 3 &#119912;&#119932;&#119931;&#119929; &#119946; &#119932;&#119931;&#119929; &#119946;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#119912;</ns0:head><ns0:p>The processing performance of i th service provider is gained by equation ( <ns0:ref type='formula' target='#formula_5'>6</ns0:ref>);</ns0:p><ns0:formula xml:id='formula_5'>&#119927; &#119946; =<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>&#119927; &#119946; &#119912;&#119927;&#119930; &#119946; where 0 &#8804; &#8804; 1 for 'n=1' computational interval &#119927; &#119946; Equation ( <ns0:ref type='formula' target='#formula_5'>6</ns0:ref>), shows the processing performance degree of a service provider in the MEC environment. A predefined threshold is set for each of processing incompliance and user termination ratio values. In case a service provider computed results of these two parameters had exceeded this threshold within 'n' computational interval, a relative warning is issued for i th service provider as shown in Table <ns0:ref type='table'>'</ns0:ref>3'. This is to alert i th service provider for such incidents.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.3'>Tier III: Violations Measurement</ns0:head><ns0:p>Tier III proposes two algorithms; 1-complain and evidence algorithm, 2-violations measurement algorithm. Data privacy leakage <ns0:ref type='bibr' target='#b42'>(Talal &amp; Quan et al. 2015)</ns0:ref>, incidents occurring by service providers are monitored through the 'complain and evidence' algorithm as described in section 'A'. On the other hand, the violations measurement algorithm presented in section 'B', counts the warnings gained by a service provider per 'n' computational interval, to gain its trust value in tier III.</ns0:p></ns0:div> <ns0:div><ns0:head>A) Data Privacy Leakage Complain Algorithm</ns0:head><ns0:p>A service provider is responsible for service user data privacy while a user is provisioning provider's services or applications, by installing the necessary data protection mechanisms. 
However, a service user may face a situation where it discovers that its own private data had been routed by a service provider, without prior permission for doing Manuscript to be reviewed Computer Science so, this hinders user's data privacy <ns0:ref type='bibr'>(Deepa et al.2020)</ns0:ref>; <ns0:ref type='bibr' target='#b41'>(Talal &amp; Quan et al. 2013)</ns0:ref>. A data privacy leakage is known as a security threat, where it can lead to cyber-attacks, social issues, or cause user robbery by different means <ns0:ref type='bibr'>(Javed et al.2021)</ns0:ref>; <ns0:ref type='bibr'>(Iwendi et al.2020)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>(Vasani &amp; Chudasama et al. 2018)</ns0:ref>. To monitor such service provider actions, a service user sends a complain message ' ' to the CSM, against the suspected service provider including an evidence for M DPR this incident. The complaint message ' ' should include;</ns0:p><ns0:p>M DPR 1-screenshot of service user data appearing in unknown platform for the user, 2-service user data ownership evidence (could be a previous email sent from the user to the respective service provider including this data), 3-transaction SLA including; &amp; .</ns0:p></ns0:div> <ns0:div><ns0:head>&#119878;&#119903; &#119894; &#119878;&#119880; &#119895;</ns0:head><ns0:p>The mentioned conditions in the complaint message, ' ' are investigated by the CSM for verification. If the</ns0:p></ns0:div> <ns0:div><ns0:head>M DPR</ns0:head><ns0:p>CSM investigation results are approved to be true against i th service provider, this incident is considered as incident 1 and an alarm message 'incident1_ ' is sent to the accused service provider accordingly, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>3'.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119878;&#119903; &#119894;</ns0:head><ns0:p>Incident 1 is counted and stored by the CSM.</ns0:p><ns0:p>In case a data privacy leakage complain had been submitted by a different service user, but against the same service provider, CSM investigates the case. If CSM approves the complaint message to be true as mentioned previously, CSM considers these incidents as data privacy leakage attack, where the accused service provider is penalized by sending to it warning no.7, 'W7_ '. On the other hand, such warning degrades the trust value for the accused service &#119878;&#119903; &#119894; provider as shown in Table <ns0:ref type='table'>'</ns0:ref>4'. This is to avoid recurrent occurrence of such attacks in the future by malicious service providers.</ns0:p><ns0:p>The data privacy leakage complain algorithm presented in Figure <ns0:ref type='figure'>'</ns0:ref>3', traces and counts data leakage actions committed by i th service provider in 'n' computational interval, with different users. These incidents and warning are being stored in a historical database by the CSM per i th service provider.</ns0:p></ns0:div> <ns0:div><ns0:head>B) Violations Measurement Computation</ns0:head><ns0:p>The violations measurement computation algorithm measures the warnings imposed on i th service provider in tiers I and II, (during SLA evaluation and processing performance computation), and in case of a data privacy leakage incident. The relationship between the three-tier framework warnings is shown in Figure <ns0:ref type='figure'>'</ns0:ref>4'.</ns0:p><ns0:p>Each warning type is given a number, from 1 to 7, as shown in Table <ns0:ref type='table'>'</ns0:ref>4'. 
A decreasing factor '&#952;' is given for warnings imposed on i th service provider, which is computed according to the number and reason of warning. Noting that '&#952;' is a changing variable, that could be assigned different values according to own perspective. Assume the total warnings value for i th service provider computed in tier III, be ' ', which is calculated by equation ( <ns0:ref type='formula' target='#formula_6'>7</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_6'>&#119881; &#119894; = &#8721; &#952;W1, &#952;W2, &#952;W3, &#8230;, &#952;W7<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>&#119933; &#119946; where 0 &#8804; &#8804; 1 &#119933; &#119946; Given that ' ' gives the total warnings value, hence, the violations measurement ' ', is computed by;</ns0:p><ns0:formula xml:id='formula_7'>&#119881; &#119894; &#119879;&#119881; &#119894; = 1 -<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>&#119931;&#119933; &#119946; &#119933; &#119946; The violations measurement of i th service provider is computed in tier III, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>, by the CSM. In this algorithm, CSM checks whether i th service provider had received any warnings (W1_ , &#8230;, W7_ ). If i th service &#119878;&#119903; &#119894; &#119878;&#119903; &#119894; provider received any warning, it computes its relative '&#952;' and ' ' value by equation ( <ns0:ref type='formula' target='#formula_6'>7</ns0:ref>) accordingly. 'T ' is</ns0:p><ns0:formula xml:id='formula_8'>&#119881; &#119894; &#119881; &#119894;</ns0:formula><ns0:p>computed by equation ( <ns0:ref type='formula' target='#formula_7'>8</ns0:ref>), to gain the violations measurement for i th service provider.</ns0:p><ns0:p>The violations measurement algorithm presented in Figure <ns0:ref type='figure'>'</ns0:ref>5', checks the warnings received by i th service provider, to compute its total warnings value ' '. Hence, the total warnings value is deducted from '1', to gain its violations &#119881; &#119894; measurement '</ns0:p><ns0:p>'. In case a service provider didn't receive any warnings, this service provider gains the full value &#119879;&#119881; &#119894; of the violations measurement ' ', which is equal to '1'.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119879;&#119881; &#119894;</ns0:head></ns0:div> <ns0:div><ns0:head n='3.4'>Summary</ns0:head><ns0:p>The proposed three-tier protocol in phase I, aims to maintain the trust value of i th service provider considering three main attributes; SLA compliance degree, processing performance level and violations measurement. This is in an attempt to optimize trust results credibility. In phase II, an integration of the three-tier results is provided to gain an overall trust value of i th service provider, building up the whole framework. This disseminates service providers' processing performance and pervious interactions in the MEC network. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Phase II: Three-Tier Integration using Fuzzy Logic</ns0:head><ns0:p>The overall trust value of i th service provider is an aggregate value of the computed end results of each of the above three tiers (SLA evaluation, processing performance computation and violations measurement). 
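Before that Phase II integration, a short sketch may help tie equations (3) to (8) together. The snippet below is a simplified illustration rather than the authors' code: the θ weights follow Table 4, while the Tier II thresholds and the function names are assumptions made for the example.

```python
# Simplified sketch (not the authors' code) of the Tier II processing
# performance in equations (3)-(6) and the Tier III violations measurement
# in equations (7)-(8). The theta weights follow Table 4; the Tier II
# thresholds are assumed example values.

THETA = {"W1": 0.1, "W2": 0.1, "W3": 0.1, "W4": 0.1,   # Tier I (SLA) warnings
         "W5": 0.2, "W6": 0.1,                          # Tier II warnings
         "W7": 0.3}                                     # data privacy leakage

def tier2_processing(ps, pi, utr, a, pi_threshold=0.2, utr_threshold=0.2):
    """Equations (3)-(5): success, incompliance and termination ratios."""
    aps, api, autr = ps / a, pi / a, utr / a
    warnings = []
    if api > pi_threshold:
        warnings.append("W5")
    if autr > utr_threshold:
        warnings.append("W6")
    return aps, warnings                 # P_i = APS_i, equation (6)

def tier3_violations(warnings):
    """Equations (7)-(8): total warning weight V_i and measurement TV_i."""
    v = sum(THETA[w] for w in warnings)
    return max(0.0, 1.0 - v)

# Example: 10 accepted jobs, 8 successful, 1 incompliant, 1 user-terminated,
# plus a W1 cost warning carried over from Tier I.
p_i, tier2_warnings = tier2_processing(ps=8, pi=1, utr=1, a=10)
print(p_i, tier2_warnings, tier3_violations(["W1"] + tier2_warnings))
# -> 0.8 [] 0.9
```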
Hence, i th service provider overall trust value is computed using the developed fuzzy inference system (FIS) named 'Overall_Trust', as shown in Figure <ns0:ref type='figure'>'</ns0:ref>1' and described in sections 4.1 and 4.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Using Fuzzy Logic for Trust Computation in Mobile Edge Computing</ns0:head><ns0:p>Fuzzy logic is a form of artificial intelligence, were the input parameters are given to a fuzzy inference system as unclear or uncertain information, denoting partial truth of a parameter <ns0:ref type='bibr'>(Nagarajan et al.2017)</ns0:ref>. This is in contrast of Boolean logic, which belongs to discrete numbers, either 0 or 1. Such input values range between 0 and 1 and are placed in membership functions to distinguish each range of values, known as fuzzification process <ns0:ref type='bibr'>(Sule et al.2017</ns0:ref>). If-thenelse rules are set to the fuzzy inference rule base editor, in the Matlab program, in order to allocate each range of input values to a specific output decision. The output result is converted into a crisp value known as defuzzification process. Fuzzy logic concept has great advantages, like flexibility, fast response time, low cost and logical reasoning. For these reasons, fuzzy logic was chosen to compute service providers overall trust value and status in this framework, since MEC is a highly dynamic environment. Service providers trust computation helps service users during their service provider selection, and balances between offered services and cost.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Integration of Tiers I, II and III Results using Fuzzy Logic</ns0:head><ns0:p>Fuzzy logic concept is used to integrate the end results of tiers I, II, and III of phase I, in order to gain an overall trust value for i th service provider, during phase II computation, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>1'. Fuzzy logic toolbox in Matlab was used to develop the 'overall trust' system. Trapezoid-cure shape was used to represent each of the three fuzzy inputs of the three tiers, (SLA evaluation, processing performance computation, violations measurement), and their relative fuzzy membership functions, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>6'. Membership functions input values range between [0,100], which are converted to fuzzy linguistic input variables, forming three fuzzy sets (low, medium, high), during the fuzzification process, as presented in Table <ns0:ref type='table'>'</ns0:ref>5'.</ns0:p><ns0:p>Centroid defuzzification was performed to obtain a crisp overall trust value for i th service provider. The triangularshape curve represents the output value membership functions (low, medium, high, excellent), as shown in Figure <ns0:ref type='figure'>'</ns0:ref>7'. i th service provider overall trust value belongs to one of the four fuzzy output sets. Twenty-one inference rules were added to the Mamdani inference system, to compute the overall trust status of i th service provider, considering its SLA evaluation, processing performance computation and violations measurement. Each rule, used AND logical operator, to co-relate between input and output variables, as given in Table <ns0:ref type='table'>'</ns0:ref>6'.</ns0:p><ns0:p>The fuzzification of the three-tiers values participate in forming the overall trust fuzzy inference system, which computes i th service provider overall trust in the MEC network, as presented in Figure '8'. 
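A hand-rolled sketch of this style of Mamdani inference is shown below. It is an illustration only: the membership-function breakpoints and the three example rules are assumptions standing in for the trapezoidal and triangular functions and the 21-rule base built in the Matlab Fuzzy Logic Toolbox, and the three tier scores are assumed to be scaled to [0, 100].

```python
# Hand-rolled Mamdani sketch of an "Overall_Trust"-style FIS. The membership
# breakpoints and the three example rules are illustrative assumptions; the
# actual system uses trapezoidal/triangular functions and 21 rules built in
# the Matlab Fuzzy Logic Toolbox. Inputs are assumed scaled to [0, 100].
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function evaluated on array x."""
    return np.clip(np.minimum((x - a) / max(b - a, 1e-9),
                              (d - x) / max(d - c, 1e-9)), 0, 1)

def trimf(x, a, b, c):
    """Triangular membership function evaluated on array x."""
    return trapmf(x, a, b, b, c)

universe = np.linspace(0, 100, 1001)
IN = {"low": (0, 0, 25, 45), "medium": (35, 45, 60, 70), "high": (60, 75, 100, 100)}
OUT = {"low": (0, 15, 35), "medium": (30, 50, 65),
       "high": (60, 75, 85), "excellent": (80, 95, 100)}

def member(value, label):
    return float(trapmf(np.array([value]), *IN[label])[0])

def overall_trust(sla, perf, viol):
    """Aggregate the three tier scores into a crisp overall trust value."""
    # Three illustrative AND (min) rules standing in for the full rule base.
    rules = [(("high", "high", "high"), "excellent"),
             (("medium", "medium", "high"), "medium"),
             (("low", "low", "low"), "low")]
    aggregated = np.zeros_like(universe)
    for (l_sla, l_perf, l_viol), out_label in rules:
        strength = min(member(sla, l_sla), member(perf, l_perf), member(viol, l_viol))
        aggregated = np.maximum(aggregated,
                                np.minimum(strength, trimf(universe, *OUT[out_label])))
    if aggregated.sum() == 0:
        return 0.0
    return float((universe * aggregated).sum() / aggregated.sum())   # centroid

print(overall_trust(sla=85, perf=80, viol=90))   # ~91.7, i.e. "excellent"
```

The min operator realises the AND of each rule, max aggregates the clipped output sets, and the final ratio is the centroid defuzzification described above.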
A smooth surface is shown in Figure <ns0:ref type='figure'>'</ns0:ref>9', while comparing the processing performance and SLA evaluations against the overall trust value.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Phase II: Summary</ns0:head><ns0:p>The main aim of phase II, is to provide an overall trust value for a service provider in the MEC network. This is done by gaining the value of service provider's successful interactions and the violations degree measurement, while maintaining service users' opinion of the SLA. Through the fuzzy logic concept, the three tiers' end results were integrated to produce i th service provider overall trust value and status.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Simulation Results and Discussion</ns0:head><ns0:p>The integrated three-tier trust management framework simulation was performed using Matlab R2017a. Simulation results are shown in section 5.1. A comparison with previous protocols is presented in section 5.2. Section 5.3 depicts the proposed framework achievements.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Integrated three-tier trust management framework Simulation Results</ns0:head><ns0:p>A Matlab based simulation was developed to show phase I and II results of the integrated three-tier trust management framework. Random number generation was used for the input values of each of: SLA rating components; number of received processes (successful, failed and user termination ratio); and privacy leakage incident. The overall trust fuzzy inference system was developed using Matlab fuzzy logic designer tool. However, each of the simulated service providers was forced a specific range of values, to validate the developed system equations and twenty-one fuzzy inference system rules set, in various conditions. The overall trust FIS contains three input membership functions. Each membership function presents the computed results of each tier (SLA, processing performance and violations measurement).</ns0:p><ns0:p>System setup:</ns0:p><ns0:p>1. Assume five different service providers' cases studies. 2. Each service provider initial trust value is '0'. 3. Three-tier computation was performed in n=1 computational interval. 4. All service providers received the same number of job requests, in one job_type {Job3}. 5. Hardware PC configuration was; core i7, RAM 6 GB and hard disk 1 Tera. The three-tier simulation results of phase I for the five service providers are shown in Table <ns0:ref type='table'>'</ns0:ref>7'. These results are shown for i th service provider, upon the completion of 'n' computational interval. Phase I presents detailed computational results and points of enhancement for each service provider per tier. The warning methodology helps a service provider to know its points of weakness.</ns0:p><ns0:p>Phase II simulation results are shown in Table <ns0:ref type='table'>'</ns0:ref>8'. The overall trust FIS computes the trust value and status per service provider in one of the predefined job types. In case a service user is seeking the processing of job3, overall trust value and status of the available service providers in this job type is shown as in Table <ns0:ref type='table'>'</ns0:ref>8'. 
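As a rough indication of how such a run could be wired together, the snippet below strings the earlier sketches into a five-provider simulation with randomly generated ratings and process outcomes. It assumes the illustrative helpers tier1_sla, tier2_processing, tier3_violations and overall_trust defined in the previous sketches are in scope, and it is not the Matlab simulation used for the reported results.

```python
# Rough sketch of wiring a simulation run together. It assumes the
# illustrative helpers tier1_sla, tier2_processing, tier3_violations and
# overall_trust from the earlier sketches are in scope; random inputs stand
# in for the ratings and process outcomes of five providers.
import random

random.seed(0)
for provider in range(1, 6):
    jobs = 20                                             # accepted jobs A
    ratings = [tuple(random.randint(0, 5) for _ in range(4)) for _ in range(jobs)]
    sla_avg, sla_warnings = tier1_sla(ratings)

    ps = random.randint(10, jobs)                         # successful jobs
    pi = random.randint(0, jobs - ps)                     # incompliant jobs
    utr = jobs - ps - pi                                  # user terminations
    p_i, perf_warnings = tier2_processing(ps, pi, utr, jobs)

    leak = ["W7"] if random.random() < 0.1 else []        # privacy leakage attack
    tv_i = tier3_violations(sla_warnings + perf_warnings + leak)

    trust = overall_trust(sla_avg, p_i * 100, tv_i * 100)
    print(f"Sr{provider}: SLA={sla_avg:.1f} P={p_i:.2f} TV={tv_i:.2f} trust={trust:.1f}")
```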
However, a service user could reveal the three-tier results, with the overall trust value and status of the five service providers for serious selection, as shown in Figure <ns0:ref type='figure'>'</ns0:ref>11'.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>'</ns0:ref>10' gives a full representation for each of the simulated five service providers processing performance in the MEC network.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Comparison with Previous Protocols</ns0:head><ns0:p>A comparison between the proposed three-tier trust management framework evaluated parameters, and previous protocols measured parameters in cloud computing and mobile edge computing is presented in Table <ns0:ref type='table'>'</ns0:ref>9'.</ns0:p><ns0:p>As shown in Table <ns0:ref type='table'>'</ns0:ref>9', the proposed framework had successfully measured various service performance parameters for service providers, with a warning and history capturing methodology, in comparison to previous works.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>The Intergrated Three-Tier Trust Management Framework Achievements</ns0:head><ns0:p>The Proposed Integrated Three-Tier Trust Management Framework achieves the following:</ns0:p><ns0:p>&#61623; Service provider assessment based on its behavior during its interactions on the MEC network according to its predefined job type and not by the quantity of hardware or software resources it possess, or job type offered. &#61623; Service providers' awareness of their faulty points, while displaying the degree of improvement needed per component. &#61623; Future prediction for service provider performance competency per tier. &#61623; Malicious service provider detection and their points of attack through the violations measurement, warnings mechanism and the low trust fuzzy membership function. &#61623; Trust evaluation with minimal human interaction to maximize trust results credibility. &#61623; Dynamic trust computation for service providers, per 'n' computational interval, considering history capturing mechanism. &#61623; Helps in cost estimation, for example, if a service provider's service cost is high, this could be justified if its trust value is high in the three evaluated components. Service users' awareness and guidance during their service selection for credible and trustworthy service providers. Consequently, service users will gain confidence and increase their dependency on the MEC network services. &#61623; Low computational trust evaluation time complexity. This is mainly due to; 1-the simplicity of the model equations, 2-history capturing makes trust results update easy, 3-the developed fuzzy inference system. Thus, the model can be used for large number of service providers in the MEC network, which is time efficient. &#61623; Computational storage saving, were there is no data redundancy during trust computation.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>Conclusion and Future Work</ns0:head><ns0:p>Finding credible service providers in a vast ambiguous environment like mobile edge computing had been a very hard task faced by service users. The proposed integrated three-tier trust management framework, measures the trust value of a service provider considering three main attributes; SLA compliance, processing performance and violations measurement, in tiers I, II and III of phase I. Tiers I, II and III constitutes of different evaluated parameters, which eliminates the opportunity of false ratings and collusion attacks by service users. 
However, the three-tier protocol shows service provider's points of weakness or strength per tier. The three-tier results are aggregated to give an overall trust Manuscript to be reviewed Computer Science value and status for a service provider, using the developed overall fuzzy inference system in phase II. Trust evaluation in performed in a history capturing manner to ease trust updating process.</ns0:p><ns0:p>From a service user perspective, the three-tier protocol could also be beneficial, since service users may have different preferences, in spite they are requesting the same job type. For example, a service user may search for a service provider which will abide to its SLA, in terms of time compliance, for time critical operations, rather than its processing performance quality. This obviously helps service users and organizations in their selection for credible service providers to fulfil their computational demands.</ns0:p><ns0:p>The integrated three-tier framework computational simulation results show the trust value of a service provider in terms of SLA compliance, processing performance, and violations occurrence. The proposed warnings protocol, shows each service provider points of weakness, which supports its improvement. Finally, the results of the three-tier framework integration reflects the overall trust value and status of a service provider in the MEC network.</ns0:p><ns0:p>In the future, we plan to measure service user trust value, which could be a factor multiplied by its rated value. Therefore, the higher this factor, the more likely this rating to be true, which increases the service provider computed trust results reliability. On the other hand, service user trust value can be considered as a filtering mechanism for falsified ratings. However, the transaction cost should be considered as an affecting weight, during trust computation. Usually transactions of high cost are rarely mentioned to be fake. The active work time of a service provider should be measured in comparison to its total subscription life time in the MEC network. This actually reflects dedicated operational service providers. Manuscript to be reviewed Trust evaluation results were totally dependent upon service users' feedback opinion, which may led to less reliable trust results. <ns0:ref type='bibr' target='#b19'>(Ma &amp; Li et al. 2018)</ns0:ref> EC</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>Trust was measured by evaluating deployed data security and privacy mechanisms in terms of resource identity, performance and quality of service.</ns0:p><ns0:p>Trust updating and sharing was not addressed, which weakens the trust evaluation efficiency of the model.</ns0:p></ns0:div> <ns0:div><ns0:head>(Deng et al. 2020) MEC</ns0:head><ns0:p>A reputation-based trust evaluation model and management for service providers was introduced that measured trust in terms of identity verification, deployed hardware capabilities (CPU, memory, disk, online time) and behavior.</ns0:p><ns0:p>Trust results were derived from service consumers' previous interactions' ratings. Unfortunately, such users' ratings may not be trustworthy enough. <ns0:ref type='bibr'>(Ruan, Durresi &amp; Uslu et al.2018)</ns0:ref> MEC Service provider's trustworthiness is measured according to its performance per transaction with a service user. 
A degree of confidence measure is associated accordingly that shows user expectation of service provider future behavior.</ns0:p><ns0:p>The model depended on users' ratings, who could have different perspectives which may negatively affect trust evaluation accuracy.</ns0:p><ns0:p>Monitoring and comparing such ratings in userprovider relationships is time consuming and may produce redundant data. <ns0:ref type='bibr' target='#b17'>(Khan, Chan &amp; Chua et al. 2018</ns0:ref>)</ns0:p></ns0:div> <ns0:div><ns0:head>CC</ns0:head><ns0:p>Service providers' quality of service was evaluated in terms of service availability, response time and throughput.</ns0:p><ns0:p>Fuzzy rules were used to predict future behavior of a cloud service provider. The model helped service users in their service cost estimation.</ns0:p><ns0:p>( <ns0:ref type='bibr' target='#b2'>Akhtar et al. 2014</ns0:ref>) CC Service provider performance was evaluated in terms of infrastructure (response time and resource utilization with respect to the number of users) and application performance (in terms of; response time to a user, volume of data linked and processing migration).</ns0:p><ns0:p>Service provider performance evaluation was computed using fuzzy logic.</ns0:p><ns0:p>Results managed to conclude the service provider performance level. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Fig. 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Fig.6. FIS input variables membership functions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Fig. 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Fig.7. FIS output membership functions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 1 FIS</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>by computing users' opinion in service provider's processing cost, storage, maintenance and execution time.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>SLA components rating &amp; warnings.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 . SLA components rating &amp; warnings Variable Dissatisfaction Rate Satisfaction Rate Consequences of Dissatisfaction Rate &gt; Threshold SLA Drawback Warnings</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>_R _R &#119878;&#119862; &#119894;&#119886; &#119878;&#119878; &#119894;&#119886; _R &#119878;&#119872; &#119894;&#119886; _R &#119878;&#119864; &#119894;&#119886;</ns0:cell><ns0:cell>0 &#8804; r &#8804; 2 0 &#8804; r &#8804; 2 0 &#8804; r &#8804; 2 0 &#8804; r &#8804; 2</ns0:cell><ns0:cell>3 &#8804; r &#8804; 5 3 &#8804; r &#8804; 5 3 &#8804; r &#8804; 5 3 &#8804; r &#8804; 5</ns0:cell><ns0:cell>W1_ issued &#119878;&#119903; &#119894; W2_ issued &#119878;&#119903; &#119894; W3_ issued &#119878;&#119903; &#119894; W4_ issued &#119878;&#119903; &#119894;</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 . Tier II imposed warnings. Tier II-Variables Processing Performance Drawbacks Warnings &gt;</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Threshold &gt; Threshold &#119860;&#119880;&#119879;&#119877; &#119894; &#119860;&#119875;&#119868; &#119894;</ns0:cell><ns0:cell>W5_ issued &#119878;&#119903; &#119894; W6_ issued &#119878;&#119903; &#119894;</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 . 
Warning number, reason and '&#952;' decreasing factor.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Warning name</ns0:cell><ns0:cell>Reason</ns0:cell><ns0:cell>'&#952;' decreasing factor</ns0:cell><ns0:cell>Tier</ns0:cell></ns0:row><ns0:row><ns0:cell>W1_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Exceeded cost dissatisfaction threshold.</ns0:cell><ns0:cell>&#952;W1 = 0.1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>W2_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Exceeded storage dissatisfaction threshold.</ns0:cell><ns0:cell>&#952;W2 = 0.1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>W3_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Exceeded maintenance dissatisfaction threshold.</ns0:cell><ns0:cell>&#952;W3 = 0.1</ns0:cell><ns0:cell>I-SLA</ns0:cell></ns0:row><ns0:row><ns0:cell>W4_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Exceeded agreed computational execution time dissatisfaction threshold.</ns0:cell><ns0:cell>&#952;W4 = 0.1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>W5_&#119930;&#119955; &#119946; W6_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Exceeded processing incompliance threshold. Exceeded user termination ratio threshold.</ns0:cell><ns0:cell>&#952;W5 = 0.2 &#952;W6 = 0.1</ns0:cell><ns0:cell>II-Processing Performance</ns0:cell></ns0:row><ns0:row><ns0:cell>W7_&#119930;&#119955; &#119946;</ns0:cell><ns0:cell>Data privacy leakage incident.</ns0:cell><ns0:cell>&#952;W7 = 0.3</ns0:cell><ns0:cell>III-Violations Measurement</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58033:2:0:NEW 4 Aug 2021)</ns0:note> </ns0:body> "
"Dear Respected Editor in Chief, Esteemed Reviewers, Hope this mail finds you well and safe. First of all I would like to deeply thank you for your valuable and informative comments that show the great effort and time exerted in my research paper. I would like to also thank you so much for sending me the valuable and recent works, which I really benefited from. I had accommodated all your requests as below. Reviewer 1 (Anonymous) 1. Even though the motivations of the current work are clearly discussed, the contributions of the paper are not discussed. What are the main contributions of the current work? -Section 1.3 was added to discuss the paper contributions. (lines 104-112) 2. The literature review section can be summarized as a table. -Done (line 132). 4. The main contribution of any paper is the proposed work. This section is very small in this paper. It has to be elaborated with discussion on the novelty and more detailed discussion on the proposed work. -Section was extended to include more details. (lines 88-122) 5. The limitations and the future scope of the current work can be discussed in conclusion. Done (lines 462-468). Reviewer 2 (Rutvij Jhaveri) Basic reporting Many grammatical errors and typos are found (all the way from Abstract to Conclusion). -Corrected Experimental design Comparison of the proposed work with existing work can be useful. -Done (line 421). 1. The flow of the introduction should be.. 1-introduce the problem at high level, 2-discuss about some of the existing solutions, 3-identify the gap or scope of improvement, and 4-then discuss in order to address the identified gaps what is the methodology we are using after that 5-we have to list out the contributions -Done 1-Problem statement (line 51), 2-some existing solutions (line 64), 3-motivation (lines 75-87), 4- what is the methodology we are using (lines 89-103), 5- contributions (lines 104-112). 2. Literature review needs to be revised majorly and extended by including some latest articles. Comparison of different works can be summarized in a table. -Done (line 132). 3. Section 3 contains very less contents. Can it be merged with Section 4 along with Section 5? -Sections 3 & 4 were merged. 4. Figures quality are poor. -Done 5. Comparison of the proposed work with existing work can be useful. -Done in Section 5.2 (line 421). 6. Conclusion should be revised to write the concluded facts and should not reflect abstract. -Done (lines 445-457) Reviewer 3 (Anonymous) - Please highlight the contribution clearly in the introduction -Accommodated in section 1.3 (lines 104-112). - this paper lacks in Novelty of the proposed approach. The author should highlight the contribution clearly in the introduction and provide a comparison note with existing studies. -Accommodated in sections 1.3 (paper contribution) & section 5.2 (comparison with previous work (line 421)). - Some Paragraphs in the paper can be merged and some long paragraphs can be split into two. -Done - The quality of the figures can be improved more. Figures should be eye-catching. It will enhance the interest of the reader. -Done - Figure 11, only the graph area should be added to the paper. remove grey borders. and the same for others. -Done - The background of figure 'Figure 12 Service Providers Overall Trust results' should be white with font color black. -Done Experimental design - What are the computational resources reported in the state of the art for the same purpose? -Computational resources aren’t always mentioned in the previous works. 
Please kindly note that evaluation criteria used does not require heavy computational resources. Validity of the findings - What are the evaluations used for the verification of results? We simulated different service providers, using random number generation, to validate different trust states of service providers. We used logical reasoning to interpret the trust results of the five different service providers. - Clearly highlight the terms used in the algorithm and explain them in the text. Done throughout the paper. I hope that my revised paper will meet your esteemed expectations. I look forward my hearing from your respectful journal. Thanks so much for your time and great efforts. "
Here is a paper. Please give your review comments after reading it.
234
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>While traditional methods for calling variants across whole genome sequence data rely on alignment to an appropriate reference sequence, alternative techniques are needed when a suitable reference does not exist. We present a novel alignment and assembly free variant calling method based on information theoretic principles designed to detect variants have strong statistical evidence for their ability to segregate samples in a given dataset. Our method uses the context surrounding a particular nucleotide to define variants. Given a set of reads, we model the probability of observing a given nucleotide conditioned on the surrounding prefix and suffixes of length k as a multinomial distribution. We then estimate which of these contexts are stable intra-sample and varying inter-sample using a statistic based on the Kullback-Leibler divergence.</ns0:p><ns0:p>The utility of the variant calling method was evaluated through analysis of a pair of bacterial datasets and a mouse dataset. We found that our variants are highly informative for supervised learning tasks with performance similar to standard reference based calls and another reference free method (DiscoSNP++). Comparisons against reference based calls showed our method was able to capture very similar population structure on the bacterial dataset. The algorithm's focus on discriminatory variants makes it suitable for many common analysis tasks for organisms that are too diverse to be mapped back to a single reference sequence.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Many sequencing studies begin by the transformation of raw sequence data to relatively few features, usually single-nucleotide variants. Typically, this is done by aligning the individual sequence reads to a reference genome to identify single nucleotide differences from the reference.</ns0:p><ns0:p>Although straightforward, the genome alignment approach has several shortcomings:</ns0:p><ns0:p>&#8226; a suitable reference may not exist; this is especially important for unstable genomes such the anuploid genomes frequently encountered in cancer <ns0:ref type='bibr'>(Beroukhim, Mermel, Porter, et al., 2010)</ns0:ref>, and also for some organisms with large genetic diversity such as bacteria <ns0:ref type='bibr' target='#b13'>(Ochman, Lawrence, and Groisman, 2000)</ns0:ref>;</ns0:p><ns0:p>&#8226; selecting a reference may be difficult when there is uncertainty about what has been sampled; and</ns0:p><ns0:p>&#8226; it performs poorly when a sample contains significant novel material, i. e., sequences that are not simple variations of the reference.</ns0:p><ns0:p>Existing reference-free approaches are either based on assembly <ns0:ref type='bibr' target='#b12'>(Li, 2012)</ns0:ref>, which possibly introduces misassembly biases, or on searching for structural motifs within a universal de Bruijn graph of all samples <ns0:ref type='bibr' target='#b14'>(Peterlongo, Schnel, Pisanti, et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b10'>Iqbal, Caccamo, Turner, et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b19'>Uricaru, Rizk, Lacroix, et al., 2015)</ns0:ref> that correspond to simple variants.</ns0:p><ns0:p>We present a variant calling algorithm to generate features from unaligned raw reads. 
Rather than attempting to identify all genetic variation within a given set of samples, we instead focus on selected variants that have have strong statistical evidence for their ability to segregate samples in a given dataset. Such variants form useful features for many tasks including genomic prediction of a given phenotype, modelling population structure or clustering samples into related groups.</ns0:p><ns0:p>Our method uses the context surrounding a particular nucleotide to define variants. Given a set of reads, we model the probability of observing a given nucleotide conditioned on the surrounding prefix and suffix nucleotide sequences of length k as a multinomial distribution. We then estimate which of these contexts form potential variants, i. e., those that are stable intrasample and varying inter-sample, using a statistic based on the Kullback-Leibler divergence.</ns0:p><ns0:p>Given this list of candidate variants, we call those variants by maximum likelihood of our multinomial model. Furthermore, we show that the size of the context k can be chosen using the minimum message length principle <ns0:ref type='bibr' target='#b20'>(Wallace and Boulton, 1968</ns0:ref>) and that our context selection statistic is &#947;-distributed. Consequently, k can be determined from the data and the contexts surrounding variants can be selected with statistical guarantees on type-1 errors.</ns0:p><ns0:p>The utility of variant calling method was evaluated through simulation experiments and empirical analysis of a pair of bacterial datasets and a mouse dataset. Through simulations we showed the method has good power and false positive rate for detecting variants, though the ability to detect rare variants required high depth and large number of samples.</ns0:p><ns0:p>Our empirical results indicated our variants are highly informative for antimicrobial resistance phenotypes on the bacterial datasets and were able to accurately capture population structure.</ns0:p><ns0:p>On the mouse dataset, the variants were also found to be good for modelling coat colour.</ns0:p><ns0:p>Further investigations of the variants found for the bacterial dataset using a known reference sequence revealed variants associated with boxB repeat regions, a repeat previously used for population structure mapping <ns0:ref type='bibr' target='#b16'>(Rakov, Ubukata, and Robinson, 2011)</ns0:ref>, suggesting the model can generate features for more complex genetic elements. These results suggest the variants are capturing genotypic variation well and can model heritable traits in different organisms. Our proposed method will be of strongest utility when modelling of population structure, phylogenetic relationships or phenotypes from genotype for large scale datasets of organisms with either variable genomes (as is the case for many bacteria), or those lacking a reference genome.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Methods</ns0:head><ns0:p>Our variant calling method comprises two steps: modelling the probability that a base is observed in a sample given the surrounding context; and determining which contexts surround variable bases in a population represented by several samples. The former provides a mechanism to call variants in a sample given a set of contexts, and the latter determines the set of contexts associated with variants. 
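As a preview of the first step, the snippet below is a minimal sketch, not the authors' implementation, of estimating context probabilities from a sample's reads with a pseudocount of one and calling the base for a context by maximum pseudo-likelihood; the formal definitions follow in section 2.1. For brevity it uses suffix-free contexts (the k bases preceding a position), the setting used in the experiments, with toy reads and a small k.

```python
# Minimal sketch (not the authors' code) of the first step: counting bases
# per k-context with a +1 pseudocount and calling a base by maximum
# pseudo-likelihood. Suffix-free contexts (the k bases preceding a position)
# are used for brevity; reads and k are toy values.
from collections import Counter, defaultdict

def context_counts(reads, k):
    """f(b, pi_k): how often base b follows each k-prefix across all reads."""
    counts = defaultdict(Counter)
    for read in reads:
        for j in range(k, len(read)):
            counts[read[j - k:j]][read[j]] += 1
    return counts

def context_probability(counts, context, base, alphabet="ACGT"):
    """P(b | pi_k) with a +1 pseudocount encoding a uniform prior."""
    f = counts[context]
    return (f[base] + 1) / sum(f[b] + 1 for b in alphabet)

def call_base(counts, context, alphabet="ACGT"):
    """Call the most likely base following a context."""
    return max(alphabet, key=lambda b: context_probability(counts, context, b))

reads = ["ACGTACGTGG", "ACGTACGTGG", "ACGTACCTGG"]     # toy sample, k = 4
counts = context_counts(reads, k=4)
print(call_base(counts, "TACG"), context_probability(counts, "TACG", "T"))
# -> T 0.5
```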
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Variant calling</ns0:head><ns0:p>We consider the case of variant calling directly from a collection of reads. Let random variable x i j taking values in {A,C, G, T } denote the j th nucleotide of the i th read, with 1 &#8804; i &#8804; n and 1 &#8804; j &#8804; m i the number of reads and nucleotides in the read i.</ns0:p><ns0:p>Definition 1 (k-context) The k-context around a nucleotide j consists of a k-prefix sequence</ns0:p><ns0:formula xml:id='formula_0'>&#960; k (x i , j) := [x i( j&#8722;k) , x i( j&#8722;k+1) , . . . , x i( j&#8722;1) ].</ns0:formula><ns0:p>and a k-suffix sequence</ns0:p><ns0:formula xml:id='formula_1'>&#963; k (x i , j) := [x i( j+1) , x i( j+2) , . . . , x i( j+k) ].</ns0:formula><ns0:p>Contexts that consist of only the prefix/suffix sequences are suffix/prefix-free.</ns0:p><ns0:p>Definition 2 (k-context probability) The k-context probability is the probability of observing a base at a particular position given the context, that is</ns0:p><ns0:formula xml:id='formula_2'>P(x i j |&#960; k (x i , j), &#963; k (x i , j)).</ns0:formula><ns0:p>The k-context probabilities can be estimated from the data by maximising a pseudolikelihood. </ns0:p><ns0:formula xml:id='formula_3'>Let f (b, &#960; k , &#963; k ) := 1 + &#8721; i j x i j = b &#8743; &#960; k = &#960; k (x i , j) &#8743; &#963; k = &#963; k (x i , j)</ns0:formula><ns0:formula xml:id='formula_4'>P(b|&#960; k , &#963; k ) := f (b, &#960; k , &#963; k ) &#8721; b f (b , &#960; k , &#963; k )</ns0:formula><ns0:p>.</ns0:p><ns0:p>The suffix/prefix free densities are thus</ns0:p><ns0:formula xml:id='formula_5'>P(b|&#960; k ) = &#8721; &#963; k P(b|&#960; k , &#963; k ) and P(b|&#963; k ) = &#8721; &#960; k P(b|&#960; k , &#963; k ).</ns0:formula><ns0:p>Given a context (&#960; k , &#963; k ), the base can be called as arg max b P(b|&#960; k , &#963; k ), and similarly for prefix/suffix free densities.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Variant finding</ns0:head><ns0:p>Determining the list of variants consists of determining which contexts (&#960; k , &#963; k ) surround a variable base in our population, then call the base for each variant-defining context and each sample. We consider inter-sample variants and not intra-sample variants; we are interested in finding contexts which define variants that differ amongst samples and are not attributable to noise. In this section, we develop a statistic based on the Kullback-Leibler (KL) divergence that achieves these two points.</ns0:p><ns0:p>Let X be a set of samples, each consisting of a collection of reads as defined above. For each x &#8712; X , we refer to the j th nucleotide of the i th read as x i j , the number of reads in the sample as n x , and the number of nucleotides in read x i as m x i . Similarly to the previous section, we denote f x (b, &#960; k , &#963; k ) as the frequency of observing base b given context (&#960; k , &#963; k ) for sample x. As before, a pseudocount is used when estimating f x to encode a uniform prior.</ns0:p><ns0:p>The KL divergence measure provides a way of quantifying the differences between two probability distributions. We will develop a statistic based upon the KL-divergence that compares Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Definition 3 (Kullback-Leibler divergence) Let P and Q be two discrete probability densities over the domain Y . 
The Kullback-Leibler (KL) divergence is</ns0:p><ns0:formula xml:id='formula_6'>P(&#8226;) || kl Q(&#8226;) := &#8721; y&#8712;Y P(y) log P(y) Q(y) .</ns0:formula><ns0:p>Definition 4 (Total divergence) The total divergence for a given context (&#960; k , &#963; k ) is estimated as the total KL divergence between the samples in the dataset X and the expected probability distribution given the context:</ns0:p><ns0:formula xml:id='formula_7'>D X (&#960; k , &#963; k ) := &#8721; x&#8712;X P x (&#8226;|&#960; k , &#963; k ) || kl Q(&#8226;|&#960; k , &#963; k ),</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_8'>P x (&#8226;|&#960; k , &#963; k ) := f x (b, &#960; k , &#963; k ) &#8721; b f x (b , &#960; k , &#963; k )</ns0:formula><ns0:p>.</ns0:p><ns0:p>denotes the probability density estimated for sample x and context (&#960; k , &#963; k ) and</ns0:p><ns0:formula xml:id='formula_9'>Q(b|&#960; k , &#963; k ) := &#8721; x&#8712;X f x (b, &#960; k , &#963; k ) &#8721; x&#8712;X ,b f x (b , &#960; k , &#963; k )</ns0:formula><ns0:p>.</ns0:p><ns0:p>The total divergence statistic is proportional to the expected KL-divergence between a sample and the global expected probability distribution. To see why this statistic is robust to noise consider the case where variation is due purely to noise. As the noise distribution is independent of sample, it will be well modelled by the expected distribution Q and therefore the divergence between each sample and Q will be small. Conversely, if variation is due to samples being drawn from two or more latent probability densities, then Q will be an average of these latent densities and divergence will be high.</ns0:p><ns0:p>The next theorem is crucial for determining when a particular divergence estimate indicates a significant divergence from the expected distribution Q. Using this theorem, we can use hypothesis testing to select which contexts are not well explained by Q. These contexts not well explained by Q are variant and we call them as in section 2.1.</ns0:p><ns0:p>Theorem 5 Under random sampling from Q, D follows a &#947; distribution.</ns0:p><ns0:p>The proof of this theorem is trivial given a well known result regarding the G-test (see <ns0:ref type='bibr' target='#b18'>Sokal and Rohlf (1994)</ns0:ref>):</ns0:p><ns0:p>Lemma 6 Let f x be a frequency function and g</ns0:p><ns0:formula xml:id='formula_10'>:= E[ f x ]. The G-test is G := &#8721; x&#8712;X &#8721; b&#8712;{A,T,C,G} f x (b, &#960; k , &#963; k ) log f x (b, &#960; k , &#963; k ) g(b, &#960; k , &#963; k ) .</ns0:formula><ns0:p>Under the null hypothesis that f x results from random sampling from a distribution with expected frequencies g, G follows a &#967; 2 distribution with 3|X | degrees of freedom asymptotically.</ns0:p><ns0:p>From this lemma, the proof of theorem 5 follows easily:</ns0:p><ns0:formula xml:id='formula_11'>Proof D is proportional to the G-test. As the G-test is &#967; 2 -distributed, D is &#947;-distributed.</ns0:formula><ns0:p>Clearly our statistic D is very similar to G, but has an important property: D is invariant to coverage. As D operates on estimates of the probability rather than the raw counts, changes in coverage are effectively normalised out. This is advantageous for variant discovery as it avoids coverage bias and allows variants to be called for (proportionally) low-coverage areas, if statistical support for their variability in the population exists.</ns0:p><ns0:p>To select contexts a &#947; distribution is fitted to the data. 
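A toy end-to-end illustration of this selection step is sketched below: it computes the total divergence D of Definition 4 for a batch of contexts, fits a gamma distribution to the resulting statistics and flags contexts with a Bonferroni-style cutoff. The counts, sample sizes and the 5% level are simulated illustrative values, and the experiments reported below replace this simple cutoff with the Bayesian mixture model described next.

```python
# Toy sketch of the selection step: compute the total divergence D of
# Definition 4 for each context, fit a gamma distribution to the resulting
# statistics and flag contexts with a Bonferroni-style cutoff. Sample sizes,
# counts and the 5% level are simulated illustrative values.
import numpy as np
from scipy import stats

def total_divergence(sample_counts):
    """D for one context; `sample_counts` is an (n_samples, 4) count matrix."""
    counts = np.asarray(sample_counts, dtype=float)
    p = counts / counts.sum(axis=1, keepdims=True)        # P_x(. | context)
    q = counts.sum(axis=0) / counts.sum()                  # pooled Q(. | context)
    return float((p * np.log(p / q)).sum())                # sum_x KL(P_x || Q)

rng = np.random.default_rng(1)
# 2,999 stable contexts drawn from one base distribution across 100 samples,
# plus one context whose distribution differs between two halves of the cohort.
stable = rng.multinomial(50, [0.91, 0.03, 0.03, 0.03], size=(2999, 100)) + 1
variant = np.concatenate([rng.multinomial(50, [0.91, 0.03, 0.03, 0.03], size=(1, 50)),
                          rng.multinomial(50, [0.03, 0.91, 0.03, 0.03], size=(1, 50))],
                         axis=1) + 1
counts = np.concatenate([stable, variant])                 # (3000, 100, 4)

d = np.array([total_divergence(c) for c in counts])
shape, loc, scale = stats.gamma.fit(d)                      # null bulk dominates the fit
p_values = stats.gamma.sf(d, shape, loc=loc, scale=scale)
print("variant context flagged:", p_values[-1] < 0.05 / len(d))   # -> True
```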
For the results in our experiments, we used a Bayesian mixture model with a &#946; prior over the mixing weights whereby each context could originate from the null (&#947;) distribution or from a uniform distribution. The mixing weights were then used to determine if a context is not well supported by the null distribution. Such a model comparison procedure has several advantages and directly estimates the probabilities of support by the data for each context <ns0:ref type='bibr' target='#b11'>(Kamary, Mengersen, Robert, et al., 2014)</ns0:ref>, providing an easily interpretable quantity.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Choosing context size</ns0:head><ns0:p>The problem of choosing context size k is difficult; if too large then common structures will not be discovered, and if too small then base calling will be unreliable. We propose to choose k using the minimum message length principle <ns0:ref type='bibr' target='#b20'>(Wallace and Boulton, 1968)</ns0:ref>.</ns0:p><ns0:p>Consider a given sample x. The message length of a two-part code is the length of the compressed message plus the length of the compressor/decompresser. In our case, the length of the compressed message is given by the entropy of our above probability distribution:</ns0:p><ns0:formula xml:id='formula_12'>L(x; P(&#8226;|&#960; k , &#963; k )) := &#8722; &#8721; i j log P(x i j |&#960; k , &#963; k ).</ns0:formula><ns0:p>The compressor/decompresser is equivalent to transmitting the counts for the probability distribution. This can be thought of as transmitting a k length tuple of counts. Let N = &#8721; i (m i &#8722; 2k) be the total number of contexts in the read set (i. e., the total number of prefix and suffix pairs in the data). Thus, N+4 2k &#8722;1</ns0:p><ns0:formula xml:id='formula_13'>4 2k &#8722;1</ns0:formula><ns0:p>count distributions are possible amongst the number of total prefix and suffix pairs (4 k &#215; 4 k = 4 2k distinct prefix/suffix pairs), giving a total message length of</ns0:p><ns0:formula xml:id='formula_14'>ML(x; P(&#8226;|&#960; k , &#963; k )) := L(x; P(&#8226;|&#960; k , &#963; k )) + log N + 4 2k &#8722; 1 4 2k &#8722; 1 .</ns0:formula><ns0:p>Approximating the R.H.S using Stirling's approximation and dropping constant terms yields</ns0:p><ns0:formula xml:id='formula_15'>ML &#8764; &#8733; L(x; P(&#8226;|&#960; k , &#963; k )) + 2 N + 2 4 k+1 &#8722; 1 log N + 2 4 k 2 &#8722; 4 k 2 4 k+1 &#8722; 1 + 1 log 2 2 .</ns0:formula><ns0:p>For suffix free densities the message length simplifies to</ns0:p><ns0:formula xml:id='formula_16'>ML(x; P(&#8226;|&#960; k )) := L(x; P(&#8226;|&#960; k )) + log N + 4 k &#8722; 1 4 k &#8722; 1 &#8764; &#8733; L(x; P(&#8226;|&#960; k )) + 2 N + 2 2 k+1 &#8722; 1 log N + 2 2 k 2 &#8722; 2 k 2 2 k+1 &#8722; 1 + 1 log 2 2 ,</ns0:formula><ns0:p>and similarly for prefix free. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Prefix/suffix free contexts</ns0:head><ns0:p>The method we have presented so far has been developed for any contexts defined by any combination of prefix and suffix. The question of whether prefix/suffix-free contexts or full contexts (both prefix and suffix) naturally arises. The decision depends on the type of variants of interest: using full contexts will restrict the variants to single nucleotide variants (SNV), while one sided contexts allow for more general types of variants such as insertions and deletions. 
Full contexts also have less power to detect variation caused by close-by SNVs; two SNVs in close proximity will create several different contexts when modelling with both prefixes and suffixes.</ns0:p><ns0:p>It is also worth remarking that the choice between prefix and suffix free contexts is immaterial under the assumption of independent noise and sufficient coverage. Thus, our experiments concentrate on suffix-free contexts as it is the more general case.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.5'>Reference-based variant calling</ns0:head><ns0:p>To compare the ability of our proposed method to a reference-based approach, we have processed all datasets using a standard mapping-based SNP calling pipeline. Using SAMtools v1.2-34, raw reads from each sample were mapped to the relevant reference sequence and sorted. The mapped reads are then further processed to remove duplicates arising from PCR artefacts using Picard v1.130 and to realign reads surrounding indels using GATK v3.3-0. Pileups are then created across all samples using SAMtools and SNPs are called using the consensus-method of BCFtools v1.1-137. The resulting SNPs were then filtered to remove those variants with phred-scaled quality score below 20, minor allele frequency below 0.01 or SNPs that were called in less than 10% of samples.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Results</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Simulation study</ns0:head><ns0:p>We first investigate the power and the false positive rate (FPR) of our method by simulations as minor allele frequency (MAF), sequencing depth, and sample size are varied. A total of 3,000 contexts per sample, of which one was a variant site with two possible alleles across the population, were simulated by sampling counts from a multinomial distribution. This corresponds to a simulating a SNP, indel or any other variant whose first base, i. e., the base directly following the context, is bi-allelic. Each context was simulated with a sequencing read error of 1% by sampling from a multinomial distribution, with the total number of simulations per context determined by the specified sequencing depth. Variants were determined by fitting a gamma distribution and rejecting at a level of p &lt; 0.05 corrected for multiple testing by Bonferroni's method. This procedure is repeated 1,000 times for each combination of simulation parameters.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> and fig. <ns0:ref type='figure'>2</ns0:ref> shows the results of the simulation. With a depth of 25 our method is able to recover the variant site with high power when the MAF is 20% or higher, even with few samples (50). The FPR was also well controlled, but reduces sharply with moderate depth (&gt;25) at 100 samples, and is low at most depth for 1,000 samples. Identification of rare variants at low sample sizes (1% MAF at 100 samples) is not reliable, however rare variants are still identifiable with high power at high depth and samples (depth greater than 64 and 1,000 samples). Figure <ns0:ref type='figure'>1</ns0:ref>: Power curves for 3000 simulated contexts with a single variant context for varying depth and sample size (panels). The bi-allelic variant context was simulated 1,000 times and curves show the mean of the 1,000 simulations. 
The error for the mean is less than 3% in all cases.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Empirical experiments</ns0:head><ns0:p>We also evaluated our method on three different datasets: two datasets are of Streptococcus pneumoniae bacteria, one collected in Massachusetts (Nicholas J <ns0:ref type='bibr' target='#b5'>Croucher, Finkelstein, Pelton, et al., 2013)</ns0:ref> and the other in Thailand <ns0:ref type='bibr'>(Chewapreecha, Harris, Nicholas J Croucher, et al., 2014)</ns0:ref>; and one mouse dataset <ns0:ref type='bibr'>(Fairfield, Gilbert, Barter, et al., 2011)</ns0:ref>. The two S. pneumoniae datasets comprise 681 and 3,369 samples sequenced using Illumina sequencing technology. The Jax6 mouse dataset <ns0:ref type='bibr'>(Fairfield, Gilbert, Barter, et al., 2011)</ns0:ref> contains sequenced exomes of 16 inbred mouse lines.</ns0:p><ns0:p>All experiments were conducted with suffix-free contexts and only contexts present across all samples were evaluated for variants. Our method identified 40,071 variants in the Massachusetts dataset, 57,050 in the Thailand dataset, and 50,000 in the mouse dataset. We refer to these as KL variants.</ns0:p><ns0:p>We also compare our method with a mapping-based SNP calling approach on the S. pneumoniae datasets. Using sequence for S. pneumoniae ATCC 700669 (NCBI accession NC_011900.1) as a reference, there were 181,511 and 251,818 SNPs called for the Massachusetts and Thailand datasets. To be comparable with the resulting binary SNPs calls, we transform our multi-allelic variants to binary variants with the major allele being one and other alleles being zero.</ns0:p><ns0:p>Finally, we compare our results with variants called by another reference-free caller Dis-coSNP++ <ns0:ref type='bibr' target='#b19'>(Uricaru, Rizk, Lacroix, et al., 2015)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Figure <ns0:ref type='figure'>2</ns0:ref>: False positive rate for 3000 simulated contexts with a single variant context for varying depth and sample size (panels) as described in fig. <ns0:ref type='figure'>1</ns0:ref>. The error for the mean is less than 3% in all cases.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Message lengths</ns0:head><ns0:p>Our first experiment investigated the optimal k resulting from our message length criterion (see section 2.3). Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the results of various contexts sizes on three samples, one from each of the Massachusetts S. pneumoniae, Thailand S. pneumoniae and Jax6 mouse data. Both S.</ns0:p><ns0:p>pneumoniae samples had the shortest message length at k = 14, and the 129S1/SvImJ mouse line had the shortest message length at k = 15.</ns0:p><ns0:p>To evaluate the stability of the message length criterion, the optimal k according to message length was calculated on all samples from the Massachusetts data (table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>). Most samples (83%) had an optimal length of k = 14, with the remainder being optimal at k = 13. Investigation into the singleton sample with minimal length at k = 9 revealed a failed sequencing with only 18,122 reads present. We also evaluated all samples present in the Jax6 dataset and found all samples had minimal message length at k = 15. The stability of k is therefore high and we use k = 14 for the two S. pneumoniae datasets and k = 15 for the Jax6 mouse dataset henceforth in all experiments. 
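The message-length criterion itself is straightforward to reproduce. The snippet below is a self-contained toy sketch, not the authors' code, of the suffix-free message-length score of section 2.3 (data code length plus the log-count of possible count tables) minimised over candidate k; the toy reads keep the candidate range small, whereas on the real data the optima reported here are k = 14 and k = 15.

```python
# Self-contained toy sketch (not the authors' code) of the suffix-free
# message-length score of section 2.3: data code length plus the log-count
# of possible count tables, minimised over candidate k. Reads are toy values.
import math
from collections import Counter, defaultdict
from scipy.special import gammaln

def message_length(reads, k):
    counts = defaultdict(Counter)
    for read in reads:
        for j in range(k, len(read)):
            counts[read[j - k:j]][read[j]] += 1
    # L(x): negative log pseudo-likelihood of each base given its k-prefix.
    L = 0.0
    for read in reads:
        for j in range(k, len(read)):
            f = counts[read[j - k:j]]
            L -= math.log((f[read[j]] + 1) / sum(f[b] + 1 for b in "ACGT"))
    N = sum(max(len(read) - k, 0) for read in reads)        # number of contexts
    # Table cost: log C(N + 4^k - 1, 4^k - 1).
    table_cost = gammaln(N + 4 ** k) - gammaln(4 ** k) - gammaln(N + 1)
    return L + table_cost

reads = ["ACGTACGTGGACGTACGTGG"] * 30 + ["ACGTACCTGGACGTACCTGG"] * 10
scores = {k: round(message_length(reads, k), 1) for k in range(1, 8)}
print(min(scores, key=scores.get), scores)
```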
 <ns0:div><ns0:head n='3.4'>Supervised learning performance</ns0:head><ns0:p>To investigate the robustness of our variants for genomic prediction tasks, we evaluated the ability of variants called on the Massachusetts S. pneumoniae dataset to predict benzylpenicillin resistance under different training and testing scenarios across the two S. pneumoniae datasets. Each sample was labelled as resistant if the minimum inhibitory concentration exceeded 0.063 &#181;g/mL <ns0:ref type='bibr'>(Chewapreecha, Marttinen, Nicholas J. Croucher, et al., 2014)</ns0:ref>. In all tasks, a support vector machine (SVM) <ns0:ref type='bibr' target='#b17'>(Sch&#246;lkopf and Smola, 2001)</ns0:ref> was used to predict resistance from the variants, and the performance was measured using the Area under the Receiver Operating Characteristic (AROC).</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows the results of the experiments. Each row indicates the dataset models were trained on and the columns denote the testing dataset. For intra-dataset experiments (i.e., the diagonal), AROC was estimated using 10-fold cross validation.</ns0:p><ns0:p>Our variants are clearly capturing the various resistance mechanisms, as evidenced by the strong 10-fold cross-validation predictive performance. In comparison, the traditional pipeline and the DiscoSNP++ features (on the Massachusetts data only) also performed well. Given the high level of accuracy, the three methods do not differ significantly in performance.</ns0:p><ns0:p>The model trained using our variants on the Massachusetts data is moderately predictive on the Thailand dataset. Conversely, the model from the Thailand dataset can also moderately predict resistance in the Massachusetts data, but to a lesser degree. One possible explanation for this limited predictive ability is the existence of resistance mechanisms unique to each dataset: a model trained on one dataset will not capture unobserved mechanisms and is consequently unable to predict resistance arising from these unknown mechanisms. This hypothesis is supported by the strong performance observable on the diagonal, where models are trained and tested on the same dataset.</ns0:p><ns0:p>We also evaluated our variants for predicting coat colour on the Jax6 mouse dataset <ns0:ref type='bibr'>(Fairfield, Gilbert, Barter, et al., 2011)</ns0:ref>. As few samples are available (14 labelled samples), we reduced the problem to a 2-class classification problem, classifying coat colour into agouti or not. This led to a well-balanced classification problem with 8 samples in the agouti class and 6 not.</ns0:p><ns0:p>The performance for this task was estimated at 96% AROC using leave-one-out cross-validation (LOOCV), suggesting the variants are also predictive of heritable traits in higher-level organisms. Figure <ns0:ref type='figure'>4</ns0:ref> shows the ROC for this classification problem.</ns0:p></ns0:div>
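A sketch of this evaluation protocol using scikit-learn is given below. The SVM settings (a linear kernel with default regularisation) and the use of the decision function as the ranking score are our assumptions, as the paper does not state them here; labels follow the stated threshold (resistant if MIC exceeds 0.063 µg/mL), and the same variant set must be genotyped in both datasets for the cross-dataset case.

from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def intra_dataset_aroc(X, y, folds=10):
    # Diagonal of Table 2: 10-fold cross-validated AROC within a single dataset.
    clf = SVC(kernel="linear")
    scores = cross_val_predict(clf, X, y, cv=folds, method="decision_function")
    return roc_auc_score(y, scores)

def cross_dataset_aroc(X_train, y_train, X_test, y_test):
    # Off-diagonal of Table 2: train on one dataset, test on the other.
    clf = SVC(kernel="linear").fit(X_train, y_train)
    return roc_auc_score(y_test, clf.decision_function(X_test))

# Example labelling from the MIC values described above:
# y = (mic_values > 0.063).astype(int)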
 <ns0:div><ns0:head n='3.5'>Population structure</ns0:head><ns0:p>Finally, we investigate the population structure captured by KL variants and the SNP calls on the Massachusetts dataset. The population structures were estimated using Principal Component Analysis (PCA), a common approach whereby the top principal components derived across all genetic variants reflect underlying population structure rather than the studied phenotype of interest <ns0:ref type='bibr' target='#b15'>(Price, Zaitlen, Reich, et al., 2010)</ns0:ref>. Five sub-populations (clusters) were identified using k-means on the first two principal components from the SNP data. Projecting those 5 clusters onto the principal component scores of our variants (fig. <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>) results in highly concordant plots.</ns0:p><ns0:p>Four out of the five clusters can be easily identified using our variants, indicating the detected variation preserves population structure well. A canonical correlation analysis (CCA) was performed to further assess the similarities between the two feature sets. As there are significantly more features than samples, the cross-covariance matrices are singular for our dataset, so regularised CCA was used to find the canonical vectors, and the correlation between projections was estimated using 100 samples of leave-one-out bootstrap <ns0:ref type='bibr' target='#b9'>(Hastie, Tibshirani, and Friedman, 2011)</ns0:ref>. We found the first three components explain virtually all the variance (99%), with the first component alone explaining 76%. Therefore, both mapping-based SNPs and KL variants are largely capturing the same variance on the Massachusetts data.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.6'>Analysis of contexts</ns0:head><ns0:p>One context aligned in 82 different locations in the reference genome, and further investigation revealed that it corresponds to a boxB repeat sequence. Such repeats have previously been used to identify the population structure of S. pneumoniae isolates carrying the 12F serotype, supporting our population structure findings <ns0:ref type='bibr' target='#b16'>(Rakov, Ubukata, and Robinson, 2011)</ns0:ref>. This suggests the variants may be tagging more complex structural elements than just single nucleotide variants.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Conclusions</ns0:head><ns0:p>We presented a novel reference-free variant detection method for next-generation sequence data. Our method has the advantages of no tuning parameters and rapid calling of known variants on new samples, and it may be suited for targeted genotyping once a known set of variants is obtained.</ns0:p><ns0:p>Simulation experiments showed the method is relatively robust, with good power and a well-controlled FPR for detecting common variants; for rare variants the power was lower, and a high depth and number of samples were required to reliably detect them.</ns0:p><ns0:p>In a typical genomic prediction setting, the method was able to predict heritable phenotypes on both a bacterial dataset (anti-microbial resistance) and a mouse dataset (coat colour). On the S. pneumoniae datasets, our method was shown to have similar performance to a standard alignment-based SNP calling pipeline, which requires a suitable reference genome.</ns0:p><ns0:p>Moreover, the method was shown to capture the same population structure on the Massachusetts Streptococcus bacterial dataset as an alignment-based variant calling approach. These results show our method is capable of capturing important genomic features without a known reference.</ns0:p><ns0:p>As with other reference-free variant calling methods, interpretation of the detected variants is more difficult compared to a mapping-based approach, as called variants are reported without positional information. One approach to obtain such annotations is to map the variant and its context back to a given reference.
Given that most sequences with a length greater than 15bp that exist in a given bacterial reference will have a unique mapping, many variants could be easily mapped back. However, such information is unlikely to exist for variants that do not occur in the reference, or may be misleading for variants that arise through complicated procedures such as horizontal gene transfer. Alternatively, variants and their context could be examined via BLAST searches to determine whether these sequences correspond to previously identified genes or other genomic features.</ns0:p><ns0:p>In our experiments we used a combination of these approaches to investigate some of the variants found on the bacterial dataset. We identified contexts that mapped to numerous locations in the reference genome and then used BLAST to identify the likely origin of the sequence.</ns0:p><ns0:p>Through this method, variants associated with boxB repeat sequence were found, suggesting our method is capturing variance associated with complex structures.</ns0:p><ns0:p>We envisage that the method proposed here could be used to conduct a rapid initial analysis of a given dataset, such as species identification, outbreak detection or genomic risk prediction.</ns0:p><ns0:p>Our method also enables analysis of data without a suitable reference while still avoiding the computationally expensive step of assembly. Furthermore, our method scales linearly with the total number of reads, allowing application to large datasets.</ns0:p><ns0:p>The statistical framework established in this work is quite general and could be expanded in several ways. While we have examined only single nucleotide variants within this work, insertions and deletions could be explicitly modelled within this framework at the cost of increased computational expense. It may also be possible to model other types of variants, such as microsatellites, provided that a suitable representation for them could be found.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8180:1:0:CHECK 2 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>denote the counts of how often b was observed with k-prefix &#960; k and k-suffix &#963; k in the read set x, where &#8226; is the Iverson bracket. Here the pseudocount encodes a weak uniform prior. The probability density estimate of observing a base b in context (&#960; k , &#963; k ) is then given by</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Message length for prefix-only contexts on two S. pneumoniae samples from the Massachusetts and Thailand datasets, and the 129S1/SvImJ mouse line from the Jax6 dataset.The optimal k under the MML framework is k = 14 for the S. pneumoniae datasets and k = 15 for Jax6.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Table 2 :Figure 4 :</ns0:head><ns0:label>24</ns0:label><ns0:figDesc>Figure4: ROC produced from leave-one-out cross-validation performance predicting agouti coat colour from KL variants on Jax6 mouse dataset. AROC is 96%.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>shows the ROC for this classification problem. 10 PeerJ Comput. Sci. 
reviewing PDF | (CS-2015:12:8180:1:0:CHECK 2 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5: First two principal components derived from alignment-based SNP calls (left) and from variants detected by our method (right) applied to the Massachusetts S. pneumoniae dataset. Each point represents a sample and the colours denotes the cluster assignment determined by k-means clustering. The similar pattern of samples in each plot indicates that the same population structure signal is detected by the two variant detection methods.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Proportion of samples in Massachusetts data by optimal k.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Optimal k Count</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>113</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>567</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8180:1:0:CHECK 2 May 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Correlation coefficients for first 5 CCA components, estimated using 10-fold crossvalidation on Massachusetts data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Component Correlation coefficient (&#177;95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.873 &#177; 0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.880 &#177; 0.006</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>0.877 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.862 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.867 &#177; 0.008</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>contexts, less than 1% failed to align, 41% aligned in a single location, and the remainder aligned</ns0:cell></ns0:row><ns0:row><ns0:cell>in two or more locations.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>One context aligned in 82 different locations in the reference genome. Further investigation</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>revealed the context corresponds to a boxB repeat sequence. Such repeats have previously</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>been used to identify population structure of S. pneumoniae isolates carrying the 12F serotype,</ns0:cell></ns0:row><ns0:row><ns0:cell>supporting our population structure findings</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8180:1:0:CHECK 2 May 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Editor’s comments The second referee is asking for major revisions to enforce experimental validation. It is especially important that the question about a possible bias in the method is answered properly. The answer might arise from an additional run on new data. Given the concerns of possible bias, we have added simulation experiments evaluating the method under different conditions. We found that the method has good power and false positive rate once a modest depth and sample size is reached. Given these new results, we believe the method is unbiased and well behaved. A second important requirement is the need for control data. Our simulation experiments add an additional source of data where the truth is known. The method performed well on simulated data as we described above. Reviewer 1 (Anonymous) Comments for the author I conclude with three (minor) typos the authors might want to correct in a revision: Line 201: “. . . is therefore high and we used. . . ”; Corrected Line 234: appli -> apply? Corrected Line 272: delete one of the two “avoiding”. Corrected Reviewer 2 (Jean-Marc Steyaert) Basic reporting One will however be surprised that the authors use systematically “statistic” and not “statistics” as a substantive, whereas it is declared as an adjective in most dictionaries. The Oxford english dictionary recognises the word statistic in both adjective and noun form. As a noun, statistic refers to a numerical summarisation of a sample of data, which fits the purpose at hand. 1 However the abstract and introduction do not show properly the goals of this study, its major achievements and the domain of validity of the method. This is a real weakness of the paper. We have adjusted the introduction and conclusion of this paper to better highlight the contributions of this study Figure 1 is good, Figure 2 average but Figure 3 is weak and hardly understandable. We have clarified the legend for Figure 3 and have added additional text to Section 3.5 (previously Section 3.3) to describe how the top principal components derived from a set of genetic variants are commonly used to detect and display population structure. Experimental design Experimental proof is weak. Only three sets of genomes have been study which is clearly not enough for validation. We believe that the additional simulation studies should resolve this issue. Validity of the findings My main problem comes from the fact that I do not see a link between the mathematical properties and the claim that this method will indeed find the relevant variants in most situations. The last paragraph of the introduction (lines 52-57) is apparently based on 3 experimental studies with no quantitative argument! It is surprising that the numbers of variants are all of the same orders of magnitude (lines 176-179) and that there is no analysis of a possible biais in the method under certain conditions. To address these concerns we have added simulation experiments to the paper. These experiments evaluate the method under different conditions such as varying MAF, sequencing depth, and number of samples. We found that the method has good power and false positive rate once a modest depth and sample size is reached, though the power to detect rare variants is low unless both sample size and depth are high. Given these new results, we believe the method is unbiased and well behaved. 
Please also note that the mouse dataset is exome sequencing not whole genome, which also helps to explain why the number of variants is similar across the datasets (due to their similar sizes). We also added a section examining some of the contexts found by mapping the contexts back to the reference. The new section explores contexts that map to multiple loci in the genome. We find some contexts associated with boxB repeats, which have been previously found to be differential for population structure of 12F Streptococcus serotypes, suggesting that we are indeed able to detected the presence of structure variation. 2 The last paragraph of the introduction in Section 3 (lines 186-189) is also puzzling: any computer scientist should explain why a program is so time consuming. . . This comment is a difficult one to answer given the limited insight we have into the implementation of the discoSNP software. We suspect that the limitations of the discoNSP software are due to the software building a deBruijn graph over all k-mers observed across all samples in memory. As the Thai S.pneumoniae dataset we are examining has a very large sample size (3000+ isolates), we believe the software runs against memory limitations. However, as no clear errors are reported and we have only limited details of the implementation details, we refrain from making these comments within the article and believe exploration of this is out of scope. Finally, I think that the considerations about the accuracy of the predictions should be stated by more rigorous experimental checking. I would guess that the method goes not so badly whenever the solution is close to the center of the gaussian, but that it works poorly elsewhere. We do not fully understand the criticism; the model does not rely on any Gaussian assumptions, nor does the predictive model we used for classifying drug resistance (SVM). We hope that the additional simulation experiments sufficiently addresses the underlying issue. We are happy to do further investigations if not. A first check would be to test the accuracy would be to start with the prefix/suffix model on validated data under several different genetic situations. Then I would try INS/DEL situations again under different clustering conditions. Our simulation experiments are targeting both the INDEL and SNP cases. It is impossible under our framework to differentiate between the two. Comments for the author I suggest that the authors strengthen their experimental validation by a more thorough and systematic evaluation of the relevance of their results. Paragraphs such as 2.4, 2.5 and Section 3 should be rewritten. The introduction and the conclusion will be accordingly modified. We have adjusted parts of Section 3 given the addition of simulations and our investigation into the contextualisation of detected variant contexts. The introduction and conclusion have also accordingly been updated. Sections 2.4 and 2.5 remain the same as no specific criticism of these have been provided. 3 "
Here is a paper. Please give your review comments after reading it.
235
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In this paper, we study the air quality monitoring and improvement system based on wireless sensor and actuator network using LoRa communication. The proposed system is divided into two parts, indoor cluster and outdoor cluster, managed by a Dragino LoRa gateway. Each indoor sensor node can receive information about the temperature, humidity, air quality, dust concentration in the air and transmit them to the gateway. The outdoor sensor nodes have the same functionality, add the ability to use solar power, and are waterproof. The full-duplex relay LoRa modules which are embedded FreeRTOS is arranged to forward information from the nodes it manages to the gateway via uplink LoRa. The gateway collects and processes all of the system information and makes decisions to control the actuator to improve the air quality through the downlink LoRa. We build data management and analysis online software based on The Things Network and TagoIO platform. The system can operate with a coverage of 8.5 km, where optimal distances are established between sensor nodes and relay nodes and between relay nodes and gateways at 4.5 km and 4 km, respectively. Experimental results observed that the packet loss rate in real-time is less than 0.1% prove the effectiveness of the proposed system.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In recent years, the problem of air pollution in residential areas and factories is becoming more severe under the impact of six factors: Nitrogen Oxide (NO x ), Sulfur Oxide (SO x ), Carbon Monoxide (CO), lead, ground ozone, and dust. Among them, fine particles (PM 2.5 ) with a size of fewer than 2.5 microns cause the most serious consequences, as they can penetrate deeply into the lungs, affecting both the respiratory system and circulatory system <ns0:ref type='bibr' target='#b32'>(Wang et al., 2020)</ns0:ref>. According to the World Health Organization (WHO) statistics, annually, more than 90% of people are exposed to outdoor concentrations of PM2.5 that are higher than the air quality standards. According to the study <ns0:ref type='bibr' target='#b18'>(Liu et al., 2020)</ns0:ref>, whether people's health status is serious or not will depend on the degree and time of exposure to the polluted air. In addition to its negative impacts on the environment and human health, air pollution also reduces productivity and reduces energy efficiency. Several studies have demonstrated an increase in CO and CO 2 levels, leading to an increase in the amount of volatile organic compounds (VOCs), odors, and microorganisms in the air <ns0:ref type='bibr' target='#b27'>(Sadatshojaie and Rahimpour, 2020)</ns0:ref>. That makes a decrease in humans' ability to concentrate. Furthermore, according to <ns0:ref type='bibr' target='#b13'>(Franco and Schito, 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>Franco and Leccese, 2020)</ns0:ref> study, controlling CO and CO 2 concentration in the air can lead to up to 5% to 20% energy savings in HVAC systems in buildings. contains two sub-systems: indoor and outdoor. The system can connect to many different sensors such as dust, CO, LPG, and CH 4 concentration sensors. The system is expandable due to the operation of the full-duplex relay LoRa module. 
We proposed using LoRa wireless communication technology for the node-to-relay layer and LoRaWAN, a network protocol built on LoRa, for the relay-to-gateway layer.</ns0:p><ns0:p>&#8226; We designed negative ion generators as actuators at the nodes, with high efficiency and continuous long-term operation.</ns0:p><ns0:p>&#8226; The ADC noise reduction mode and digital Kalman filters are proposed to enhance the reliability of the air parameter measurements. In addition, we propose using the real-time operating system FreeRTOS to manage tasks on the full-duplex relay LoRa module.</ns0:p><ns0:p>&#8226; We designed monitoring software with web and smartphone interfaces using The Things Network (TTN) and the TagoIO platform. This paper is organized as follows. In Section II, we introduce the literature review. In Section III, the hardware and software of the proposed system are described. In Section IV, we evaluate the system performance with respect to the critical parameters. Finally, we present the conclusions in Section V.</ns0:p></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>In this section, we present studies related to wireless sensor and actuator networks and air quality monitoring and improvement systems. <ns0:ref type='bibr' target='#b9'>Dhingra et al. (2019)</ns0:ref> proposed a WSN based on WiFi communication, with sensor node hardware, cameras, and an Android data monitoring application. Sensor nodes detect harmful gases such as CO 2 and CO, and the camera collects traffic data in the city. Data stored on cloud servers and the Android application help users identify routes with high levels of air pollution and choose suitable routes. However, the authors did not consider dust, currently the leading pollutant, in their work.</ns0:p><ns0:p>Although the Android application helps users find less polluted routes, its interface is simple and does not meet the need for visual monitoring through graphs. The study also did not specify the most critical parameters of a WiFi-based WSN, i.e., the data transmission distance, network coverage, and sensor node lifetime. Furthermore, a WSN model that is entirely dependent on Wi-Fi is not reliable in practice. <ns0:ref type='bibr' target='#b23'>Marques et al. (2019)</ns0:ref> proposed a model for indoor air quality monitoring. The system uses an MHZ-16 sensor to measure CO 2 indoors and uses the ESP8266 module to transmit data to the web server. The IAirCO2 application helps users recognize the pollution level where the sensor is located. However, the application interface is still relatively simple and does not meet the need for visual monitoring through graphs. System customization and scalability are also limited because the design is based on Sparkfun's existing hardware. <ns0:ref type='bibr' target='#b4'>Arroyo et al. (2019)</ns0:ref> proposed a low-cost, small-size, low-power real-time air quality monitoring system. The sensor nodes communicate with the master station via Zigbee communication.</ns0:p><ns0:p>An optimized fog computing system is used to store, monitor, process, and visualize the sensor network's data. Data processing and analysis are implemented in the Cloud by applying artificial intelligence techniques to analyze compounds and contaminants.
Finally, the authors use a simple case study to prove the algorithm's effectiveness in detecting and classify harmful emissions.</ns0:p><ns0:p>The Zigbee network is proposed by <ns0:ref type='bibr' target='#b0'>Abraham and Li (2014)</ns0:ref> for indoor air quality monitoring. The hardware model is deployed indoor with four sensor nodes measuring air quality using the Zigbee network deployed by Arduino and XBee transmission module. Sensor nodes detect harmful gases parameters and sent to the base station for storing. The author is elaborate on processing measured data, and the parameters are represented in the form of objective graphs. However, the system's applicability is limited because the system only monitors indoor air quality and does not mention other polluting agents such as dust, bacteria in the house. Moreover, the author is not clear about the data transmission distance and protocol of Zigbee communication.</ns0:p><ns0:p>LoRa communication has been proposed and applied in many designs to solve the limited transmission distance problem of Wi-Fi, Bluetooth, or Zigbee. <ns0:ref type='bibr' target='#b2'>Alvear-Puertas et al. (2020)</ns0:ref> proposed a model to monitor parameters CO, NO 2 , PM 10 , PM 2.5 using an STM32F107 microcontroller. The system consists of a sensor Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>node that communicates with one server via LoRa communication. The authors have conducted many experiments measuring environmental parameters and comparing them with the national control station's standard data. As a result, the system can operate well within 2 km and an error of 5 to 8%. However, the proposed system model is quite simple and does not meet a wide-area WSN system's requirements. <ns0:ref type='bibr' target='#b11'>Firdaus et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Wang et al. (2017)</ns0:ref> present a similar model of using LoRa networks in air quality monitoring. In addition to the parameters of the level of environmental air pollution, the authors also examine the issues of battery life and time delay during communication. However, the authors have not given the survey when the system works with many sensor nodes and communication protocols to ensure the reliability of the data.</ns0:p><ns0:p>Most of the studies above have focused on air quality monitoring using WSNs with different transmission techniques. However, their most significant limitation is that the proposed model is quite simple and does not meet the practical requirements of WSN. The main system-specific parameters such as communication distance, packet loss rate have also not been investigated. Furthermore, the issue of improving air quality has not yet been investigated. Therefore, we propose a WSAN model for air quality monitoring and improvement using LoRa/LoRaWAN communication in this study.</ns0:p></ns0:div> <ns0:div><ns0:head>METHOD AND PROCEDURES System model</ns0:head><ns0:p>In this part, we present the proposed model of the WSAN system for indoor and outdoor air quality monitoring and improvement (WSAN-AQMIS) application using LoRa communication. The system consists of three parts: Indoor sensor cluster, Outdoor sensor cluster, and gateway which communicate through LoRa communication. The sensor nodes send information to the gateway using uplink LoRa, while the gateway controls the actuators through the downlink LoRa. Also, we use TTN to build real-time online data management software for our proposed system. 
Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows the proposed system model. Specifically, the indoor cluster includes four sensor nodes that monitor air quality parameters, including dust concentration, CO concentration, and GAS concentration. The indoor sensor node can connect to Wi-Fi with the Wi-Fi router and instantly transmit local data to the Blynk server. These nodes also feature integrated functions that can communicate with the gateway using the 433 MHz uplinks LoRa. Besides, we also design the outdoor cluster to provide long-range air quality monitoring for the system. The two outdoor sensor clusters are stratified into</ns0:p></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2021:05:61818:1:1:NEW 23 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the outdoor node and the full-duplex relay node. Each outdoor sensor node will have an integrated solar collector responsible for collecting temperature, humidity, air quality parameters and transmitting information to the relay node. The relay nodes have a built-in function that forwards sensing data to the gateway. Depending on air quality, the gateway will make decisions to control the actuator using downlink LoRa. The actuators in our system are divided into two categories for indoor or outdoor equipment. In the indoor node area, we are equipped with an exhaust fan, and a negative ion generator (NIG) to enhance the air quality. On the outdoor, nebulizer controls will be fitted. These devices help to increase air humidity and clean the dust in large spaces. We do not use NIG outdoors due to the disadvantages of the power supply and the limited capacity of the device.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture</ns0:head><ns0:p>In this section, we present the hardware design and technical working of the proposed system.</ns0:p><ns0:p>The Indoor cluster consists of four sensor nodes with two modes of operation. In the first mode, when the Wifi or LoRa transmission is inoperative, the nodes will communicate through ESP-NOW. It is a protocol developed by Espressif that helps nodes connect peer-to-peer does not require handshakes.</ns0:p><ns0:p>In the second mode, the nodes can communicate directly with the Blynk server via Wifi and connect to the gateway via LoRa communication. Although long-range is not required for indoor nodes, using the Lora-based for the whole system helps the data streams received by the gateway to have the same format and easily be processed. Furthermore, implementing LoRa makes it easy for us to perform Over-The-Air-Activation (OTAA), which is downloading new firmware to the ESP32 via wifi instead of using a traditional Serial port.</ns0:p><ns0:p>The Indoor nodes; as Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>; are distributed scattered in a narrow range such as households, offices, classrooms. We use the central processor, LoRa32 module manufactured by Heltec, which has a built-in LoRa communication module and peer-to-peer communication from the ESP32. It receives energy directly from the grid and communicates with dust, gas concentration, temperature, and humidity sensors. A critical component of the system is the full-duplex LoRa relay module as Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, capable of extending the system's coverage. 
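As described above, the gateway decides from the measured air quality which actuators to switch through the LoRa downlink: exhaust fan and negative ion generator (NIG) indoors, nebulizer outdoors. The sketch below illustrates such a decision rule in Python; the cut-offs echo the four indicator levels used at the nodes (good, average, poor, hazardous), but the exact thresholds, the fan/NIG split, and the command names are illustrative assumptions rather than the system's firmware logic.

def downlink_commands(aqi, cluster="indoor"):
    # Map an aggregated AQI value to on/off commands for one cluster's actuators.
    # Indicator bands at the nodes: good <= 50, average 51-200, poor 201-300, hazardous > 300.
    if cluster == "indoor":
        return {
            "exhaust_fan": aqi > 50,               # ventilate whenever air leaves the 'good' band
            "negative_ion_generator": aqi > 200,   # illustrative: ionise only in 'poor' air or worse
        }
    return {"nebulizer": aqi > 50}                 # outdoor cluster controls the nebulizer only

# Example: an indoor AQI of 230 switches on both the exhaust fan and the NIG.
commands = downlink_commands(230, "indoor")        # {'exhaust_fan': True, 'negative_ion_generator': True}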
According to TTN Fair Access Policy, the maximum number of nodes that this gateway can support is:</ns0:p><ns0:formula xml:id='formula_0'>n = n f .ns.dc T = 1.86400.1% 30 = 28 (1)</ns0:formula><ns0:p>where n f is the number of frequency bands that the gateway supports, ns is the number of seconds in a day, dc is duty cycle, T is the air time per device per day.</ns0:p><ns0:p>While the communication range between LoRa modules is excellent, we still recommend using this module for several reasons. First, two LoRa modules work together in a transceiver to make the system easy to expand while not causing additional communication delays. One LoRa module will act as the receiver, while the other will act as the transmitter; they are managed by SPI interface with microcontroller and operate by interrupt mechanism. The second reason, if the system operates in half-duplex mode, two Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Software</ns0:head><ns0:p>This section highlights the algorithm of the proposed system.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref> describes the indoor node algorithm flowchart. The MCU sets the necessary input/output parameters for the system and also specifies the ESP-NOW protocol. We used LoRaWAN as the default protocol for the proposed system, but in the case, LoRa modules do not guarantee communication, the ESP-32 is set up to communicate peer to peer using the MAC address thus improving system reliability.</ns0:p><ns0:p>In case the user wants to use Blynk for on-the-spot surveillance with a smartphone without going through the gateway, the Blynk server will be set up. In case the ESP-NOW protocol is enabled, indoor sensor nodes can communicate with each other directly via the super energy-saving 2.4 GHz channel. ESP-NOW has built-in sending and receiving callback functions to secure communication, so it is a reliable protocol.</ns0:p><ns0:p>Before transmitting, the nodes will perform a MAC address pairing operation. After pairing is complete, the devices have established a network and can communicate with peer-to-peer. If a node in the network connects to the Dragino gateway, the network data will be automatically sent. Data is collected at the nodes using the 'Read sensor data' module. Sensor information is collected by the ADC module integrated into the micro-controller. We use Kalman Filter, a powerful digital filter that combines current uncertainty with environmental noise into a new, more reliable form of information for future prediction. The strength of Kalman filter is very fast running and high stability. Furthermore, we deploy ADC noise reduction, an internal noise reduction mechanism in the microcontroller by limiting the operation of the IO module, to increase measurement accuracy. With the help of two techniques, the nodes ensure accurate data measurement.</ns0:p><ns0:p>The gateway's feedback is an Air quality index (AQI) value that helps sensor nodes perform on/off actuators. The procedure for calculating the AQI of the parameters SO 2 ,CO, NO 2 , PM 10 , PM 2.5 is as follow:</ns0:p><ns0:formula xml:id='formula_1'>AQI x = I i+1 &#8722; I i BP i+1 &#8722; BP i (C x &#8722; BP i ) + I i<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where:</ns0:p><ns0:p>&#8226; AQI x is the air quality index of x parameter.</ns0:p></ns0:div> <ns0:div><ns0:head>8/17</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:05:61818:1:1:NEW 23 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science &#8226; BP i is the lower limit concentration of observed parameter value specified in each region/country corresponding to the level i.</ns0:p><ns0:p>&#8226; BP i+1 is the upper limit concentration of observed parameter value specified in each region/country corresponding to level i + 1.</ns0:p><ns0:p>&#8226; I i is the AQI value at level i following the BP i value.</ns0:p><ns0:p>&#8226; I i + 1 is the AQI value at level i + 1 following the BP i+1 value.</ns0:p><ns0:p>&#8226; C x is specified as follows: For PM 2.5 and PM 10 parameters, C x is the average value collected after 24 hours. For SO 2 , NO 2 and CO parameters, C x is the average maximum value of one hour per day.</ns0:p><ns0:p>After having AQI x value of each parameter, the maximum value is chosen to be aggregated AQI value according to the following formula:</ns0:p><ns0:formula xml:id='formula_2'>AQI = max (AQI x )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Notice the aggregated AQI value is rounded to an integer. The outdoor node's operation is similar to the indoor node but does not include the ESP-NOW protocol and focuses on energy saving. Figure <ns0:ref type='figure' target='#fig_10'>8</ns0:ref> depicts the Outdoor node operation algorithm flowchart. First, the system sets the Input/Output ports, SPI Thus, two LoRa Ra02 modules controlled by RTOS ensure the information acquisition and forwarding of received information to the gateway smoothly. The full-duplex protocol allows the system to operate without additional communication delay. However, the trade-off is that the packet loss rate is difficult to guarantee because only one module does the LoRa data collection service. Moreover, the last Task, which is lowest priority, is optional and ensures the visual mechanisms on OLED for debugging. We use Prioritized Pre-emptive Scheduling with Time Slicing algorithm to determine which tasks should be put into the Running state <ns0:ref type='bibr' target='#b29'>(Trivedi (2014)</ns0:ref>). Thus, the task in the Ready state with the highest priority will be executed first or take over the execution of the running task.These data use the same data resources and are controlled and synchronized using the Mutex binary semaphore. A mutex can be viewed as a token and assigned to a data resource. Whenever a task wants to access a resource, a token must be held. Then other tasks will be queued until the token is released.</ns0:p></ns0:div> <ns0:div><ns0:head>Management Software</ns0:head><ns0:p>In this section, we discuss about the structure of the management software of WSAN-AQMIS on the The Things Network (TTN) and TagoIO tool.</ns0:p><ns0:p>The Things Network is a global collaborative Internet of Things network that allows all members to bring their network together like one big Internet. The main difference between TTN and earlier networks is that it is a community project that does not depend on any corporate networks. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science TagoIO is a tool that can be directly connected to TTN thanks to Authorization by device-token and used to design a dashboard. TagoIO dashboard contains widgets that help users efficiently observe and manipulate real-time data. All data are stored in Data Buckets. Once the LoRa device is connected to TTN, TagoIO will create a bucket to hold the corresponding data. 
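A minimal Python sketch of the AQI computation in equations (2) and (3) above: each pollutant's concentration C_x (24-hour mean for PM 2.5 and PM 10, maximum 1-hour mean for the gases) is mapped through its region-specific breakpoint table, and the maximum component index is rounded. The breakpoint rows shown are placeholders for illustration, not the table used by the authors, and the dashboard scripts in the actual system run on TagoIO rather than in Python.

def aqi_component(c_x, breakpoints):
    # Eq. (2): piecewise-linear index for one pollutant.
    # breakpoints: rows of (BP_i, BP_i+1, I_i, I_i+1), taken from the regional standard.
    for bp_lo, bp_hi, i_lo, i_hi in breakpoints:
        if bp_lo <= c_x <= bp_hi:
            return (i_hi - i_lo) / (bp_hi - bp_lo) * (c_x - bp_lo) + i_lo
    raise ValueError("concentration outside the breakpoint table")

def aggregated_aqi(concentrations, tables):
    # Eq. (3): the overall AQI is the maximum component index, rounded to an integer.
    return round(max(aqi_component(concentrations[x], tables[x]) for x in concentrations))

# Illustrative PM 2.5 breakpoints (micrograms per cubic metre) -- placeholders only.
pm25_table = [(0, 25, 0, 50), (25, 50, 50, 100), (50, 80, 100, 150), (80, 150, 150, 200)]
aqi = aggregated_aqi({"pm25": 32.0}, {"pm25": pm25_table})   # -> 64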
This study designed a dashboard consisting of two tabs to manage the indoor sensor cluster and outdoor sensor cluster. The Indoor sensor nodes tab displays parameters of dust concentration, gas concentration, temperature, and humidity. These parameters are represented as graphs as follows:</ns0:p><ns0:p>&#8226; Gauge chart: display instant information of environmental parameters</ns0:p><ns0:p>&#8226; Vertical and horizontal bar graphs: information 5 and 10 nearest signal samples are displayed; the higher the value, the higher AQI.</ns0:p><ns0:p>&#8226; Time chart: graph of parameters on the 1-hour scale.</ns0:p><ns0:p>The user can use the node option to access the Dashboard of the node of interest. The charting functionality is similarly designed for the Outdoor sensor nodes tab. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science network. Two dashboards are active at the same time ensure that the data monitoring process is always reliable. Furthermore, the Blynk dashboard is a reference tool to debug the system, thanks to its friendly user interface support. </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTATION AND RESULTS</ns0:head><ns0:p>In this section, we evaluate system performance in various scenarios using the parameter of packet loss rate. We set the experimental parameters in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>.</ns0:p><ns0:p>The completed hardware deployed in work is shown in Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref>, positioned with nodes as Figure <ns0:ref type='figure' target='#fig_5'>14</ns0:ref>.</ns0:p><ns0:p>The steps to perform the experiment are as follows:</ns0:p><ns0:p>&#8226; Determine experimental model: In this step, we select the devices participating in the experiment.</ns0:p><ns0:p>&#8226; Determine evaluation parameters: we only choose 1 or 2 parameters to be investigated for each experiment; other parameters are kept fixed.</ns0:p><ns0:p>&#8226; Conduct experiments: carry out the survey 50 times for each experiment and take the average value for the measurements.</ns0:p><ns0:p>In the first test scenario, we examine the packet loss rate according to communication distance. The scenario experiment consists of two outdoor sensor nodes, one full-duplex relay node, and one gateway.</ns0:p><ns0:p>Gateway is permanently located close to the Wifi router so that the dashboard can operate stably. Figure <ns0:ref type='figure' target='#fig_18'>15</ns0:ref> depicts the packet loss rate with increasing distance from the outdoor node to the relay node. We set the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>sensor nodes send 500 frames of numbered data to the gateway at bandwidth BW is 125 KHz, code rate CR is 1. Each payload frame long 30 bytes. The dashboard will receive and record the number of packets lost, duplicated, and incorrectly formatted during this communication. We increase the distance between the outdoor node and the relay d nr from 100m to 6km and measure the packet loss rate. The obtained results show that the system works well in the distance of 4 km and the rate packet loss is less than 1%. We continue to investigate the impact of FreeRTOS in the recommendation system. Using FreeRTOS ensures the full-duplex relay node transmits and receives data simultaneously and minimizes the possibility of data conflicts on the transmission line, thereby improving system performance. 
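(A side note on the statistics reported in these experiments: the dashboard counts lost, duplicated, and incorrectly formatted frames out of the 500 numbered frames sent. The small sketch below shows one way to derive those counts; the log format, a flat list of received frame numbers with None for undecodable frames, is a hypothetical simplification.)

from collections import Counter

def frame_stats(received_ids, sent=500):
    # received_ids: frame numbers parsed at the dashboard; None marks a frame that
    # arrived but could not be decoded (incorrect format).
    malformed = sum(1 for i in received_ids if i is None)
    counts = Counter(i for i in received_ids if i is not None)
    duplicated = sum(c - 1 for c in counts.values() if c > 1)
    lost = sent - len(counts)                        # unique frame numbers never seen
    return {"loss_rate": lost / sent, "duplicated": duplicated, "malformed": malformed}

# Example: frames 0-497 received once, frame 498 twice, frame 499 lost, one undecodable frame.
summary = frame_stats(list(range(498)) + [498, 498, None])   # loss_rate 0.002, duplicated 1, malformed 1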
The experimental results</ns0:p><ns0:p>show that using FreeRTOS at the relay node can reduce the packet loss rate to less than 0.01% over a distance of 4.5km. Moreover, using FreeRTOS significantly improves the communication distance.</ns0:p><ns0:p>It can be explained as follows: When not using FreeRTOS, Ra01 modules receive transmissions from nodes when the LoRa signal has a good enough signal-to-noise (SNR) ratio. In case the communication distance is too large, the two Ra01 modules on the relay node get the internal communication interference with each other because the distance between them is very close. Meanwhile, using FreeRTOS helps the system divide tasks according to available hardware, thereby eliminating internal communication interference, increasing SNR, and helping the system operate over longer distances. After 50 times of this experiment, we recommend the optimal location for the outdoor sensor nodes with relay node to be 4 km in the case not use RTOS system and 4.5 km when use the RTOS system. Figure <ns0:ref type='figure' target='#fig_21'>16</ns0:ref> depicts a similar experimental scenario, with the distance between two outdoor nodes and the relay (d nr ) is fixed at 4km and gradually increases the distance between the relay node to the gateway. All system parameters are kept as in the first scenario. The results once again confirm the use of FreeRTOS can reduce the number of packet loss in the proposed system. We also recommend that the distance between the relay node and the gateway is 3.3 km in not using the RTOS and 4 km in using the RTOS.</ns0:p><ns0:p>In the next experiment, we investigate the effect of the payload on the transmission speed, expressed and increment the payload in this experiment. The results show in Figure <ns0:ref type='figure' target='#fig_22'>17</ns0:ref> demonstrates that the higher the payload, the higher the ToA, which means the slower the transmission speed. Another observation is that the higher the SF, the higher the ToA. It concludes that a high SF can cause great latency; however, a high SF will be used in some cases where an increase in coverage is desired.</ns0:p><ns0:p>We continue to investigate indoor air quality improvement for the entire system with the parameters given in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>. The parameters are collected in 24 consecutive hours and statistically by Dashboard TTN in 2 cases: (i) system does not use the actuators and (ii) system uses such devices. The results show that using a negative ion generator and exhaust fan significantly improves air quality, especially for areas with narrow space as Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>This work presents the AQMIS based on WSAN. The proposed system involves four indoor nodes that operate synchronously with two different protocols,i.e., LoRaWAN and ESP-NOW, under a gateway's management. Besides, the outdoor sub-system cluster allows for remote air quality monitoring. The system has extensive coverage thanks to the full-relay LoRa module's operation under the real-time operating system FreeRTOS. Furthermore, we apply ADC noise reduction and Kalman filter to increase measurement accuracy. The system is monitored online via a dashboard based on TTN and TagoIO. 
The control signals are automatically fed back to the corresponding actuator through the downlink LoRa.</ns0:p><ns0:p>The experiment results show that the system performance is highly achieved and capable of practical applications. The amount of air quality data that we have collected over the past three months is of great value in environmental management.</ns0:p><ns0:p>In future work, we will continue to expand the system by equipping mobile for indoor nodes. Besides, we will install more outdoor nodes for the planning of air quality maps. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61818:1:1:NEW 23 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. System model of Wireless sensor and actuator network for Air monitoring and improvement application.</ns0:figDesc><ns0:graphic coords='5,141.73,379.06,413.57,232.63' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Furthermore, the module can work well with many different types of sensors in the MQ brand, i.e., MQ2 to MQ9 family. The LoRa32 module is designed with a 0.96-inch OLED screen capable of displaying all collected sensor parameters. Besides, the LED indicator system is designed to show four pollution levels in the monitoring area according to standards: Good (Air quality index -AQI &lt; 50) -average (51 &lt; AQI &lt; 200) -poor (201 &lt; AQI &lt; 300) -hazardous (AQI &gt; 300). Depending on the control signal received from the gateway, the indoor node will control two corresponding actuators, the exhaust fan and the negative ion generator.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Indoor node schematic</ns0:figDesc><ns0:graphic coords='6,141.73,457.37,413.56,215.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Negative Ion generator schematic</ns0:figDesc><ns0:graphic coords='7,141.73,146.08,413.58,231.91' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Outdoor node schematic.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.57,279.44' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Relay node schematic.</ns0:figDesc><ns0:graphic coords='8,141.73,446.76,413.55,236.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The LoRa gateway Dragino.</ns0:figDesc><ns0:graphic coords='9,263.50,218.88,170.04,131.88' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. The algorithm flowchart of Indoor node.</ns0:figDesc><ns0:graphic coords='10,196.13,63.78,304.78,435.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61818:1:1:NEW 23 Jul 2021)Manuscript to be reviewedComputer Scienceports for the LoRa module, the Interrupt services with low priority, and the Deep sleep mechanism with high priority. Deep sleep and periodic wake-up mechanisms for the outdoor node ensure that the outdoor nodes consume the lowest power levels. 
We recommend using an interrupt service routine for receiving control data from the LoRa relay. After receiving control data, the MCU controls the respective actuators' opening/closing to improve the air quality in the surveillance area.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The algorithm flowchart of Outdoor node.</ns0:figDesc><ns0:graphic coords='11,211.43,134.69,274.18,241.74' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9 shows the algorithm flowchart for the full-duplex LoRa relay module. We proposed embedding the FreeRTOS real-time operating system with MCU to ensure simultaneous sending and receiving tasks (Kampmann et al. (2019); Docekal and Slanina (2017)). FreeRTOS is an open-source Real-Time Operating System developed by Real Time Engineers Ltd. FreeRTOS is designed with straightforward functionality such as basic task and memory management, synchronization API functions, with a total size of only 4.3 KB. Here, the MCU performs three tasks simultaneously with decreasing priority as follows. The first Task, which has the highest priority, ensures that LoRa data is always listened to by the LoRa modules. The second task, which has a lower priority, takes on the role of the LoRa transmitter.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The algorithm flowchart of Full-duplex relay node.</ns0:figDesc><ns0:graphic coords='12,163.70,63.78,369.65,133.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. The Tago dashboard intergrated with TTN for the Indoor sensor node.</ns0:figDesc><ns0:graphic coords='12,162.89,298.51,371.26,171.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Figure 10 depicts the live data of indoor node ID 1, while Figure 11 depicts the data at two outdoor nodes managed by a relay. A mighty function that we exploit in TagoIO is data statistics. The scripts that run at TagoIO are programmed with Node.js. We use scripts to calculate AQI, export reports, and control the downlink actuator with the Actions tool. Data is stored and exported to any email as CSV format. All designs on the dashboard are synchronized on the real-time smartphone interface as Figure 12. Besides, the indoor monitoring system can work well thanks to the ESP-NOW protocol and monitor by Blynk when there is no WIFi 11/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61818:1:1:NEW 23 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The Tago dashboard intergrated with TTN for the Outdoor sensor node.</ns0:figDesc><ns0:graphic coords='13,168.61,63.78,359.81,176.63' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The Tago dashboard on Smartphone and Blynk dashboard</ns0:figDesc><ns0:graphic coords='13,222.52,329.43,252.00,149.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 13 .Figure 14 .</ns0:head><ns0:label>1314</ns0:label><ns0:figDesc>Figure 13. 
The complete model of the proposed system</ns0:figDesc><ns0:graphic coords='14,189.79,254.09,317.46,234.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. The packet loss rate vs. the distance from Outdoor node to relay node.</ns0:figDesc><ns0:graphic coords='15,243.52,292.02,210.00,157.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>by the Time on-air (ToA) parameter as Figure17. The experiment scenario consists of an outdoor sensor node, a full-duplex relay node, and a gateway. The system parameters are set as follows: BW = 125 KHz, CR = 1, d nr = 4 km, d rg = 4 km. In turn, we set up different spreading factor (SF) values (from 7 to 12)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61818:1:1:NEW 23 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. The packet loss rate vs. the distance from relay node to gateway.</ns0:figDesc><ns0:graphic coords='16,243.52,63.77,210.00,157.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. The Time on air vs. different payload.</ns0:figDesc><ns0:graphic coords='16,243.52,255.45,210.00,157.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>TABLE 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>. Simulation Parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>Notation</ns0:cell><ns0:cell>Typical Values</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of users in Indoor cluster</ns0:cell><ns0:cell>M</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of users in Outdoor cluster</ns0:cell><ns0:cell>N</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of relay node</ns0:cell><ns0:cell>K</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Bandwidth</ns0:cell><ns0:cell>BW</ns0:cell><ns0:cell>125 KHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Code rate</ns0:cell><ns0:cell>CR</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Spreading factor</ns0:cell><ns0:cell>SF</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of payload frame</ns0:cell><ns0:cell>P</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>Payload length</ns0:cell><ns0:cell>L</ns0:cell><ns0:cell>30 byte</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Distance from the relay node to the gateway d rg</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>TABLE 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>. 
AQI statistics over 24 hours at indoor locations without actuators (case i) and with actuators (case ii)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Time</ns0:cell><ns0:cell>Location</ns0:cell><ns0:cell>AQI case (i)</ns0:cell><ns0:cell>AQI case (ii)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>00:00 to 05:59 Kitchen</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>06:00 to 11:59 Kitchen</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12:00 to 17:59 Kitchen</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>18:00 to 23:59 Kitchen</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>00:00 to 05:59 Living room</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>06:00 to 11:59 Living room</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12:00 to 17:59 Living room</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>18:00 to 23:59 Living room</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>00:00 to 05:59 Bedroom</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>06:00 to 11:59 Bedroom</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12:00 to 17:59 Bedroom</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>18:00 to 23:59 Bedroom</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
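The time-on-air behaviour reported in Figure 17 above follows directly from the standard Semtech SX127x air-time formula. The self-contained C++ sketch below is only an illustration of that formula for the experiment settings (BW = 125 kHz, CR = 1, 30-byte payload, SF 7 to 12); the preamble length, CRC and header flags use the usual defaults, and the low-data-rate-optimisation flag that radios typically enable at SF11/SF12 with 125 kHz bandwidth is exposed as a parameter rather than set automatically.

#include <algorithm>
#include <cmath>
#include <cstdio>

// LoRa air-time in milliseconds, from the SX127x datasheet formula.
// sf: spreading factor, bw_hz: bandwidth in Hz, cr: coding-rate index (1 => 4/5),
// payload_bytes: application payload length L.
double lora_time_on_air_ms(int sf, double bw_hz, int cr, int payload_bytes,
                           int preamble_syms = 8, bool crc_on = true,
                           bool implicit_header = false, bool low_dr_opt = false) {
    double t_sym = std::pow(2.0, sf) / bw_hz * 1000.0;            // symbol duration in ms
    double num = 8.0 * payload_bytes - 4.0 * sf + 28.0
               + 16.0 * (crc_on ? 1 : 0) - 20.0 * (implicit_header ? 1 : 0);
    double den = 4.0 * (sf - 2 * (low_dr_opt ? 1 : 0));
    double n_payload = 8.0 + std::max(std::ceil(num / den) * (cr + 4), 0.0);
    return (preamble_syms + 4.25) * t_sym + n_payload * t_sym;    // preamble + payload symbols
}

int main() {
    for (int sf = 7; sf <= 12; ++sf)                              // the SF sweep used in Figure 17
        std::printf("SF%d: %.1f ms\n", sf, lora_time_on_air_ms(sf, 125e3, 1, 30));
    return 0;
}

For a 30-byte frame this gives roughly 70 ms at SF7 and on the order of 1.5 s at SF12, which is the latency/coverage trade-off the experiment illustrates.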
"Responses to the reviewers’ comments A novel air quality monitoring and improvement system based on wireless sensor and actuator networks using LoRa communication Van-Truong Truong, Anand Nayyar, and Mehedi Masud July 23, 2021 Dear Editor and Reviewers, We would like to express our sincere thanks to the Editor and Reviewers for the time spent evaluating our manuscript and providing us with such a variety of constructive comments, which helped us improve the presentation of our manuscript. We have carefully addressed all comments of the editor and reviewers and provided a pointby-point reply. The comments of the editor and reviewers are typeset in italic font, and our responses are shown in black with yellow background. Additionally, in the revised manuscript, explanations, and justifications according to the reviewers' comments are shown in italic font. Minor corrections and rearrangements have not been highlighted. Yours sincerely, Van-Truong Truong Anand Nayyar Mehedi Masud 1. Responses to the comments of Editor (Muhammad Tariq) First, we would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. We have revised our manuscript according to these comments and provide detailed responses to each comment as follows. 1.1 I find it difficult to recognize the contribution to the body of knowledge in this work. The authors need to justify the novelty of the manuscript. Reply: We are sorry to hear that. Please let us explain the contributions of our paper as follows. In this paper, we proposed two layers WSAN based on LoRa/LoRaWAN communication for AQMIS, which contains two sub-systems: indoor cluster and outdoor clusters. The system can connect to many different sensors such as dust, CO, LPG, and CH4 concentration sensors. Not only has the monitoring function, but the system can also improve air quality thanks to the negative ion generating modules and other actuators such as exhaust fans, nebulizers. The test results show the effectiveness of the proposed model. In addition, we proposed the full-duplex relay LoRa module to expand the system. We proposed using LoRa wireless communication technology for node-to-relay layer and LoRaWAN, a network protocol using LoRa, for relay-to-gateway layer. This design solves the problem in LoRaWAN networks of not supporting node-to-node, which is very important in WSN. To ensure that the packet loss rate is less than 0.1%, we suggest applying FreeRTOS in the full-duplex relay module. Surveys have proven that this method can improve system performance, such as reducing packet loss rate, increasing communication distance. Furthermore, we use ADC noise reduction mode and digital Kalman filters to enhance the reliability of the air parameter measurements. We also design the software monitoring parameters on the web and smartphone interface using The Things Network (TTN) and TagoIO platform. Revision: We updated the main contribution of paper. - Line 94: To the best of our knowledge, there are currently no works on the application of WSAN based on two layers LoRa/LoRaWAN in AQMIS, so it motivates us to do this study. Our proposed model well solves the disadvantages of LoRaWAN: it does not support nodeto-node communication. Moreover, communication layering also enhances the scalability of the system. The main contribution of our work is as follow: - We proposed two layers WSAN based on LoRa/LoRaWAN communication for AQMIS, which contains two sub-systems: indoor and outdoor. 
The system can connect to many different sensors such as dust, CO, LPG, and CH4 concentration sensors. The system is expandable due to the operation of the full-duplex relay LoRa module. - We proposed using LoRa wireless communication technology for node-to-relay layer and LoRaWAN, a network protocol using LoRa, for relay-to-gateway layer. - We proposed designing negative ion generators as actuators at the nodes with high efficiency and operating continuously for a long time. - We proposed using ADC noise reduction mode and digital Kalman filters to enhance the reliability of the air parameter measurements. In addition, we proposed using the real-time operating system FreeRTOS to manage tasks for the full-duplex relay LoRa module. - We design the software monitoring parameters on the web and smartphone interface using The Things Network (TTN) and TagoIO platform. 1.2 Why are the authors used a schematic diagram for the indoor node in Figure 2? Reply: We use schematic diagrams and provide actual PCB images of indoor nodes, outdoor nodes, and relay nodes. In the indoor node, we use the central processor, Heltec's LoRa32 board, while in other boards, we use Microchip's Atmega328P chip. We design the indoor nodes to be integrated with LoRa32 because besides integrating the LoRa module, we can also take advantage of the power of ESP32 with ESP-NOW protocol. Other nodes use Atmega328P to save energy and reduce system costs. Revision: We updated the sentence to clarify the MCU using in schematic diagram -Line 201: We use the central processor, LoRa32 module manufactured by Heltec, which has a built-in LoRa communication module and peer-to-peer communication from the ESP32. -Line 225: The heart of the device is the Atmega328P from Microchip… 1.3 Is PCB design by authors themselves? If not then a proper reference should be provided? Reply: We would like to thank the Reviewer for this comment. All schematic and PCB are designed by ourselves. We carry out the design and optimization stages of circuits using Orcad 16.6 software and then use printed circuit manufacturers' services to produce PCBs. The remaining stages, such as soldering, testing, and 3D printing the box, are all performed by the team. We have provided a full schematic used in manuscript, here, we have presented some more pictures of the PCB design. All PCB image and design file (.MAX files) will send to the Supplemental Files. Indoor node Relay node Outdoor node Revision: We have not updated the manuscript for this comment. 1.4 There are various state-of-the-art air quality monitoring systems are developed. The authors should draw a comparison with the existing state-of-the-art works. Reply: Many thanks for this comment. In the manuscript, we did a literature review of several articles related to our work. We found that in those studies there are still some main disadvantages, which can be listed as follows: - There are no specific circuit designs that use assembled components, which do not guarantee the anti-interference properties of the system. - The critical parameters of the system such as packet loss rate, transmission distance have not been investigated. - Have not proposed and analyzed the scalability for the system. - Protocols and techniques to improve system performance have not yet been proposed. Revision: We updated the Related work to clarify comparisons of proposed systems and those already implemented. 
- Line 124: The study also did not specify that the most critical parameters of Wifi-based WSN, i.e., the distance of data transmission, network coverage, the lifetime of the sensor node. Furthermore, a WSN model that is entirely dependent on Wi-Fi is not reliable in practice. - Line 130: However, the defect of the application interface is still relatively simple, not meeting the graph's visual monitoring need. System customization and scalability are also limited because the design is based on Sparfkun's existing hardware. -Line 144: However, the system's applicability is limited because the system only monitors indoor air quality and does not mention other polluting agents such as dust, bacteria in the house. Moreover, the author is not clear about the data transmission distance and protocol of Zigbee communication. -Line 160: Most of the studies above have focused on air quality monitoring using WSNs with different transmission techniques. However, their most significant limitation is that the proposed model is quite simple and does not meet the practical requirements of WSN. The main system-specific parameters such as communication distance, packet loss rate have also not been investigated. Furthermore, the issue of improving air quality has not yet been investigated. Therefore, we propose a WSAN model for air quality monitoring and improvement using LoRa/LoRaWAN communication in this study. 1.5 LoRa is used for long-range (up to 2 KM range) and low data rate communication. The authors should justify why they have used it in indoor communication where longer range is not always desired. Reply: Thank you for your technical question. We use LoRa for indoor nodes to synchronize with the Dragino LoRa gateway in the system. Although long-range is not required for indoor nodes, using the same protocol for the whole system helps the data streams received by the gateway to have the same format and easily be processed, specifically as follows: Preample Payload length CR CRC Payload Payload CRC - The preamble is used to detect the start of the packet by the receiver. - The payload length (bytes). - The forward error correction code rate (CR). - The 16 bits Cyclic Redundancy Check (CRC) for the payload. - The payload is a variable-length field that includes the actual data. An optional payload CRC may be added. Furthermore, implementing LoRa for the entire system makes it easy for us to perform Over-The-Air-Activation (OTAA), which is the process of downloading new firmware to the ESP32 via wifi instead of using a traditional Serial port. Revision: - Line 160: Although long-range is not required for indoor nodes, using the Lora-based for the whole system helps the data streams received by the gateway to have the same format and easily be processed. Furthermore, implementing LoRa makes it easy for us to perform Over-The-Air-Activation (OTAA), which is downloading new firmware to the ESP32 via wifi instead of using a traditional Serial port. We hope that the Editor satisfies with our responses. Again, thank you very much for your helpful comments. 2. Responses to the comments of Reviewer 1 First, we would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. We have revised our manuscript according to these comments and provide detailed responses to each comment as follows. 
The Research Paper needs the following Minor Revisions and is subject for the following revisions and after revisions, the paper is recommended for re-review: 2.1 Make sure you use the same style of notation and make sure all be consistent. Reply: Many thanks for this comment. We double-checked the entire manuscript and made sure the symbols were consistent and presented in the same style. 2.2 Thoroughly revise the manuscript to correct all typo/grammatical/spelling errors. The authors consider editing some sentences to be more appropriate, for example: - Line 63: Besides hardware, wireless communication techniques in WSAN networks are also a significant issue - Line 273: The second Task, the second, transmitters every time it is received from the nodes to gateway. Moreover, the last Task is optional and ensures the visual mechanisms on OLED for debugging. - Line 284: Anyone can build apps without permission from big companies or the government Reply: We would like to thank the Reviewer for this comment. We have corrected the typos and improve the writing of the manuscript. Along with that effort, we also edit the grammar, phrasing, and punctuation. We hope the manuscript has been improved the flow and readability. We have updated the manuscript by improving the presentation and English. Revision: Following your comments, we updated the sentences as follow: - Line 63: Besides the challenges of hardware deployment of WSAN systems, software and perfecting communication protocols in sensor networks are also very complex. - Line 301: The second task, which has a lower priority, takes on the role of the LoRa transmitter. Thus, two LoRa Ra02 modules controlled by RTOS ensure the information acquisition and forwarding of received information to the gateway smoothly. The fullduplex protocol allows the system to operate without additional communication delay. However, the trade-off is that the packet loss rate is difficult to guarantee because only one module does the LoRa data collection service. - Line 319: That means developers can build their applications completely independently and get great support from the open LoRa community, independent of other service providers. 2.3 Authors should clearly define the terms LoRa and LoRaWAN used in manuscripts. Some places there is confusion between these two terms, for example line 94, line 95, line 156, line 160. Reply: Many thanks to the reviewer for this deep technical insight. Our proposed system is implemented in 2 layers: (i) layer of sensors communicating with relay node, and (ii) layer of relay nodes communicating with Dragino gateway. Correspondingly, we use LoRa wireless communication technology for layer (i) and LoRaWAN, a network protocol using LoRa, for layer (ii). Thus, this proposed system works according to the LoRaWAN standard and supports node-to-node communication. The advantage is that the system can scale without deploying too many gateways. However, as suggested by the reviewer, we did not clarify these two concepts in the manuscript. Revision: We updated the information about LoRa and LoRaWAN to clarify the advantages of the proposed system. - Line 94: To the best of our knowledge, there are currently no works on the application of WSAN based on two layers LoRa/LoRaWAN in AQMIS, so it motivates us to do this study. - Line 102: We proposed using LoRa wireless communication technology for node-to-relay layer and LoRaWAN, a network protocol using LoRa, for relay-to-gateway layer. 2.4 a. 
The author clarifies the impact of FreeRTOS on system performance. Reply: We would like to thank the Reviewer for this comment. Using FreeRTOS in the proposed system plays an essential role in ensuring a low packet loss rate. We investigated the impact of FreeRTOS in experiments in Figures 17 and 18. The obtained results demonstrate the advantage of FreeRTOS in reducing packet loss rate. Revision: We updated the sentence in Experimentation and discussion section. - Line 365: We continue to investigate the impact of FreeRTOS in the recommendation system. Using FreeRTOS ensures the full-duplex relay node transmits and receives data simultaneously and minimizes the possibility of data conflicts on the transmission line, thereby improving system performance. The experimental results show that using FreeRTOS at the relay node can reduce the packet loss rate to less than 0.01\% over a distance of 4.5km. Moreover, using FreeRTOS significantly improves the communication distance. It can be explained as follows: When not using FreeRTOS, Ra01 modules receive transmissions from nodes when the LoRa signal has a good enough signal-to-noise (SNR) ratio. In case the communication distance is too large, the two Ra01 modules on the relay node get the internal communication interference with each other because the distance between them is very close. Meanwhile, using FreeRTOS helps the system divide tasks according to available hardware, thereby eliminating internal communication interference, increasing SNR, and helping the system operate over longer distances. b. What is the size of this operating system? Is FreeRTOS really effective when applied to a small microcontroller like Atmega328P? Get some line updates for an explanation FreeRTOS applied in this system. Reply: Thank you for your technical question. We use FreeRTOS embedded in a fullduplex relay for the proposed system. FreeRTOS is an open-source Real-Time Operating System developed by Real Time Engineers Ltd. FreeRTOS is designed with straightforward functionality such as basic task and memory management, synchronization API functions, and no provisioning network interfaces or file systems. FreeRTOS supports many different microcontroller architectures, compact size (4.3 Kbytes after compiling on Arduino), written in C language. In addition, it also allows implementing moderation mechanisms between processes such as queues, counting semaphores, mutexes. Therefore, FreeRTOS is a very suitable choice for small microcontrollers like Atmega328P. Revision: We updated the sentence to explain FreeRTOS in proposed system. - Line 294: We proposed embedding the FreeRTOS real-time operating system with MCU to ensure simultaneous sending and receiving tasks [R1], [R2]. FreeRTOS is an opensource Real-Time Operating System developed by Real Time Engineers Ltd. FreeRTOS is designed with straightforward functionality such as basic task and memory management, synchronization API functions, with a total size of only 4.3 KB. [R1] Kampmann, A., Wüstenberg, A., Alrifaee, B., & Kowalewski, S. (2019, October). A portable implementation of the real-time publish-subscribe protocol for microcontrollers in distributed robotic applications. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 443-448). IEEE. [R2] Docekal, T., & Slanina, Z. (2017, May). Control system based on freertos for data acquisition and distribution on swarm robotics platform. In 2017 18th International Carpathian Control Conference (ICCC) (pp. 434-439). IEEE. 
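To make the FreeRTOS arrangement described in this response easier to picture, the sketch below shows how the three relay tasks could be created with decreasing priority on the ATmega328P using the Arduino FreeRTOS port. It is a minimal illustration under our own assumptions, not the authors' firmware: loraListen(), loraForward() and oledShow() are placeholder names for the Ra-02 driver and OLED calls, and the stack sizes and delays are indicative only.

#include <Arduino_FreeRTOS.h>

// Placeholders for the real LoRa/OLED routines running on the relay board.
void loraListen()  { /* receive an uplink frame on LoRa module 1 */ }
void loraForward() { /* transmit a buffered frame to the gateway on LoRa module 2 */ }
void oledShow()    { /* optional debug display */ }

void TaskReceive(void *pv) {          // priority 3: never miss an incoming LoRa frame
  for (;;) { loraListen(); vTaskDelay(1); }
}
void TaskForward(void *pv) {          // priority 2: relay buffered frames to the gateway
  for (;;) { loraForward(); vTaskDelay(pdMS_TO_TICKS(50)); }
}
void TaskDebugOled(void *pv) {        // priority 1: lowest, purely for debugging
  for (;;) { oledShow(); vTaskDelay(pdMS_TO_TICKS(500)); }
}

void setup() {
  xTaskCreate(TaskReceive,   "rx",   128, NULL, 3, NULL);
  xTaskCreate(TaskForward,   "tx",   128, NULL, 2, NULL);
  xTaskCreate(TaskDebugOled, "oled", 128, NULL, 1, NULL);
  // Depending on the port, the scheduler starts automatically after setup()
  // or must be started explicitly with vTaskStartScheduler().
}
void loop() { /* unused: the tasks own the CPU once the scheduler runs */ }

Because pre-emption is priority based, the transmitter and the display only run while the receiver is blocked in vTaskDelay(), which matches the behaviour described above.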
2.5 The author updates more information about the ESP-NOW protocol used in the system. State the reason for using this protocol in the proposed system, when LoRaWAN has a very optimal coverage distance. Reply: Thank you for your technical question. We agree with the comment that LoRaWAN is a protocol that ensures excellent coverage for WSNs, so we used LoRaWAN as the default protocol for the proposed system. Besides, we introduced the second protocol for the system, ESP-NOW, to take advantage of the power from ESP-32 modules. If LoRa modules do not guarantee communication, the ESP-32 is set up to communicate peer to peer using the MAC address, thus improving system reliability. Revision: We updated the sentence to explain ESP-NOW in proposed system. - Line 259: We used LoRaWAN as the default protocol for the proposed system, but in the case, LoRa modules do not guarantee communication, the ESP-32 is set up to communicate peer to peer using the MAC address thus improving system reliability. 2.6 a. Why do the authors suggest using a hybrid protocol between LoRa and LoRaWAN? Reply: Thank you for your technical question. As explained in question 2.3, we use LoRa for the sensor-to-relay layer and standard LoRaWAN for the relay-to-gateway layer. We proposed 2-layer communication process to solve the disadvantage of LoRaWAN: does not support node-to-node communication. With the proposed system and protocol, we ensure that communication can happen at the node-to-node and node-to-gateway levels, and that is also one of our research contributions. Revision: We add this sentence to clarify the reason for using LoRa and LoRaWAN protocols. - Line 102: We proposed using LoRa wireless communication technology for node-to-relay layer and LoRaWAN, a network protocol using LoRa, for relay-to-gateway layer. b. Can the recommendation system fully utilize LoRaWAN? The author presents some discussion on this issue. Reply: Thank you for your question. A WSAN system can completely use only LoRaWAN protocol to work like many other studies, but expanding LoRaWAN systems is very expensive. The reason is that each LoRa gateway can support a finite number of nodes with a finite transmission distance. Once the system needs to increase the number of nodes or increase the coverage, the gateway stations, including the LoRa gateway module, electrical infrastructure, and the Internet, must also increase and take more cost. Our proposed model is a solution to this problem: using relay nodes can help to expand the system, and at the same time can increase the number of sensor nodes in the network, moreover without increasing the communication delay time if the system is set up correctly. Revision: We add this sentence to clarify the reason for using two layer protocol. - Line 95: Our proposed model well solves the disadvantages of LoRaWAN: it does not support node-to-node communication. Moreover, communication layering also enhances the scalability of the system. 2.7 In the proposed figure, do outdoor nodes use an air quality improvement mechanism? Get some line updates for this issue. Reply: We would like to thank the Reviewer for this comment. We have implemented an outdoor nodes nebulizer devices. These devices help to increase air humidity and clean the dust in large spaces. We do not use negative ion generators (NIG) outdoors due to the disadvantages of the power supply and the limited capacity of the device. 
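A brief sketch may also help make the ESP-NOW fallback discussed in response 2.5 concrete: pairing is done purely by MAC address and delivery is confirmed in a send callback. The code assumes the Arduino-ESP32 2.x core (callback signatures differ slightly in newer core versions), and the peer MAC address and payload are placeholders.

#include <WiFi.h>
#include <esp_now.h>

// Placeholder MAC address of the neighbouring indoor node.
uint8_t peerMac[6] = {0x24, 0x6F, 0x28, 0x00, 0x00, 0x01};

void onSent(const uint8_t *mac, esp_now_send_status_t status) {
  // Delivery confirmation; a retry could be scheduled here on failure.
}
void onRecv(const uint8_t *mac, const uint8_t *data, int len) {
  // Handle a reading forwarded by a peer node.
}

void setup() {
  WiFi.mode(WIFI_STA);                   // ESP-NOW runs on the station interface
  if (esp_now_init() != ESP_OK) return;
  esp_now_register_send_cb(onSent);
  esp_now_register_recv_cb(onRecv);

  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, peerMac, 6);    // pairing is purely MAC-address based
  peer.channel = 0;                      // use the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  float reading = 0.0f;                  // placeholder sensor value
  esp_now_send(peerMac, (uint8_t *)&reading, sizeof(reading));
  delay(5000);
}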
The medium power NIG used in this study only works in closed spaces, with the efficiency statistics given in Table 2. Revision: We add this sentence to clarify the air quality improvement mechanism for outdoor node. - Line 186: These devices help to increase air humidity and clean the dust in large spaces. We do not use NIG outdoors due to the disadvantages of the power supply and the limited capacity of the device. 2.8 The full-duplex relay node algorithm is ambiguous. Tasks have not been clearly shown their priority. The graphical representation of the sequential algorithm has not yet exploded the advantage of the parallel work of FreeRTOS. Reply: We sincerely thank you for pointing out this lack in our manuscript. We implemented FreeRTOS consisting of 3 Tasks with decreasing priority from Task 1 to Task 3. Task 1 receives LoRa signals from sensor nodes. Task 2 transfers the received signal to the gateway, and Task 3 displays the decoded information from the LoRa signal. We use “Prioritized Pre-emptive Scheduling with Time Slicing” to determine which tasks should be put into the Running state. Thus, the task in the Ready state with the highest priority will be executed first or take over the execution of the running task; Tasks with the same priority will be split to execute concurrently by time slices. Revision: We updated the algorithm flowchart to clarify the operation of the embedded FreeRTOS in the full-duplex relay node. - Line 307: We use Prioritized Pre-emptive Scheduling with Time Slicing algorithm to determine which tasks should be put into the Running state [R3]. Thus, the task in the Ready state with the highest priority will be executed first or take over the execution of the running task. [R3] Trivedi, P. A. (2014). Real Time Operating System (RTOS) With Its Effective Scheduling Techniques. International Journal of Engineering Development and Research (IJEDR), 1, 92-95. 2.9 Where is the Kalman filter used in this proposed system? Is it really necessary to use digital filters in the proposed model? Reply: Thank you for your technical question. As shown in the manuscript, we use Kalman filter to read CO, CH4, LPG sensor signals at sensor nodes. Digital filters play an essential role in filtering signal noise from sensors. We have experimented repeatedly and concluded that this noise source could come from the sensor hardware itself and make the obtained parameters fluctuate in a small range (about 1%). Therefore, the use of a Kalman filter can eliminate this amount of noise. Therefore, using this digital filter is essential to ensure accuracy in measuring sensor parameters in our study. Revision: We updated algorithm flowchart and some sentence for Indoor node in page 10 to clarify the Kalman filter in proposed system - Line 270: We use Kalman Filter, a powerful digital filter that combines current uncertainty with environmental noise into a new, more reliable form of information for future prediction. The strength of Kalman Filter is very fast running and high stability. Furthermore, we deploy ADC noise reduction, an internal noise reduction mechanism in the microcontroller by limiting the operation of the IO module, to increase measurement accuracy. We hope that the reviewer satisfies with our responses. Again, thank you very much for your helpful comments. 3. Responses to the comments of Reviewer 2 First, we would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. 
We have revised our manuscript according to these comments and provide detailed responses to each comment as follows. After reviewing this manuscript, the reviewer suggests the following corrections. After that, the final decision for this manuscript will be made: 3.1 The authors check the abbreviations to ensure that they are consistent throughout in the manuscript. The authors correct some unclear sentences in the manuscripts, for example: - Line 215 and 216: First, as mentioned, two LoRa modules’ simultaneous operation enables the system to expand without increasing communication delay. - Line 219: … in half-duplex mode, the system performance will be improved - Line 223: The downside of this way of working is to increase communication delay. However, it can increase redundancy for the system Reply: We would like to thank the Reviewer for this comment. We have corrected the typos and improve the writing of the manuscript. Along with that effort, we also edit the grammar, phrasing, and punctuation. We hope the manuscript has been improved the flow and readability. We have updated the manuscript by improving the presentation and English. Revision: Following your comments, we updated the sentences as follow: - Line 235: First, two LoRa modules work together in a transceiver to make the system easy to expand while not causing additional communication delays. - Line 238: The second reason, if the system operates in half-duplex mode, two LoRa modules simultaneously receive packets and ensure that the system's packet loss rate is reduced. - Line 241: This implementation method ensures a low packet loss rate but increases the communication delay time. 3.2 What is the purpose of using the Blynk dashboard, while TagoIO already provides dashboards that can be displayed on Smartphones very well? The authors provide some reference to the Blynk dashboard. Reply: Thank you for your technical question. In our system, two dashboards, i.e., Blynk and TagoIO, are used in parallel to monitor parameters on smartphones. Dashboard built from TagoIO is used for both PC and smartphones thanks to its perfect data synchronization. However, we still use Blynk to monitor indoor devices. This context aims to ensure that the data monitoring process is always reliable when there are two dashboards to monitor and store data. Furthermore, the Blynk dashboard is a reference tool to debug the system, thanks to its friendly user interface support. Revision: We updated the reason to use two dashboards in our work. - Line 343: Two dashboards are active at the same time ensure that the data monitoring process is always reliable. Furthermore, the Blynk dashboard is a reference tool to debug the system, thanks to its friendly user interface support. 3.3 What communication class is the LoRa modules in the LoRaWAN system using? The authors present some information about this communication class. Reply: Thank you for your technical question. We use class A for sensor nodes to save energy. Node class A is usually applied to nodes that use batteries and send data through the gateway at any time. After sending, the node will stay awake two times RX listening Window. Revision: We updated the information about LoRa module. - Line 226: We use class A for sensor nodes to save energy. Class A is usually applied to nodes that use batteries and send data through the gateway at any time. 3.4 Add a few sentences in the conclusion to highlight the contributions of the work. Reply: Thank you for your suggestion. 
We updated the conclusion to highlight the contributions of our work. Revision: We updated the conclusion to highlight the contributions of our work. - Line 399: This work presents the AQMIS based on WSAN. The proposed system involves four indoor nodes that operate synchronously with two different protocols, i.e., LoRaWAN and ESP-NOW, under a gateway's management. Besides, the outdoor sub-system cluster allows for remote air quality monitoring. The system has extensive coverage thanks to the full-duplex relay LoRa module's operation under the real-time operating system FreeRTOS. Furthermore, we apply ADC noise reduction and a Kalman filter to increase measurement accuracy. The system is monitored online via a dashboard based on TTN and TagoIO. The control signals are automatically fed back to the corresponding actuator through the downlink LoRa. 3.5 a. Add some sentences describing the scalability of the clusters in the system. How many sensors is the system capable of supporting? Reply: Thank you for your suggestion. Our proposed system is scalable and works well with many different sensors. Sensor nodes can connect well with MQ-series sensors such as MQ2 to MQ9 and MQ135 to measure various environmental parameters: smoke, alcohol, methane, natural gas, LPG, carbon monoxide, hydrogen, combustible gas, and air quality. Besides, the system can also measure other parameters such as temperature, humidity, and dust concentration in the air. Revision: We updated the description of the supported sensors. - Line 204: Furthermore, the module can work well with many different types of sensors in the MQ brand, i.e., MQ2 to MQ9 family. b. How many nodes can the system scale up to? We are grateful for this comment. Since the proposed model consists of two layers deploying LoRa and LoRaWAN, determining the exact number of nodes that can participate in the system is done in two stages: - The maximum number of relay nodes (nodes directly connected to the Dragino gateway) depends on the uplink limit of the TTN Fair Access Policy. This policy prescribes a golden rule of 30 seconds of air-time per device per day for uplink. In our proposed model, Dragino LG01N is a single-channel gateway designed with a 1% receive duty cycle, so in theory, the maximum number of nodes that this gateway can support is n = (nf × ns × dc) / T = (1 × 86400 × 0.01) / 30 ≈ 28, where nf is the number of frequency bands that the gateway supports, ns is the number of seconds in a day, dc is the duty cycle, and T is the air time per device per day. - The maximum number of nodes connected to a relay node: this layer uses raw LoRa, so it does not follow the regulations from TTN. The number of nodes depends entirely on the processing capacity of the relay node. In the proposed model, the system works fine with two sensor nodes and even with four sensor nodes. In the following studies, we will continue to expand the system with more nodes. Revision: We updated the number of sensor nodes the system can scale to. - Line 232: According to the TTN Fair Access Policy, the maximum number of nodes that this gateway can support is n = (nf × ns × dc) / T = (1 × 86400 × 0.01) / 30 ≈ 28, where nf is the number of frequency bands that the gateway supports, ns is the number of seconds in a day, dc is the duty cycle, and T is the air time per device per day. 3.6 In general, the modules in the whole manuscript are presented clearly; however, the author should add a complete model of the system to highlight the contribution of this paper. Reply: Thank you for your suggestion.
We updated the complete model of the system to highlight the contribution of our paper. Revision: We updated the complete model of the system in EXPERIMENTATION AND RESULTS section. - Line 349: The completed hardware deployed in work is shown in Figure 16, positioned with nodes as Figure 17. 3.7 The tests in the EXPERIMENTATION AND RESULTS section are not clear. Discuss how these tests are performed and the time it takes to collect the data. Reply: Thank you for your suggestion. We have described in detail the step for each experiment in the manuscript. Operations for the experiment generally include the following main steps: - Determine experimental model: In this step, we select the devices participating in the experiment. - Determine evaluation parameters: Because the system operates with many input parameters, we only choose 1 or 2 parameters to be investigated for each experiment; other parameters are kept fixed. - Conduct experiments: carry out the survey 50 times for each experiment and take the average value for the measurements. Revision: We updated the steps to perform the experiment - Line 350: The steps to perform the experiment are as follows: - Determine experimental model: In this step, we select the devices participating in the experiment. - Determine evaluation parameters: we only choose 1 or 2 parameters to be investigated for each experiment; other parameters are kept fixed. - Conduct experiments: carry out the survey 50 times for each experiment and take the average value for the measurements. 3.8 Some system parameters have not been fully added to table 1, such as SF. Please check again and complete this table. Reply: Thank you very much for your detailed check. We have fully updated the acronyms in this manuscript. Revision: We updated the Table 1: Simulation parameter. - Line 339: We hope that the reviewer satisfies with our responses. Again, thank you very much for your helpful comments. "
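As a small illustration of the bookkeeping behind the packet-loss measurements mentioned above (each node sends numbered frames and the dashboard counts lost, duplicated and malformed packets), the following C++ sketch shows one way to derive the loss rate from the sequence numbers. It is our own simplification for clarity, not the script actually running on TagoIO.

#include <cstdint>

// Minimal loss/duplicate accounting from the frame counter carried in each uplink.
struct LinkStats {
  uint16_t expected = 0;   // next sequence number we expect
  uint32_t received = 0, lost = 0, duplicated = 0;

  void onFrame(uint16_t seq) {
    if (seq == expected) {            // in-order frame
      ++received; ++expected;
    } else if (seq > expected) {      // gap: everything in between was lost
      lost += seq - expected;
      ++received; expected = seq + 1;
    } else {                          // already seen (or late) frame
      ++duplicated;
    }
  }
  // Loss rate in percent, given the number of frames the node actually sent (e.g. 500).
  double lossRate(uint32_t sent) const { return sent ? 100.0 * lost / sent : 0.0; }
};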
Here is a paper. Please give your review comments after reading it.
236
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In this paper, we study the air quality monitoring and improvement system based on wireless sensor and actuator network using LoRa communication. The proposed system is divided into two parts, indoor cluster and outdoor cluster, managed by a Dragino LoRa gateway. Each indoor sensor node can receive information about the temperature, humidity, air quality, dust concentration in the air and transmit them to the gateway. The outdoor sensor nodes have the same functionality, add the ability to use solar power, and are waterproof. The full-duplex relay LoRa modules which are embedded FreeRTOS is arranged to forward information from the nodes it manages to the gateway via uplink LoRa. The gateway collects and processes all of the system information and makes decisions to control the actuator to improve the air quality through the downlink LoRa. We build data management and analysis online software based on The Things Network and TagoIO platform. The system can operate with a coverage of 8.5 km, where optimal distances are established between sensor nodes and relay nodes and between relay nodes and gateways at 4.5 km and 4 km, respectively. Experimental results observed that the packet loss rate in real-time is less than 0.1% prove the effectiveness of the proposed system.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In recent years, the problem of air pollution in residential areas and factories is becoming more severe under the impact of six factors: Nitrogen Oxide (NO x ), Sulfur Oxide (SO x ), Carbon Monoxide (CO), lead, ground ozone, and dust. Among them, fine particles (PM 2.5 ) with a size of fewer than 2.5 microns cause the most serious consequences, as they can penetrate deeply into the lungs, affecting both the respiratory system and circulatory system <ns0:ref type='bibr' target='#b33'>(Wang et al., 2020)</ns0:ref>. According to the World Health Organization (WHO) statistics, annually, more than 90% of people are exposed to outdoor concentrations of PM2.5 that are higher than the air quality standards. According to the study <ns0:ref type='bibr' target='#b17'>(Liu et al., 2020)</ns0:ref>, whether people's health status is serious or not will depend on the degree and time of exposure to the polluted air. In addition to its negative impacts on the environment and human health, air pollution also reduces productivity and reduces energy efficiency. Several studies have demonstrated an increase in CO and CO 2 levels, leading to an increase in the amount of volatile organic compounds (VOCs), odors, and microorganisms in the air <ns0:ref type='bibr' target='#b28'>(Sadatshojaie and Rahimpour, 2020)</ns0:ref>. That makes a decrease in humans' ability to concentrate. Furthermore, according to <ns0:ref type='bibr' target='#b12'>(Franco and Schito, 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Franco and Leccese, 2020)</ns0:ref> study, controlling CO and CO 2 concentration in the air can lead to up to 5% to 20% energy savings in HVAC systems in buildings. contains two sub-systems: indoor and outdoor. The system can connect to many different sensors such as dust, CO, LPG, and CH 4 concentration sensors. The system is expandable due to the operation of the full-duplex relay LoRa module. 
We proposed using LoRa wireless communication technology for node-to-relay layer and LoRaWAN, a network protocol using LoRa, for relay-togateway layer.</ns0:p><ns0:p>&#8226; The designing negative ion generators as actuators are proposed at the nodes with high efficiency and operating continuously for a long time.</ns0:p><ns0:p>&#8226; The ADC noise reduction mode and digital Kalman filters are proposed to enhance the reliability of the air parameter measurements. In addition, we proposed using the real-time operating system FreeRTOS to manage tasks for the full-duplex relay LoRa module.</ns0:p><ns0:p>&#8226; The design the software monitoring parameters on the web and smartphone interface using The Things Network (TTN) and TagoIO platform are proposed. This paper is organized as follows. In Section II, we introduce the literature review. In Section III, the hardware and software of the proposed system are described. In Section IV, we evaluate the system performance with the critical parameter. And the last one, we present the conclusions in Section V.</ns0:p></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>In this section, we present studies related to the wireless sensor and actuator network and air quality monitoring and improvement system. <ns0:ref type='bibr' target='#b8'>Dhingra et al. (2019)</ns0:ref> proposed a WSN based on WiFi communication with the hardware of sensor nodes, cameras, and Android data monitoring applications. Sensor nodes detect harmful gases such as CO 2 , CO, ... and the camera will collect traffic data in the city. Data stored at Cloud servers and Android applications will help users identify routes with high levels of air pollution and choose suitable routes to move. However, the authors have not mentioned dust -the leading pollutant at present in their work.</ns0:p><ns0:p>Although the Android application helps users find less polluted routes to transport, the interface is simple, not meeting the need for visual monitoring through graphs. The study also did not specify that the most critical parameters of Wifi-based WSN, i.e., the distance of data transmission, network coverage, the lifetime of the sensor node. Furthermore, a WSN model that is entirely dependent on Wi-Fi is not reliable in practice. <ns0:ref type='bibr' target='#b22'>Marques et al. (2019a)</ns0:ref> proposed a model for indoor air quality monitoring. The system uses an MHZ-16 sensor to measure CO 2 indoors and uses the ESP8266 module to transmit data to the web-server.</ns0:p><ns0:p>The IAirCO2 application helps users to recognize the pollution level where the sensor is located. However, the defect of the application interface is still relatively simple, not meeting the graph's visual monitoring need. System customization and scalability are also limited because the design is based on Sparfkun's existing hardware. <ns0:ref type='bibr' target='#b3'>Arroyo et al. (2019)</ns0:ref> proposed a low-cost, low-size, low-power consumption real-time monitoring air quality system. The sensor nodes communicate with the master station via Zigbee communication.</ns0:p><ns0:p>An optimized fog computing system has been performed to store, counselor, process, and imagine the sensor network's data. Data processing and analysis are implemented in the Cloud by applying artificial intelligence techniques to optimize compounds and contaminants. 
Finally, the authors use a simple case study to prove the algorithm's effectiveness in detecting and classify harmful emissions.</ns0:p><ns0:p>The Zigbee network is proposed by <ns0:ref type='bibr' target='#b0'>Abraham and Li (2014)</ns0:ref> for indoor air quality monitoring. The hardware model is deployed indoor with four sensor nodes measuring air quality using the Zigbee network deployed by Arduino and XBee transmission module. Sensor nodes detect harmful gases parameters and sent to the base station for storing. The author is elaborate on processing measured data, and the parameters are represented in the form of objective graphs. However, the system's applicability is limited because the system only monitors indoor air quality and does not mention other polluting agents such as dust, bacteria in the house. Moreover, the author is not clear about the data transmission distance and protocol of Zigbee communication.</ns0:p><ns0:p>LoRa communication has been proposed and applied in many designs to solve the limited transmission distance problem of Wi-Fi, Bluetooth, or Zigbee. <ns0:ref type='bibr' target='#b1'>Alvear-Puertas et al. (2020)</ns0:ref> proposed a model to monitor parameters CO, NO 2 , PM 10 , PM 2.5 using an STM32F107 microcontroller. The system consists of a sensor Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>node that communicates with one server via LoRa communication. The authors have conducted many experiments measuring environmental parameters and comparing them with the national control station's standard data. As a result, the system can operate well within 2 km and an error of 5 to 8%. However, the proposed system model is quite simple and does not meet a wide-area WSN system's requirements. <ns0:ref type='bibr' target='#b10'>Firdaus et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Wang et al. (2017)</ns0:ref> present a similar model of using LoRa networks in air quality monitoring. In addition to the parameters of the level of environmental air pollution, the authors also examine the issues of battery life and time delay during communication. However, the authors have not given the survey when the system works with many sensor nodes and communication protocols to ensure the reliability of the data.</ns0:p><ns0:p>We summarize some studies on air quality monitoring systems and related specifications in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>METHOD AND PROCEDURES</ns0:head></ns0:div> <ns0:div><ns0:head>System model</ns0:head><ns0:p>In this part, we present the proposed model of the WSAN system for indoor and outdoor air quality monitoring and improvement (WSAN-AQMIS) application using LoRa communication. The system consists of three parts: Indoor sensor cluster, Outdoor sensor cluster, and gateway which communicate through LoRa communication. The sensor nodes send information to the gateway using uplink LoRa, while the gateway controls the actuators through the downlink LoRa. Also, we use TTN to build real-time online data management software for our proposed system.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows the proposed system model. Specifically, the indoor cluster includes four sensor nodes that monitor air quality parameters, including dust concentration, CO concentration, and GAS local data to the Blynk server. 
These nodes also feature integrated functions that can communicate with the gateway using the 433 MHz uplinks LoRa. Besides, we also design the outdoor cluster to provide long-range air quality monitoring for the system. The two outdoor sensor clusters are stratified into the outdoor node and the full-duplex relay node. Each outdoor sensor node will have an integrated solar collector responsible for collecting temperature, humidity, air quality parameters and transmitting information to the relay node. The relay nodes have a built-in function that forwards sensing data to the gateway. Depending on air quality, the gateway will make decisions to control the actuator using downlink</ns0:p><ns0:p>LoRa. The actuators in our system are divided into two categories for indoor or outdoor equipment. In the indoor node area, we are equipped with an exhaust fan, and a negative ion generator (NIG) to enhance the air quality. On the outdoor, nebulizer controls will be fitted. These devices help to increase air humidity and clean the dust in large spaces. We do not use NIG outdoors due to the disadvantages of the power supply and the limited capacity of the device.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture</ns0:head><ns0:p>In this section, we present the hardware design and technical working of the proposed system.</ns0:p><ns0:p>The Indoor cluster consists of four sensor nodes with two modes of operation. In the first mode, when the Wifi or LoRa transmission is inoperative, the nodes will communicate through ESP-NOW. It is a protocol developed by Espressif that helps nodes connect peer-to-peer does not require handshakes.</ns0:p><ns0:p>In the second mode, the nodes can communicate directly with the Blynk server via Wifi and connect to the gateway via LoRa communication. Although long-range is not required for indoor nodes, using the Lora-based for the whole system helps the data streams received by the gateway to have the same format and easily be processed. Furthermore, implementing LoRa makes it easy for us to perform Over-The-Air-Activation (OTAA), which is downloading new firmware to the ESP32 via wifi instead of using a traditional Serial port.</ns0:p><ns0:p>The Indoor nodes; as Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>; are distributed scattered in a narrow range such as households, offices, classrooms. We use the central processor, LoRa32 module manufactured by Heltec, which has a built-in LoRa communication module and peer-to-peer communication from the ESP32. It receives energy directly from the grid and communicates with dust, gas concentration, temperature, and humidity sensors. A critical component of the system is the full-duplex LoRa relay module as Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, capable of extending the system's coverage. According to TTN Fair Access Policy, the maximum number of nodes that this gateway can support is:</ns0:p><ns0:formula xml:id='formula_0'>n = n f .ns.dc T = 1.86400.1% 30 = 28 (1)</ns0:formula><ns0:p>where n f is the number of frequency bands that the gateway supports, ns is the number of seconds in a day, dc is duty cycle, T is the air time per device per day.</ns0:p><ns0:p>While the communication range between LoRa modules is excellent, we still recommend using this module for several reasons. First, two LoRa modules work together in a transceiver to make the system easy to expand while not causing additional communication delays. 
One LoRa module will act as the receiver, while the other will act as the transmitter; they are managed by SPI interface with microcontroller and operate by interrupt mechanism. The second reason, if the system operates in half-duplex mode, two LoRa modules simultaneously receive packets and ensure that the system's packet loss rate is reduced.</ns0:p><ns0:p>After receiving all the packets in the cluster they manage, an idle LoRa module converts the task into </ns0:p></ns0:div> <ns0:div><ns0:head>Software</ns0:head><ns0:p>This section highlights the algorithm of the proposed system. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>ESP-32 is set up to communicate peer to peer using the MAC address thus improving system reliability.</ns0:p><ns0:p>In case the user wants to use Blynk for on-the-spot surveillance with a smartphone without going through the gateway, the Blynk server will be set up. In case the ESP-NOW protocol is enabled, indoor sensor nodes can communicate with each other directly via the super energy-saving 2.4 GHz channel. ESP-NOW has built-in sending and receiving callback functions to secure communication, so it is a reliable protocol.</ns0:p><ns0:p>Before transmitting, the nodes will perform a MAC address pairing operation. After pairing is complete, the devices have established a network and can communicate with peer-to-peer. If a node in the network connects to the Dragino gateway, the network data will be automatically sent. Data is collected at the nodes Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The gateway's feedback is an Air quality index (AQI) value that helps sensor nodes perform on/off actuators. The procedure for calculating the AQI of the parameters SO 2 ,CO, NO 2 , PM 10 , PM 2.5 is as follow:</ns0:p><ns0:formula xml:id='formula_1'>AQI x = I i+1 &#8722; I i BP i+1 &#8722; BP i (C x &#8722; BP i ) + I i<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where:</ns0:p><ns0:p>&#8226; AQI x is the air quality index of x parameter.</ns0:p><ns0:p>&#8226; BP i is the lower limit concentration of observed parameter value specified in each region/country corresponding to the level i.</ns0:p><ns0:p>&#8226; BP i+1 is the upper limit concentration of observed parameter value specified in each region/country corresponding to level i + 1.</ns0:p><ns0:p>&#8226; I i is the AQI value at level i following the BP i value.</ns0:p><ns0:p>&#8226; I i + 1 is the AQI value at level i + 1 following the BP i+1 value.</ns0:p><ns0:p>&#8226; C x is specified as follows: For PM 2.5 and PM 10 parameters, C x is the average value collected after 24 hours. For SO 2 , NO 2 and CO parameters, C x is the average maximum value of one hour per day.</ns0:p><ns0:p>After having AQI x value of each parameter, the maximum value is chosen to be aggregated AQI value according to the following formula:</ns0:p><ns0:formula xml:id='formula_2'>AQI = max (AQI x ) (3)</ns0:formula><ns0:p>Notice the aggregated AQI value is rounded to an integer. The outdoor node's operation is similar to the indoor node but does not include the ESP-NOW protocol and focuses on energy saving. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Operating System developed by Real Time Engineers Ltd. FreeRTOS is designed with straightforward functionality such as basic task and memory management, synchronization API functions, with a total size of only 4.3 KB. 
Here, the MCU performs three tasks simultaneously with decreasing priority as follows. The first Task, which has the highest priority, ensures that LoRa data is always listened to by the LoRa modules. The second task, which has a lower priority, takes on the role of the LoRa transmitter.</ns0:p><ns0:p>Thus, two LoRa Ra02 modules controlled by RTOS ensure the information acquisition and forwarding of received information to the gateway smoothly. The full-duplex protocol allows the system to operate without additional communication delay. However, the trade-off is that the packet loss rate is difficult to guarantee because only one module does the LoRa data collection service. Moreover, the last Task, which is lowest priority, is optional and ensures the visual mechanisms on OLED for debugging. We use Prioritized Pre-emptive Scheduling with Time Slicing algorithm to determine which tasks should be put into the Running state <ns0:ref type='bibr' target='#b30'>(Trivedi (2014)</ns0:ref>). Thus, the task in the Ready state with the highest priority will be executed first or take over the execution of the running task.These data use the same data resources and are controlled and synchronized using the Mutex binary semaphore. A mutex can be viewed as a token and assigned to a data resource. Whenever a task wants to access a resource, a token must be held. Then other tasks will be queued until the token is released. </ns0:p></ns0:div> <ns0:div><ns0:head>Management Software</ns0:head><ns0:p>In this section, we discuss about the structure of the management software of WSAN-AQMIS on the The Things Network (TTN) and TagoIO tool.</ns0:p><ns0:p>The Things Network is a global collaborative Internet of Things network that allows all members to bring their network together like one big Internet. The main difference between TTN and earlier networks is that it is a community project that does not depend on any corporate networks. That means developers can build their applications completely independently and get great support from the open LoRa community, independent of other service providers. Furthermore, users contribute ports to upload data from LoRaWAN sensor nodes to the Internet. According to their open standards, TTN grow extremely fast, with an estimated 10,000 LoRaWAN gateways in 147 countries currently connected through gateway operators using management and security tools.</ns0:p><ns0:p>TagoIO is a tool that can be directly connected to TTN thanks to Authorization by device-token and used to design a dashboard. TagoIO dashboard contains widgets that help users efficiently observe and manipulate real-time data. All data are stored in Data Buckets. Once the LoRa device is connected to TTN, TagoIO will create a bucket to hold the corresponding data. This study designed a dashboard consisting of two tabs to manage the indoor sensor cluster and outdoor sensor cluster. The Indoor sensor nodes tab displays parameters of dust concentration, gas concentration, temperature, and humidity. These parameters are represented as graphs as follows:</ns0:p><ns0:p>&#8226; Gauge chart: display instant information of environmental parameters</ns0:p><ns0:p>&#8226; Vertical and horizontal bar graphs: information 5 and 10 nearest signal samples are displayed; the higher the value, the higher AQI.</ns0:p><ns0:p>&#8226; Time chart: graph of parameters on the 1-hour scale.</ns0:p><ns0:p>The user can use the node option to access the Dashboard of the node of interest. 
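The AQI values shown on these widgets come from the piecewise-linear mapping of Eq. (2) and the maximum rule of Eq. (3). A compact C++ sketch of that computation follows; the breakpoint values (BP_i, I_i) are country-specific, and the two entries shown in the comment are placeholders rather than the official table.

#include <algorithm>
#include <cmath>
#include <vector>

// Piecewise-linear AQI of Eq. (2) for one pollutant, plus the aggregation of Eq. (3).
struct Breakpoint { double bp_lo, bp_hi, i_lo, i_hi; };   // BP_i, BP_{i+1}, I_i, I_{i+1}

double aqiOf(double c_x, const std::vector<Breakpoint>& table) {
  for (const auto& b : table)
    if (c_x >= b.bp_lo && c_x <= b.bp_hi)
      return (b.i_hi - b.i_lo) / (b.bp_hi - b.bp_lo) * (c_x - b.bp_lo) + b.i_lo;
  return -1.0;                                             // concentration outside the scale
}

int aggregateAqi(const std::vector<double>& perPollutant) {
  // Eq. (3): the reported AQI is the maximum over pollutants, rounded to an integer.
  return (int)std::lround(*std::max_element(perPollutant.begin(), perPollutant.end()));
}

// Example with placeholder PM2.5 breakpoints: 0-25 ug/m3 -> AQI 0-50, 25-50 -> 50-100.
// std::vector<Breakpoint> pm25{{0, 25, 0, 50}, {25, 50, 50, 100}};
// double aqi_pm25 = aqiOf(18.0, pm25);   // about 36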
The charting functionality is similarly designed for the Outdoor sensor nodes tab. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTATION AND RESULTS</ns0:head><ns0:p>In this section, we evaluate system performance in various scenarios using the parameter of packet loss rate. We set the experimental parameters in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The completed hardware deployed in work is shown in Figure <ns0:ref type='figure' target='#fig_20'>13</ns0:ref>, positioned with nodes as Figure <ns0:ref type='figure' target='#fig_21'>14</ns0:ref>.</ns0:p><ns0:p>The steps to perform the experiment are as follows:</ns0:p><ns0:p>&#8226; Determine experimental model: In this step, we select the devices participating in the experiment.</ns0:p><ns0:p>&#8226; Determine evaluation parameters: we only choose 1 or 2 parameters to be investigated for each experiment; other parameters are kept fixed.</ns0:p><ns0:p>&#8226; Conduct experiments: carry out the survey 50 times for each experiment and take the average value for the measurements. Gateway is permanently located close to the Wifi router so that the dashboard can operate stably. Figure <ns0:ref type='figure' target='#fig_22'>15</ns0:ref> depicts the packet loss rate with increasing distance from the outdoor node to the relay node. We set the distance between the relay node and the gateway d rg fixed to be 3.5 km. In each experiment, the outdoor sensor nodes send 500 frames of numbered data to the gateway at bandwidth BW is 125 KHz, code rate CR is 1. Each payload frame long 30 bytes. The dashboard will receive and record the number of packets lost, duplicated, and incorrectly formatted during this communication. We increase the distance between the outdoor node and the relay d nr from 100m to 6km and measure the packet loss rate. The obtained results show that the system works well in the distance of 4 km and the rate packet loss is less than 1%. We continue to investigate the impact of FreeRTOS in the recommendation system. Using FreeRTOS ensures Manuscript to be reviewed</ns0:p><ns0:p>Computer Science show that using FreeRTOS at the relay node can reduce the packet loss rate to less than 0.01% over a distance of 4.5km. Moreover, using FreeRTOS significantly improves the communication distance.</ns0:p><ns0:p>It can be explained as follows: When not using FreeRTOS, Ra01 modules receive transmissions from nodes when the LoRa signal has a good enough signal-to-noise (SNR) ratio. In case the communication distance is too large, the two Ra01 modules on the relay node get the internal communication interference with each other because the distance between them is very close. Meanwhile, using FreeRTOS helps the system divide tasks according to available hardware, thereby eliminating internal communication interference, increasing SNR, and helping the system operate over longer distances. After 50 times of this experiment, we recommend the optimal location for the outdoor sensor nodes with relay node to be 4 km in the case not use RTOS system and 4.5 km when use the RTOS system. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 16</ns0:note><ns0:p>. The packet loss rate vs. the distance from relay node to gateway.</ns0:p><ns0:p>the payload, the higher the ToA, which means the slower the transmission speed. Another observation is that the higher the SF, the higher the ToA. 
It concludes that a high SF can cause great latency; however, a high SF will be used in some cases where an increase in coverage is desired. We continue to investigate indoor air quality improvement for the entire system with the parameters given in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The parameters are collected in 24 consecutive hours and statistically by Dashboard TTN in 2 cases: (i) system does not use the actuators and (ii) system uses such devices. The results show that using a negative ion generator and exhaust fan significantly improves air quality, especially for areas with narrow space as Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>system has extensive coverage thanks to the full-relay LoRa module's operation under the real-time operating system FreeRTOS. Furthermore, we apply ADC noise reduction and Kalman filter to increase measurement accuracy. The system is monitored online via a dashboard based on TTN and TagoIO. The control signals are automatically fed back to the corresponding actuator through the downlink LoRa.</ns0:p><ns0:p>The experiment results show that the system performance is highly achieved and capable of practical applications. The amount of air quality data that we have collected over the past three months is of great value in environmental management.</ns0:p><ns0:p>In future work, we will continue to expand the system by equipping mobile for indoor nodes. Besides, we will install more outdoor nodes for the planning of air quality maps.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. System model of Wireless sensor and actuator network for Air monitoring and improvement application.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Furthermore, the module can work well with many different types of sensors in the MQ brand, i.e., MQ2 to MQ9 family. The LoRa32 module is designed with a 0.96-inch OLED screen capable of displaying all collected sensor parameters. Besides, the LED indicator system is designed to show four pollution levels in the monitoring area according to standards: Good (Air quality index -AQI &lt; 50) -average5/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021)Manuscript to be reviewedComputer Science(51 &lt; AQI &lt; 200) -poor (201 &lt; AQI &lt; 300) -hazardous (AQI &gt; 300). Depending on the control signal received from the gateway, the indoor node will control two corresponding actuators, the exhaust fan and the negative ion generator.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Indoor node schematic</ns0:figDesc><ns0:graphic coords='7,141.73,114.37,413.56,215.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Negative Ion generator schematic</ns0:figDesc><ns0:graphic coords='7,141.73,474.39,413.58,231.91' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Outdoor node schematic.</ns0:figDesc><ns0:graphic coords='8,141.73,229.89,413.57,279.44' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021) Manuscript to be reviewed Computer Science a transmitting module to send signals back to the gateway. This implementation method ensures a low packet loss rate but increases the communication delay time.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Relay node schematic.</ns0:figDesc><ns0:graphic coords='9,141.73,98.86,413.55,236.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The LoRa gateway Dragino.</ns0:figDesc><ns0:graphic coords='9,289.01,535.11,119.03,92.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure7describes the indoor node algorithm flowchart. The MCU sets the necessary input/output parameters for the system and also specifies the ESP-NOW protocol. We used LoRaWAN as the default protocol for the proposed system, but in the case, LoRa modules do not guarantee communication, the</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>using the 'Read sensor data' module. Sensor information is collected by the ADC module integrated into the micro-controller. We use Kalman Filter, a powerful digital filter that combines current uncertainty with environmental noise into a new, more reliable form of information for future prediction. The strength of Kalman filter is very fast running and high stability. Furthermore, we deploy ADC noise reduction, an internal noise reduction mechanism in the microcontroller by limiting the operation of the IO module, to increase measurement accuracy. With the help of two techniques, the nodes ensure accurate data measurement.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. The algorithm flowchart of Indoor node.</ns0:figDesc><ns0:graphic coords='10,196.13,256.26,304.78,435.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8 depicts the Outdoor node operation algorithm flowchart. First, the system sets the Input/Output ports, SPI ports for the LoRa module, the Interrupt services with low priority, and the Deep sleep mechanism with high priority. Deep sleep and periodic wake-up mechanisms for the outdoor node ensure that the outdoor nodes consume the lowest power levels. We recommend using an interrupt service routine for receiving control data from the LoRa relay. After receiving control data, the MCU controls the respective actuators' opening/closing to improve the air quality in the surveillance area.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The algorithm flowchart of Outdoor node.</ns0:figDesc><ns0:graphic coords='11,211.43,419.14,274.18,241.74' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9shows the algorithm flowchart for the full-duplex LoRa relay module. 
We proposed embedding the FreeRTOS real-time operating system with MCU to ensure simultaneous sending and receiving tasks<ns0:ref type='bibr' target='#b14'>(Kampmann et al. (2019)</ns0:ref>;<ns0:ref type='bibr' target='#b9'>Docekal and Slanina (2017)</ns0:ref>). FreeRTOS is an open-source Real-Time</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The algorithm flowchart of Full-duplex relay node.</ns0:figDesc><ns0:graphic coords='12,163.70,266.44,369.65,133.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>Figure 10 depicts the live data of 11/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. The Tago dashboard intergrated with TTN for the Indoor sensor node.</ns0:figDesc><ns0:graphic coords='13,201.98,63.77,293.10,135.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The Tago dashboard intergrated with TTN for the Outdoor sensor node.</ns0:figDesc><ns0:graphic coords='13,204.60,233.40,287.85,141.30' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The Tago dashboard on Smartphone and Blynk dashboard</ns0:figDesc><ns0:graphic coords='13,264.52,537.77,168.00,99.79' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. The complete model of the proposed system</ns0:figDesc><ns0:graphic coords='14,226.42,383.05,244.20,180.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. The location of the nodes is displayed in TagoIO</ns0:figDesc><ns0:graphic coords='15,236.69,63.77,223.68,176.16' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. The packet loss rate vs. the distance from Outdoor node to relay node.</ns0:figDesc><ns0:graphic coords='15,261.02,434.65,175.00,131.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 16</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure16depicts a similar experimental scenario, with the distance between two outdoor nodes and the relay (d nr ) is fixed at 4km and gradually increases the distance between the relay node to the gateway. All system parameters are kept as in the first scenario. The results once again confirm the use of FreeRTOS can reduce the number of packet loss in the proposed system. We also recommend that the distance between the relay node and the gateway is 3.3 km in not using the RTOS and 4 km in using the RTOS.In the next experiment, we investigate the effect of the payload on the transmission speed, expressed by the Time on-air (ToA) parameter as Figure17. The experiment scenario consists of an outdoor sensor node, a full-duplex relay node, and a gateway. The system parameters are set as follows: BW = 125 KHz, CR = 1, d nr = 4 km, d rg = 4 km. In turn, we set up different spreading factor (SF) values (from 7 to 12) and increment the payload in this experiment. 
The results show in Figure17demonstrates that the higher</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. The Time on air vs. different payload.</ns0:figDesc><ns0:graphic coords='16,261.02,285.80,175.00,131.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>TABLE 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>. The AQMIS and relevant parameters in existing studies</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Parameter</ns0:cell><ns0:cell cols='2'>Connectivity Type</ns0:cell><ns0:cell>Packet</ns0:cell><ns0:cell>No. of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>loss</ns0:cell><ns0:cell>nodes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Abraham and Li (2014) CO, CO 2 , volatile organic</ns0:cell><ns0:cell>Zigbee</ns0:cell><ns0:cell>Indoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>compounds</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Arroyo et al. (2019)</ns0:cell><ns0:cell>Gas concentration</ns0:cell><ns0:cell>Zigbee</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>NM</ns0:cell></ns0:row><ns0:row><ns0:cell>Botero-Valencia et al.</ns0:cell><ns0:cell>Gas concentration, PM 2.5</ns0:cell><ns0:cell>LoRa</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chaturvedi and Shrivas-</ns0:cell><ns0:cell>CO 2 , SO 2 , NO 2</ns0:cell><ns0:cell>Zigbee</ns0:cell><ns0:cell>Indoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>tava (2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dhingra et al. (2019);</ns0:cell><ns0:cell>CO, CO 2 , CH 4</ns0:cell><ns0:cell>Wifi and</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>NM</ns0:cell></ns0:row><ns0:row><ns0:cell>Marques and Pitarma</ns0:cell><ns0:cell /><ns0:cell>GPS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Firdaus et al. (2019);</ns0:cell><ns0:cell>CO, CO 2 , Temperature, Hu-</ns0:cell><ns0:cell>LoRa</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>6%</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhao et al. (2019)</ns0:cell><ns0:cell>midity</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mansour et al. (2014)</ns0:cell><ns0:cell cols='2'>CO, CO 2 , NO 2 , CH 4 , NH 3 Zigbee</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(500 m)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Pitarma et al. (2017)</ns0:cell><ns0:cell>Gas concentration</ns0:cell><ns0:cell>Zigbee</ns0:cell><ns0:cell>Indoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(50 m)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Marques et al. 
(2019b) Gas concentration</ns0:cell><ns0:cell>Bluetooth</ns0:cell><ns0:cell>Indoor</ns0:cell><ns0:cell>NM</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(50 m)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NM: Not mentioned</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>Most of the studies above have focused on air quality monitoring using WSNs with different transmis-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>sion techniques. However, their most significant limitation is that the proposed model is quite simple and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>does not meet the practical requirements of WSN (short distance communication, few number of sensor</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>node). The main system-specific parameters such as communication distance, packet loss rate have also</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>not been investigated. Furthermore, the issue of improving air quality has not yet been investigated. There-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>fore, we propose a WSAN model for air quality monitoring and improvement using LoRa/LoRaWAN</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>communication in this study.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>TABLE 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>. Simulation Parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>Notation</ns0:cell><ns0:cell>Typical Values</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of users in Indoor cluster</ns0:cell><ns0:cell>M</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of users in Outdoor cluster</ns0:cell><ns0:cell>N</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of relay node</ns0:cell><ns0:cell>K</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Bandwidth</ns0:cell><ns0:cell>BW</ns0:cell><ns0:cell>125 KHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Code rate</ns0:cell><ns0:cell>CR</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Spreading factor</ns0:cell><ns0:cell>SF</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of payload frame</ns0:cell><ns0:cell>P</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>Payload length</ns0:cell><ns0:cell>L</ns0:cell><ns0:cell>30 byte</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Distance from the relay node to the gateway d rg</ns0:cell><ns0:cell>3.5 km</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Distance from indoor nodes to the gateway d ig</ns0:cell><ns0:cell>20 m</ns0:cell></ns0:row><ns0:row><ns0:cell>Distance from outdoor nodes to the relay</ns0:cell><ns0:cell>d nr</ns0:cell><ns0:cell>4 km</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>TABLE 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>. 
Indoor AQI improvement vs time</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Time</ns0:cell><ns0:cell>Location</ns0:cell><ns0:cell>Case</ns0:cell><ns0:cell>Case</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>Location</ns0:cell><ns0:cell>Case</ns0:cell><ns0:cell>Case</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(i)</ns0:cell><ns0:cell>(ii)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>(i)</ns0:cell><ns0:cell>(ii)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>00:00 to 05:59 Kitchen</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='3'>12:00 to 17:59 Living room 10</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>06:00 to 11:59 Kitchen</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='3'>18:00 to 23:59 Living room 8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12:00 to 17:59 Kitchen</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>00:00 to 05:59 Bedroom</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>18:00 to 23:59 Kitchen</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>06:00 to 11:59 Bedroom</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>00:00 to 05:59 Living room 6</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>12:00 to 17:59 Bedroom</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>06:00 to 11:59 Living room 14</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>18:00 to 23:59 Bedroom</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>CONCLUSION AND FUTURE WORK</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>This work presents the AQMIS based on WSAN. The proposed system involves four indoor nodes that operate synchronously with two different protocols,i.e., LoRaWAN and ESP-NOW, under a gateway's management. Besides, the outdoor sub-system cluster allows for remote air quality monitoring. The15/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61818:2:0:CHECK 15 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Responses to the reviewers’ comments A novel air quality monitoring and improvement system based on wireless sensor and actuator networks using LoRa communication Van-Truong Truong, Anand Nayyar, and Mehedi Masud August 14, 2021 Dear Editor and Reviewers, We would like to express our sincere thanks to the Editor and Reviewers for the time spent evaluating our manuscript and providing us with such a variety of constructive comments, which helped us improve the presentation of our manuscript. We have carefully addressed all comments of the editor and reviewers and provided a pointby-point reply. The comments of the editor and reviewers are typeset in italic font, and our responses are shown in blue. Additionally, in the revised manuscript, explanations, and justifications according to the reviewers' comments are shown in italic font. Minor corrections and rearrangements have not been highlighted. Yours sincerely, Van-Truong Truong Anand Nayyar Mehedi Masud 1. Responses to the comments of Editor (Muhammad Tariq) First, we would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. We have revised our manuscript according to these comments and provide detailed responses to each comment as follows. 1.1 The manuscript has improved significantly. However, the authors need to provide a table that shows a comparison with the state-of-the-art air quality monitoring techniques by using any key performance indicator. Reply: To clarify your concerns, we have re-performed the literature review and synthesized the techniques used in these related work. Revision: We updated the table that summarizes the related works and some key system parameter as Line 160. We summarize some studies on air quality monitoring systems and related specifications in Table 1 We hope that the reviewer satisfies with our responses. Again, thank you very much for your helpful comments. 2. Responses to the comments of Reviewer 1 Basic reporting The revision is now better than the previous version and good for reading. Experimental design The questions and objectives of this paper is now clear and well-defined. The experimental results can also validate what the authors find. Validity of the findings The contributions now are clear. Additional comments This paper is revised accordingly and can be accepted for publication. The reviewer appreciates the work. Reply: We would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. We sincerely thank for your appreciation of our work. 3. Responses to the comments of Reviewer 2 Basic reporting The revised version of the paper is well written and organized and the authors perfectly reply to all the reviewer comments. Experimental design All good Validity of the findings No more work reqiured. Additional comments No comments Reply: We would like to thank you for reading our manuscript and for providing us with such a variety of constructive comments. We sincerely thank for your appreciation of our work. "
Here is a paper. Please give your review comments after reading it.
237
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Coronavirus Disease 2019 (COVID-19) pandemic has been ferociously destroying global health and economics. According to World Health Organisation (WHO), until May 2021, more than one hundred million infected cases and 3.2 million deaths have been reported in over 200 countries. Unfortunately, the numbers are still on the rise. Therefore, scientists are making a significant effort in researching accurate, efficient diagnoses. Several studies advocating artificial intelligence proposed COVID diagnosis methods on lung images with high accuracy. Furthermore, some affected areas in the lung images can be detected accurately by segmentation methods. This work has considered state-of-the-art Convolutional Neural Network architectures, combined with the Unet family and Feature Pyramid Network (FPN) for COVID segmentation tasks on Computed Tomography (CT) scanner samples the Italian Society of Medical and Interventional Radiology dataset. The experiments show that the decoder-based Unet family has reached the best (a mean Intersection Over Union (mIoU) of 0.9234, 0.9032 in dice score, and a recall of 0.9349) with a combination between SE ResNeXt and Unet++. The decoder with the Unet family obtained better COVID segmentation performance in comparison with Feature Pyramid Network. Furthermore, the proposed method outperforms recent segmentation state-ofthe-art approaches such as the SegNet-based network, ADID-UNET, and A-SegNet + FTL.</ns0:p><ns0:p>Therefore, it is expected to provide good segmentation visualizations of medical images.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>programs to computerize and digitize health records. Many research and application programs are implemented in hospitals and health facilities with hospital information systems <ns0:ref type='bibr' target='#b6'>(Ferdousi et al. (2020)</ns0:ref>), communication systems <ns0:ref type='bibr' target='#b26'>(Nayak and Patgiri (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Belasen et al. (2020)</ns0:ref>), robot-based surgeon systems <ns0:ref type='bibr' target='#b20'>(Lee et al. (2021)</ns0:ref>), and nursing care information systems <ns0:ref type='bibr' target='#b2'>(Booth et al. (2021)</ns0:ref>). Medical records, images of x-ray, ultrasound, magnetic resonance imaging, positron-emission tomography become rich and diverse.</ns0:p><ns0:p>Modern medicine with information technology applications can make disease diagnosis faster based on various clinical symptoms and subclinical symptoms (subclinical diagnosis). In subclinical diagnostics cases, doctors usually evaluate and examine images generated and screened from medical imaging devices and equipment. Modern and high-tech medical machines with computer support software make the image clearer and more accurate with a very high resolution. The diagnostic imaging methods are diversified, such as radiological diagnosis, ultrasound imaging, ultrasound -color Doppler, endoscopic images (commonly used as gastrointestinal endoscopy and urinary endoscopy), Computed Tomography (CT) Scanner, Magnetic Resonance Imaging (MRI) and so on.</ns0:p><ns0:p>Image segmentation is to divide a digital image into various parts, which can be the collections of pixels or superpixels <ns0:ref type='bibr' target='#b36'>(Shapiro (2001)</ns0:ref>). The goal of image segmentation aims to simplify and or represent an image into something more meaningful and easier to analyze. 
In recent years, deep learning algorithms have provided great tools for medical segmentation, which plays an essential role in disease diagnosis and is one of the most crucial tasks in medical image processing and analysis. Diagnosis based on segmented medical images makes an essential contribution to improving the accuracy, timeliness, and efficiency of disease diagnosis <ns0:ref type='bibr' target='#b35'>(Saood and Hatem (2021)</ns0:ref>; &#220;mit <ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b52'>Zhou et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Raj et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Yakubovskiy (2020)</ns0:ref>). For example, on ultrasound images, physicians can accurately detect and measure the size of the solid abdominal organs and of abnormal masses from the segmented areas of the images <ns0:ref type='bibr' target='#b27'>(Ouahabi and Taleb-Ahmed (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b53'>Zhou et al. (2021)</ns0:ref>). Another type of medical image is the chest X-ray <ns0:ref type='bibr' target='#b28'>(Rahman et al. (2020)</ns0:ref>), where cancer tumors can be detected and segmented to support surgeons and efficient treatment monitoring. CT scanner images with marked abnormal regions also help physicians identify signs of brain diseases, especially intracranial hematomas and brain tumors <ns0:ref type='bibr' target='#b31'>(Ramesh et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>Munir et al. (2020)</ns0:ref>). Signs of disease can be revealed via such segmented images, but sometimes these signs are too small to be observed by humans. Moreover, doctors may have to diagnose many patients simultaneously within a short time. Besides, training a doctor to perform medical image analysis takes a long time. Artificial intelligence algorithms can outperform human ability in image classification and provide techniques to interpret the results <ns0:ref type='bibr' target='#b9'>(Geirhos et al. (2018)</ns0:ref>). Therefore, leveraging the development of artificial intelligence together with image segmentation techniques is crucial to accelerate the advancement of medicine and improve human health. The outbreak of the COVID-19 pandemic requires significant efforts from citizens worldwide, but human resources seem insufficient. Technology-based medical approaches are therefore necessary and urgent to help reduce and prevent the pandemic, so image processing algorithms for the diagnosis of COVID-19 have attracted the attention of numerous scientists. The efficiency of the Unet family in image segmentation and the outstanding image classification performance of well-known convolutional neural network architectures, revealed in a vast number of previous studies, raise a significant research question about the benefits of their combination to enhance the performance of COVID-19 lung CT image segmentation. The principal contributions of this study include the following:</ns0:p><ns0:p>&#8226; The visualizations, including the segmented regions in the lungs, are examined with various performance metrics and exhibit infected areas similar to the ground truths.</ns0:p><ns0:p>&#8226; Augmentation of the COVID-19 lung image dataset is performed with mirror, contrast, and brightness transforms. In addition, gamma and Gaussian noise are applied before adding spatial transforms. Such techniques help to enhance the segmentation performance.
Besides, we evaluate the performance of COVID-19 segmentation without augmentation techniques.</ns0:p><ns0:p>&#8226; The training time and inference time are measured and compared among the considered configurations.</ns0:p><ns0:p>&#8226; From the obtained results, we found that the integration between SE ResNeXt and Unet++ has revealed the best performance in COVID-19 segmentation tasks.</ns0:p><ns0:p>The rest of this study is organized as follows. Section 2 introduces the main related works. Section 3 presents a brief introduction of the segmentation network based on encoder-decoder architectures, the dedicated loss function for segmentation tasks, and augmentation techniques. Afterward, we present our settings, the public COVID-19 dataset, and the evaluation metrics in Section 4. Section 5 exhibits and analyzes the obtained results. We conclude the study and discuss future work in Section 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Machine learning for medical imaging analysis has gained popularity in recent years. Advancements in computer techniques have also been proposed with an increase in quality and quantity. To support doctors better, researchers have focused on model explanation and segmentation methods for medical images.</ns0:p><ns0:p>Section 2.1 examines the robust studies of deep learning in the healthcare sector. Section 2.2 reviews the main related approaches in the domain of COVID-19 detection from CT images. <ns0:ref type='bibr' target='#b32'>Rav&#236; et al. (2017)</ns0:ref> presented several robust applications of deep learning to health informatics. We have obtained benefits from rapid improvements in computational power, fast data storage, and parallelization, and so there are more and more efficient proposed models for health services. Furthermore, <ns0:ref type='bibr' target='#b39'>Srivastava et al. (2017)</ns0:ref> gave us an overview of recent trends and future directions in health informatics using deep learning. In another study, <ns0:ref type='bibr' target='#b15'>Huynh et al. (2020)</ns0:ref> introduced a shallow convolutional neural network (CNN) architecture with only a few convolution layers to perform the skin lesions classification, but the performance is considerable. The authors conducted the experiments on a dataset including 25,331 samples.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Applications of deep learning in healthcare</ns0:head><ns0:p>For discriminating between melanoma and vascular lesion, the proposed model obtained an accuracy of 0.961 and an Area Under the Curve (AUC) of 0.874. Several studies deal with abnormality bone detection <ns0:ref type='bibr' target='#b3'>(Chetoui and Akhloufi (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Varma et al. (2019)</ns0:ref>) also revealed a promising result. For instance, Chetoui and Akhloufi (2020) developed a deep learning architecture based on EfficientNet <ns0:ref type='bibr'>(Tan and Le (2019)</ns0:ref>) to detect referable diabetic and diabetic retinopathy on two public datasets, namely EyePACS and APTOS 2019. The proposed method achieved the highest AUC of 0.990 and 0.998 on EyePACS and APTOS 2019, respectively. Similarly, an approach to detect abnormalities on musculoskeletal images has been proposed by using CNN architecture. The authors collected a massive dataset of 93,455 radiographs. 
The obtained AUC is recorded of 0.880, sensitivity and specificity of 0.714 and 0.961, respectively.</ns0:p><ns0:p>To recognize COVID-19 from chest CT images, a deep architecture named COVNet has been proposed by <ns0:ref type='bibr' target='#b21'>Li et al. (2020)</ns0:ref>. Community-acquired pneumonia and healthy control are utilized in the testing phase Manuscript to be reviewed Computer Science and collected from 6 hospitals. AUC, specificity, and sensitivity report the performance. For detecting COVID-19, the proposed method obtained an AUC of 0.96, a specificity of 0.96, and a sensitivity of 0.90. <ns0:ref type='bibr' target='#b38'>Shi et al. (2021)</ns0:ref> presented a review on emerging artificial intelligence technologies to support medical specialists. The authors also stated that 'image segmentation plays an essential role in COVID-19 applications'. <ns0:ref type='bibr' target='#b46'>Wang et al. (2021)</ns0:ref> leveraged features from the self-created CNN, deep feature fusion, and graph convolutional network (GCN) to build an effective COVID-19 classifier, namely FGCNet. FGCNet is structured based on a deep feature fusion combination of a CNN and GCN model. The authors evaluated the proposed approach on a dataset including 320 COVID-19 patents and 320 healthy control subjects. Among eight proposed networks, the best models obtained an accuracy, Matthews correlation coefficient (MCC), and Fowlkes-Mallows index (FMI) of 97.15 &#177; 1.25%, 94.23 &#177; 2.52%, and 97.16 &#177; 1.25% respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Applications of deep learning in COVID-19 detection</ns0:head><ns0:p>In an attempt to minimize computation cost and enhance the performance of detecting COVID-19, <ns0:ref type='bibr' target='#b47'>Wang et al. (2020)</ns0:ref> proposed a hybrid combination of wavelet Renyi, neural network, and three-segment biogeography-based optimization algorithm. The evaluation conducts on a dataset consisting of 296 CT images and 10-fold cross-validation. The proposed method outperformed the state-of-the-art approaches with the obtained accuracy of 86.12 &#177; 2.75% and MCC of 72.42 &#177; 5.55%. <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref> proposed an approach for image tissue classification by leveraging segmentation networks, namely SegNet and U-Net. The purpose of using both models is to distinguish the infected and healthy lung tissue. The networks are trained on 72 and validated on ten images. The proposed method obtained an accuracy of 0.95 with SegNet and 0.91 with U-Net. Empirically, the authors also stated that the mini-batch size affects the performance negatively. &#220;mit <ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref> presented a new procedure for automatic segmentation of COVID-19 in CT images using SegNet and attention gate mechanism. A dataset with 473 CT images has been utilized as the evaluation data. The performance of the proposed method is judged based on Dice, Tversky, and focal Tversky loss functions. The authors reported that the obtained sensitivity, specificity, and dice scores are 92.73%, 99.51%, and 89.61%, respectively. <ns0:ref type='bibr' target='#b52'>Zhou et al. (2020)</ns0:ref> proposed an effective model to segment COVID-19 from CT images. In comparison to other existing studies ( &#220;mit <ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref>), the model obtained comparable results. 
For each CT slice, the proposed method takes 0.29 seconds to generate the segmented results and obtained a Dice of 83.1%, Hausdorff of 18.8. However, the method is conceived to segment the single class and on a small dataset. A recent approach <ns0:ref type='bibr' target='#b29'>(Raj et al. (2021)</ns0:ref>) leverages a depth network, namely ADID-UNET, to enhance the COVID-19 segmentation performance on CT images. The proposed method is evaluated on public datasets and achieved a 97.01% accuracy, a precision of 87.76%, and an F 1 score of 82.00%.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>METHOD</ns0:head><ns0:p>This Section contains 5 parts. Section 3.1 describes systematically the complete architecture of segmentation models. We present the explanation of the encoders and decoder for a general segmentation architecture are presented in Section 3.2 and Section 3.3 respectively. Afterwards, the description of loss function and several data augmentation methods are explained in Section 3.4 and Section 3.5 respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Segmentation Network Architecture</ns0:head><ns0:p>The overall system architecture is visualized in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. We leveraged the segmentation model with architecture to discriminate the COVID-19 infections from the lung on the medical images, including the encoder and decoder. The trained weights are validated on a separated dataset to find the optimal weights during the training section. To segment lung and COVID-19 infection regions from medical images, we utilized the encoder built based on ResNet, ResNeSt, SE ResNeXt, and Res2Net, with the structures are presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Furthermore, the encoder's generated feature maps are handled by the Unet family and FPN. Each encoder has a distinct feature map dimension. The input of the decoder needs to be adapted based on this dependence. Moreover, the encoders leverage the Imagenet pre-trained weights to reduce the computation cost during training. The medical images are pass into the input layer with the dimension of 512 &#215; 512. It is forwarded through 5 different layers with complex structures to extract the most meaningful features.</ns0:p></ns0:div> <ns0:div><ns0:head>4/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Afterward, the decoders control the critical stages by generating the masks, including the important </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Efficient deep learning model-based Encoder</ns0:head><ns0:p>In an attempt to improve the performance of image segmentation task on medical images, we leveraged the advantages of numerous pre-trained models as the encoder, which are modern architectures and have impressive performances on classification tasks, such as EfficientNet <ns0:ref type='bibr'>(Tan and Le (2019)</ns0:ref>), ResNet <ns0:ref type='bibr' target='#b13'>(He et al. (2016)</ns0:ref>), or ResNeSt <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2020)</ns0:ref>). The responsibilities of the encoder are learning features and providing the initial low-resolution representations. The segmentation architecture will refine the encoder's outputs, referred to as the decoder network. 
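The paper builds each network from an ImageNet-pre-trained encoder and a Unet-family or FPN decoder, and it cites the Segmentation Models PyTorch package (Yakubovskiy (2020)). A minimal sketch of how such an encoder and decoder pair could be instantiated with that package is given below; whether this exact package was used for the experiments is not stated, and the encoder variant, single input channel, and three-class output (background, lung, infection) are assumptions based on the text.

```python
import torch
import segmentation_models_pytorch as smp  # pip install segmentation-models-pytorch

# Encoder + decoder combination, e.g. SE-ResNeXt with a Unet++ decoder
model = smp.UnetPlusPlus(
    encoder_name="se_resnext50_32x4d",   # assumed encoder variant
    encoder_weights="imagenet",          # ImageNet pre-trained weights, as in the paper
    in_channels=1,                       # grey-scale CT slices (assumption)
    classes=3,                           # background, lung, infection (assumption)
)

# An FPN decoder with the same encoder, for comparison
fpn = smp.FPN(encoder_name="se_resnext50_32x4d", encoder_weights="imagenet",
              in_channels=1, classes=3)

x = torch.randn(2, 1, 512, 512)          # dummy batch of 512 x 512 slices
with torch.no_grad():
    logits = model(x)                     # (2, 3, 512, 512) per-pixel class scores
print(logits.shape)
```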
To avoid the vanishing gradient problem <ns0:ref type='bibr' target='#b17'>(Kolen and Kremer (2009)</ns0:ref>) and retrieve the fine-grained information from the previous layer. The skip connections are utilized between the encoder and decoder network or between the encoder and decoder network layers.</ns0:p><ns0:p>Assuming x denotes the input, the expected underlying mapping obtaining by training is f (x), the block within the dotted-line box demands to apply the residual mapping f (x) &#8722; x. In the case of f (x) = x, the identity mapping is the desired underlying mapping. The weights and biases of the block within the dotted-line box need to be set at 0. features at a granular level. By leveraging kernel size of 7 &#215; 7 instead of 3 &#215; 3, the computation of multi-scale feature extraction ability is enhanced but achieved a similar cost.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Decoders for segmentation network</ns0:head><ns0:p>The emergence of artificial intelligence and especially Convolutional Neural Network architectures in computer vision brings the field of image processing to light. Once considered untouchable, several image processing tasks now present promising results like image classification, image recognition, or image segmentation. The image segmentation task's primary purpose is to divide the image into different segment regions, representing the discriminate entity. Compared with classification tasks, segmentation tasks require the feature maps and reconstructing the feature maps' images. In this study, we leveraged the advantages of several CNN-based architectures as the decoder of segmentation models.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Unet Family</ns0:head><ns0:p>We use the implementations of Unet <ns0:ref type='bibr' target='#b33'>(Ronneberger et al. (2015)</ns0:ref>) with the architecture includes three sections, namely the contraction, the bottle-neck, and the expansion section. Generally, the contraction section is the combination of several contraction blocks. Each block consists of 3 &#215; 3 convolution and 2 &#215; 2 max-pooling layers. The number of feature maps gets double at each max-pooling layer. It helps the architecture learn complex patterns effectively. Furthermore, the kernels of size 3 &#215; 3 are widely used as filters for the widespread deep neural networks <ns0:ref type='bibr' target='#b4'>(Chollet (2017)</ns0:ref>; <ns0:ref type='bibr'>Tan et al. (2019)</ns0:ref>). Besides, the model's performance also depends on the size of kernels and improves the efficiency of capturing high-resolution images. Similar to contraction blocks, the bottle-neck layers also consist of the 3 &#215; 3 convolution but followed by 2 &#215; 2 up convolution layers. The most crucial section of Unet architecture is the expansion section. This section consists of several expansion blocks, and the number of expansion blocks should be equal to the number of contractions. Each block also contains 3 &#215; 3 convolution and 2 &#215; 2 up convolution layers, half of the feature maps after each block are leveraged to maintain symmetry. Moreover, the feature maps corresponding with the contraction layers also include the input. Hence, the image will be reconstructed based on the learned features while contracting the image. To produce the output, a 1 &#215; 1 convolution layer is utilized to generate the feature maps with the number equal to the desired segments.</ns0:p><ns0:p>Also, <ns0:ref type='bibr' target='#b54'>Zhou et al. 
(2018</ns0:ref><ns0:ref type='bibr' target='#b55'>Zhou et al. ( , 2019) )</ns0:ref> Furthermore, the spatial resolution decreases as going up and detecting more high-level structures, the semantic value for each layer can potentially increase. Nevertheless, the bottom layers are in high resolution but can not be utilized for detection due to the semantic value is unsuitable for justifying the slow-down training computation based on it. By applying the 1 &#215; 1 convolution layer at the top-down pathway, the channel dimensions of feature maps from the bottom-up pathway can be reduced and become the top-down pathway's first feature map. Furthermore, element-wise addition is applied to merge the feature maps, the bottom-up pathway, and the top-down pathway.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Loss function for segmentation task</ns0:head><ns0:p>In deep learning and computer vision, the boundary detection definition comes from extracting features to produce significant representations of the objects. More specifically, boundary detection aims to identify object boundaries from images. Therefore, we can consider boundary detection as segmentation problems and target the boundaries to 1 and the rest of the image to 0 as the label. Thus, the loss function can be formulated with the classical function, namely Cross-entropy or Hinge loss. However, in terms of segmentation tasks, the classical loss function models can work imperfectly due to the highly unbalanced label distribution of each class and the per-pixel intrinsic of the classical loss function.</ns0:p><ns0:p>To enhance segmentation models' performance, we considered using Dice loss <ns0:ref type='bibr' target='#b24'>(Milletari et al. (2016)</ns0:ref>)</ns0:p><ns0:p>originates from the Dice coefficient. We will introduce more details about the Dice coefficient in section 4.3. The ground truth and predicted pixels can be considered as two sets. By leveraging Dice loss, the two sets are trained to overlap little by little, and the reduction of Dice loss can be obtained when the predicted pixels overlap only the ground-truth pixels. Furthermore, with Dice loss, the total number of pixels at the global scale is investigated as the denominator, whereas the numerator pays attention to the overlap between two sets at the local scale. Hence, the loss of information globally and locally is utilized by Dice loss and critically improves accuracy. Moreover, in thin boundaries, the model utilizing Dice loss can achieve better performance than others.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Data Augmentation</ns0:head><ns0:p>Deep learning requires more data to improve classification and regression tasks, although it is not easy to Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In this study, we adapted several techniques as follows. Firstly, the outlier will be cut off from the original image, and we set 10% and 90% for the cut-off lower and upper percentiles, respectively. Then, the low resolution of the cut-off outlier image is simulated. We also applied the mirror, contrast, and brightness transform. Furthermore, gamma and Gaussian noise are utilized before adding spatial transforms. Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref> visualizes the original medical image, and the image leveraged data augmentation techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EVALUATION</ns0:head><ns0:p>In this section, we present our experimental configurations in Section 4.1. 
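As a concrete companion to the Dice loss described in Section 3.4, the sketch below gives a generic soft-Dice formulation with a smoothing constant for binary masks; it is not claimed to be the exact implementation used in the experiments.

```python
import torch

def soft_dice_loss(logits, targets, smooth=1.0):
    """Generic soft Dice loss for binary masks.

    logits:  raw model outputs, shape (N, 1, H, W)
    targets: ground-truth masks in {0, 1}, same shape
    """
    probs = torch.sigmoid(logits)
    probs = probs.reshape(probs.size(0), -1)
    targets = targets.reshape(targets.size(0), -1).float()

    intersection = (probs * targets).sum(dim=1)       # local overlap (numerator)
    union = probs.sum(dim=1) + targets.sum(dim=1)     # global pixel mass (denominator)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - dice.mean()

# Toy usage
logits = torch.randn(2, 1, 64, 64)
targets = (torch.rand(2, 1, 64, 64) > 0.7).float()
print(soft_dice_loss(logits, targets).item())
```

Minimising this quantity pushes the predicted soft masks to overlap the ground truth both locally (through the numerator) and globally (through the denominator), as discussed above.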
Furthermore, the research information referred as the COVID-19 dataset and the considered evaluation metrics are explained in Section 4.2 and Section 4.3 respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Settings for experiment</ns0:head><ns0:p>We implemented the trained models built based on Pytorch framework 3 . We also accelerated the training section by utilizing the weights trained on the ImageNet dataset <ns0:ref type='bibr' target='#b34'>(Russakovsky et al. (2015)</ns0:ref>). The processing includes three main phases: training and validation, as described in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. For the first stage, the medical images dataset is split into training and validation sections. Afterward, we evaluated several segmentation architectures and stored internal parameters with the highest performance. A robust computational resource is required for the segmentation model. Thus, we used a server with configurations listed in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> to conduct our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Dataset</ns0:head><ns0:p>We investigated our approach's performance on the COVID-19 segmentation dataset from the Italian Society of Medical and Interventional Radiology 4 . The dataset includes 829 slices belonging to 9 axial volumetric CTs. Furthermore, the experienced radiologist has evaluated, segmented, and marked as COVID-19 on 373 out of the total of 829 slices. The medical images have been transformed to greyscaled </ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Metrics for comparison</ns0:head><ns0:p>In this study, we considered using two metrics to evaluate the current approach's performance, namely dice <ns0:ref type='bibr' target='#b40'>(S&#248;rensen (1948)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Dice (1945)</ns0:ref>) and Jaccard coefficients. Intuitively, the segmentation performance is measured by evaluating the overlap between the predictions and the ground-truth object. The results with more overlap regions with the ground truth reveal better performance than those with fewer overlap regions. Both Dice and Jaccard indices are in the range between 0 and 1. We assume that A and B are Manuscript to be reviewed</ns0:p><ns0:p>Computer Science 0. Thus, the Dice coefficient can be evaluated by 2 &#215; the area of overlap divided by the total number of pixels in both masks as in Equation <ns0:ref type='formula'>1</ns0:ref>. </ns0:p><ns0:formula xml:id='formula_0'>Jaccard(A, B) = IoU = ||A &#8745; B|| ||A &#8746; B|| (2)</ns0:formula><ns0:p>We also consider the following definitions:</ns0:p><ns0:p>&#8226; True Positive (TP) reveals the number of positives. In other words, the predictions match with the ground-truth label.</ns0:p></ns0:div> <ns0:div><ns0:head>11/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; True Negative (TN) indicates the predictions do not belong to the ground-truth and are not segmented.</ns0:p><ns0:p>&#8226; False Positive (FP) demonstrates the predicted masks unmatch with the ground-truth masks.</ns0:p><ns0:p>&#8226; False Negative (FN) expresses the predictions belong to the ground-truth, but it is not segmented correctly.</ns0:p><ns0:p>Furthermore, in terms of the confusion matrix, the Dice and IoU equation can be rephrased as in Equation <ns0:ref type='formula'>3</ns0:ref>and Equation <ns0:ref type='formula'>4</ns0:ref>. 
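To make these definitions concrete, the sketch below computes the Dice and Jaccard/IoU of Equations 3 and 4, together with precision and recall, from a predicted and a ground-truth binary mask for a single class; the toy masks are illustrative only.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard/IoU, precision and recall for one binary class mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()

    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return {"dice": dice, "iou": iou, "precision": precision, "recall": recall}

# Toy example: two 4x4 masks that partially overlap
gt = np.array([[0, 0, 1, 1]] * 4)
pred = np.array([[0, 1, 1, 0]] * 4)
print(overlap_metrics(pred, gt))   # dice 0.5, iou ~0.33, precision 0.5, recall 0.5
```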
We also present the measuring of the segmentation errors in Figure <ns0:ref type='figure' target='#fig_12'>5</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>Dice = 2T P 2T P + FP + FN (3) Jaccard = IoU = T P T P + FP + FN (4)</ns0:formula><ns0:p>The formula of computing Dice is relevant to the F 1 score. In other words, the Dice and F 1 achieve the same value in comparison with each other. Moreover, Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref> visualizes in detail the differences between Dice and Jaccard/Iou indices. Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref>, the left image represents the Dice coefficient, whereas the right image exhibits the Intersection over Union between the predicted mask and the ground-truth mask.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>EXPERIMENTAL RESULTS</ns0:head><ns0:p>In this section, we present in detail our experimental results. Section 5.1 presents the segmentation performance of the configurations introduced in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. All the models are trained over 100 epochs, and the model that achieved the best performance will be stored for inferring purposes. Afterward, the discussion of the proposed methods and the other systems is presented in section 5.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Segmentation performance on the medical image dataset</ns0:head><ns0:p>We describe the results of both cases, non-augmented and augmented data in Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> respectively. More specifically, the IoU, mean IoU (mIoU), Precision, Recall, and F 1 score of our experimental configurations are reported to express the performance when applying data augmentation techniques and vice versa.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> reveals the results of inferring the trained model without using data augmentation techniques. The architectures built based on the Unet decoder family obtained better results compared to FPN. Among the proposed configurations, C 9 acquires the best mIoU of 0.9262, whereas C 7 gets 0.9259 in the second place.</ns0:p><ns0:p>In terms of the Dice coefficient, C 9 achieves 0.9016 for the COVID-19 category, being the best architecture.</ns0:p><ns0:p>Considering Precision and Recall, C 9 earns 0.9129, 0.8906, and gives a comparable performance to the others. Table <ns0:ref type='table' target='#tab_5'>6 presents</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The comparison of not using data augmentation and using data augmentation reveals the architectures with Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='5.2'>Benchmark</ns0:head></ns0:div> <ns0:div><ns0:head n='5.2.1'>Lung segmentation</ns0:head><ns0:p>The comparison of lung segmentation is presented in Table <ns0:ref type='table' target='#tab_7'>8</ns0:ref>. As observed from the results, our configurations with different decoders have outperformed the approach by <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref>. More specifically, the work of <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Unet++ decoder is the best segmentation model among the others. 
Our configurations get a promising performance compared with the others.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>In The proposed architecture of COVID-19 segmentation system on medical images.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2 https://www.who.int/emergencies/diseases/novel-coronavirus-2019, accessed 11 May 2021 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021) Manuscript to be reviewed Computer Science combinations to enhance the performance in COVID-19 lung CT image segmentation. This study has leveraged well-known deep learning architectures as encoders and Unet family, Feature Pyramid Network techniques as decoders to produce segmentation on chest slices for supporting COVID-19 diagnosis. The principal contributions include as follows: &#8226; Numerous configurations generated by combining five well-known deep learning architectures (ResNet, ResNeSt, SE ResNeXt, Res2Net, and EfficientNet-B0) and Unet family are evaluated and compared to the state-of-the-art to reveal the efficiency in the COVID-19 lung CT image segmentation. Moreover, we also include Feature Pyramid Network (FPN), a famous architecture for segmentation tasks in configurations for comparison.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The proposed architecture of COVID-19 segmentation system on medical images.</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.58,212.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The process diagram of segmentation model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>In the scope of this study, we leveraged ResNet, ResNeSt, SE ResNeXt, and Res2Net as the encoders to generate the features map and transfer the features to the decoder network segmenting the COVID-19 regions on medical images. ResNeSt is a new network inherited from ResNet with an attention mechanism, performed promising results on image classification and segmentation tasks. In this architecture, the feature maps are split into G groups where G = #Cardinality &#215; #Radix. The introduction of Cardinality is presented from Resnext (Xie et al. (2017)), which repeats the bottle-neck blocks and breaks channel information into smaller groups, whereas Radix represents the block of Squeeze-and-Excitation Networks (SENet) (Hu et al. (2018)). In summary, this architecture combines the Cardinality of Resnext and Attention of Squeeze and Excitation Networks to formulate the Split Attention. In other words, Split Attention is the modification of the gating mechanism. Recently, Gao et al. (2021) proposed a novel building block for CNN constructs hierarchical residual-like connections within one single residual block, namely Res2Net. In other words, the bottle-neck block of the ResNet architecture is re-designed and contributes to increasing the range of receptive fields for each network layer by representing multi-scale 6/19 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>proposed an Unet-like architecture, namely Unet++. The advantages of 7/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021) Manuscript to be reviewed Computer Science Unet++ can be considered as capturing various levels of features, integrating features, and leveraging a shallow Unet structure. The discriminations between Unet and Unet++ are the skip connection associating two sub-networks and utilizing the deep supervision. The segmentation results are available at numerous nodes in the structure of Unet++ by training with deep supervision. Another Unet-like architecture has been proposed by Guan et al. (2020) for detaching artifacts from 2D photoacoustic tomography images. We leverage the Unit, Unet++, and Unet 2d as the decoder architectures to conduct the experiments. When compared with original Unet architecture, Unet++ and Unet2d have been built to reduce the semantic gap between the feature maps and efficiently learn the global and local features.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>FPN (Lin et al. (2017)) is also a famous architecture appropriate to segmentation tasks. FPN is a feature extractor with a single-scale image of a stochastic dimension and generates the feature maps with proportional size. The primary purpose of FPN is to build feature pyramids inside convolutional neural networks to be used in segmentation or object recognition tasks. The architecture of the FPN involves a bottom-up pathway and a top-down pathway. The bottom-up pathway defines a convolutional neural network with feature extraction, and it composes several convolution blocks. Each block consists of convolution layers. The last layer's output is leveraged as the reference set of feature maps for enriching the top-down pathway by lateral connection. Each lateral connection merges feature maps of the same spatial size from the bottom-up and top-down pathways. Thus, FPN architecture consists of a top-down pathway to construct higher resolution layers from a semantic-rich layer. To improve predicting locations' performance, we deploy the lateral to connect between reconstructed layers, and the corresponding feature maps are utilized due to the reconstructed layers are semantic strong, whereas the locations of objects are not precise. It works similarly to skip connections of ResNet.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>collect needed data. Augmentation techniques was introduced in the work of van Dyk and Meng (2001), and leveraged by Zoph et al. (2020); Frid-Adar et al. (2018); Xu et al. (2020); Ahn et al. (2020) in a vast of studies. More specific, data augmentation is the technique to abound the number of samples in the dataset by modifying the existing samples or generating newly synthetic data. Leveraging the advantages 8/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The visualization of the data augmentation techniques. 
The original medical image is on the left, whereas the image with data augmentation techniques is on the right.</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.85,239.02' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>in NIFTI file format. The segmented labels include the infection masks but also lung masks. Therefore, it could be more attractive for performing segmentation tasks on this dataset. Figure4visualizes a sample of the dataset; the left CT slide presents the original image of a COVID-19 patient, the right image includes the lung, and the infection region visualizing by blue and orange color, respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A sample of the COVID-19 segmentation dataset. The left image presents a CT slice of a COVID-19 patient, whereas the right image visualizes the lung and infection region of the patient.</ns0:figDesc><ns0:graphic coords='11,141.73,210.63,413.57,147.05' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>prediction and ground-truth masks, respectively, for a given class. If A and B match perfectly, the value of both Dice and Jaccard indices is equal to 1. Otherwise, A and B are no overlap, and the value is equal to 10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The illustration of the segmentation error.</ns0:figDesc><ns0:graphic coords='12,212.04,100.03,272.96,155.46' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The visualization of Dice and IoU. The left image presents the Dice coefficient, the right describes the IoU.</ns0:figDesc><ns0:graphic coords='12,212.04,444.23,272.95,139.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>the performance of our experimental configurations. The proposed networks are trained under data augmentation techniques. Overall, almost the configuration with decoder-based Unet family (C 1 to C 15 ) achieved mIoU over 0.9, whereas the architecture-based FPN obtained approximately at 0.8. Between the configurations, C 6 achieves the best performance with the mIoU obtained of 0.9283. Configuration C 6 represents the combination of ResNet and Unet2d. Furthermore, the second place gets the mIoU of 0.9234 with configuration C 13 trained with SE ResNeXt and Unet++ model as the encoder and decoder.As for lung and COVID-19 segmentation, the average IoU achieves 0.8 on overall configurations for COVID-19 infection regions and 0.94 for lung masks. The configurations of C 6 and C 13 exhibit promising performance on the COVID-19 segmentation task with obtained IoU of 0.8241 and 0.8234. Meanwhile, the configurations of C 13 and C 14 get the maximum F 1 score for the COVID-19 category with an obtained value of 0.9032 0.9031. By examining Precision and Recall, C 6 acquires the best precision performance, whereas C 13 gains the maximum Recall. Figure7depicts the confusion matrix for configuration C 13 . We also normalize the confusion matrix over rows for analyzing purposes. The confusion matrix values reveal that most of the misjudgments of COVID-19 infection regions are categorized as lung and vice versa.12/19PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>FPN</ns0:head><ns0:label /><ns0:figDesc>-based tend to be more effective when not utilizing augmentation techniques. It can be demonstrated by the results of C 16 , C 17 , and C 18 . By the Unet family decoder, the outcomes depict that the models with nearly equivalent performance, i.e., C 2 , C 4 , C 8 , C 10 , and C 12 . The most discriminate configurations are C 11 and C 14 . The performance of C 11 and C 14 are strongly boosted by applying data augmentation techniques. Furthermore, C 11 and C 14 also obtain better performance. In this respect, we can conclude that the augmentation techniques affect the results with almost all configurations. Furthermore, the training and inference times are reported in Table 7. The training presents the total time needs for 100 epochs, whereas the inference expresses the average time to segment each slice. With the Unet and FPN decoder, the architectures are trained with lower computation costs than the rest. The least and most time-consuming configurations for training/inference are C 16 and C 13 , respectively. As observed from the results, with the same encoders, architectures FPN decoder-based segment fastest for each slice. We also visualize several samples with ground truth and the prediction masks in and Figure 8. The lung is 13/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8C, or Figure 8E, the boundary of COVID-19 segmentation is equivalent to the corresponding 420</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>obtained a Dice of 0.7490 and Sensitivity of 0.9560 with the SegNet method, while our best configuration, C 13 , achieved a maximum Dice of 0.9748 and Sensitivity of 0.9755. In particular, by leveraging SE ResNeXt and Unet++ architecture, we get the maximum score compared to the others.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The COVID-19 samples, ground-truth and predictions from test set. The purple regions denote the lung, whereas the yellows represent the COVID-19 infection.</ns0:figDesc><ns0:graphic coords='17,214.58,63.78,267.89,496.28' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>this study, we systematically presented a viable solution for lung and COVID-19 segmentation from CT images. The proposed model is implemented based on the convolutional neural networks, i.e., ResNet, ResNeSt, SE ResNeXt, Res2Net, or EfficientNet decoder-based Unet family and Feature Pyramid Network as the encoders and decoders of segmentation models. We evaluated the proposed method by the open segmentation dataset with numerous model structures. The experimental results reveal that the model with the decoder-based Unet family obtained better performance than FPN. Furthermore, the segmentation results are compared with the ground truth annotated by an experienced radiologist and exhibit promising performance. 
More specifically, the best architecture obtained a mIoU of 0.9234, 0.9032 of F 1 -score, 0.8735, and 0.9349 of Precision and Recall, respectively. Besides, segmenting the minimal infection regions still challenges us due to their size and ambiguous regions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,199.12,525.00,378.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,219.37,525.00,186.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,298.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,267.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,354.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The model structures with Unet family and FPN decoder.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ResNet 34</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 32 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 64 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 24 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 128 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 40 &#215; 64 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Layer 
4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 256 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 112 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 512 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 320 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row></ns0:table><ns0:note>Output: 1 &#215; 512 &#215; 512 Output: 1 &#215; 512 &#215; 512 Output: 1 &#215; 512 &#215; 512 Output: 1 &#215; 512 &#215; 512 Output: 1 &#215; &#215;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The complete architecture and configurations of segmentation models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Configuration</ns0:cell><ns0:cell>Encoder</ns0:cell><ns0:cell>Decoder</ns0:cell><ns0:cell># of trainable</ns0:cell><ns0:cell>Model size</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>params</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>C 1</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>24,430,677</ns0:cell><ns0:cell>293.4 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 2</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>24,033,525</ns0:cell><ns0:cell>288.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 3</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet</ns0:cell><ns0:cell>34,518,277</ns0:cell><ns0:cell>414.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 
4</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>32,657,501</ns0:cell><ns0:cell>392.5 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 5</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>6,251,473</ns0:cell><ns0:cell>72.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 6</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 7</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 8</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet2d</ns0:cell><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 9</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 10</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 11</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>26,072,917</ns0:cell><ns0:cell>313.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 12</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>40,498,165</ns0:cell><ns0:cell>468.4 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 13</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet++</ns0:cell><ns0:cell>50,982,917</ns0:cell><ns0:cell>612.5 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 14</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>49,122,141</ns0:cell><ns0:cell>590.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 15</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>6,569,585</ns0:cell><ns0:cell>76.1 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 16</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>23,149,637</ns0:cell><ns0:cell>270.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 17</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>17,628,389</ns0:cell><ns0:cell>211.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 18</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>FPN</ns0:cell><ns0:cell>28,113,141</ns0:cell><ns0:cell>337.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 19</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>26,252,365</ns0:cell><ns0:cell>315.6 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 20</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>5,759,425</ns0:cell><ns0:cell>66.3 MB</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Hardware and software configurations.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The information of the considered dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell># of samples</ns0:cell></ns0:row><ns0:row><ns0:cell>Lung Masks</ns0:cell><ns0:cell>829</ns0:cell></ns0:row><ns0:row><ns0:cell>Infection Masks</ns0:cell><ns0:cell>829</ns0:cell></ns0:row><ns0:row><ns0:cell>Infection Masks with COVID-19</ns0:cell><ns0:cell>373</ns0:cell></ns0:row><ns0:row><ns0:cell>Training set</ns0:cell><ns0:cell>300</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing set</ns0:cell><ns0:cell>73</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The experimental results in details of various configurations 
described in Table2. The data augmentation techniques are not utilized to perform the experiment.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Configuration C 1</ns0:cell><ns0:cell>C 2</ns0:cell><ns0:cell>C 3</ns0:cell><ns0:cell>C 4</ns0:cell><ns0:cell>C 5</ns0:cell><ns0:cell>C 6</ns0:cell><ns0:cell>C 7</ns0:cell><ns0:cell>C 8</ns0:cell><ns0:cell>C 9</ns0:cell><ns0:cell>C 10</ns0:cell></ns0:row><ns0:row><ns0:cell>IoU</ns0:cell><ns0:cell>Not Lung Nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9954 0.9948 0.9956 0.9956 0.9941 0.9956 0.9968 0.9956 0.9971 0.9954 0.9449 0.9407 0.9393 0.9464 0.9315 0.9492 0.9575 0.9475 0.9581 0.9460</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.7978 0.7860 0.7631 0.8059 0.7498 0.8175 0.8233 0.8177 0.8233 0.8127</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9127 0.9072 0.8993 0.9160 0.8918 0.9208 0.9259 0.9204 0.9262 0.9219</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung Nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9977 0.9975 0.9978 0.9978 0.9971 0.9978 0.9975 0.9977 0.9977 0.9978 0.9717 0.9695 0.9687 0.9725 0.9645 0.9724 0.9716 0.9725 0.9735 0.9722 0.8875 0.8802 0.8656 0.8925 0.8570 0.8996 0.8934 0.8993 0.9016 0.8967</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9986 0.9982 0.9981 0.9986 0.9988 0.9974 0.9971 0.9981 0.9977 0.9981 0.9691 0.9721 0.9656 0.9747 0.9633 0.9764 0.9707 0.9706 0.9718 0.9749 0.8683 0.8386 0.8737 0.8502 0.8036 0.8883 0.9145 0.8979 0.9129 0.8730</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9968 0.9967 0.9975 0.9969 0.9953 0.9981 0.9979 0.9974 0.9977 0.9976 0.9742 0.9669 0.9718 0.9702 0.9658 0.9684 0.9725 0.9745 0.9752 0.9696</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9077 0.9261 0.8577 0.9394 0.9180 0.9112 0.8732 0.9008 0.8906 0.9233</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Configuration C 11</ns0:cell><ns0:cell>C 12</ns0:cell><ns0:cell>C 13</ns0:cell><ns0:cell>C 14</ns0:cell><ns0:cell>C 15</ns0:cell><ns0:cell>C 16</ns0:cell><ns0:cell>C 17</ns0:cell><ns0:cell>C 18</ns0:cell><ns0:cell>C 19</ns0:cell><ns0:cell>C 20</ns0:cell></ns0:row><ns0:row><ns0:cell>IoU</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9959 0.9956 0.9959 0.9956 0.9950 0.9941 0.9942 0.9948 0.9944 0.9933 0.9488 0.9476 0.9475 0.9415 0.9358 0.9351 0.9305 0.9379 0.9369 0.9234</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.8168 0.8138 0.8073 0.7776 0.7608 0.7745 0.7311 0.7758 0.7786 0.7299</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9205 0.9191 0.9169 0.9049 0.8972 0.9013 0.8852 0.9029 0.9033 0.8822</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9979 0.9978 0.9979 0.9978 0.9975 0.9971 0.9971 0.9974 0.9972 0.9966 0.9737 0.9731 0.9731 0.9646 0.9668 0.9665 0.9640 0.9680 0.9674 0.9602 0.8992 0.8973 0.8934 0.8749 0.8641 0.8729 0.8446 0.8738 0.8755 0.8439</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9988 0.9991 0.9989 0.9975 0.9981 0.9987 0.9986 0.9986 0.9982 0.9978 0.9726 0.9689 0.9744 0.9752 0.9722 0.9625 0.9666 0.9709 0.9665 
0.9647 0.8742 0.8711 0.8522 0.8548 0.8139 0.8350 0.7823 0.8187 0.8445 0.7822</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9971 0.9964 0.9970 0.9981 0.9969 0.9954 0.9955 0.9962 0.9962 0.9954 0.9749 0.9773 0.9717 0.9646 0.9615 0.9705 0.9615 0.9651 0.9684 0.9557</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9254 0.9252 0.9386 0.8959 0.9209 0.9144 0.9177 0.9370 0.9089 0.9162</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Experimental results in details with various configurations of the architectures mentioned and described in Table2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Configuration C 1</ns0:cell><ns0:cell>C 2</ns0:cell><ns0:cell>C 3</ns0:cell><ns0:cell>C 4</ns0:cell><ns0:cell>C 5</ns0:cell><ns0:cell>C 6</ns0:cell><ns0:cell>C 7</ns0:cell><ns0:cell>C 8</ns0:cell><ns0:cell>C 9</ns0:cell><ns0:cell>C 10</ns0:cell></ns0:row><ns0:row><ns0:cell>IoU</ns0:cell><ns0:cell>Not Lung Nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9956 0.9951 0.9952 0.9958 0.9945 0.9981 0.9952 0.9951 0.9955 0.9953 0.9468 0.9422 0.9351 0.9477 0.9341 0.9628 0.9443 0.9492 0.9462 0.9465</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.8121 0.7868 0.7569 0.8103 0.7581 0.8241 0.8169 0.8193 0.8147 0.8239</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9182 0.9081 0.8957 0.9181 0.8955 0.9283 0.9188 0.9212 0.9188 0.9219</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung Nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9978 0.9976 0.9976 0.9979 0.9972 0.9978 0.9976 0.9979 0.9976 0.9977 0.9727 0.9703 0.9664 0.9731 0.9661 0.9734 0.9714 0.9737 0.9714 0.9725 0.8963 0.8807 0.8616 0.8952 0.8623 0.8941 0.8992 0.9003 0.8979 0.9034</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9985 0.9986 0.9983 0.9985 0.9981 0.9972 0.9977 0.9981 0.9978 0.9974 0.9721 0.9712 0.9651 0.9735 0.9712 0.9761 0.9726 0.9732 0.9697 0.9776 0.8716 0.8393 0.8432 0.8709 0.8061 0.9025 0.8884 0.8948 0.9027 0.8844</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9971 0.9965 0.9969 0.9973 0.9964 0.9984 0.9975 0.9976 0.9975 0.9979 0.9733 0.9694 0.9677 0.9728 0.9607 0.9707 0.9702 0.9743 0.9732 0.9675</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9224 0.9263 0.8801 0.9209 0.9272 0.9258 0.9103 0.9059 0.8932 0.9233</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Configuration C 11</ns0:cell><ns0:cell>C 12</ns0:cell><ns0:cell>C 13</ns0:cell><ns0:cell>C 14</ns0:cell><ns0:cell>C 15</ns0:cell><ns0:cell>C 16</ns0:cell><ns0:cell>C 17</ns0:cell><ns0:cell>C 18</ns0:cell><ns0:cell>C 19</ns0:cell><ns0:cell>C 20</ns0:cell></ns0:row><ns0:row><ns0:cell>IoU</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9958 0.9956 0.9961 0.9954 0.9936 0.9947 0.9941 0.9951 0.9945 0.9931 0.9498 0.9476 0.9508 0.9465 0.9293 0.9327 0.9341 0.9308 0.9344 0.9235</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.8186 0.8138 0.8234 0.8232 0.7671 0.7495 0.7616 0.7395 0.7524 0.7356</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9214 0.9191 
0.9234 0.9217 0.8966 0.8923 0.8966 0.8884 0.8938 0.8841</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9979 0.9978 0.9981 0.9977 0.9968 0.9973 0.9971 0.9974 0.9973 0.9965 0.9742 0.9731 0.9748 0.9725 0.9633 0.9652 0.9659 0.9642 0.9661 0.9602 0.9002 0.8973 0.9032 0.9031 0.8681 0.8568 0.8647 0.8502 0.8587 0.8476</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9989 0.9991 0.9989 0.9991 0.9986 0.9975 0.9981 0.9975 0.9965 0.9982 0.9721 0.9689 0.9741 0.9669 0.9583 0.9726 0.9682 0.9737 0.9721 0.9613 0.8762 0.8711 0.8735 0.8836 0.8327 0.8102 0.8172 0.7988 0.8528 0.7874</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9969 0.9964 0.9971 0.9963 0.9945 0.9971 0.9959 0.9974 0.9981 0.9948 0.9765 0.9773 0.9755 0.9781 0.9684 0.9579 0.9637 0.9548 0.9601 0.9591</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9256 0.9252 0.9349 0.9234 0.9066 0.9091 0.9179 0.9088 0.8648 0.8879</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>The report (in second(s)) of training time (1) and inference time (2) of 20 configurations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>C 1</ns0:cell><ns0:cell>C 2</ns0:cell><ns0:cell>C 3</ns0:cell><ns0:cell>C 4</ns0:cell><ns0:cell>C 5</ns0:cell><ns0:cell>C 6</ns0:cell><ns0:cell>C 7</ns0:cell><ns0:cell>C 8</ns0:cell><ns0:cell>C 9</ns0:cell><ns0:cell>C 10</ns0:cell></ns0:row><ns0:row><ns0:cell>(1)</ns0:cell><ns0:cell>5,949</ns0:cell><ns0:cell>6,390</ns0:cell><ns0:cell>7,900</ns0:cell><ns0:cell>6,659</ns0:cell><ns0:cell>6,248</ns0:cell><ns0:cell>7,351</ns0:cell><ns0:cell>7,377</ns0:cell><ns0:cell>7,497</ns0:cell><ns0:cell>7,465</ns0:cell><ns0:cell>7,526</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>(2) 0.5753 0.5892 0.6164 0.6027 0.6301 0.5891 0.6438 0.6287 0.6287 0.6301</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C 11</ns0:cell><ns0:cell>C 12</ns0:cell><ns0:cell>C 13</ns0:cell><ns0:cell>C 14</ns0:cell><ns0:cell>C 15</ns0:cell><ns0:cell>C 16</ns0:cell><ns0:cell>C 17</ns0:cell><ns0:cell>C 18</ns0:cell><ns0:cell>C 19</ns0:cell><ns0:cell>C 20</ns0:cell></ns0:row><ns0:row><ns0:cell>(1)</ns0:cell><ns0:cell>6,656</ns0:cell><ns0:cell cols='2'>8,884 10,207</ns0:cell><ns0:cell>9,147</ns0:cell><ns0:cell>6,505</ns0:cell><ns0:cell>5,832</ns0:cell><ns0:cell>6,127</ns0:cell><ns0:cell>7,481</ns0:cell><ns0:cell>6,388</ns0:cell><ns0:cell>5,999</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>(2) 0.6109 0.6931 0.7328 0.7027 0.5986 0.5684 0.5889 0.6054 0.5972 0.5794</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>colored by slate blue and orange for COVID-19. 
As observed from the results, our predictions express</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The performance of lung segmentation, the lung includes non of infected COVID-19 region</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell cols='2'>Dice Sensitivity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C4</ns0:cell><ns0:cell>0.9731</ns0:cell><ns0:cell>0.9728</ns0:cell></ns0:row><ns0:row><ns0:cell>Our Configurations</ns0:cell><ns0:cell>C6 C13</ns0:cell><ns0:cell>0.9734 0.9748</ns0:cell><ns0:cell>0.9707 0.9755</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C19</ns0:cell><ns0:cell>0.9661</ns0:cell><ns0:cell>0.9601</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Saood and Hatem (2021) SegNet</ns0:cell><ns0:cell>0.7490</ns0:cell><ns0:cell>0.9560</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 7. The visualization of confusion matrix for segmentation model-based SE ResNeXt and Unet++ 5.2.2 COVID-19 segmentation</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>The summarization of quantitative results of infected COVID-19 regions, -represents for no relevant information in the original study.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell cols='3'>Dice Sensitivity Precision</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C1</ns0:cell><ns0:cell>0.8963</ns0:cell><ns0:cell>0.9224</ns0:cell><ns0:cell>0.8716</ns0:cell></ns0:row><ns0:row><ns0:cell>Our Configurations</ns0:cell><ns0:cell>C10 C13</ns0:cell><ns0:cell>0.9034 0.9032</ns0:cell><ns0:cell>0.9233 0.9349</ns0:cell><ns0:cell>0.8844 0.8711</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C17</ns0:cell><ns0:cell>0.8966</ns0:cell><ns0:cell>0.9179</ns0:cell><ns0:cell>0.8172</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>&#220;mit Budak et al. (2021) A-SegNet + FTL</ns0:cell><ns0:cell>0.8961</ns0:cell><ns0:cell>0.9273</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhou et al. (2020)</ns0:cell><ns0:cell cols='2'>Unet + attention mechanism 0.8310</ns0:cell><ns0:cell>0.8670</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Raj et al. (2021)</ns0:cell><ns0:cell>ADID-UNET</ns0:cell><ns0:cell>0.8031</ns0:cell><ns0:cell>0.7973</ns0:cell><ns0:cell>0.8476</ns0:cell></ns0:row></ns0:table><ns0:note>Table 9 reveals the the compared performance of the proposed approach with the state-of-the-art methods including SegNet ( &#220;mit Budak et al. (2021)), Unet (Zhou et al. (2020)) with attention mechanism, and 15/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>, the architecture based on 16/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='3'>https://pytorch.org, Date accessed: 10 May 2021 4 http://medicalsegmentation.com/covid19/, accessed 10 March 2021 9/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:1:1:NEW 24 Jul 2021)</ns0:note> </ns0:body> "
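As a supplementary note on data handling, the CT volumes and the lung/infection masks of the considered dataset are distributed in NIfTI format (see footnote 4), so a typical preprocessing step is to turn each volume into per-slice training samples. The outline below is an assumed sketch using the nibabel library, not part of the authors' published code, and the file paths are placeholders:

```python
import numpy as np
import nibabel as nib  # common reader for NIfTI (.nii/.nii.gz) volumes

def iter_slices(ct_path: str, lung_path: str, infection_path: str):
    """Yield (image, target) pairs, one per axial slice of a CT volume.

    Targets use three labels: 0 = background, 1 = lung, 2 = COVID-19 infection.
    The paths are placeholders that depend on how the dataset archive is unpacked.
    """
    ct = nib.load(ct_path).get_fdata()            # array of shape H x W x num_slices
    lung = nib.load(lung_path).get_fdata()
    infection = nib.load(infection_path).get_fdata()
    for i in range(ct.shape[-1]):
        image = ct[..., i].astype(np.float32)
        target = np.zeros(image.shape, dtype=np.int64)
        target[lung[..., i] > 0] = 1
        target[infection[..., i] > 0] = 2         # infection overrides lung where both overlap
        yield image, target
```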
"Dear Reviewers/Editors, Paper ID: 61111 (#CS-2021:05:61111:0:2:REVIEW) Original Title: Decoders configurations based on Unet family and Feature Pyramid Network for COVID-19 Segmentation on CT images Thank you for allowing us to submit a revised version of our paper. We appreciate the time you have dedicated to providing your valuable feedback on our paper. We have been able to incorporate changes as your suggestions with highlights with tracked changes. The indexes of tables, figures, equations we mentioned in this letter are corresponding to the cleaned manuscript file (PeerJ_Journal_61111_covid19_clean.pdf). Thank you very much for your reviews and comments. We look forward to hearing from you! Sincerely, Authors ----------- Reviewer 1 Basic reporting English needs to be moderately revised to be Grammarly correct. → Thank you so much for your comments. As the reviewer suggested, we checked and corrected some grammar issues in the manuscript. What is the novelty of the proposed research, what is your main contribution? → We added the novelty and contributions at the end of the introduction section. The principal contributions include as follows: • Numerous configurations generated by combining five well-known deep learning architectures (ResNet, ResNeSt, SE ResNeXt, Res2Net, and EfficientNet-B0) and Unet family are evaluated and compared to the state-of-the-art to reveal the efficiency in the COVID-19 lung CT image segmentation. Moreover, we also include Feature Pyramid Network (FPN), a famous architecture for segmentation tasks in configurations for comparison. • The visualizations including segmented regions in lungs are examined with various metric performances and exhibit similar infected areas compared to the ground truths. • Augmentation on the COVID-19 lung images dataset is performed with mirror, contrast, and brightness transforms. In addition, gamma and Gaussian noise are manipulated before adding spatial transforms. Such techniques help to enhance the segmentation performance. Besides, we evaluate the performance of COVID-19 segmentation without augmentation techniques. • The training time and inference time are measured and compared among the considered configurations. • From the obtained results, we found that the integration between SE ResNeXt and Unet++ has revealed the best performance in COVID-19 segmentation tasks. Could you please explain the difference the Unet family? → Thank you for your question, it indicates our omissions about the Unet family in the manuscript. We also added the quote “When compared with original Unet architecture, Unet++ and Unet2d have been built with the aim to reduce the semantic gap between the feature maps and efficiently learn the global and local features” to the manuscript (Lines 260-262). “Ronneberger et al. (2015a) and ? demonstrated that the improvement of segmentation tasks recently relied 138 more on the encoder-decoder than other architecture.” What does the question mark mean? Do you mean that all the features from Unet family are used? Or any specific Unet is finally used? → We apologize about the citation error. We have modified the paragraph to: “Ronneberger et al. (2015a) and and Long et al. (2015) demonstrated that the improvement of segmentation tasks recently relied 138 more on the encoder-decoder than other architectures.”. Yes, we used all the features from the Unet family to conduct the COVID-19 segmentation. 
The segmentation models have been built based on Unet, Unet++, and Unet2d architectures, we call them the Unet family. What the meaning of Cn? → Cn is the n-th configuration (as described in Table 2: “The complete architecture and configurations of segmentation models”, in the manuscript) in the combination between encoders and decoders in our proposed method. Check equation 2 and equation 7? Why do you have two definitions for Jaccard? → Thank you for pointing this out. Jaccard is the definition of the size of the intersection of the sets divided by the size of their union, as in Equation 2. But in terms of the true positive (TP), false positive (FP), true negative (TN), and false negative (FN), it can be rephrased as in Equation 7. The visualization of the Jaccard calculation is presented in the below image. We also attached it in the revised version (Figure 7: “The visualization of Dice and IoU. The left image presents the Dice coefficient, the right describes the IoU”, in the manuscript as shown below). Could you compare your proposed method with the recently published method, such as “Covid-19 Classification by FGCNet with Deep Feature Fusion from Graph Convolutional Network and Convolutional Neural Network” “Diagnosis of COVID-19 by Wavelet Renyi Entropy and Three-Segment Biogeography-based Optimization”. → Thank you for your suggestion with interesting works. These are two exciting approaches to classify COVID-19 but unfortunately, they are not within the scope of our research. We are inspired a lot by these studies and will apply them to future works. Hopefully, we can get promising results. We also reviewed and added the summarization of studies to Section 2 as the main related works. Reviewer 2 Basic reporting For the most part, this report is clear and well organized. I very much appreciate the clear structure of the paper and the concise language. However, there are still some minor issues throughout the paper. Here are several issues that I found. 1. In the abstract, line 24 to 26 has grammar issues. 2. There are almost no citations that are included in the introduction part. 3. Line 137 to 140 has some grammar issues. → Thank you so much for your detailed comments. We revised as per your suggestions. We revised and added some citations in the introduction section. 4. Quality of Figure 6 can be improved 5. Table 5 might be more clear if it's rotated by 90 degrees. 6. Equation 2 is weirdly placed. 7. The discussion about recall precision and F1 score is a bit redundant since it is a fairly commonly used metric. → Thank you for your suggestions, we have improved the quality of several Figures. In order to be concise, we removed redundant figures, i.e. figures that were drawn from Unet and Feature Pyramid Network architectures, and provided more informative images related to our proposed architectures and experimental results. We also relocated the Equation 2. Furthermore, we agree with the reviewer that the discussion about recall, precision, and F1 score is redundant, so we have removed it as well. Experimental design The experiment of this paper is well designed. However, It might be beneficial to include the result of the segmentation on the unaugmented image also. Given that in the benchmark papers, the authors didn't use the same augmentation step, it is a weaker claim to say that the improvement in results presented in this work mainly comes from the superior structure without showing those results. → We agreed. 
As your suggestion, we experimented and presented more results for the comparison in the manuscript. The results of using non-augmented and augmented techniques are presented in Table 5 “The experimental results in details of various configurations described in Table 2. The data augmentation techniques are not utilized to perform the experiment.” and the Table 6 with caption “Experimental results in details with various configurations of the architectures mentioned and described in Table 2” respectively. Validity of the findings The finding of the paper is valid. Reviewer 3 Basic reporting English writing is not clear, many sentences are awkward. In many places upper-case letters are used instead of lower case where is not necessary. → We thank the reviewer for pointing out these errors. We revised and highlighted the text. The text in most figures is not clear. Try saving your drawings as tiff images and not pngs. Make them bigger from the beginning. → Thank you for your comment. We enlarged the figures to have a better resolution. The introduction is totally unrelated to the paper’s topic. The paper is about developing, identifying, and evaluating deep learning methods for image segmentation and not about information technology in medicine. → We mentioned the problem from a general perspective, then we went deeper into specific issues in the following paragraphs. We added some paragraphs to describe deep learning methods for image segmentation. The related works section is very badly written. It is just an enumeration of citations with no descriptions in between. It doesn’t tell a story nor it presents a clear logical → We thank the reviewer for pointing this out. We refined the Related work section and separated it into 2 subsections, namely: Applications of deep learning in healthcare and Applications of deep learning in COVID-19 detection. Lines 115-119 in the Methods section, should be moved at the end of the Introduction. Line 42 – “We” needs to be changes to lowercase “we”. Line 58 - “Imaging” needs to be changes to lowercase “imaging”. Line 137 – There is a missing reference. Figure 6 and 9 – Text is not clear. Try to improve the quality of your image. → Thank you for the suggestion. We have corrected all the grammatical errors. We have modified the figures and hope that it is now clear. The paragraph in lines 115-119 is a brief outline of Section 3, so we moved it at the beginning of Section 3. Furthermore, we also add the brief outline for the whole study at the end of the Introduction section, and it is quoted below: “The rest of this study is organized as follows: Section 2 introduces the main related works. Section 3 presents a brief introduction of the segmentation network based on encoder-decoder architectures, the dedicated loss function for segmentation tasks, and augmentation techniques. Afterward, we present our settings, the public COVID-19 dataset, and the evaluation metrics in Section 4. Section 5 exhibits and analyzes the obtained results. We conclude the study and discuss future work in Section 6”. (Lines 110-114). Experimental design This paper proposes a new improved method that combines pretrained ResNet architectures with Unet++ for image segmentation. The application described here is lung segmentation in CT images. The research question addressed in the paper is unclear. → We significantly revised the introduction with the gap research at the end of the introduction section. 
The research question aims to find the segmentation architectures that give the best performance. The methods section is more like a second related works section. Evaluation metrics are described in more details compared to the actual methods. → We moved and combined a part of the mentioned section to the related work section. Validity of the findings It looks like the proposed approach improves the results compared to other methods, but at what cost. No training or inference time are reported in the paper. What is the main advantage of the presented method? → Thank you for your suggestion. We added the inference time of the experiments in the manuscript, details of training and inference time are reported in Table 7 “The report (in second(s)) of training time (1) and inference time (2) of 20 configurations”. We leveraged the advancements from well-known convolutional neural networks for encoder-decoder architectures to provide promising performance in lung, non-lung, infected areas by covid segmentation tasks compared to some state-of-the-art methods. The paper specifies a github repository, however the code uploaded there is not enough to replicate the experiments in the paper. It is unclear to me what makes some of the 20 configurations new. → We agreed. We updated the guideline to the github repository to reproduce the experimental results. Comments for the Author The proposed method looks more like a brute force approach than a new improved method. The paragraph about a graphical interface is irrelevant to the paper. → We conducted previous studies that the efficiency of Unet families in image segmentation and outstanding image classification performance of well-known convolutional neural network architectures can give a significant research question on the benefits of their combination to enhance the performance in COVID-19 lung CT image segmentation. From this observation, we have leveraged well-known deep learning architectures as encoders and Unet family, Feature Pyramid Network techniques as decoders to produce segmentation on chest slices for supporting COVID-19 diagnosis. "
Here is a paper. Please give your review comments after reading it.
238
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Coronavirus Disease 2019 (COVID-19) pandemic has been ferociously destroying global health and economics. According to World Health Organisation (WHO), until May 2021, more than one hundred million infected cases and 3.2 million deaths have been reported in over 200 countries. Unfortunately, the numbers are still on the rise. Therefore, scientists are making a significant effort in researching accurate, efficient diagnoses. Several studies advocating artificial intelligence proposed COVID diagnosis methods on lung images with high accuracy. Furthermore, some affected areas in the lung images can be detected accurately by segmentation methods. This work has considered state-of-the-art Convolutional Neural Network architectures, combined with the Unet family and Feature Pyramid Network (FPN) for COVID segmentation tasks on Computed Tomography (CT) scanner samples the Italian Society of Medical and Interventional Radiology dataset. The experiments show that the decoder-based Unet family has reached the best (a mean Intersection Over Union (mIoU) of 0.9234, 0.9032 in dice score, and a recall of 0.9349) with a combination between SE ResNeXt and Unet++. The decoder with the Unet family obtained better COVID segmentation performance in comparison with Feature Pyramid Network. Furthermore, the proposed method outperforms recent segmentation state-ofthe-art approaches such as the SegNet-based network, ADID-UNET, and A-SegNet + FTL.</ns0:p><ns0:p>Therefore, it is expected to provide good segmentation visualizations of medical images.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>programs to computerize and digitize health records. Many research and application programs are implemented in hospitals and health facilities with hospital information systems <ns0:ref type='bibr' target='#b6'>(Ferdousi et al. (2020)</ns0:ref>), communication systems <ns0:ref type='bibr' target='#b26'>(Nayak and Patgiri (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Belasen et al. (2020)</ns0:ref>), robot-based surgeon systems <ns0:ref type='bibr' target='#b20'>(Lee et al. (2021)</ns0:ref>), and nursing care information systems <ns0:ref type='bibr' target='#b2'>(Booth et al. (2021)</ns0:ref>). Medical records, images of x-ray, ultrasound, magnetic resonance imaging, positron-emission tomography become rich and diverse.</ns0:p><ns0:p>Modern medicine with information technology applications can make disease diagnosis faster based on various clinical symptoms and subclinical symptoms (subclinical diagnosis). In subclinical diagnostics cases, doctors usually evaluate and examine images generated and screened from medical imaging devices and equipment. Modern and high-tech medical machines with computer support software make the image clearer and more accurate with a very high resolution. The diagnostic imaging methods are diversified, such as radiological diagnosis, ultrasound imaging, ultrasound -color Doppler, endoscopic images (commonly used as gastrointestinal endoscopy and urinary endoscopy), Computed Tomography (CT) Scanner, Magnetic Resonance Imaging (MRI) and so on.</ns0:p><ns0:p>Image segmentation is to divide a digital image into various parts, which can be the collections of pixels or superpixels <ns0:ref type='bibr' target='#b36'>(Shapiro (2001)</ns0:ref>). The goal of image segmentation aims to simplify and or represent an image into something more meaningful and easier to analyze. 
In recent years, deep learning algorithms have provided great tools for medical segmentation, which plays an essential role in disease diagnosis and is one of the most crucial tasks in medical image processing and analysis. Diagnostic based on segmented medical images holds an essential contribution to improving accuracy, timeliness, and disease diagnosis efficiency <ns0:ref type='bibr' target='#b35'>(Saood and Hatem (2021)</ns0:ref>; &#220;mit <ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b50'>Zhou et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Raj et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b48'>Yakubovskiy (2020)</ns0:ref>). For example, for ultrasound images, the physician and doctors can accurately detect and measure the size of the solid organs in the abdomen and abnormal masses on the segmented areas in the images <ns0:ref type='bibr' target='#b27'>(Ouahabi and Taleb-Ahmed (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Zhou et al. (2021)</ns0:ref>). Another type of medical image is Chest x-ray <ns0:ref type='bibr' target='#b28'>(Rahman et al. (2020)</ns0:ref>), where cancer tumors can be detected and segmented for surgeons and efficient treatment monitors. The CT scanner images marked abnormal regions also help physicians identify signs of brain diseases, especially identifying intracranial hematoma, brain tumors <ns0:ref type='bibr' target='#b31'>(Ramesh et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>Munir et al. (2020)</ns0:ref>). Signs of the disease can be revealed via such segmented images, but sometimes these signs can be too small to be observed by humans. Moreover, in a short time, doctors may have many patients for diagnosis simultaneously. Besides, it takes so much time to train a doctor to perform medical image analysis. Artificial intelligence algorithms can outperform human ability in image classification and provide techniques to interpret the results <ns0:ref type='bibr' target='#b9'>(Geirhos et al. (2018)</ns0:ref>). Therefore, leveraging artificial intelligence's development with segmentation image techniques is crucial to accelerate medicine advancement and improve human health. With an outbreak of the COVID-19 pandemic, it requires significant efforts from all citizens worldwide to stop the pandemic, but human resources seem insufficient. Technology-based medical approaches are necessary and urgent to support humans to reduce and prevent the pandemic, so algorithms on image processing for diagnosis of COVID-19 have attracted the attention of numerous scientists. The efficiency of the Unet family in image segmentation and outstanding image classification performance of well-known convolutional neural network architectures revealed in a vast of previous studies has brought a significant research question on the benefits of their &#8226; The visualizations including segmented regions in lungs are examined with various metric performances and exhibit similar infected areas compared to the ground truths.</ns0:p><ns0:p>&#8226; Augmentation on the COVID-19 lung images dataset is performed with mirror, contrast, and brightness transforms. In addition, gamma and Gaussian noise are manipulated before adding spatial transforms. Such techniques help to enhance the segmentation performance. 
Besides, we evaluate the performance of COVID-19 segmentation without augmentation techniques.</ns0:p><ns0:p>&#8226; The training time and inference time are measured and compared among the considered configurations.</ns0:p><ns0:p>&#8226; From the obtained results, we found that the integration between SE ResNeXt and Unet++ has revealed the best performance in COVID-19 segmentation tasks.</ns0:p><ns0:p>The rest of this study is organized as follows. Section 2 introduces the main related works. Section 3 presents a brief introduction of the segmentation network based on encoder-decoder architectures, the dedicated loss function for segmentation tasks, and augmentation techniques. Afterward, we present our settings, the public COVID-19 dataset, and the evaluation metrics in Section 4. Section 5 exhibits and analyzes the obtained results. We conclude the study and discuss future work in Section 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Machine learning for medical imaging analysis has gained popularity in recent years. Advancements in computer techniques have also been proposed with an increase in quality and quantity. To support doctors better, researchers have focused on model explanation and segmentation methods for medical images.</ns0:p><ns0:p>Section 2.1 examines the robust studies of deep learning in the healthcare sector. Section 2.2 reviews the main related approaches in the domain of COVID-19 detection from CT images. <ns0:ref type='bibr' target='#b32'>Rav&#236; et al. (2017)</ns0:ref> presented several robust applications of deep learning to health informatics. We have obtained benefits from rapid improvements in computational power, fast data storage, and parallelization, and so there are more and more efficient proposed models for health services. Furthermore, <ns0:ref type='bibr' target='#b38'>Srivastava et al. (2017)</ns0:ref> gave us an overview of recent trends and future directions in health informatics using deep learning. In another study, <ns0:ref type='bibr' target='#b15'>Huynh et al. (2020)</ns0:ref> introduced a shallow convolutional neural network (CNN) architecture with only a few convolution layers to perform the skin lesions classification, but the performance is considerable. The authors conducted the experiments on a dataset including 25,331 samples.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Applications of deep learning in healthcare</ns0:head><ns0:p>For discriminating between melanoma and vascular lesion, the proposed model obtained an accuracy of 0.961 and an Area Under the Curve (AUC) of 0.874. Several studies deal with abnormality bone detection <ns0:ref type='bibr' target='#b3'>(Chetoui and Akhloufi (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Varma et al. (2019)</ns0:ref>) also revealed a promising result. For instance, Chetoui and Akhloufi (2020) developed a deep learning architecture based on EfficientNet <ns0:ref type='bibr'>(Tan and Le (2019)</ns0:ref>) to detect referable diabetic and diabetic retinopathy on two public datasets, namely EyePACS and APTOS 2019. The proposed method achieved the highest AUC of 0.990 and 0.998 on EyePACS and APTOS 2019, respectively. Similarly, an approach to detect abnormalities on musculoskeletal images has been proposed by using CNN architecture. The authors collected a massive dataset of 93,455 radiographs. 
The obtained AUC is recorded of 0.880, sensitivity and specificity of 0.714 and 0.961, respectively.</ns0:p><ns0:p>To recognize COVID-19 from chest CT images, a deep architecture named COVNet has been proposed by <ns0:ref type='bibr' target='#b21'>Li et al. (2020)</ns0:ref>. Community-acquired pneumonia and healthy control are utilized in the testing phase Manuscript to be reviewed Computer Science and collected from 6 hospitals. AUC, specificity, and sensitivity report the performance. For detecting COVID-19, the proposed method obtained an AUC of 0.96, a specificity of 0.96, and a sensitivity of 0.90. <ns0:ref type='bibr' target='#b37'>Shi et al. (2021)</ns0:ref> presented a review on emerging artificial intelligence technologies to support medical specialists. The authors also stated that 'image segmentation plays an essential role in COVID-19 applications'. <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref> proposed an approach for image tissue classification by leveraging segmentation networks, namely SegNet and U-Net. The purpose of using both models is to distinguish the infected and healthy lung tissue. The networks are trained on 72 and validated on ten images. The proposed method obtained an accuracy of 0.95 with SegNet and 0.91 with U-Net. Empirically, the authors also stated that the mini-batch size affects the performance negatively. &#220;mit <ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref> presented a new procedure for automatic segmentation of COVID-19 in CT images using SegNet and attention gate mechanism. A dataset with 473 CT images has been utilized as the evaluation data. The performance of the proposed method is judged based on Dice, Tversky, and focal Tversky loss functions. The authors reported that the obtained sensitivity, specificity, and dice scores are 92.73%, 99.51%, and 89.61%, respectively. <ns0:ref type='bibr' target='#b50'>Zhou et al. (2020)</ns0:ref> proposed an effective model to segment COVID-19 from CT images. In comparison to other existing studies <ns0:ref type='bibr' target='#b43'>( &#220;mit Budak et al. (2021)</ns0:ref>), the model obtained comparable results. For each CT slice, the proposed method takes 0.29 seconds to generate the segmented results and obtained a Dice of 83.1%, Hausdorff of 18.8. However, the method is conceived to segment the single class and on a small dataset. A recent approach <ns0:ref type='bibr' target='#b29'>(Raj et al. (2021)</ns0:ref>) leverages a depth network, namely ADID-UNET, to enhance the COVID-19 segmentation performance on CT images. The proposed method is evaluated on public datasets and achieved a 97.01% accuracy, a precision of 87.76%, and an F 1 score of 82.00%.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Applications of deep learning in COVID-19 detection</ns0:head></ns0:div> <ns0:div><ns0:head n='3'>METHOD</ns0:head><ns0:p>This Section contains 5 parts. Section 3.1 describes systematically the complete architecture of segmentation models. We present the explanation of the encoders and decoder for a general segmentation architecture are presented in Section 3.2 and Section 3.3 respectively. Afterwards, the description of loss function and several data augmentation methods are explained in Section 3.4 and Section 3.5 respectively. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Segmentation Network Architecture</ns0:head><ns0:p>The overall system architecture is visualized in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. 
We leveraged a segmentation model with an encoder-decoder architecture to discriminate the COVID-19 infections from the lung on the medical images. Afterward, the decoders control the critical stages by generating the masks that include the important regions. The final result is a 3 &#215; 512 &#215; 512 array containing three masks. The first mask consists of the regions that are neither lung nor COVID-19, the second mask includes the lung's pixels, whereas the final mask reveals the COVID-19 infection regions. Besides, the combinations of encoder and decoder architectures are presented in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. We considered 20 experimental configurations, i.e., C i with i = 1, ..., 20. In this study, we also present the process diagram of the segmentation model in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. The input is a medical image, and the output consists of two masks covering the lung and infection regions. <ns0:ref type='bibr' target='#b33'>Ronneberger et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b23'>Long et al. (2015)</ns0:ref> demonstrated that recent improvements in segmentation tasks rely more on encoder-decoder designs than on other architectures. Furthermore, this architecture was inspired by the Convolutional Neural Network <ns0:ref type='bibr' target='#b18'>(LeCun et al. (1989)</ns0:ref>) with the addition of a decoder network, which effectively tackles pixel-wise prediction.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Efficient deep learning model-based Encoder</ns0:head><ns0:p>In an attempt to improve the performance of the image segmentation task on medical images, we leveraged the advantages of numerous pre-trained models as encoders; these are modern architectures with impressive performance on classification tasks, such as EfficientNet <ns0:ref type='bibr'>(Tan and Le (2019)</ns0:ref>), ResNet <ns0:ref type='bibr' target='#b12'>(He et al. (2016)</ns0:ref>), or ResNeSt <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2020)</ns0:ref>). The responsibilities of the encoder are to learn features and to provide the initial low-resolution representations. The segmentation architecture then refines the encoder's outputs through the decoder network. To avoid the vanishing gradient problem <ns0:ref type='bibr' target='#b17'>(Kolen and Kremer (2009)</ns0:ref>) and to retrieve fine-grained information from previous layers, skip connections are utilized between the encoder and decoder networks or between their layers.</ns0:p><ns0:p>Assuming x denotes the input and the expected underlying mapping obtained by training is f (x), the block within the dotted-line box needs to apply the residual mapping f (x) &#8722; x. In the case of f (x) = x, the identity mapping is the desired underlying mapping.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Unet Family</ns0:head><ns0:p>We use the implementation of Unet <ns0:ref type='bibr' target='#b33'>(Ronneberger et al. (2015)</ns0:ref>), whose architecture includes three sections, namely the contraction, bottle-neck, and expansion sections. Generally, the contraction section is a combination of several contraction blocks. Each block consists of 3 &#215; 3 convolution and 2 &#215; 2 max-pooling layers.
The number of feature maps gets double at each max-pooling layer. It helps the architecture learn complex patterns effectively. Furthermore, the kernels of size 3 &#215; 3 are widely used as filters for the widespread deep neural networks <ns0:ref type='bibr' target='#b4'>(Chollet (2017)</ns0:ref>; <ns0:ref type='bibr'>Tan et al. (2019)</ns0:ref>). Besides, the model's performance also depends on the size of kernels and improves the efficiency of capturing high-resolution images. Similar to contraction blocks, the bottle-neck layers also consist of the 3 &#215; 3 convolution but followed by 2 &#215; 2 up convolution layers. The most crucial section of Unet architecture is the expansion section. This section consists of several expansion blocks, and the number of expansion blocks should be equal to the number of contractions. Each block also contains 3 &#215; 3 convolution and 2 &#215; 2 up convolution layers, half of the feature maps after each block are leveraged to maintain symmetry. Moreover, the feature maps corresponding with the contraction layers also include the input. Hence, the image will be reconstructed based on the learned features while contracting the image. To produce the output, a 1 &#215; 1 convolution layer is utilized to generate the feature maps with the number equal to the desired segments.</ns0:p><ns0:p>Also, <ns0:ref type='bibr' target='#b52'>Zhou et al. (2018</ns0:ref><ns0:ref type='bibr' target='#b53'>Zhou et al. ( , 2019) )</ns0:ref> Furthermore, the spatial resolution decreases as going up and detecting more high-level structures, the semantic value for each layer can potentially increase. Nevertheless, the bottom layers are in high resolution but can not be utilized for detection due to the semantic value is unsuitable for justifying the slow-down training computation based on it. By applying the 1 &#215; 1 convolution layer at the top-down pathway, the channel dimensions of feature maps from the bottom-up pathway can be reduced and become the top-down pathway's first feature map. Furthermore, element-wise addition is applied to merge the feature maps, the bottom-up pathway, and the top-down pathway.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Loss function for segmentation task</ns0:head><ns0:p>In deep learning and computer vision, the boundary detection definition comes from extracting features to produce significant representations of the objects. More specifically, boundary detection aims to identify object boundaries from images. Therefore, we can consider boundary detection as segmentation problems and target the boundaries to 1 and the rest of the image to 0 as the label. Thus, the loss function can be formulated with the classical function, namely Cross-entropy or Hinge loss. However, in terms of segmentation tasks, the classical loss function models can work imperfectly due to the highly unbalanced label distribution of each class and the per-pixel intrinsic of the classical loss function.</ns0:p><ns0:p>To enhance segmentation models' performance, we considered using Dice loss <ns0:ref type='bibr' target='#b24'>(Milletari et al. (2016)</ns0:ref>)</ns0:p><ns0:p>originates from the Dice coefficient. We will introduce more details about the Dice coefficient in section 4.3. The ground truth and predicted pixels can be considered as two sets. By leveraging Dice loss, the two sets are trained to overlap little by little, and the reduction of Dice loss can be obtained when the predicted pixels overlap only the ground-truth pixels. 
Furthermore, with Dice loss, the total number of pixels at the global scale is investigated as the denominator, whereas the numerator pays attention to the overlap between two sets at the local scale. Hence, the loss of information globally and locally is utilized by Dice loss and critically improves accuracy. Moreover, in thin boundaries, the model utilizing Dice loss can achieve better performance than others.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Data Augmentation</ns0:head><ns0:p>Deep learning requires more data to improve classification and regression tasks, although it is not easy to In this study, we adapted several techniques as follows. Firstly, the outlier will be cut off from the original image, and we set 10% and 90% for the cut-off lower and upper percentiles, respectively. Then, the low resolution of the cut-off outlier image is simulated. We also applied the mirror, contrast, and brightness transform. Furthermore, gamma and Gaussian noise are utilized before adding spatial transforms. Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> visualizes the original medical image, and the image leveraged data augmentation techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>8/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EVALUATION</ns0:head><ns0:p>In this section, we present our experimental configurations in Section 4.1. Furthermore, the research information referred as the COVID-19 dataset and the considered evaluation metrics are explained in Section 4.2 and Section 4.3 respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Settings for experiment</ns0:head><ns0:p>We implemented the trained models built based on Pytorch framework 3 . We also accelerated the training section by utilizing the weights trained on the ImageNet dataset <ns0:ref type='bibr' target='#b34'>(Russakovsky et al. (2015)</ns0:ref>). The processing includes three main phases: training and validation, as described in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. For the first stage, the medical images dataset is split into training and validation sections. Afterward, we evaluated several segmentation architectures and stored internal parameters with the highest performance. A robust computational resource is required for the segmentation model. Thus, we used a server with configurations listed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> to conduct our experiments. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Dataset</ns0:head><ns0:p>We investigated our approach's performance on the COVID-19 segmentation dataset from the Italian Society of Medical and Interventional Radiology 4 . The dataset includes 829 slices belonging to 9 axial volumetric CTs. Furthermore, the experienced radiologist has evaluated, segmented, and marked as COVID-19 on 373 out of the total of 829 slices. The medical images have been transformed to greyscaled with 630&#215;630 and in NIFTI file format. The segmented labels include the infection masks but also lung masks. Therefore, it could be more attractive for performing segmentation tasks on this dataset. 
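As an illustration of how such NIfTI slices can be prepared for training, the following minimal sketch (not the authors' code; the file names, the nibabel/OpenCV dependencies, and the resizing to the 512 &#215; 512 network input are our own assumptions) loads a CT volume with its lung and infection masks and converts each slice into an image/label pair:

```python
# Minimal sketch (not the authors' code): read a CT volume and its masks from
# NIfTI files and turn them into (image, label) pairs. Paths are hypothetical.
import nibabel as nib
import numpy as np
import cv2  # used here only for resizing to the assumed 512x512 network input

def load_slices(ct_path, lung_path, infection_path, size=512):
    ct = nib.load(ct_path).get_fdata()            # H x W x N_slices
    lung = nib.load(lung_path).get_fdata()
    infection = nib.load(infection_path).get_fdata()

    images, labels = [], []
    for k in range(ct.shape[-1]):
        img = ct[..., k].astype(np.float32)
        # simple intensity normalisation to [0, 1]
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        img = cv2.resize(img, (size, size), interpolation=cv2.INTER_LINEAR)

        # 3-class label map: 0 = background, 1 = lung, 2 = COVID-19 infection
        label = np.zeros(lung[..., k].shape, dtype=np.uint8)
        label[lung[..., k] > 0] = 1
        label[infection[..., k] > 0] = 2
        label = cv2.resize(label, (size, size), interpolation=cv2.INTER_NEAREST)

        images.append(img[None, ...])   # 1 x 512 x 512 input slice
        labels.append(label)
    return np.stack(images), np.stack(labels)
```

A split such as the 300 training and 73 testing slices listed in Table 4 would then be applied to the resulting arrays.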
</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Metrics for comparison</ns0:head><ns0:p>In this study, we considered two metrics to evaluate the performance of the proposed approach, namely the Dice <ns0:ref type='bibr' target='#b40'>(S&#248;rensen (1948)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Dice (1945)</ns0:ref>) and Jaccard coefficients. Intuitively, the segmentation performance is measured by evaluating the overlap between the predictions and the ground-truth object: results with more overlap with the ground truth indicate better performance than those with less overlap. Both the Dice and Jaccard indices lie in the range between 0 and 1. We assume that A and B are the prediction and ground-truth masks, respectively, for a given class. If A and B match perfectly, both the Dice and Jaccard indices are equal to 1; if A and B do not overlap at all, both are equal to 0. Thus, the Dice coefficient is evaluated as two times the area of overlap divided by the total number of pixels in both masks, as in Equation <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>Dice(A, B) = \frac{2\,\|A \cap B\|}{\|A\| + \|B\|}<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Like the Dice coefficient, the Jaccard index is one of the most common metrics in image segmentation tasks. The Jaccard index is also referred to as the Intersection-over-Union (IoU). Generally, the IoU is the overlap region between the predicted mask and the ground-truth mask divided by the union of the predicted mask and the ground-truth mask. For binary or multi-class segmentation, we calculate the IoU of each class and average the values to obtain the mean IoU. The Jaccard index, or IoU, is calculated as in Equation <ns0:ref type='formula'>2</ns0:ref>. Besides, Precision, Recall, and the F 1 score are also utilized to evaluate the segmentation performance. </ns0:p><ns0:formula xml:id='formula_1'>Jaccard(A, B) = IoU = \frac{\|A \cap B\|}{\|A \cup B\|} \quad (2)</ns0:formula><ns0:p>We also consider the following definitions:</ns0:p><ns0:p>&#8226; True Positive (TP): the number of pixels predicted as belonging to a class that match the ground-truth label.</ns0:p><ns0:p>&#8226; True Negative (TN): the pixels that do not belong to the ground truth and are not segmented.</ns0:p><ns0:p>&#8226; False Positive (FP): the predicted mask pixels that do not match the ground-truth mask.</ns0:p><ns0:p>&#8226; False Negative (FN): the pixels that belong to the ground truth but are not segmented correctly.</ns0:p><ns0:p>Furthermore, in terms of the confusion matrix, the Dice and IoU equations can be rephrased as in Equation 3 and Equation <ns0:ref type='formula'>4</ns0:ref>. We also illustrate the measurement of segmentation errors in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>Dice = \frac{2TP}{2TP + FP + FN} \quad (3) \qquad Jaccard = IoU = \frac{TP}{TP + FP + FN} \quad (4)</ns0:formula><ns0:p>The formula for computing Dice is equivalent to the F 1 score; in other words, Dice and F 1 take the same value. Moreover, Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> visualizes in detail the difference between the Dice and Jaccard/IoU indices.
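As a concrete illustration of Equations 1-4, the following short sketch (an illustration only, not the evaluation code used in the experiments) computes the Dice and Jaccard/IoU scores of a predicted binary mask against its ground truth; the quantity 1 &#8722; Dice is the Dice loss discussed in Section 3.4:

```python
# Illustration of Equations 1-4: Dice and Jaccard/IoU from two binary masks,
# expressed in their confusion-matrix form. Sketch for illustration only.
import numpy as np

def dice_and_iou(pred, gt):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # |A intersection B|
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()

    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)   # Equation 3, equal to the F1 score
    iou = tp / (tp + fp + fn + 1e-8)            # Equation 4
    return dice, iou

# e.g. dice, iou = dice_and_iou(prediction == 2, ground_truth == 2)
# evaluates the COVID-19 class; 1 - dice gives the Dice loss of Section 3.4.
```

For the multi-class setting, the same function is applied per class (background, lung, COVID-19), and the per-class IoU values are averaged to obtain the mean IoU.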
Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>, the left image represents the Dice coefficient, whereas the right 362 image exhibits the Intersection over Union between the predicted mask and the ground-truth mask.</ns0:p></ns0:div> <ns0:div><ns0:head>363</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>. The experimental results in details of various configurations described in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The data augmentation techniques are not utilized to perform the experiment. </ns0:p><ns0:formula xml:id='formula_3'>Configuration C 1 C 2 C 3 C 4 C 5 C 6 C 7 C 8 C 9 C 10 IoU Not Lung</ns0:formula></ns0:div> <ns0:div><ns0:head n='5'>EXPERIMENTAL RESULTS</ns0:head><ns0:p>In this section, we present in detail our experimental results. Section 5.1 presents the segmentation performance of the configurations introduced in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. All the models are trained over 100 epochs, and the model that achieved the best performance will be stored for inferring purposes. Afterward, the discussion of the proposed methods and the other systems is presented in section 5.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Segmentation performance on the medical image dataset</ns0:head><ns0:p>We describe the results of both cases, non-augmented and augmented data in Table <ns0:ref type='table' target='#tab_8'>5 and Table 6</ns0:ref> respectively. More specifically, the IoU, mean IoU (mIoU), Precision, Recall, and F 1 score of our experimental configurations are reported to express the performance when applying data augmentation techniques and vice versa. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In terms of the Dice coefficient, C 9 achieves 0.9016 for the COVID-19 category, being the best architecture.</ns0:p><ns0:p>Considering Precision and Recall, C 9 earns 0.9129, 0.8906, and gives a comparable performance to the others. Table <ns0:ref type='table' target='#tab_7'>6 presents</ns0:ref> We also visualize several samples with ground truth and the prediction masks in and Figure <ns0:ref type='figure'>8</ns0:ref>. The lung is colored by slate blue and orange for COVID-19. As observed from the results, our predictions express promising segmentation performance. In terms of complex COVID-19 infection regions as in Figure <ns0:ref type='figure'>8B</ns0:ref>, Figure <ns0:ref type='figure'>8C</ns0:ref>, or Figure <ns0:ref type='figure'>8E</ns0:ref>, the boundary of COVID-19 segmentation is equivalent to the corresponding ground-truth. For the different view as in Figure <ns0:ref type='figure'>8A</ns0:ref> and Figure <ns0:ref type='figure'>8D</ns0:ref>, the interesting regions are segmented quite correctly. Meanwhile, the lungs are also produced identically in comparison with the ground truth.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Benchmark</ns0:head></ns0:div> <ns0:div><ns0:head n='5.2.1'>Lung segmentation</ns0:head><ns0:p>The comparison of lung segmentation is presented in Table <ns0:ref type='table' target='#tab_10'>8</ns0:ref>. As observed from the results, our configurations with different decoders have outperformed the approach by <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref>. 
More specifically, the work of <ns0:ref type='bibr' target='#b35'>Saood and Hatem (2021)</ns0:ref> The proposed architecture of COVID-19 segmentation system on medical images.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2 https://www.who.int/emergencies/diseases/novel-coronavirus-2019, accessed 11 May 2021 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021) Manuscript to be reviewed Computer Science combinations to enhance the performance in COVID-19 lung CT image segmentation. This study has leveraged well-known deep learning architectures as encoders and Unet family, Feature Pyramid Network techniques as decoders to produce segmentation on chest slices for supporting COVID-19 diagnosis. The principal contributions include as follows: &#8226; Numerous configurations generated by combining five well-known deep learning architectures (ResNet, ResNeSt, SE ResNeXt, Res2Net, and EfficientNet-B0) and Unet family are evaluated and compared to the state-of-the-art to reveal the efficiency in the COVID-19 lung CT image segmentation. Moreover, we also include Feature Pyramid Network (FPN), a famous architecture for segmentation tasks in configurations for comparison.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The proposed architecture of COVID-19 segmentation system on medical images.</ns0:figDesc><ns0:graphic coords='5,141.73,444.68,413.58,212.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The process diagram of segmentation model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>identity mapping is the desired underlying mapping. The weights and biases of the block within the dotted-line box need to be set at 0.In the scope of this study, we leveraged ResNet, ResNeSt, SE ResNeXt, and Res2Net as the encoders to generate the features map and transfer the features to the decoder network segmenting the COVID-19 regions on medical images. ResNeSt is a new network inherited from ResNet with an attention mechanism, performed promising results on image classification and segmentation tasks. In this architecture, the feature maps are split into G groups where G = #Cardinality &#215; #Radix. The introduction of Cardinality is presented from Resnext (Xie et al. (2017)), which repeats the bottle-neck blocks and breaks channel information into smaller groups, whereas Radix represents the block of Squeeze-and-Excitation Networks (SENet) (Hu et al. (2018)). In summary, this architecture combines the Cardinality of Resnext and Attention of Squeeze and Excitation Networks to formulate the Split Attention. In other words, Split Attention is the modification of the gating mechanism. Recently, Gao et al. (2021) proposed a novel building block for CNN constructs hierarchical residual-like connections within one single residual block, namely Res2Net. In other words, the bottle-neck block of the ResNet architecture is re-designed and contributes to increasing the range of receptive fields for each network layer by representing multi-scale features at a granular level. By leveraging kernel size of 7 &#215; 7 instead of 3 &#215; 3, the computation of multi-scale feature extraction ability is enhanced but achieved a similar cost. 
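As a sketch of how the encoder and decoder components of Table 2 can be assembled, the snippet below uses the Segmentation Models PyTorch library (Yakubovskiy (2020)); the encoder identifiers, the hypothetical build_model helper, and the loss construction are illustrative assumptions, not necessarily the exact settings used in the experiments:

```python
# A sketch (not the exact experimental code) of pairing a pre-trained encoder
# with a Unet/Unet++/FPN decoder, as in the configurations of Table 2.
import segmentation_models_pytorch as smp

ENCODERS = {                      # illustrative identifiers (assumption)
    "ResNet": "resnet34",
    "SE ResNeXt": "se_resnext50_32x4d",
    "EfficientNet-B0": "efficientnet-b0",
}
DECODERS = {"Unet": smp.Unet, "Unet++": smp.UnetPlusPlus, "FPN": smp.FPN}

def build_model(encoder="SE ResNeXt", decoder="Unet++"):
    # hypothetical helper: the defaults correspond to configuration C13
    return DECODERS[decoder](
        encoder_name=ENCODERS[encoder],
        encoder_weights="imagenet",   # ImageNet pre-training (Section 4.1)
        in_channels=1,                # one greyscale CT slice
        classes=3,                    # background, lung, COVID-19 infection
    )

model = build_model()
loss_fn = smp.losses.DiceLoss(mode="multiclass")  # Dice loss of Section 3.4
```

Other encoder/decoder combinations of Table 2 follow the same pattern.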
3.3 Decoders for segmentation network The emergence of artificial intelligence and especially Convolutional Neural Network architectures in computer vision brings the field of image processing to light. Once considered untouchable, several image processing tasks now present promising results like image classification, image recognition, or image segmentation. The image segmentation task's primary purpose is to divide the image into different segment regions, representing the discriminate entity. Compared with classification tasks, segmentation tasks require the feature maps and reconstructing the feature maps' images. In this study, we leveraged the advantages of several CNN-based architectures as the decoder of segmentation models. 6/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>proposed an Unet-like architecture, namely Unet++. The advantages of Unet++ can be considered as capturing various levels of features, integrating features, and leveraging a shallow Unet structure. The discriminations between Unet and Unet++ are the skip connection associating two sub-networks and utilizing the deep supervision. The segmentation results are available at numerous nodes in the structure of Unet++ by training with deep supervision. Another Unet-like architecture has been proposed by<ns0:ref type='bibr' target='#b11'>Guan et al. (2020)</ns0:ref> for detaching artifacts from 2D photoacoustic tomography images.We leverage the Unit, Unet++, and Unet 2d as the decoder architectures to conduct the experiments. When compared with original Unet architecture, Unet++ and Unet2d have been built to reduce the semantic gap between the feature maps and efficiently learn the global and local features. FPN<ns0:ref type='bibr' target='#b22'>(Lin et al. (2017)</ns0:ref>) is also a famous architecture appropriate to segmentation tasks. FPN is a feature extractor with a single-scale image of a stochastic dimension and generates the feature maps with proportional size. The primary purpose of FPN is to build feature pyramids inside convolutional neural networks to be used in segmentation or object recognition tasks. The architecture of the FPN involves a bottom-up pathway and a top-down pathway. The bottom-up pathway defines a convolutional neural network with feature extraction, and it composes several convolution blocks. Each block consists of convolution layers. The last layer's output is leveraged as the reference set of feature maps for enriching the top-down pathway by lateral connection. Each lateral connection merges feature maps of the same spatial size from the bottom-up and top-down pathways. Thus, FPN architecture consists of a top-down pathway to construct higher resolution layers from a semantic-rich layer. To improve predicting locations' performance, we deploy the lateral to connect between reconstructed layers, and the corresponding feature maps are utilized due to the reconstructed layers are semantic strong, whereas the locations of objects are not precise. It works similarly to skip connections of ResNet.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>collect needed data. Augmentation techniques was introduced in the work of van Dyk and Meng (2001), and leveraged by Zoph et al. (2020); Frid-Adar et al. (2018); Xu et al. (2020); Ahn et al. (2020) in a vast of studies. 
More specific, data augmentation is the technique to abound the number of samples in the dataset by modifying the existing samples or generating newly synthetic data. Leveraging the advantages of the augmentation approach can help reduce over-fitting during the training section. In terms of image segmentation, the most general techniques for data augmentation are adjusting brightness or contrast, zoom in/out, cropping, shearing, rotation, noise, or flipping.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The visualization of the data augmentation techniques. The original medical image is on the left, whereas the image with data augmentation techniques is on the right.</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.85,239.02' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021) Manuscript to be reviewed Computer Science visualizes a sample of the dataset; the left CT slide presents the original image of a COVID-19 patient, the right image includes the lung, and the infection region visualizing by blue and orange color, respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A sample of the COVID-19 segmentation dataset. The left image presents a CT slice of a COVID-19 patient, whereas the right image visualizes the lung and infection region of the patient.</ns0:figDesc><ns0:graphic coords='11,141.73,99.34,413.57,147.05' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The illustration of the segmentation error.</ns0:figDesc><ns0:graphic coords='12,212.04,63.78,272.96,155.46' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The visualization of Dice and IoU. The left image presents the Dice coefficient, the right describes the IoU.</ns0:figDesc><ns0:graphic coords='12,212.04,336.01,272.95,139.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>the performance of our experimental configurations. The proposed networks are trained under data augmentation techniques. Overall, almost the configuration with decoder-based Unet family (C 1 to C 15 ) achieved mIoU over 0.9, whereas the architecture-based FPN obtained approximately at 0.8. Between the configurations, C 6 achieves the best performance with the mIoU obtained of 0.9283. Configuration C 6 represents the combination of ResNet and Unet2d. Furthermore, the second place gets the mIoU of 0.9234 with configuration C 13 trained with SE ResNeXt and Unet++ model as the encoder and decoder. As for lung and COVID-19 segmentation, the average IoU achieves 0.8 on overall configurations for COVID-19 infection regions and 0.94 for lung masks. The configurations of C 6 and C 13 exhibit promising performance on the COVID-19 segmentation task with obtained IoU of 0.8241 and 0.8234. Meanwhile, the configurations of C 13 and C 14 get the maximum F 1 score for the COVID-19 category with an obtained value of 0.9032 0.9031. By examining Precision and Recall, C 6 acquires the best precision performance, whereas C 13 gains the maximum Recall. Figure 7 depicts the confusion matrix for configuration C 13 . 
We also normalize the confusion matrix over rows for analyzing purposes. The confusion matrix values reveal that most of the misjudgments of COVID-19 infection regions are categorized as lung and vice versa. The comparison of not using data augmentation and using data augmentation reveals the architectures with FPN-based tend to be more effective when not utilizing augmentation techniques. It can be demonstrated by the results of C 16 , C 17 , and C 18 . By the Unet family decoder, the outcomes depict that the models with nearly equivalent performance, i.e., C 2 , C 4 , C 8 , C 10 , and C 12 . The most discriminate configurations are C 11 and C 14 . The performance of C 11 and C 14 are strongly boosted by applying data augmentation techniques. Furthermore, C 11 and C 14 also obtain better performance. In this respect, we can conclude that the augmentation techniques affect the results with almost all configurations. Furthermore, the training and inference times are reported in Table 7. The training presents the total time needs for 100 epochs, whereas the inference expresses the average time to segment each slice. With the Unet and FPN decoder, the architectures are trained with lower computation costs than the rest. The least and most time-consuming configurations for training/inference are C 16 and C 13 , respectively. As observed from the results, with the same encoders, architectures FPN decoder-based segment fastest for each slice.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>obtained a Dice of 0.7490 and Sensitivity of 0.9560 with the SegNet method, while our best configuration, C 13 , achieved a maximum Dice of 0.9748 and 14/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021) Manuscript to be reviewed Computer Science Sensitivity of 0.9755. In particular, by leveraging SE ResNeXt and Unet++ architecture, we get the maximum score compared to the others.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>(2021)), Unet<ns0:ref type='bibr' target='#b50'>(Zhou et al. (2020)</ns0:ref>) with attention mechanism, and ADID-UNET Raj et al. (2021). We compared our configurations with the others on Dice, Sensitivity, and Precision values. Our approach achieves a better performance in terms of Dice, Sensitivity, and Precision concerning three other methods. Specifically, the study of &#220;mit<ns0:ref type='bibr' target='#b43'>Budak et al. (2021)</ns0:ref> acquired a Dice of 0.8961 and a Sensitivity of 0.9273 with SegNet. Also, Zhou et al. (2020) leveraged Unet with attention mechanism and obtained the performance of 0.8310 and 0.8670 by Dice and Sensitivity, respectively. 
The</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,141.73,63.78,413.57,232.63' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,214.58,63.78,267.89,496.28' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,199.12,525.00,378.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,219.37,525.00,186.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,298.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,267.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,354.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The model structures with Unet family and FPN decoder.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ResNet 34</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell><ns0:cell>Input Layer</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 1 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 1 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 1 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 1 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 1 &#215; 512 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell><ns0:cell>Layer 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 64 &#215; 256 &#215;</ns0:cell><ns0:cell>Output: 32 &#215; 256 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell><ns0:cell>Layer 2</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 64 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 256 &#215; 128 &#215;</ns0:cell><ns0:cell>Output: 24 &#215; 128 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell><ns0:cell>Layer 3</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 128 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 
&#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 512 &#215; 64 &#215;</ns0:cell><ns0:cell>Output: 40 &#215; 64 &#215; 64</ns0:cell></ns0:row><ns0:row><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell><ns0:cell>Layer 4</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 256 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 1024 &#215; 32 &#215;</ns0:cell><ns0:cell>Output: 112 &#215; 32 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell><ns0:cell>Layer 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 512 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 2048 &#215; 16 &#215;</ns0:cell><ns0:cell>Output: 320 &#215; 16 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell><ns0:cell>Decoder Layer</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 16 &#215; 512 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell><ns0:cell>Segmentation</ns0:cell></ns0:row><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell><ns0:cell>Layer</ns0:cell></ns0:row><ns0:row><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell><ns0:cell>Output: 3 &#215; 512 &#215;</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The complete architecture and configurations of segmentation models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Configuration</ns0:cell><ns0:cell>Encoder</ns0:cell><ns0:cell>Decoder</ns0:cell><ns0:cell># of trainable</ns0:cell><ns0:cell>Model size</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>params</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>C 1</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>24,430,677</ns0:cell><ns0:cell>293.4 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 2</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>24,033,525</ns0:cell><ns0:cell>288.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 
3</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet</ns0:cell><ns0:cell>34,518,277</ns0:cell><ns0:cell>414.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 4</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>32,657,501</ns0:cell><ns0:cell>392.5 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 5</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>6,251,473</ns0:cell><ns0:cell>72.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 6</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 7</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 8</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet2d</ns0:cell><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 9</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 10</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>13,394,437</ns0:cell><ns0:cell>160.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 11</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>26,072,917</ns0:cell><ns0:cell>313.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 12</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>40,498,165</ns0:cell><ns0:cell>468.4 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 13</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>Unet++</ns0:cell><ns0:cell>50,982,917</ns0:cell><ns0:cell>612.5 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 14</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>49,122,141</ns0:cell><ns0:cell>590.2 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 15</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>6,569,585</ns0:cell><ns0:cell>76.1 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 16</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell /><ns0:cell>23,149,637</ns0:cell><ns0:cell>270.8 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 17</ns0:cell><ns0:cell>ResNeSt</ns0:cell><ns0:cell /><ns0:cell>17,628,389</ns0:cell><ns0:cell>211.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 18</ns0:cell><ns0:cell>SE ResNeXt</ns0:cell><ns0:cell>FPN</ns0:cell><ns0:cell>28,113,141</ns0:cell><ns0:cell>337.9 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 19</ns0:cell><ns0:cell>Res2Net</ns0:cell><ns0:cell /><ns0:cell>26,252,365</ns0:cell><ns0:cell>315.6 MB</ns0:cell></ns0:row><ns0:row><ns0:cell>C 20</ns0:cell><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell /><ns0:cell>5,759,425</ns0:cell><ns0:cell>66.3 MB</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Hardware and software configurations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>RAM</ns0:cell><ns0:cell>CPU</ns0:cell><ns0:cell>GPU</ns0:cell><ns0:cell>OS</ns0:cell></ns0:row><ns0:row><ns0:cell>Description</ns0:cell><ns0:cell>64GB</ns0:cell><ns0:cell>Intel&#174; i9-10900F</ns0:cell><ns0:cell>NVIDIA GeForce GTX</ns0:cell><ns0:cell>Ubuntu 20.04 LTS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>CPU @ 2.80GHz</ns0:cell><ns0:cell>2060 SUPER</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The information of the considered dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell># of 
samples</ns0:cell></ns0:row><ns0:row><ns0:cell>Lung Masks</ns0:cell><ns0:cell>829</ns0:cell></ns0:row><ns0:row><ns0:cell>Infection Masks</ns0:cell><ns0:cell>829</ns0:cell></ns0:row><ns0:row><ns0:cell>Infection Masks with COVID-19</ns0:cell><ns0:cell>373</ns0:cell></ns0:row><ns0:row><ns0:cell>Training set</ns0:cell><ns0:cell>300</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing set</ns0:cell><ns0:cell>73</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Experimental results in details with various configurations of the architectures mentioned and described in Table2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Nor</ns0:cell><ns0:cell cols='9'>0.9954 0.9948 0.9956 0.9956 0.9941 0.9956 0.9968 0.9956 0.9971 0.9954</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Lung</ns0:cell><ns0:cell cols='9'>0.9449 0.9407 0.9393 0.9464 0.9315 0.9492 0.9575 0.9475 0.9581 0.9460</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.7978 0.7860 0.7631 0.8059 0.7498 0.8175 0.8233 0.8177 0.8233 0.8127</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9127 0.9072 0.8993 0.9160 0.8918 0.9208 0.9259 0.9204 0.9262 0.9219</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung Nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9977 0.9975 0.9978 0.9978 0.9971 0.9978 0.9975 0.9977 0.9977 0.9978 0.9717 0.9695 0.9687 0.9725 0.9645 0.9724 0.9716 0.9725 0.9735 0.9722 0.8875 0.8802 0.8656 0.8925 0.8570 0.8996 0.8934 0.8993 0.9016 0.8967</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9986 0.9982 0.9981 0.9986 0.9988 0.9974 0.9971 0.9981 0.9977 0.9981 0.9691 0.9721 0.9656 0.9747 0.9633 0.9764 0.9707 0.9706 0.9718 0.9749 0.8683 0.8386 0.8737 0.8502 0.8036 0.8883 0.9145 0.8979 0.9129 0.8730</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9968 0.9967 0.9975 0.9969 0.9953 0.9981 0.9979 0.9974 0.9977 0.9976 0.9742 0.9669 0.9718 0.9702 0.9658 0.9684 0.9725 0.9745 0.9752 0.9696</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9077 0.9261 0.8577 0.9394 0.9180 0.9112 0.8732 0.9008 0.8906 0.9233</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Configuration C 11</ns0:cell><ns0:cell>C 12</ns0:cell><ns0:cell>C 13</ns0:cell><ns0:cell>C 14</ns0:cell><ns0:cell>C 15</ns0:cell><ns0:cell>C 16</ns0:cell><ns0:cell>C 17</ns0:cell><ns0:cell>C 18</ns0:cell><ns0:cell>C 19</ns0:cell><ns0:cell>C 20</ns0:cell></ns0:row><ns0:row><ns0:cell>IoU</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9959 0.9956 0.9959 0.9956 0.9950 0.9941 0.9942 0.9948 0.9944 0.9933 0.9488 0.9476 0.9475 0.9415 0.9358 0.9351 0.9305 0.9379 0.9369 0.9234</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.8168 0.8138 0.8073 0.7776 0.7608 0.7745 0.7311 0.7758 0.7786 0.7299</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mIOU</ns0:cell><ns0:cell cols='9'>0.9205 0.9191 0.9169 0.9049 0.8972 0.9013 0.8852 0.9029 0.9033 0.8822</ns0:cell></ns0:row><ns0:row><ns0:cell>F 1 -score/Dice</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung 
COVID-19</ns0:cell><ns0:cell cols='9'>0.9979 0.9978 0.9979 0.9978 0.9975 0.9971 0.9971 0.9974 0.9972 0.9966 0.9737 0.9731 0.9731 0.9646 0.9668 0.9665 0.9640 0.9680 0.9674 0.9602 0.8992 0.8973 0.8934 0.8749 0.8641 0.8729 0.8446 0.8738 0.8755 0.8439</ns0:cell></ns0:row><ns0:row><ns0:cell>Precision</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung COVID-19</ns0:cell><ns0:cell cols='9'>0.9988 0.9991 0.9989 0.9975 0.9981 0.9987 0.9986 0.9986 0.9982 0.9978 0.9726 0.9689 0.9744 0.9752 0.9722 0.9625 0.9666 0.9709 0.9665 0.9647 0.8742 0.8711 0.8522 0.8548 0.8139 0.8350 0.7823 0.8187 0.8445 0.7822</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall</ns0:cell><ns0:cell>Not Lung nor COVID-19 Lung</ns0:cell><ns0:cell cols='9'>0.9971 0.9964 0.9970 0.9981 0.9969 0.9954 0.9955 0.9962 0.9962 0.9954 0.9749 0.9773 0.9717 0.9646 0.9615 0.9705 0.9615 0.9651 0.9684 0.9557</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19</ns0:cell><ns0:cell cols='9'>0.9254 0.9252 0.9386 0.8959 0.9209 0.9144 0.9177 0.9370 0.9089 0.9162</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>12/19</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>reveals the results of inferring the trained model without using data augmentation techniques. The architectures built based on the Unet decoder family obtained better results compared to FPN. Among the proposed configurations, C 9 acquires the best mIoU of 0.9262, whereas C 7 gets 0.9259 in the second place.</ns0:figDesc><ns0:table /><ns0:note>13/19PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>The report (in second(s)) of training time (1) and inference time (2) of 20 configurations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>C 1</ns0:cell><ns0:cell>C 2</ns0:cell><ns0:cell>C 3</ns0:cell><ns0:cell>C 4</ns0:cell><ns0:cell>C 5</ns0:cell><ns0:cell>C 6</ns0:cell><ns0:cell>C 7</ns0:cell><ns0:cell>C 8</ns0:cell><ns0:cell>C 9</ns0:cell><ns0:cell>C 10</ns0:cell></ns0:row><ns0:row><ns0:cell>(1)</ns0:cell><ns0:cell>5,949</ns0:cell><ns0:cell>6,390</ns0:cell><ns0:cell>7,900</ns0:cell><ns0:cell>6,659</ns0:cell><ns0:cell>6,248</ns0:cell><ns0:cell>7,351</ns0:cell><ns0:cell>7,377</ns0:cell><ns0:cell>7,497</ns0:cell><ns0:cell>7,465</ns0:cell><ns0:cell>7,526</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>(2) 0.5753 0.5892 0.6164 0.6027 0.6301 0.5891 0.6438 0.6287 0.6287 0.6301</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C 11</ns0:cell><ns0:cell>C 12</ns0:cell><ns0:cell>C 13</ns0:cell><ns0:cell>C 14</ns0:cell><ns0:cell>C 15</ns0:cell><ns0:cell>C 16</ns0:cell><ns0:cell>C 17</ns0:cell><ns0:cell>C 18</ns0:cell><ns0:cell>C 19</ns0:cell><ns0:cell>C 20</ns0:cell></ns0:row><ns0:row><ns0:cell>(1)</ns0:cell><ns0:cell>6,656</ns0:cell><ns0:cell cols='2'>8,884 10,207</ns0:cell><ns0:cell>9,147</ns0:cell><ns0:cell>6,505</ns0:cell><ns0:cell>5,832</ns0:cell><ns0:cell>6,127</ns0:cell><ns0:cell>7,481</ns0:cell><ns0:cell>6,388</ns0:cell><ns0:cell>5,999</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>(2) 0.6109 0.6931 0.7328 0.7027 0.5986 0.5684 0.5889 0.6054 0.5972 0.5794</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 
.</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The performance of lung segmentation, the lung includes non of infected COVID-19 region</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell cols='2'>Dice Sensitivity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C4</ns0:cell><ns0:cell>0.9731</ns0:cell><ns0:cell>0.9728</ns0:cell></ns0:row><ns0:row><ns0:cell>Our Configurations</ns0:cell><ns0:cell>C6 C13</ns0:cell><ns0:cell>0.9734 0.9748</ns0:cell><ns0:cell>0.9707 0.9755</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C19</ns0:cell><ns0:cell>0.9661</ns0:cell><ns0:cell>0.9601</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Saood and Hatem (2021) SegNet</ns0:cell><ns0:cell>0.7490</ns0:cell><ns0:cell>0.9560</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 7. The visualization of confusion matrix for segmentation model-based SE ResNeXt and Unet++</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>5.2.2 COVID-19 segmentation</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>The summarization of quantitative results of infected COVID-19 regions, -represents for no relevant information in the original study.Table9reveals the the compared performance of the proposed approach with the state-of-the-art methods including SegNet ( &#220;mitBudak et al. </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell cols='3'>Dice Sensitivity Precision</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C1</ns0:cell><ns0:cell>0.8963</ns0:cell><ns0:cell>0.9224</ns0:cell><ns0:cell>0.8716</ns0:cell></ns0:row><ns0:row><ns0:cell>Our Configurations</ns0:cell><ns0:cell>C10 C13</ns0:cell><ns0:cell>0.9034 0.9032</ns0:cell><ns0:cell>0.9233 0.9349</ns0:cell><ns0:cell>0.8844 0.8711</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C17</ns0:cell><ns0:cell>0.8966</ns0:cell><ns0:cell>0.9179</ns0:cell><ns0:cell>0.8172</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>&#220;mit Budak et al. (2021) A-SegNet + FTL</ns0:cell><ns0:cell>0.8961</ns0:cell><ns0:cell>0.9273</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhou et al. (2020)</ns0:cell><ns0:cell cols='2'>Unet + attention mechanism 0.8310</ns0:cell><ns0:cell>0.8670</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Raj et al. (2021)</ns0:cell><ns0:cell>ADID-UNET</ns0:cell><ns0:cell>0.8031</ns0:cell><ns0:cell>0.7973</ns0:cell><ns0:cell>0.8476</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='3'>https://pytorch.org, Date accessed: 10 May 2021 4 http://medicalsegmentation.com/covid19/, accessed 10 March 2021</ns0:note> <ns0:note place='foot' n='15'>/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61111:2:1:CHECK 17 Aug 2021)</ns0:note> </ns0:body> "
"Dear Reviewers/Editors, Paper ID: 61111 (#CS-2021:05:61111:0:2:REVIEW) Original Title: Decoders configurations based on Unet family and Feature Pyramid Network for COVID-19 Segmentation on CT images Thank you so much for your comments. We have been able to incorporate changes as your suggestions with highlights in red. We look forward to hearing from you! Sincerely, Authors ----------- Editor comments The article is close to Acceptance. However, we noticed that you chose to include the references suggested by Reviewer 1. After a review, PeerJ staff and I do not feel that these references were the most appropriate to have included. Therefore, I ask you to critically evaluate the citations which were added and remove them, unless you are strongly of the opinion that they are highly relevant to your manuscript. → Thank you so much for your comments and suggestions. We checked and critically evaluated the citations. We decided to remove them, which were not the most appropriate. We have been able to incorporate changes as your suggestions with highlights in red. "
Here is a paper. Please give your review comments after reading it.
239
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Edge preserving filters aim to simplify the representation of images (e.g., by reducing noise or eliminating irrelevant detail) while preserving their most significant edges. These filters are typically nonlinear and locally smooth the image structure while minimizing both blurring and over-sharpening of visually important edges. Here we present the Alternating Guided Filter (AGF) that achieves edge preserving smoothing by combining two recently introduced filters: the Rolling Guided Filter (RGF) and the Smooth and iteratively Restore Filter (SiR). We show that the integration of RGF and SiR in an alternating iterative framework results in a new smoothing operator that preserves significant image edges while effectively eliminating small scale details. The AGF combines the large scale edge and local intensity preserving properties of the RGF with the edge restoring properties of the SiR while eliminating the drawbacks of both previous methods (i.e., edge curvature smoothing by RGF and local intensity reduction and restoration of small scale details near large scale edges by SiR). The AGF is simple to implement and efficient, and produces high-quality results. We demonstrate the effectiveness of AGF on a variety of images, and provide a public code to facilitate future studies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Natural scenes contain meaningful visual elements at different spatial scales. Small elements typically represent texture, small objects, and noise, while large scale structures generally represent object or region boundaries, spatial color transitions, or homogeneous regions. Spatial filtering is a common operation in image processing and computer vision that is typically used to reduce noise or eliminate small spurious details (e.g., texture) and enhance image contrast. In spatial filtering the value of the filtered image at a given location is a function of the original pixel values in a small neighborhood of the same location. In linear filtering the value of an output pixel is a linear combination (e.g., a weighted average) of the values of the pixels in the input pixel's neighborhood. When the image intensity varies smoothly over space, nearby pixels are likely to have similar values (i.e., they will be correlated). In contrast, the noise that corrupts these pixel values will be less correlated than the signal values, so that averaging reduces noise while preserving the mean signal value. However, the assumption of smooth variation evidently does not hold near edges. Thus, although linear filtering can effectively reduce image noise and eliminate unwanted details smaller than the filter kernel size, it degrades the articulation of (blurs) the remaining edges, lines and other details that are important for the interpretation of the image. Therefore, edge preserving filters have been developed that reduce small (relative to the filter kernel size) scale image variations (noise or texture) while preserving larger scale discontinuities (edges). Some well-known non-linear edge-preserving smoothing filters are for instance anisotropic diffusion <ns0:ref type='bibr' target='#b5'>(Perona &amp; Malik, 1990)</ns0:ref>, robust smoothing <ns0:ref type='bibr' target='#b0'>(Black et al., 1998)</ns0:ref> and the bilateral filter <ns0:ref type='bibr' target='#b7'>(Tomasi &amp; Manduchi, 1998)</ns0:ref>. 
However, anisotropic diffusion tends to oversharpen edges (i.e., it produces halos) and is computationally expensive, which makes it less suitable for application, for instance, in multiresolution schemes <ns0:ref type='bibr' target='#b1'>(Farbman et al., 2008)</ns0:ref>. The nonlinear bilateral filter (BLF) assigns each pixel a weighted mean of its neighbors, with the weights decreasing both with spatial distance and with difference in value <ns0:ref type='bibr' target='#b7'>(Tomasi &amp; Manduchi, 1998)</ns0:ref>. While the BLF is quite effective at smoothing small intensity changes while preserving strong edges and has efficient implementations, it also tends to blur across edges at larger spatial scales, thereby limiting its value for application in multiscale image decomposition schemes <ns0:ref type='bibr' target='#b1'>(Farbman et al., 2008)</ns0:ref>. In addition, the BLF has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: <ns0:ref type='bibr' target='#b2'>He, Sun &amp; Tang, 2013)</ns0:ref>. In the joint (or cross) bilateral filter (JBLF) a second or guidance image serves to steer the edge stopping range filter thus preventing over-or under-blur near edges <ns0:ref type='bibr' target='#b6'>(Petschnigg et al., 2004)</ns0:ref>. The recently introduced Guided Filter (GF: He, Sun &amp; Tang, 2013) is a computationally efficient, edge-preserving translation-variant operator based on a local linear model which avoids the drawbacks of bilateral filtering and other previous approaches. When the input image also serves as the guidance image, the GF behaves like the edge preserving BLF. However, similar to the BLF, the GF also tends to produce halos near significant edges. Two recently presented iterative guided filtering frameworks have the ability to perform edge preserving filtering without the introduction of halos. <ns0:ref type='bibr' target='#b9'>Zhang et al. (2014)</ns0:ref> showed that the application of the JBLF in an iterative Rolling Guided Filter (RGF) framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. However, the RGF also has a drawback: it tends to smooth the curvature of large scale edges. <ns0:ref type='bibr' target='#b3'>Kniefacz and Kropatsch (2015)</ns0:ref> recently introduced a similar framework called the Smooth and iteratively Restore (SiR) filter. SiR restores large scale edges while preserving their curvature. However, SiR also has two drawbacks: it reduces the local image intensity and restores small scale details in the neighborhood of large scale edges. In this paper we propose an Alternating Guided Filter (AGF) scheme that integrates the RGF and SiR in a single framework, resulting in a new image smoothing operator that preserves significant edges while effectively eliminating small scale details. AGF combines the edge preserving properties of both the RGF (i.e., the elimination of small details in combination with local intensity preservation) and the SiR (i.e., the articulated restoration of large scale edges) while it does not suffer from their respective drawbacks (the curvature smoothing of large scale edges by RGF and local intensity reduction in combination with the restoration of small scale details near large scale edges by SiR). The rest of this paper is organized as follows. 
Section 2 briefly discusses some related work on edge preserving filtering and introduces the RGF and SiR filter techniques on which the proposed AGF scheme is based. Section 3 presents the proposed Alternating Guided Filter scheme. Section 4 presents the results of the application of the AGF filter to natural and synthetic images, compares the performance of this new framework with the RGF and SiR filter schemes, and provides some runtime estimates. Finally, in Section 5 the conclusions of this study are presented.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Related work</ns0:head><ns0:p>In this section we briefly review the edge preserving bilateral and joint bilateral filters and show how they are related to the guided filter. We also discuss two recently introduced frameworks that iteratively apply joint bilateral filters to achieve selective guided filtering of small scale details in combination with large scale edge restoration.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Bilateral Filter</ns0:head><ns0:p>The bilateral filter is a non-linear filter that computes the output at each pixel as a weighted average of its neighbors, with Gaussian weights that decrease with both spatial distance and intensity distance. It prevents blurring across edges by assigning larger weights to pixels that are spatially close and have similar intensity values <ns0:ref type='bibr' target='#b7'>(Tomasi &amp; Manduchi, 1998)</ns0:ref>. It uses a combination of a (typically Gaussian) spatial filter kernel and a range (intensity) filter kernel that together perform a blurring in the spatial domain weighted by the local variation in the intensity domain. It combines a classic low-pass filter with an edge-stopping function that attenuates the filter kernel weights at locations where the intensity difference between pixels is large. Bilateral filtering was developed as a fast alternative to the computationally expensive technique of anisotropic diffusion, which uses gradients of the filtered image itself to guide a diffusion process, thus avoiding edge blurring <ns0:ref type='bibr' target='#b5'>(Perona &amp; Malik, 1990)</ns0:ref>. More formally, at a given image location (pixel) i, the filtered output O_i is given by:</ns0:p><ns0:formula xml:id='formula_0'>O_i = \frac{1}{K_i} \sum_{j \in \Omega} I_j \, f(\lVert i - j \rVert) \, g(\lVert I_i - I_j \rVert) \qquad (1)</ns0:formula><ns0:p>where f is the spatial filter kernel (e.g., a Gaussian centered at i), g is the range or intensity (edge-stopping) filter kernel (centered at the image value at i), Ω is the spatial support of the kernel, and K_i is a normalizing factor (the sum of the f·g filter weights). The bilateral filter is controlled by only two parameters: the extents of the spatial kernel and the range kernel, respectively. Intensity edges are preserved since the bilateral filter weights decrease not only with the spatial distance but also with the intensity distance. Though the filter has efficient implementations <ns0:ref type='bibr' target='#b4'>(Paris &amp; Durand, 2006)</ns0:ref> and effectively reduces noise while preserving edges in many situations, it has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only a few similar pixels in its neighborhood <ns0:ref type='bibr' target='#b2'>(He, Sun &amp; Tang, 2013)</ns0:ref>). 
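To make Equation (1) concrete, a brute-force NumPy sketch of the bilateral filter is given below. It assumes a single-channel image scaled to [0, 1]; the function name, the Gaussian truncation radius and the direct windowed implementation are illustrative choices and not the Matlab code used later in this study.

```python
import numpy as np

def bilateral_filter(img, sigma_spatial=5.0, sigma_range=0.05, radius=None):
    """Brute-force bilateral filter of Equation (1) for a grayscale image in [0, 1].

    Written for clarity rather than speed; efficient implementations exist
    (Paris & Durand, 2006)."""
    if radius is None:
        radius = int(3 * sigma_spatial)              # truncate the Gaussian support
    pad = np.pad(img.astype(float), radius, mode='reflect')
    out = np.empty_like(img, dtype=float)

    # Spatial kernel f(||i - j||): depends only on pixel offsets, so precompute it.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_spatial ** 2))

    height, width = img.shape
    for r in range(height):
        for c in range(width):
            window = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            # Range kernel g(||I_i - I_j||): penalizes intensity differences.
            g = np.exp(-((window - img[r, c]) ** 2) / (2.0 * sigma_range ** 2))
            weights = f * g
            out[r, c] = np.sum(weights * window) / np.sum(weights)   # 1/K_i normalization
    return out
```

Setting sigma_range very large makes the range kernel nearly uniform, so the filter degenerates to plain Gaussian smoothing of the window; the two kernel extents are the only tuning parameters, as noted above.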
In the joint (or cross) bilateral filter (JBLF) the range filter g is applied to a second or guidance image G <ns0:ref type='bibr' target='#b6'>(Petschnigg et al., 2004)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_1'>1 (|| ||) (|| ||) i j i j j i O I f i j g G G K &#61646;&#61527; &#61501; &#61655; &#61485; &#61655; &#61485; &#61669; (2)</ns0:formula><ns0:p>The JBLF can prevent over-or under-blur near edges by using a related image G to guide the edge stopping behavior of the range filter. That is, the JBLF smooths the image I while preserving edges that are also represented in the image G . The JBLF is particularly favored when the edges in the image that is to be filtered are unreliable (e.g., due to noise or distortions) and when a companion image with well-defined edges is available (e.g., in the case of flash /no-flash image pairs).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Guided Filter</ns0:head><ns0:p>A Guided Filter (GF: He, Sun &amp; Tang, 2013) is a translation-variant filter based on a local linear model. Guided image filtering involves an input image I , a guidance image G (which may be identical to the input image), and an output image O . The two filtering conditions are (i) that the local filter output is a linear transform of the guidance image G and (ii) as similar as possible to the input image I . The first condition implies that</ns0:p><ns0:formula xml:id='formula_2'>i k i k k O a G b i &#61559; &#61501; &#61483; &#61474; &#61646;<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where k &#61559; is a square window of size ( <ns0:ref type='formula'>2</ns0:ref> </ns0:p><ns0:formula xml:id='formula_3'>E : &#61480; &#61481; 22 ( , ) ( ) k k k k i k i k i E a b a G b I a &#61559; &#61541; &#61646; &#61501; &#61483; &#61485; &#61483; &#61669; (4)</ns0:formula><ns0:p>where &#61541; is a regularization parameter penalizing large <ns0:ref type='table'>2016:04:10165:1:1:NEW 2 Jun 2016)</ns0:ref> Manuscript to be reviewed </ns0:p><ns0:formula xml:id='formula_4'>Computer Science 2 1 || k i i k k i k k G I G I a &#61559; &#61559; &#61555;&#61541; &#61646; &#61485; &#61501; &#61483; &#61669; (5) k k k k b I a G &#61501;&#61485;<ns0:label>(</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>&#61480; &#61481; | 1 || k i k k k ki O a G b &#61559; &#61559; &#61646; &#61501;&#61483; &#61669; (7) Since | ki kk k i k aa &#61559;&#61559; &#61646;&#61646; &#61501;</ns0:formula><ns0:p>&#61669;&#61669; due to the symmetry of the box window Equation ( <ns0:ref type='formula'>7</ns0:ref>) can be written as</ns0:p><ns0:formula xml:id='formula_6'>i i i i O a G b &#61501;&#61483; (8)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_7'>1 || i ik k aa &#61559; &#61559; &#61646; &#61501; &#61669; and 1 || i ik k bb &#61559; &#61559; &#61646; &#61501; &#61669;</ns0:formula><ns0:p>are the average coefficients of all windows overlapping i . Although the linear coefficients ( , ) ii ab vary spatially, their gradients will be smaller than those of G near strong edges (since they are the output of a mean filter). As a result we have OaG &#61649; &#61627; &#61649; , meaning that abrupt intensity changes in the guiding image G are still largely preserved in the output image O . Equations ( <ns0:ref type='formula'>5</ns0:ref>), ( <ns0:ref type='formula'>6</ns0:ref>) and ( <ns0:ref type='formula'>8</ns0:ref>) define the GF. 
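A compact sketch of the guided filter defined by Equations (5), (6) and (8) is given below. The SciPy box-filter implementation and the parameter names radius and eps are illustrative choices rather than the authors' code, and single-channel float images are assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, G, radius=5, eps=1e-3):
    """Guided filter: per-window linear model O = a*G + b (Equations 5, 6 and 8).

    I is the input image and G the guidance image, both float arrays in [0, 1]."""
    def box(x):
        # Mean over a (2*radius + 1) x (2*radius + 1) window, i.e. the 1/|w| sums.
        return uniform_filter(x.astype(float), size=2 * radius + 1, mode='reflect')

    mean_G, mean_I = box(G), box(I)
    cov_GI = box(G * I) - mean_G * mean_I       # numerator of Equation (5)
    var_G = box(G * G) - mean_G * mean_G        # sigma_k^2 in Equation (5)

    a = cov_GI / (var_G + eps)                  # Equation (5)
    b = mean_I - a * mean_G                     # Equation (6)

    # Average the coefficients of all windows overlapping each pixel (Equation 8).
    return box(a) * G + box(b)
```

The same routine is reused further on as the joint edge-aware filter inside the iterative framework sketches.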
When the input image also serves as the guidance image, the GF behaves like the edge preserving bilateral filter, with the parameters &#61541; and the window size r having the same effects as respectively the range and the spatial variances of the bilateral filter. Equation ( <ns0:ref type='formula'>8</ns0:ref>) can be rewritten as</ns0:p><ns0:formula xml:id='formula_8'>() i ij j j O W G I &#61501; &#61669; (9)</ns0:formula><ns0:p>with the weighting kernel ij W depending only on the guidance image G :</ns0:p><ns0:formula xml:id='formula_9'>22 :( , ) ( )( ) 1 1 || k i k j k ij k i j k G G G G W &#61559; &#61559; &#61555; &#61541; &#61646; &#61670;&#61686; &#61485;&#61485; &#61501;&#61483; &#61671;&#61687; &#61671;&#61687; &#61483; &#61672;&#61688; &#61669; (10) Since ( ) 1 ij j WG&#61501;</ns0:formula><ns0:p>&#61669; this kernel is already normalized. The GF is a computationally efficient, edge-preserving operator which avoids the gradient reversal artefacts (i.e., over sharpening of edges) of the BLF. However, just like the BLF, GF has the limitation that it tends to produce halos (unwanted blurring) near large scale edges <ns0:ref type='bibr' target='#b2'>(He, Sun &amp; Tang, 2013)</ns0:ref>. <ns0:ref type='bibr' target='#b9'>Zhang et al. (2014)</ns0:ref> showed that the application of the joint bilateral filter (Equation <ns0:ref type='formula'>2</ns0:ref>) in an iterative framework results in effective size selective filtering of small scale details combined with the recovery of larger scale edges. In their Rolling Guidance Filter (RGF) framework the G &#61483; of the t-th iteration is obtained from the joint bilateral filtering of the input image I using the result t G of the previous iteration step as the guidance image:</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Rolling Guidance Filter</ns0:head><ns0:formula xml:id='formula_10'>1 1 (|| ||) (|| ||) t t t i j i j j i G I f i j g G G K &#61483; &#61646;&#61527; &#61501; &#61655; &#61485; &#61655; &#61485; &#61669; (11)</ns0:formula><ns0:p>In the RGF scheme details smaller than the Gaussian kernel of the bilateral filter are initially removed while the edges of the remaining details are restored by iteratively updating the guidance image. At the start of the iteration process the term || || tt ij GG &#61485; is almost zero, making the range filter g inoperative, so that the joint bilateral filter effectively behaves like a Gaussian filter. Details removed by this filter cannot be recovered later in the process. After each iteration step the influence of the range filter gradually increases due to its increasing weights. Hence, guided by the input image I, the RGF scheme gradually restores the edges that remain after the initial Gaussian filtering. Note that the initial guidance image 1 G can simply be a constant (e.g., zero) valued image since it updates to the Gaussian filtered input image in the first iteration step. The pseudocode for the RGF scheme is presented in Table <ns0:ref type='table'>1</ns0:ref>. The RGF scheme is not restricted to the JBF <ns0:ref type='bibr' target='#b9'>(Zhang et al., 2014)</ns0:ref>. Any average-based joint edge aware averaging filter (e.g., the Guided Filter) can be applied in the RGF framework. In practice the RGF converges rapidly after only a few (3-5) iterations steps. The RGF scheme is simple to implement and can be efficiently computed. Although the RGF enables effective and efficient size selective filtering, it has as drawback that the remaining large scale edges are smoothly curved. 
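Once a joint edge-aware filter is available, the RGF loop of Table 1 is short. The sketch below reuses the guided_filter function from the previous example in that role, which the preceding paragraph notes is permissible; the original RGF uses the joint bilateral filter of Equation (2), and five iterations reflect the convergence behaviour reported above.

```python
import numpy as np

def rolling_guidance_filter(I, radius=5, eps=1e-3, iterations=5):
    """Rolling Guidance Filter (Table 1): filter the fixed input I, guided by the
    result G^t of the previous step, as in Equation (11).

    The guided_filter sketch above stands in for the joint bilateral filter; as
    noted in the text, any average-based joint edge-aware filter can be used."""
    G = np.zeros_like(I, dtype=float)        # G^1: a constant (zero) image
    for _ in range(iterations):
        G = guided_filter(I, G, radius=radius, eps=eps)   # G^{t+1} = filter(I | guidance G^t)
    return G
```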
</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Smooth and iteratively Restore Filter</ns0:head><ns0:p><ns0:ref type='bibr' target='#b3'>Kniefacz and Kropatsch (2015)</ns0:ref> recently introduced the Smooth and iteratively Restore (SiR) filter. Similar to RGF, SiR initially removes small scale details (e.g., through Gaussian filtering) and uses an edge-aware filter to iteratively restore larger scale edges. Whereas RGF iteratively filters the (fixed) input image while iteratively restoring the initially blurred guidance image, SiR does the opposite: it gradually restores the initially blurred image using the original input image as (fixed) guidance. To this end, small details are first removed through blurring with a Gaussian kernel:</ns0:p><ns0:formula xml:id='formula_11'>G_i^0 = \frac{1}{K_i} \sum_{j \in \Omega} I_j \, f(\lVert i - j \rVert) \qquad (12)</ns0:formula><ns0:p>Then, the edges of the remaining details are iteratively restored by repeatedly computing an updated image G^{t+1} through joint bilateral filtering of the blurred image G^t, using the input image I as the guidance image:</ns0:p><ns0:formula xml:id='formula_12'>G_i^{t+1} = \frac{1}{K_i} \sum_{j \in \Omega} G_j^t \, f(\lVert i - j \rVert) \, g(\lVert I_i - I_j \rVert) <ns0:label>(13)</ns0:label></ns0:formula><ns0:p>Similar to RGF, SiR converges rapidly after only a few (3-5) iteration steps. Compared to RGF, SiR has the advantage that it produces edges with articulated curvature, since it attempts to restore them as similar as possible to the edges in the original image. In contrast, the curvature of edges produced by RGF is smoothed (less articulated). SiR also has two drawbacks <ns0:ref type='bibr' target='#b3'>(Kniefacz &amp; Kropatsch, 2015)</ns0:ref>. First, the overall intensity of the result is lower than that of the original image, since SiR restores edges starting from the initially blurred input image. RGF does not have this problem because it operates on the original image itself. Second, SiR tends to restore small scale details that are close to large scale edges. This is a result of a spill-over effect (blurring) of the large scale edges, which enables the JBF (Equation <ns0:ref type='formula' target='#formula_12'>13</ns0:ref>) to restore small scale details guided by the original input image. As will be shown in Section 4.1, this restoration of small details near larger edges by SiR can effectively be suppressed by applying a median filter at the end of each iteration step, with a kernel that is smaller than the Gaussian kernel of the bilateral spatial filter (e.g., 3×3). At each iteration step the median filter 'cleans' the image by removing small details that are recovered by the bilateral filter. In the rest of this paper we will refer to this modification of the original SiR algorithm as SiRmed (Smooth and iteratively Restore including median filtering). The pseudocode for the SiRmed algorithm is presented in Table <ns0:ref type='table'>2</ns0:ref>. 
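With the same stand-in for the joint bilateral filter, SiR (Equations 12 and 13) and its SiRmed variant of Table 2 can be sketched as follows; the Gaussian blur width and the other parameter values are placeholders rather than the settings used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def sir_filter(I, sigma_blur=5.0, radius=5, eps=1e-3, iterations=5, use_median=True):
    """Smooth and iteratively Restore (Equations 12 and 13); use_median=True adds
    the 3x3 median clean-up of Table 2, i.e. the SiRmed variant.

    The guided_filter sketch above again stands in for the joint bilateral filter."""
    G = gaussian_filter(I.astype(float), sigma=sigma_blur)    # Equation (12): remove small details
    for _ in range(iterations):
        # Equation (13): restore edges of the blurred image, guided by the input I.
        G = guided_filter(G, I, radius=radius, eps=eps)
        if use_median:
            G = median_filter(G, size=3)       # suppress re-introduced details near large edges
    return G
```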
The pseudocode for the SiR algorithm is obtained by omitting Step 5 (the median filter).</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Alternating Guided Filter</ns0:head><ns0:p>In this section we propose a new filter scheme that effectively integrates the RGF and SiR frameworks in such a way that it retains the desired effects of both earlier schemes (i.e., the elimination of small details and restoration of large scale edges) while eliminating their respective drawbacks (i.e., the smoothing of the curvature of large scale edges by RGF and the loss of local image intensity by SiR). Each iteration in the AGF framework consists of three consecutive steps. First, joint bilateral filtering is applied to the input image I, using the result t G of the previous iteration step as the guidance image (Equation ( <ns0:ref type='formula'>11</ns0:ref>). Second, joint bilateral filtering is applied to the result from the first step, using the original image I as the guidance image (Equation <ns0:ref type='formula' target='#formula_12'>13</ns0:ref>).Third, a median filter with a small kernel size (e.g., 33 &#61620; ) is applied to the result of the second step. The alternating use of the original image and the filtered result as either input or guidance images guarantees that both the overall image intensity (positive effect of the RGF scheme) and the curvature of the large scale edges (positive effect of the SiR scheme) are preserved, while the integration of the median filter prevents the reintroduction of filtered small scale details near large scale edges (a negative side effect of SiR). Hence, the AGF scheme combines the positive effects of both the RGF (intensity preservation) and SiR (edge curvature preservation) while eliminating their negative side effects (edge curvature smoothing by the RGF scheme and contrast reduction plus the reintroduction of small details near larger edges by the SiR scheme). The pseudo code for the AGF scheme is presented in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Experimental results</ns0:head><ns0:p>In this section we present the results of some experiments that were performed to study the performance of the proposed AGF framework. We tested AGF by applying it to a wide range of different natural images (for an overview of all results see the Supplemental Information with this paper). Also, we compare the performance of AGF with that of RGF and SiRmed. The images used in this study have a width of 638 pixels, and a height that varies between 337 and 894 pixels. We selected a set of test images with widely varying features and statistics, such as landscapes and aerial images, portraits and mosaics, embroidery and patchwork. The full set of test images with the results of the different filter schemes are provided as Supplemental Information. In this study, color images are processed by applying the different algorithms to each channel in RGB color space independently. This was done to enable straightforward comparison with the results from previous studies <ns0:ref type='bibr' target='#b3'>(Kniefacz &amp; Kropatsch, 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>Zhang et al., 2014)</ns0:ref>. Note that in practical applications, filtering should preferably be performed in the CIE-Lab color space, so that only perceptually similar colors are averaged and only perceptually significant edges are preserved. 
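Returning to the scheme of Section 3, the three steps of one AGF iteration can be sketched as follows for a single-channel image; the guided_filter example above again stands in for the joint bilateral filter of the authors' Matlab implementation, and the parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def alternating_guided_filter(I, radius=5, eps=1e-3, iterations=5):
    """Alternating Guided Filter (Table 3): each iteration performs an RGF-style
    step, a SiR-style step, and a small median filter, with the guided_filter
    sketch above standing in for the joint bilateral filter."""
    I = I.astype(float)
    G = np.zeros_like(I)
    for _ in range(iterations):
        G = guided_filter(I, G, radius=radius, eps=eps)   # step 1: filter I, guided by G^t
        G = guided_filter(G, I, radius=radius, eps=eps)   # step 2: filter the result, guided by I
        G = median_filter(G, size=3)                      # step 3: 3x3 median clean-up
    return G
```

In the bilateral formulation, the roles of radius and eps are played by the spatial and range kernel extents whose settings are discussed next.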
As noted before, a bilateral filter (see Equation <ns0:ref type='formula'>1</ns0:ref>) is controlled by two parameters: the size of the spatial filter kernel (&#963; spatial ) and that of the range filter kernel (&#963; range ). Except when stated otherwise, we used the same constant values for the spatial and range kernels in the bilateral filters that are included in all three frameworks investigated in this study (i.e., RGF, SiRmed and AGF): &#963; spatial = 5 and &#963; range = 0.05. These values were empirically determined and resulted in an effective performance for all three frameworks on the entire set of selected test images.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Relative performance of SiR and SiRmed</ns0:head><ns0:p>Figures <ns0:ref type='figure'>1 and 2</ns0:ref> illustrate the effect of including a median filter with a kernel size of 3&#215;3 at the end of each iteration step in the SiR framework (see Table <ns0:ref type='table'>2</ns0:ref>). Figure <ns0:ref type='figure'>1</ns0:ref> shows the original input image (a) with the result of respectively SiR (Figure <ns0:ref type='figure'>1d</ns0:ref>) and SiRmed (Figure <ns0:ref type='figure'>1e</ns0:ref>) after 5 iterations. In addition, this figure also shows the 1D image intensity distribution along a horizontal cross section of the input image (yellow line in Figure <ns0:ref type='figure'>1a</ns0:ref>) after each of the first 3 iterations of both SiR (Figure <ns0:ref type='figure'>1c</ns0:ref>) and SiRmed (Figure <ns0:ref type='figure'>1d</ns0:ref>). The lowest curves represent the original intensity distribution (Figure <ns0:ref type='figure'>1a</ns0:ref>). Comparison of Figure <ns0:ref type='figure'>1d</ns0:ref> and Figure <ns0:ref type='figure'>1e</ns0:ref> shows that SiRmed effectively removes small details all over the image, while SiR reintroduces small details near large scale edges (notice for instance the light poles all over the stadium, that are restored in Figure <ns0:ref type='figure'>1d</ns0:ref> but are nicely removed in Figure <ns0:ref type='figure'>1e</ns0:ref>). A comparison of Figure <ns0:ref type='figure'>1b</ns0:ref> and Figure <ns0:ref type='figure'>1c</ns0:ref> shows that SiR indeed restores high frequency details near large edge discontinuities, while SiRmed restores these large scale edges without the reintroduction of small scale details. This effect is also clearly demonstrated in Figure <ns0:ref type='figure'>2</ns0:ref>, where SiR fails to remove the small scale details all around the outlines of the tentacles and the fishes, while SiRmed effectively filters small elements all around these larger objects while both preserving their outlines and smoothing their interior.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Relative performance of RGF, SiRmed and AGF</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the results of RGF, SiRmed and AGF filtering after the first 3 iteration steps. Notice that all three filter schemes iteratively restore large scale image edges proceeding from an initial low resolution version of the input image. In this process RGF gradually smoothes the curvature of the large scale edges, while SiRmed smoothes the overall image intensity resulting in a global contrast reduction. In contrast, AGF restores the large scale edges without smoothing their curvature and with preservation of local image contrast. Figures <ns0:ref type='figure' target='#fig_8'>4b-d</ns0:ref> illustrate these effects by showing 1D horizontal cross sections. 
The contrast reducing effect of SiR is clearly shown in Figure <ns0:ref type='figure' target='#fig_8'>4c</ns0:ref> as an overall reduction of the height of the large scale peaks. Comparison of Figure <ns0:ref type='figure' target='#fig_8'>4e</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_8'>4g</ns0:ref> illustrates that RGF smoothes the curvature of large scale edges while AGF preserves edge curvature.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the details that are removed by filtering with respectively RGF (Figure <ns0:ref type='figure'>5b</ns0:ref>), SiRmed (Figure <ns0:ref type='figure'>5c</ns0:ref>) and AGF (Figure <ns0:ref type='figure'>5d</ns0:ref>) after 5 iterations, together with the original input PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10165:1:1:NEW 2 Jun 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science image (Figure <ns0:ref type='figure'>5a</ns0:ref>). These images are obtained by subtracting the original image from the filtered ones and computing the mean of the absolute value across the three color channels. This example illustrates that each of the three filters removes small scale details while preserving the larger scale edges to a certain extent. Note that AGF removes small image details all over the image support, while the original large scale edges (e.g., the outlines of the temple) remain relatively unaffected. In contrast, both RGF and SiRmed significantly alter the articulation of the original large scale edges. RGF smoothes the local edge curvature while SiRmed affects the local mean image intensity (resulting in local contrast reduction). As a result, large scale image contours (e.g., the outlines of the temple) are clearly visible in Figure <ns0:ref type='figure'>5b</ns0:ref> and Figure <ns0:ref type='figure'>5c</ns0:ref>, and much less pronounced in Figure <ns0:ref type='figure'>5d</ns0:ref>. Figure <ns0:ref type='figure'>6</ns0:ref> further illustrates the differential performance of all four filters investigated here (AGF, RGF, SiR, and SiRmed) on an artificial image with noise. Figure <ns0:ref type='figure'>6a</ns0:ref> shows some simple geometric shapes (cross, triangle, square and wheel) with different sizes on a stepedge background with 30% additional Gaussian noise. Figures <ns0:ref type='figure'>6b-e</ns0:ref> show the results of respectively AGF, RGF, SiR and SiRmed filtering of Figure <ns0:ref type='figure'>6a</ns0:ref> after five iteration steps. This example shows that SiR restores noise near the edges of larger details and along the vertical step edge in the background, while AGF, RGF and SiRmed effectively reduce noise all over the image plane. Also, RGF (Figure <ns0:ref type='figure'>6c</ns0:ref>) smoothes edge curvature, in contrast to AGF, SiR and SiRmed which retain the original edge curvature. This can for instance be seen in Figure <ns0:ref type='figure'>6c</ns0:ref> where the sharp edges of the crosses, the corners of the triangles and squares, and the spokes of the wheels are all rounded after filtering. This example also illustrates that both SiR (Figure <ns0:ref type='figure'>6d</ns0:ref>) and SiRmed (Figure <ns0:ref type='figure'>6e</ns0:ref>) effectively reduce image contrast due to the intensity smoothing that is inherent in this method. AGF does not introduce any halos. 
In contrast, RGF produces high intensity halos, and SiR and SiRmed both produce halos with a large spatial extent (see Figure <ns0:ref type='figure'>6c</ns0:ref> near the crosses and wheels).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Effects of different parameter settings</ns0:head><ns0:p>The results of RGF, SiRmed and AGF for different values of the parameters s &#61555; and r &#61555; are shown in Figures <ns0:ref type='figure' target='#fig_12'>7-9</ns0:ref>. In these figures rows correspond to different amounts of domain filtering (i.e., different values of s &#61555; ) and columns to different amounts of range filtering (i.e., different values of r &#61555; ). These figures show that large spatial kernels and large range kernels both result in blurred image representations. Large spatial kernels cause averaging over large image areas, while large range kernels cause averaging over a wider range of image values. In RGF (Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>) the smoothing of large scale edge curvature increases with increasing s &#61555; . This effect can for instance clearly be seen in the first column of Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> ). Notice that the curves in the roof of the temple just above the pillars remain unaffected, even for the largest size of the spatial kernel ( 9 s &#61555; &#61501; ). In contrast to SiRmed, AGF also preserves local image intensity (contrast).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Runtime evaluation</ns0:head><ns0:p>In this study we used a Matlab implementation of the RGF written by <ns0:ref type='bibr' target='#b9'>Zhang et al. (2014)</ns0:ref> that is freely available from the authors (at http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance). The JBF, which is part of this code, is also used here to implement both the SiR filter (see Section 2.1) and the AGF filter (see Section 3). We made no effort to optimize the code of the algorithms. We conducted a runtime test on a Dell Latitude laptop with an Intel i5 2 GHz CPU and 8 GB memory. The algorithms were implemented in Matlab 2016a. Only a single thread was used without involving any SIMD instructions. For this test we used a set of 22 natural RGB images (also provided in the Supplemental Information with this paper). The spatial and range kernels used in the bilateral filters for this test were fixed at respectively &#963; spatial = 5 and &#963; range = 0.05, and the number of iterations was set to 5. The mean runtimes for AGF, RGF, SiR and SiRmed were respectively 3.80&#177;0.27, 1.64&#177;0.12, 1.80&#177;0.14 and 2.17&#177;0.16 seconds. This shows that the mean runtime of AGF is about equal to the sum of the runtimes of RGF and SiRmed. This result is as expected, since the steps involved in AGF are a combination of the steps involved in both RGF and SiRmed.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Conclusions</ns0:head><ns0:p>In this paper we presented the new edge-preserving Alternating Guided Filter (AGF) smoothing filter that eliminates small scale image details while preserving both the articulation and curvature of large scale edges and local mean image intensity. The AGF framework integrates the recently introduced RGF and (a slightly adapted version of) SiR filters in an alternating iterative scheme. AGF combines the large scale edge and local intensity preserving image smoothing properties of the RGF with the large scale edge restoring properties of the SiR. 
However, it does not suffer from the drawbacks of both of its components: the curvature smoothing of large scale edges by RGF and the local intensity reduction and restoration of small scale details near large scale edges by SiR. The AGF is simple to implement and efficient. Application to a wide range of images has shown that AGF consistently produces high-quality results. Possible applications areas for AGF are for instance detail enhancement, denoising, edge extraction, JPEG artifact removal, multi-scale structure decomposition, saliency detection etc. (see e.g. <ns0:ref type='bibr' target='#b2'>He, Sun &amp; Tang, 2013;</ns0:ref><ns0:ref type='bibr' target='#b3'>Kniefacz &amp; Kropatsch, 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>Zhang et al., 2014)</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>local linear model ensures that the output image O has an edge only at locations where the guidance image G has one, because can be estimated by minimizing the squared difference between the output image O and the input image I (the second filtering condition) in the window k &#61559; , i.e. by minimizing the cost function</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>be solved by linear regression (He, Sun &amp; Tang, 2013): PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>depends on the window over which it is calculated. This can be accounted for by averaging over all possible values of i O :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10165:1:1:NEW 2 Jun 2016) Manuscript to be reviewed Computer Science result 1 t</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure9shows the effects of different parameter settings on the performance of the proposed AGF framework. In contrast to RGF, AGF preserves large scale edge curvature, as can be</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Figures</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1. Effects of SiR and SiRmed filtering on image intensity. (a) Input image (Photo credit: Adrian Mendoza). (b,c) 1D Image intensity distribution along a horizontal cross section of (a) (yellow line) after the first 3 iterations of respectively SiR and SiRmed. The lowest curves show the original intensity distribution in (a). (d,e) Results after 5 iterations of respectively SiR and SiRmed. The spatial and range kernels used in the bilateral filters for this example were respectively &#963; spatial = 5 and &#963; range = 0.05.</ns0:figDesc><ns0:graphic coords='13,72.00,142.19,453.50,514.19' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Effects of SiR and SiRmed filtering on image intensity As Figure 1.</ns0:figDesc><ns0:graphic coords='14,72.00,143.40,453.50,493.33' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Comparison of RGF, SiRmed and AGF filtering.</ns0:figDesc><ns0:graphic coords='16,72.00,95.80,453.49,323.90' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. The elimination of small scale details by RGF, SiRmed and AGF filtering.Absolute difference between input image (a) and the results of respectively RGF (b), SiRmed (c) and AGF (d) after 5 iterations. The images have been enhanced for visual display. The spatial and range kernels used in the bilateral filters for this example were respectively &#963; spatial = 5 and &#963; range = 0.05.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. The effects of different parameter settings on RGF filtering. Results of RGF filtering of the input image on the upper left for different values of the variances of the spatial ( 3, 6 and 9 s &#61555; &#61501;) and range (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The effects of different parameter settings on SiRmed filtering. Results of SiRmed filtering of the input image on the upper left for different values of the variances of the spatial ( 3, 6 and 9 s &#61555; &#61501;) and range (</ns0:figDesc><ns0:graphic coords='20,72.00,99.60,453.48,246.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The effects of different parameter settings on AGF filtering. Results of AGF filtering of the input image on the upper left for different values of the variances of the spatial ( 3, 6 and 9 s &#61555; &#61501;) and range (</ns0:figDesc><ns0:graphic coords='21,72.00,99.60,453.47,247.10' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,72.00,119.60,453.50,243.40' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,72.00,127.19,453.50,340.38' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,72.00,72.00,453.46,247.55' type='bitmap' /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10165:1:1:NEW 2 Jun 2016)</ns0:note> </ns0:body> "
"RE: #CS-2016:04:10165:0:0:REVIEW ('Alternating guided image filtering') Dear Professor Klara Kedem, Hereby we submit a revision of our initial PeerJ submission: ' Alternating guided image filtering ' (#CS-2016:04:10165:0:0:REVIEW). In this revised version we  have addressed the issues that were raised by both reviewers as well as the Editor’s comments. Below we provide a pointwise listing of the reviewers’ and the Editor’s comments together with a description of the actions we have taken in response to these comments. We would like to thank the Editor and the reviewers for their helpful suggestions and for spending their valuable time on this review process. We hope that you will find the revised version of our manuscript acceptable for publication in PeerJ. Yours sincerely, Alexander Toet Editor's Comments: Line 53 : (i.e., its -> (i.e., it Line 54: for application in for instance -> for application, for instance, in Lines 77, 78, and 79 have a sentence fragment accidentally repeated Line 248: “in Section 7”, there is no Section 7 (Figure 7?) -Subsection numbering in 2: 1.1-1.3 and 2.1; should be 2.1-2.4 -Caption of Fig.2, 'As Figure 1, for a the input image shown in (a)' just write 'As Figure 1' Authors’ reply: We corrected all these minor errors. ____________________________________________________________________________ Comments of Reviewer #1: Lines 77, 78, and 79 have a sentence fragment accidently repeated. Otherwise, no comments. Authors’ reply: We corrected this error. ____________________________________________________________________________ Comments of Reviewer #2: .. the related work should be shortened and summarized. The text is very similar to the text on the references, so no long descriptions are needed. Authors’ reply: We intended to provide a compact but complete description of the prior algorithms as a service to the reader, thus eliminating the need to look all details in the individual papers. In our view such an overview of the prior algorithms and their communalities is essential in an introduction to the proposed AGF filter and to appreciate its contribution to this field. Since this section only presents the essentials we think the related work section can not be further condensed. Also, the solution proposed in Section 2.1 regarding the drawback in SiR should be explained later in Section 3 with more details and examples. Authors’ reply: The application of the small-sized median filter (i.e., the remedy to the drawback of SiR) is explained in Section 3: “Third, a median filter with a small kernel size (e.g., ) is applied to the result of the second step.” and “ … the integration of the median filter prevents the reintroduction of filtered small-scale details near large-scale edges (a negative side effect of SiR).” The effects of this remedy are extensively discussed in Section 4.1 and illustrated in Figs 1 and 2: “SiR reintroduces small details near large-scale edges (notice for instance the light poles all over the stadium, that are restored in (d) but are nicely removed in (e))” and “SiRmed restores these large-scale edges without the reintroduction of small-scale details. “ and “where SiR fails to remove the small-scale details all around the outlines of the tentacles and the fishes, while SiRmed effectively filters small elements all around these larger objects while both preserving their outlines and smoothing their interior”. 
As well as in the discussion of the newly introduced toy example Fig 6: “This example shows that SiR restores noise near the edges of larger details and along the vertical step edge in the background, while AGF, RGF and SiRmed effectively reduce noise all over the image plane”. We therefore believe that sufficient details and examples have now been provided. A close-up of interesting features, such as sharp edges, must be shown. That will allow to watch and compare the effects of the different filters, such as the presence of halos. Authors’ reply: The newly introduced toy example (Fig 6) now clearly illustrates the effects of sharp edges and the occurrence of halo effects. All figures must include a short description and the simulation parameters used for each experiment. That includes the figures in the supplemental material. Authors’ reply: A short description and the simulation parameters used have been included in the captions of the images, both in the text and in the supplemental material. Some minor errors: Line 53 : (i.e., its -> (i.e., it Line 54: for application in for instance -> for application, for instance, in Line 248: “in Section 7”, there is no Section 7 Authors’ reply: These errors have been corrected. … the proposed solution should be discussed in detail to clarify how it solves the problems on previous methods (RGF and SiR): “the curvature smoothing of large scale edges by RGF and local intensity reduction in combination with the restoration of small scale details near large scale edges by SiR”. I encourage the authors to use a toy example (as presented in Figure 5[1]) to illustrate and validate the proposed approach. In addition, the convergence of the method should be described in the manuscript. Authors’ reply: We appreciate this suggestion of the reviewer, since it will clarify the strong points of the proposed AGF algorithm. We therefore included a toy example in Section 4.2 with the following text illustrating the images: “Figure 6 illustrates the differential performance of all four filters investigated here (AGF, RGF, SiR, and SiRmed) on an artificial image with some geometric shapes (cross, triangle, square and wheel) and 30% additional Gaussian noise. Figure 6a shows a noisy test image consisting of shapes with different sizes on a step-edge background. Figures 6b-e show the results of respectively AGF, RGF, SiR and SiRmed filtering of (a) after five iteration steps. This example shows that SiR restores noise near the edges of larger details and along the vertical step edge in the background, while AGF, RGF and SiRmed effectively reduce noise all over the image plane. Also, RGF (c) smoothes edge curvature, in contrast to AGF, SiR and SiRmed. This can for instance be seen in Figure 6c where the sharp edges of the crosses, the corners of the triangles and squares , and the spokes of the wheels are all rounded after filtering. This example also illustrates that both SiR (Figure 6d) and SiRmed (Figure 6e) effectively reduce image contrast due to the intensity smoothing that is inherent in this method. In contrast to AGF, which does not introduce any halos, RGF produces high intensity halos and both SiR and SiRmed produce halos with a large spatial extent (see Figure 6c near the crosses and wheels).” In Section 4, it is said that: “filtering should preferably be performed in the CIE-Lab color space”. Maybe some results using CIE-Lab color space instead of RGB could be presented. 
It would also be interesting to see a comparison with RGF, SiR and SiRMed using CIE-Lab color space. Authors’ reply: As we stated in section 4, we performed all operation in RGB color space to enable a straightforward comparison of our present results with the results from previous studies (who also used the RGB color space: Kniefacz & Kropatsch, 2015; Zhang et al., 2014). The actual color space used in the examples is actually not relevant to demonstrate the principle of the algorithms. We merely wanted to remind the reader that it may be preferable to apply the algorithms in CIE-Lab color space in conditions where the perceptual quality (as judged by human observers) of the final output is essential. As well, whether the method produces halos, or not, should be reported. Authors’ reply: We now address the halo issue for the different methods in Section 4.2, and illustrate it with the newly included toy example Figure 6. Also, I would suggest to add some text describing the computational cost and the running time for methods being compared. Authors’ reply: We performed a runtime test and now present the results in Section 4.4 Besides, standard bilateral filtering could be included in the experiments. Authors’ reply: Since Zhang et al already discussed and evaluated the difference between RGF and bilateral filtering, we believe that this addition will not provide any new insights for the present study. "
Here is a paper. Please give your review comments after reading it.
240
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Artificial neural network (ANN) is one of the techniques in artificial intelligence, which has been widely applied in many fields for prediction purposes, including wind speed prediction. The aims of this research is to determine the topology of neural network that are used to predict wind speed. Topology determination means finding the hidden layers number and the hidden neurons number for corresponding hidden layer in the neural network. The difference between this research and previous research is that the objective function of this research is regression, while the objective function of previous research is classification. Determination of the topology of the neural network using principal component analysis (PCA) and K-Means clustering. PCA is used to determine the hidden layers number, while clustering is used to determine the hidden neurons number for corresponding hidden layer. The selected topology is then used to predict wind speed.</ns0:p><ns0:p>Then the performance of topology determination using PCA and clustering is then compared with several other methods. The results of the experiment show that the performance of the neural network topology determined using PCA and clustering has better performance than the other methods being compared. Performance is determined based on the RMSE value, the smaller the RMSE value, the better the neural network performance. In future research, it is necessary to apply a correlation or relationship between input attribute and output attribute and then analyzed, prior to conducting PCA and clustering analysis.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The energy requirement continuously grows as the world population increases. Such energy requirement increases sometimes is not accompanied with the increase of supporting facilities and infrastructure development making several locations do not obtain sufficient electricity input. This case encourages the utilization of renewable energy in order to meet the world energy demand in various sectors including agriculture, education, health, road lighting, and community economy driving force <ns0:ref type='bibr' target='#b8'>(Jamil &amp; Zeeshan, 2019;</ns0:ref><ns0:ref type='bibr'>Zhang et al., 2019)</ns0:ref>. Wind-based energy has long been utilized in irrigation sector, while other sources reveal that wind energy was firstly used in India <ns0:ref type='bibr'>(Mathew, 2016)</ns0:ref>. Although it has long been used in various sectors, wind speed prediction does not belong to an easy work due to its high strong randomness and volatility. Whereas accurate wind speed prediction is needed in our life. One of the important factors in predicting wind speed is its accuracy <ns0:ref type='bibr' target='#b25'>( Peiris, Jayasinghe &amp; Rathnayake, 2021;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yadav, Muneender &amp; Santhosh, 2021)</ns0:ref>. As an example, the accuracy of wind speed prediction is essential in terms of wind power plant <ns0:ref type='bibr'>(Zhang et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Several different techniques have been used to predict the wind speed, including physical method <ns0:ref type='bibr'>(Lange &amp; Focken 2009;</ns0:ref><ns0:ref type='bibr' target='#b12'>Li et al., 2013)</ns0:ref>, statistical method, and combination method between them. 
The use of physical method can be seen in the use of Computational Fluid Dynamics (CFD), where such approach does not depend on the historical data and can be used wider in all kinds of wind power plant including the newest wind power plant <ns0:ref type='bibr' target='#b12'>(Li et al., 2013)</ns0:ref>. The use of statistical method is in the form of the use of auto regressive model (AR), moving average model (MA), autoregressive moving average model (ARMA), and auto regressive integrate moving average model (ARIMA) <ns0:ref type='bibr' target='#b11'>(Lei et al., 2009)</ns0:ref>. In addition to these two methods, neural network is recently often used to predict the wind speed <ns0:ref type='bibr' target='#b8'>(Jamil &amp; Zeeshan, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Madhiarasan &amp; Deepa, 2016</ns0:ref><ns0:ref type='bibr'>, 2017;</ns0:ref><ns0:ref type='bibr'>Zhang et al., 2019)</ns0:ref>. The combination of the existing methods is also often used to predict the wind speed, such as the use of autoregressive fractionally integrated moving average and improved back-propagation neural network <ns0:ref type='bibr' target='#b33'>(Wang &amp; Li, 2019)</ns0:ref>.</ns0:p><ns0:p>Artificial neural networks, as a part of artificial intelligence methods have been widely used in many fields for prediction purposes <ns0:ref type='bibr' target='#b2'>(Bakhashwain &amp; Sagheer, 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Rahman et al., 2021;</ns0:ref><ns0:ref type='bibr'>Zhao &amp; Liu, 2021)</ns0:ref>, including wind speed prediction. One of the crucial factor for designing a neural network is its structure or topology, namely determining the hidden layers number and the hidden neurons number for corresponding hidden layer because it is closely related to the topological performance <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Koutsoukas et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Nitta, 2017 )</ns0:ref>, but until now topology determination is still a complex and difficult problem <ns0:ref type='bibr' target='#b10'>( Lee et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Naitzat, Zhitnikov &amp; Lim, 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Rahman et al., 2021)</ns0:ref>. Topology is one of the important hyperparameters in neural networks. Determining the topology that does not match the needs caused overfitting or underfitting in neural networks. 
Several researchers have conducted research to determine the neural network topology in various ways: methods based solely on the number of input and output attributes <ns0:ref type='bibr' target='#b29'>(Sartori &amp; Antsaklis, 1991;</ns0:ref><ns0:ref type='bibr' target='#b32'>Tamura &amp; Tateishi, 1997)</ns0:ref>, trial and error <ns0:ref type='bibr' target='#b3'>(Blanchard &amp; Samanta, 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>Madhiarasan, 2020;</ns0:ref><ns0:ref type='bibr' target='#b15'>Madhiarasan &amp; Deepa, 2016</ns0:ref><ns0:ref type='bibr'>, 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>&#350;en &amp; &#214;zcan, 2021)</ns0:ref> , and the rule of thumb <ns0:ref type='bibr' target='#b2'>(Bakhashwain &amp; Sagheer, 2021;</ns0:ref><ns0:ref type='bibr' target='#b4'>Carballal et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Rahman et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In this research, the determination of the neural network topology use PCA and K-Means clustering <ns0:ref type='bibr' target='#b26'>(Rachmatullah et al., 2020)</ns0:ref>, but for the new objective function. Whereas in the previous research ( Ibnu <ns0:ref type='bibr'>Choldun R., Surendro &amp; Santoso, 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Rachmatullah, Surendro &amp; Santoso, 2020)</ns0:ref> the determination of the neural network topology was used for the classification objective function, in this research it was used for the regression objective function, spesifically to predict wind speed. So that the scientific major contribution of this research is the use of a new method to determine the neural network topology using PCA and clustering for the regression objective function. The main difference is that the attribute classification objective function is categorical while the output regression objective function must be numeric. The performance measurement is also different, if the classification uses the accuracy rate, while if the regression uses error rate. The purpose of this research is to perform regression, so that the cumulative variance required is expected to be greater than classification, because the output domain for regression is continuous, while for classification is discrete. Then topology performance of neural network in this research compared with several other methods, namely: the Sartori method <ns0:ref type='bibr' target='#b29'>(Sartori &amp; Antsaklis, 1991)</ns0:ref>, the Tamura and Tateishi method <ns0:ref type='bibr' target='#b32'>(Tamura &amp; Tateishi, 1997)</ns0:ref>, the Madhiarasan and Deepa method <ns0:ref type='bibr' target='#b16'>(Madhiarasan &amp; Deepa, 2017)</ns0:ref>, the Madhiarasan method <ns0:ref type='bibr' target='#b14'>(Madhiarasan, 2020)</ns0:ref>, and the Mahdi method <ns0:ref type='bibr' target='#b17'>(Mahdi, Yousif &amp; Melhum, 2021)</ns0:ref>.</ns0:p><ns0:p>The next section of this paper is structured as follows. Materials &amp; Methods section contains the methodology of the proposed method starting from the data preparation to the topology evaluation. Results and Discussion section explains the results of the experiment and its discussion, especially about topology determination and topology evaluation. Section Conclusions dan Future Work concludes and proposes future works containing a summary of the results of this study and provide direction for subsequent research studies.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>The methods is presented with a clear outline as illustrated in Fig. 
<ns0:ref type='figure'>1</ns0:ref>. In general, the method is divided into two main steps, namely pre-training and topology evaluation. The pre-training step is conducted before the model is formed through learning and includes the preparation or selection of the dataset, data pre-processing, and determination of the neural network topology. The topology evaluation step assesses the learning performance of the neural network based on that topology; it involves training, testing, and performance calculation. The proposed method focuses on determining the topology of neural networks for the regression objective function, which includes three main steps, namely: 1.</ns0:p><ns0:p>Analyzing the dataset by applying PCA, so that the significant principal components are obtained. 2.</ns0:p><ns0:p>Performing clustering with the K-Means technique on each corresponding principal component while varying the clusters number. 3.</ns0:p><ns0:p>Determining the optimal clusters number for each corresponding principal component by applying the Elbow criteria, so that the optimal clusters number is obtained for each corresponding principal component.</ns0:p><ns0:p>Each stage is explained in the next section.</ns0:p></ns0:div> <ns0:div><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> The Methodology of the Proposed Method</ns0:p></ns0:div> <ns0:div><ns0:head>Data preparation</ns0:head><ns0:p>This study aims to predict wind speed, so a dataset providing attributes relevant to wind speed prediction was chosen. The dataset was meteorological data (London Meteorological data) downloaded from http://www.urban-climate.net/content/data/9-data for 2016, consisting of 8784 records. This dataset has many features, but only the attributes related to wind speed prediction were selected. The selected input and output attributes are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> (e.g., Temperature). The input attributes are used to predict the output attribute (Wind Speed). The range of values for these five attributes can be seen in column 4, while the average can be seen in column 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Standardization</ns0:head><ns0:p>Standardization applies a normalization process so that all attributes are placed on a consistent scale. The normalization used is Min-Max with values between 0 and 1, using Eq. (<ns0:ref type='formula'>1</ns0:ref>) <ns0:ref type='bibr' target='#b5'>(Dharamvir, 2020)</ns0:ref>. 
 <ns0:div><ns0:head>Table 2 Normalized Attributes</ns0:head><ns0:p>After Min-Max normalization to the range 0-1, Table 2 shows that the five attributes share the same range between 0 and 1, as given in the fourth column, with their average values in the fifth column.</ns0:p></ns0:div> <ns0:div><ns0:head>Determining the Neural Network Topology</ns0:head><ns0:p>In this research, the neural network topology for predicting wind speed is determined following previous research (Ibnu Choldun R., Surendro & Santoso, 2020; Rachmatullah, Surendro & Santoso, 2020) that uses PCA and clustering with the K-Means technique, as illustrated in Fig. 2, but applied to a regression objective function. In a neural network, increasingly complex features represent increasingly higher information content, while in PCA high information content is represented by the principal components with high variance. Hidden layers that capture more complex features therefore correspond to PCA components with higher variance. Following this rationale, the number of hidden layers needed is taken to match the number of principal components in PCA; hence, in this research the number of hidden layers is determined from the number of principal components obtained through PCA. This is consistent with the consideration that the cumulative PCA variance is compatible with the complexity of the hidden layers, as in Eq.
(2):</ns0:p><ns0:formula>(2) Σ Variance(PC_i) ≈ Σ Complexity(h_i)</ns0:formula><ns0:p>where PC_i is the i-th principal component of PCA and h_i the i-th hidden layer of the neural network.</ns0:p><ns0:p>As an example, for a dataset with four input attributes, principal component analysis produces four principal components, each of which is a linear combination of the input attributes (Liu & Ding, 2020; Ratner, 2017). With four input attributes, the principal component equations are:</ns0:p><ns0:formula xml:id='formula_1'>(3)
PC_1 = w_11 x_1 + w_12 x_2 + w_13 x_3 + w_14 x_4
PC_2 = w_21 x_1 + w_22 x_2 + w_23 x_3 + w_24 x_4
PC_3 = w_31 x_1 + w_32 x_2 + w_33 x_3 + w_34 x_4
PC_4 = w_41 x_1 + w_42 x_2 + w_43 x_3 + w_44 x_4</ns0:formula><ns0:p>where w denotes a weight and x_i the i-th input attribute.</ns0:p><ns0:p>Suppose that, of the four components, only the two principal components whose cumulative variance reaches q% are selected (Yang, 2019). These two selected components are the basis for the number of hidden layers, namely two hidden layers. Each selected component is then clustered using K-Means clustering (Alguliyev, Aliguliyev & Sukhostat, 2020; Hancer, Xue & Zhang, 2020), and the optimal number of clusters for each component is determined using the Elbow criteria (Shmueli et al., 2020). The optimal number of clusters for each component determines the number of hidden neurons in the corresponding hidden layer.</ns0:p></ns0:div> <ns0:div><ns0:head>Topology Evaluation</ns0:head><ns0:p>Training was carried out with 70% of the dataset and testing with the remaining 30% (Nguyen et al., 2021). The training and testing processes for each topology were repeated ten times with different initial weight values. For a regression objective function, the following performance measures can be used: Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), or Mean Square Error (MSE). MAPE and MAE are suited to time series datasets; since the dataset used here is not a time series, RMSE was selected as the performance measure (Namasudra, Dhamodharavadhani & Rathipriya, 2021). The topology performance is calculated as the root mean squared error:</ns0:p><ns0:formula xml:id='formula_2'>(4) RMSE = sqrt( (1/N) Σ_{i=1..N} (ŷ_i - y_i)² )</ns0:formula><ns0:p>where N is the number of data points, ŷ_i the predicted value, and y_i the target value. A lower RMSE indicates better performance.</ns0:p>
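<ns0:p>As a concrete illustration of this evaluation protocol, the sketch below splits the data 70/30, repeats training over ten seeds, and computes the RMSE of Eq. (4). It uses scikit-learn's MLPRegressor as a stand-in learner; the original experiments were run in RapidMiner 9.5, so the tool, parameters, and variable names here are assumptions for illustration only.</ns0:p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def evaluate_topology(X, y, hidden_layers, cycles=100, repeats=10):
    """Mean RMSE of one topology over repeated runs with different seeds."""
    rmses = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.30, random_state=seed)          # 70% train / 30% test
        model = MLPRegressor(hidden_layer_sizes=hidden_layers,
                             max_iter=cycles, random_state=seed)
        model.fit(X_tr, y_tr)
        rmses.append(np.sqrt(mean_squared_error(y_te, model.predict(X_te))))  # Eq. (4)
    return float(np.mean(rmses))

# e.g. evaluate_topology(normalized_inputs, wind_speed, hidden_layers=(3, 10))
```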
<ns0:p>To assess the performance of the topology obtained from PCA and clustering, it is compared with several other methods:</ns0:p><ns0:p>1. The Sartori method (Sartori & Antsaklis, 1991) uses one hidden layer with the number of neurons = N - 1 (5), where N is the number of input features.</ns0:p><ns0:p>2. The Tamura and Tateishi method (Tamura & Tateishi, 1997) uses two hidden layers, each with N/2 + 3 neurons (6), where N is the number of input features.</ns0:p><ns0:p>3. The Madhiarasan and Deepa method (Madhiarasan & Deepa, 2017) uses one hidden layer; by trial and error, the number of neurons was found to be 14.</ns0:p><ns0:p>4. The Madhiarasan method (Madhiarasan, 2020) uses one hidden layer; by trial and error, the number of neurons was found to be 44.</ns0:p><ns0:p>5. The Mahdi et al. method (Mahdi, Yousif & Melhum, 2021) uses one hidden layer; by trial and error, the number of neurons was found to be 20.</ns0:p><ns0:p>Topology performance was calculated on the Windows 10 operating system using RapidMiner 9.5.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>This section presents the determination of the neural network topology resulting from applying principal component analysis (PCA), K-Means clustering, and the modified Elbow criteria to the wind dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Determining the Topology of Neural Networks Using PCA and K-Means Clustering</ns0:head><ns0:p>After normalization, both the input attributes and the output attribute have values in the range 0 to 1, as described in the Standardization section. Principal component analysis was performed on the four normalized input attributes. The PCA results are given in the second and third columns of Table 3. The first principal component has the highest variance, 0.943 (94.3%), so its cumulative variance is also 0.943. The second principal component has the second highest variance, 0.056 (5.6%), giving a cumulative variance of 0.943 + 0.056 = 0.999. The variance of each component was obtained as the proportion of that component's eigenvalue to the total of all eigenvalues. The values for the third and fourth components are likewise given in the second and third columns of Table 3.</ns0:p>
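<ns0:p>The explained and cumulative variances reported in Table 3 can be reproduced with a short PCA sketch such as the one below (scikit-learn is used for illustration; the placeholder array stands in for the four Min-Max scaled input attributes).</ns0:p>

```python
import numpy as np
from sklearn.decomposition import PCA

X_normalized = np.random.rand(8784, 4)     # placeholder for the scaled input attributes
pca = PCA(n_components=4).fit(X_normalized)

explained = pca.explained_variance_ratio_  # reported as ~[0.943, 0.056, ...] on the wind data
cumulative = np.cumsum(explained)          # reported as ~[0.943, 0.999, ...]

# Number of hidden layers = number of components needed to reach the required
# cumulative variance (99% for this regression task).
n_hidden_layers = int(np.searchsorted(cumulative, 0.99) + 1)
```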
<ns0:p>For each principal component produced by the PCA process, clustering is then carried out to obtain the optimal number of clusters. Clustering uses the K-Means method, and the optimal number of clusters is determined with the modified Elbow criteria: the optimal number is reached when the wss value remains essentially unchanged for at least three consecutive numbers of clusters (Rachmatullah, Surendro & Santoso, 2020). Table 4 shows an example of applying the modified Elbow criteria to the first principal component, which has the highest variance. The number of clusters was increased gradually from 2 to 50 while the wss values were calculated. At N = 10, 11, and 12 the wss value does not change, so the number of neurons for the first component is concluded to be 10.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 4 The result of applying K-Means clustering and the modified Elbow criteria to the first component</ns0:head><ns0:p>The results of K-Means clustering and the modified Elbow criteria for each component are given in the fourth column of Table 3: for the first principal component the optimal number of clusters is 10, for the second it is 3, and so on. The results of the PCA process and K-Means clustering were then used to determine the neural network topology, which consists of the number of hidden layers and the number of neurons in each hidden layer. Because a cumulative variance of 90% is already reached with one component, the neural network topology was evaluated using one to four hidden layers. If one component is used, with an optimal number of clusters of 10, the topology uses one hidden layer with 10 neurons; if two components are used, the topology has two hidden layers, the first with 3 neurons and the second with 10 neurons; and so on, as shown in Fig. 3, which depicts the topology with four hidden layers. The first principal component of PCA, which has the highest variance, corresponds to the hidden layer closest to the output layer.</ns0:p></ns0:div>
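<ns0:p>A minimal sketch of this clustering step is given below: for each selected principal component, K-Means is run for k = 2 to 50, the modified Elbow criterion (three consecutive, essentially unchanged wss values) picks the number of neurons, and the counts are reversed so that the highest-variance component maps to the hidden layer closest to the output layer. The tolerance value and function names are illustrative assumptions rather than the exact implementation used in the study.</ns0:p>

```python
import numpy as np
from sklearn.cluster import KMeans

def optimal_clusters(component_values, k_max=50, tol=1e-3):
    """Modified Elbow: first k at which wss stays ~constant for three consecutive k."""
    wss = {}
    for k in range(2, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
        km.fit(component_values.reshape(-1, 1))       # one principal component
        wss[k] = km.inertia_                          # within-cluster sum of squares
        if k >= 4 and abs(wss[k] - wss[k-1]) < tol and abs(wss[k-1] - wss[k-2]) < tol:
            return k - 2                              # start of the flat run
    return k_max

def topology_from_pca(scores, n_components):
    """scores: PCA-transformed data; returns the hidden layer sizes for the network."""
    neurons = [optimal_clusters(scores[:, i]) for i in range(n_components)]
    # Highest-variance component -> hidden layer closest to the output layer;
    # with cluster counts 10 and 3 this gives the topology (3, 10) reported here.
    return tuple(reversed(neurons))
```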
 <ns0:div><ns0:head>Performance Comparison</ns0:head><ns0:p>To compare the performance of the neural network topology obtained from PCA and clustering, each topology is analyzed and compared with five methods proposed by other researchers. The Sartori method and the Tamura and Tateishi method, with N = 4 (the number of input attributes), give topology (3) and topology (5,5), respectively. The three other methods, the Madhiarasan and Deepa method, the Madhiarasan method, and the Mahdi method, each use one hidden layer, with topology (14), topology (44), and topology (20), respectively. These five methods are used as comparisons for the method used by the researchers.</ns0:p><ns0:p>Table 5 presents the mean RMSE of each topology determined by the PCA process and K-Means clustering, where each topology is trained as a neural network with 100 cycles. For each topology, the experiment was repeated ten times with different seeds. The column 'Topology' gives the number of hidden layers and the number of neurons in each; for example, the column '3,10' denotes a topology with two hidden layers, the first with 3 neurons and the second with 10 neurons. The values in the table are the RMSE values, and the bottom row gives the mean RMSE over the 10 repetitions.</ns0:p><ns0:p>Table 5 The RMSE values of the topologies with 100 cycles</ns0:p><ns0:p>The same experiments were also run with 200, 500, and 1,000 cycles. The mean values for each number of cycles are summarized in Table 6 and shown graphically in Fig. 4; the values in Table 6 are the average RMSE over the 10 repetitions of each topology, with details as in Table 5. In Fig. 4, the horizontal axis is the number of cycles and the vertical axis is the mean RMSE, and the curves show the mean RMSE of each topology at each number of cycles. The graph in Fig. 4 shows that topology (3,10) tends to have lower mean RMSE values than the other topologies, followed by topology (10), topology (2,3,10), and topology (2,2,3,10). The graph also shows that adding hidden layers does not guarantee a reduction in RMSE. The topology obtained from PCA and clustering with two hidden layers gives the lowest RMSE value, demonstrating the best performance. The mapping from PCA and clustering to a neural network topology thus gives the best performance with topology (3,10), as shown in Fig. 5. This topology, which uses two hidden layers, requires a cumulative variance of 99%, and it is the topology selected for comparison with the topologies of other researchers.</ns0:p>
<ns0:p>This also shows that the cumulative PCA variance required for the regression objective function is greater than that required for the classification objective function in previous research: multi-class classification needed a PCA cumulative variance of about 70% (Ibnu Choldun R., Surendro & Santoso, 2020; Rachmatullah, Surendro & Santoso, 2020), while binary classification needed about 40% (Rachmatullah, Surendro & Santoso, 2020). In this research, the dataset listed in Table 1 was also processed with the compared topologies so that each topology has an RMSE value. Table 7 shows the experimental results on the dataset for the method used by the researchers and the five methods used by other researchers; it compares the RMSE of all topologies, with each topology trained for 100 cycles. Table 7 is presented in the same way as Table 5, and the experiments were likewise run with 200, 500, and 1,000 cycles. The mean values for each number of cycles are summarized in Table 8, and the corresponding graph is shown in Fig. 6. The horizontal axis, vertical axis, and curves of Fig. 6 are read in the same way as those of Fig. 4. Figure 6 compares the RMSE of the topology used by the researchers (the PCA and clustering method) with the five topologies used by other researchers, namely the Sartori method, the Tamura and Tateishi method, the Madhiarasan and Deepa method, the Madhiarasan method, and the Mahdi method. The graph in Fig. 6 shows that the topology used by the researchers tends to have lower mean RMSE values than the other topologies, followed by the Tamura and Tateishi topology (5,5), Madhiarasan (44), Madhiarasan and Deepa (14), Mahdi (20), and Sartori (3). It also shows that using two hidden layers tends to give a lower RMSE than using only one hidden layer. The topology used by the researchers, based on PCA and clustering with two hidden layers, gives the lowest RMSE value, which shows that it has the best performance compared with the topologies used by other researchers.</ns0:p><ns0:p>Patterson and Gibson proposed providing a large number of hidden neurons in the network so that neural network performance is better, but performance can be degraded when the number of neurons is too large because it may create several false connections (Patterson & Gibson, 2017). For example, using a larger number of neurons in the Madhiarasan topology (44) than in the Madhiarasan and Deepa topology (14), the Mahdi topology (20), and the Sartori topology (3) can improve neural network performance. However, increasing the number of neurons in only one hidden layer does not always guarantee an increase in performance: the Madhiarasan topology (44) performs worse than the topologies that use two hidden layers with fewer neurons, namely the Tamura and Tateishi topology (5,5) and the proposed topology (3,10).</ns0:p>
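<ns0:p>The comparison summarized in Tables 7 and 8 and Fig. 6 can be sketched as a loop over the candidate topologies, reusing the evaluate_topology routine shown earlier; the topology list follows the paper, while the data variables and the scikit-learn learner remain illustrative assumptions.</ns0:p>

```python
# Candidate topologies compared in the paper.
CANDIDATES = {
    "Sartori": (3,),
    "Tamura & Tateishi": (5, 5),
    "Madhiarasan & Deepa": (14,),
    "Madhiarasan": (44,),
    "Mahdi": (20,),
    "Proposed (PCA + clustering)": (3, 10),
}

def compare_topologies(X, y):
    """Mean RMSE of each candidate topology for several cycle counts."""
    for cycles in (100, 200, 500, 1000):
        for name, hidden in CANDIDATES.items():
            mean_rmse = evaluate_topology(X, y, hidden, cycles=cycles)  # defined above
            print(f"{cycles:5d} cycles  {name:28s}  RMSE = {mean_rmse:.4f}")
```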
<ns0:p>This study also shows that the cumulative variance required for the regression objective function, 99% in this study, is greater than the cumulative variance required for the classification objective functions of previous studies (Rachmatullah, Surendro & Santoso, 2020), where binary classification needed a PCA cumulative variance of 38.9% and multi-class classification needed 69.7%.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions and Future Work</ns0:head><ns0:p>In this paper, the performance of various neural network topologies for predicting wind speed is compared. The comparison is made between the PCA and clustering method and several other methods. The PCA and clustering method uses PCA to set the number of hidden layers, while K-Means clustering of the components formed by PCA determines the optimal number of clusters, which serves as guidance for setting the number of neurons in the corresponding hidden layer. The experimental results show that, judged by the mean RMSE, the topology obtained from PCA and clustering performs well compared with the other methods; determining the neural network topology using PCA and clustering can provide optimal performance.</ns0:p><ns0:p>In future research, the correlation between the input attributes and the output attribute should be analyzed before conducting the PCA and clustering analysis, and the variation of the input attributes should also be examined before applying PCA and K-Means clustering. Taking attribute correlation and variation into account is expected to produce neural network topologies with better performance. Future research can also use other clustering methods to determine the number of neurons.</ns0:p><ns0:p>Data of Table 4 (N = number of clusters, wss = within-cluster sum of squares for the first principal component):</ns0:p><ns0:p>
N  wss    N  wss    N  wss    N  wss
2  0.035  15 0.001  27 0.000  39 0.000
3  0.019  16 0.001  28 0.000  40 0.000
4  0.012  17 0.001  29 0.000  41 0.000
5  0.008  18 0.001  30 0.000  42 0.000
6  0.006  19 0.001  31 0.000  43 0.000
7  0.004  20 0.001  32 0.000  44 0.000
8  0.003  21 0.001  33 0.000  45 0.000
9  0.003  22 0.000  34 0.000  46 0.000
10 0.002  23 0.000  35 0.000  47 0.000
11 0.002  24 0.000  36 0.000  48 0.000
12 0.002  25 0.000  37 0.000  49 0.000
13 0.001  26 0.000  38 0.000  50 0.000
14 0.001</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Determining the neural network topology for regression using PCA and clustering</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Mapping PCA to the neural network</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Fig. 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Fig. 4 RMSE values for the topology determined using PCA and clustering and the other topologies</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 The topology with the best performance</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 The comparison of the RMSE values between the topology used by the researchers and the other topologies</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 Attributes used</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Table 1 lists the four input attributes, namely CR10 Temperature, Last Minute Average Temperature, Maximum Hourly Air Temperature, and Minimum Hourly Air Temperature.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 PCA and clustering of the wind dataset</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 6 RMSE mean of topologies (PCA and K-Means clustering)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 7 The comparison of topology RMSE values with 100 cycles</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 8 RMSE mean of topologies (PCA + clustering) and other topologies</ns0:head><ns0:label>8</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> "
" School of Electrical Engineering and Informatics Institut Teknologi Bandung Jl. Ganecha 10, Bandung, Indonesia August 15th 2021 Dear Editors We thank to reviewers for their constructive comments on the manuscript and we have carefully revised the manuscript to address their concerns. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with blue highlighting indicating changes, and (c) a clean updated manuscript without highlights. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Muh. Ibnu Choldun Rachmatullah On behalf all authors. Reviewer 1 (Anonymous) Basic reporting Concern # 1: Major contributions of the paper must be represented point-wise. Author response: We have carefully revised our manuscript completely according to the suggestion. We added an explanation about the major contribution of this paper that whereas the previous paper used PCA and clustering in determining the topology of neural networks for the classification objective function, in this paper for the regression objective function. The main difference is that the attribute classification objective function is categorical while the output regression objective function must be numeric. The performance measurement is also different, if the classification uses the accuracy rate, while if the regression uses error rate. The purpose of this research is to perform regression, so that the cumulative variance required is expected to be greater than classification, because the output domain for regression is continuous, while for classification is discrete. Author action: We refined the manuscript by adding detailed explanation of the main contribution in 4th paragraph of section Introduction. Concern # 2: The last paragraph of the Introduction must be the Structure of the paper. Author response: We have carefully revised our manuscript completely according to the suggestion. We added the paper structure in the introduction section. Author action: We refined the manuscript by adding the paper structure in the last paragraph of the Introduction section. Concern # 3: Why topology is important? Author response: We have carefully revised our manuscript completely according to the suggestion. The topology determination of neural networks is important becauses it has a significant role which affects the performance of neural networks. Determining the topology that does not match the needs can cause overfitting or underfitting in neural networks. Author action: We refined the manuscript by providing additional explanation with two sentences 'Topology is one of the important hyperparameters in neural networks. Determining the topology that does not match the needs cause overfitting or underfitting in neural networks' in the 3rd paragraph of the Introduction section. Concern # 4: Why PCA is used. Mention properly. Author response: We have carefully revised our manuscript completely according to the suggestion. A more detailed explanation of the use of PCA is in our previous (Rachmatullah, Surendro & Santoso, 2020). As a summary, in neural networks, the lower layer (closer to the input layer) detects simpler features, while layer closer to the output layer will detect more complex features. The more complex feature represents higher information content. The high information content in PCA is represented in the principal components that have high variance. 
In other words, the more complex features of the hidden layer neural networks are aligned with the principal components that have higher variance. Based on this rationale, the number of hidden layer in neural networks needed will be consistent with the number of principal components in principal component analysis. Therefore in this research, the number of hidden layer in neural networks will be determined based on the number of principal components formed through PCA. Author action: We refined the manuscript by adding detailed descriptions to make it easier to understand. We added an explanation in the first paragraph of the Materials & Methods section, subsection Determining of Neural Network Topology based on the previous paper (Rachmatullah, Surendro & Santoso, 2020). Concern # 5: How the dataset is chosen? Whether it is genuine? Justify. Author response: For this research, we have selected a dataset in which both input and output attributes are numeric because the objective function is regression. Several previous research of determining the topology of neural networks for regression objective functions, using wind datasets (Madhiarasan & Deepa, 2017; Madhiaran, 2020). We therefore used a wind/weather dataset that can be used to determine wind speed, and the dataset is publicly accessible. One of the publicly accessible datasets is the London Meteorological data which can be downloaded from http://www.urban-climate.net/content/data/9-data, especially in this research using 2016 data. Author action: We did not revise the manuscript regarding the data used because this research have used the data as explained above. Concern # 6: How variance is calculated? Author response: We have carefully revised our manuscript completely according to the suggestion As explained in the previous paper (Rachmatullah, Surendro & Santoso, 2020), each neuron at corresponding hidden layer on neural networks, despite producing different values, is assumed to have similar level of complexity; however, each neuron in similar layer will have different characteristic value. Each value has similar characteristics within similar group, while values that have different characteristics belong to different groups. This finding refers to the assumption that each principal component produced in PCA is aligned with the hidden layer on the neural network, where the grouping or clustering will be carried out on each principal component. The optimal number of clusters formed in each principal component is assumed to represent the number of different values in each neuron within similar hidden layer. In this research, we only display the variance for each component as presented in second column of table 3. Author action: We refined the manuscript by adding detailed descriptions to make it easier to understand. We added the sentence “The variance for each main component is obtained from the proportion between the eigenvalues of a component and the total eigenvalues of all components” in the Results and Discussion section, Determine the Topology of Neural Networks Using PCA and K-Means Clustering subsection in the first paragraph. Experimental design Concern # 1: Why RMSE is chosen for evaluating topology? Author response: We have carefully revised our manuscript completely according to the suggestion. For the regression objective function, the following performance measures can be used: Root Mean Square Error (RMSE), Mean Square Error (MSE), Mean Absolute Error (MAE), or Mean Absolute Percentage Error (MAPE). 
MAPE and MAE are suitable for time series datasets. In this research, the dataset used is not time series data so that the performance measurement was selected using RMSE. Author action: We refined the manuscript by adding detailed descriptions to make it easier to understand. We added the explanation of the reason for using RMSE as a performance measurement in the Materials & Methods section, Topology Evaluation subsection. Concern # 2: How the number of neurons is calculated? Author response: We have carefully revised our manuscript completely according to the suggestion. As explained in the previous paper (Rachmatullah, Surendro & Santoso, 2020), each neuron at corresponding hidden layer on neural networks, despite producing different values, is assumed to have similar level of complexity; however, each neuron in similar layer will have different characteristic value. Each value has similar characteristics within similar group, while values that have different characteristics belong to different groups. This finding refers to the assumption that each principal component produced in PCA is aligned with the hidden layer on the neural network, where the grouping or clustering will be carried out on each principal component. The optimal number of clusters formed in each principal component is assumed to represent the number of different values in each neuron within similar hidden layer. Clustering is done using K-Means method, while the optimal number of clusters is estimated using modified Elbow criteria. The optimal number of clusters is obtained when the value of wss in a row (minimum of three in a row) is relatively unchanged. An example of the application of the modified Elbow criteria can be seen in Table 4 and the optimal number of neurons in each hidden layer can be seen in the fourth column in Table 3. Author action: We refined the manuscript by adding detailed descriptions to make it easier to understand. We clarified that the determination of the number of neurons in each hidden layer is based on previous research (Rachmatullah, Surendro & Santoso, 2020). In this paper, it is only briefly explained in the Results and Discussion section, Determine the Topology of Neural Networks Using PCA and K-Means Clustering subsection in the second paragraph. Concern # 3: Why training is conducted using 70% of the data from the dataset, while testing is done using 30% of the data from the dataset? Author response: We have carefully revised our manuscript completely according to the suggestion. We have taken the proportion of the dataset for training 70% and for testing 30%, based on the reference which says that based on the reference which states that by using statistical analysis the proportion of 70% for training and 30% for testing, it gives the best performance (Nguyen et al., 2021). Author action: We refined the manuscript by adding adding a reference paper that explained the selection of the 70:30 proportion in the Materials & Methods section, the Topology Evaluation subsection in the first paragraph, which is based on a paper written by Nguyen et al. (2021) Concern # 4: Discuss the experimental environment. Author response: We have carefully revised our manuscript completely according to the suggestion. The environment used in this experiment is the Windows 10 operating system and the RapidMiner 9.5 tool to evaluate the topology. 
Author action: We refined the manuscript by adding experimental environment, namely using the Windows 10 operating system and Rapidminer 9.5 tools to calculate topology performance, as mentioned in the last section of the Materials & Methods section, Topology Evaluation subsection. Validity of the findings Concern # 1: Technical discussion on results must be mentioned. Author response: We have carefully revised our manuscript completely according to the suggestion. We have added a technical discussion about the effect of increasing the number of neurons and the number of hidden layers on the topological performance of neural networks. Author action: We refined the manuscript by adding a technical discussion related to the effect of increasing the number of neurons and the number of hidden layers on the performance of neural networks in the last paragraph of the Results and Discussion section, Performance Comparison subsection. Concern # 2: The authors must use the proper software to draw graphs. Author response: We have carefully revised our manuscript completely according to the suggestion. Although we have not changed the use of software to draw graphs, we have tried to make the graphs more clear. Author action: We refined the manuscript by adding horizontal and vertical grids in Figures 4 and 6 to make the graphs more clear. Additional comments Concern # 1: The English language must be improved. Author response: We have carefully revised our paper in accordance with the suggestion. We have made efforts to improve the accuracy of the use of English in our manuscript. Author action: We refined the manuscript by improving the accuracy of the use of English in various words and sentences in the manuscript. Concern # 2: All the key terms of the equations must be defined. Author response: We have carefully revised our paper in accordance with the suggestion. We have added all the key terms of the equation. Author action: We refined the manuscript by adding explanation of each key term that have not been defined. Concern # 3: Draw a flowchart for a better understanding of the proposed scheme. Author response: We have carefully revised our paper in accordance with the recommendations. We added a flow chart in the Materials & Methods section to make it easier to understand the proposed method. Author action: We refined the manuscript by adding a flowchart at the beginning of the Material & Methods section, as shown in Figure 1. The main stage of the proposed method is to determine the topology of neural networks which consists of three steps. Concern # 4: Add section number. Author response: In the manuscript template there is no section numbering so we did not add section number. Author action: - Concern # 5: Try to give the figures and tables in the appropriate places. Author response: In the manuscript template, figures and tables are presented separately with the manuscript text, so we did not make changes to the places for figures and tables. Author action: - Concern # 6: Include the following references to improve the reference section. Author response: We have carefully revised our paper in accordance with the recommendations. We used the two suggested paper as they relate to our research topic to improve the quality of the paper. Author action: We refined the manuscript by adding two suggested papers for reference. Reviewer 2 (Subha Mastan Rao T) - There is no concern that requires us to revise the paper. Reviewer 3 (Anonymous) - There is no concern that requires us to revise the paper. "
Here is a paper. Please give your review comments after reading it.
241
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Data transmissions using the DNP3 protocol over the internet in SCADA systems are vulnerable to interruption, interception, fabrication, and modification through man-in-the-middle (MITM) attacks. This research aims to improve the security of DNP3 data transmissions and protect them from MITM attacks.</ns0:p><ns0:p>Methods. This research describes a proposed new method of improving DNP3 security by introducing BRC4 encryption. This combines Beaufort encryption, in which plain text is encrypted by applying a polyalphabetic substitution code based on the Beaufort table by subtracting keys in plain text, and RC4 encryption, a stream cipher with a variable-length key algorithm. This research contributes to improving the security of data transmission and accelerating key generation.</ns0:p></ns0:div> <ns0:div><ns0:head>Results.</ns0:head><ns0:p>Tests are carried out by key space analysis, correlation coefficient analysis, information entropy analysis, visual analysis, and time complexity analysis. The results show that to secure encryption processes from brute force attacks, a key of at least 16 characters is necessary. IL data correlation values were IL1 = -0.010, IL2 = 0.006, and IL3 = 0.001, respectively, indicating that the proposed method (BRC4) is better than the Beaufort or RC4 methods in isolation. Meanwhile, the information entropy values from IL data are IL1 = 7.84, IL2 = 7.98, and IL3 = 7.99, respectively, likewise indicating that the proposed method is better than the Beaufort or RC4 methods in isolation. Both results also show that the proposed method is secure from MITM attacks. Visual analysis, using a histogram, shows that ciphertext is more significantly distributed than plaintext, and thus secure from MITM attacks. The time complexity analysis results show that the proposed method algorithm is categorized as linear complexity.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Supervisory control and data acquisition (SCADA) is a control system architecture comprising computers, networked data communications, and graphical user interfaces (GUI) for high-level process supervisory management, as well as other peripheral devices such as programmable logic controllers (PLC) and discrete proportional-integral-derivative (PID) controllers, which are used to interface with machinery or process plants. SCADA allows operators to change setpoint data from a distance, monitor processes, and obtain measurement information. It consists of three components: a remote terminal unit (RTU) to collect data from the sensor and remote device, a master terminal unit (MTU) equipped with a Human Machine Interface (HMI) for monitoring and control, and communication infrastructure to connect components <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>- <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. SCADA requires an industrial network protocol-a real-time communication protocol made to connect interface communication systems and instruments-to communicate with controlled devices. In SCADA, security is an important factor <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>, especially in critical industrial infrastructure. SCADA security systems connected to the internet are closely related to cybersecurity, and thus require special attention in critical industries. 
The development of cyberinfrastructure can improve the interconnection and security of smart networks <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>; indeed, several designs have sought to increase their investigative abilities <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref>. Cyberattacks have damaged critical facilities, including nuclear facilities <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. One example of protocol implementation is found in nuclear power plants (NPP), smart grid electricity facilities <ns0:ref type='bibr' target='#b9'>[8]</ns0:ref>- <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref> that are strictly regulated owing to safety and security considerations. It is thereby necessary to ensure the safety and security of SCADA implementation <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>. Several NPPs have been found vulnerable to cyberattacks, and various attempts have been made to improve nuclear security, using systematic <ns0:ref type='bibr' target='#b9'>[8]</ns0:ref> and dynamic mapping systems to determine which assets are most vulnerable to cyberattacks <ns0:ref type='bibr' target='#b10'>[9]</ns0:ref>. Famous industrial network protocols include Modbus <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, ICCP/TASE.2 <ns0:ref type='bibr' target='#b14'>[13]</ns0:ref>, Distributed Network Protocol/DNP3 <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[14]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[15]</ns0:ref>, and OPC <ns0:ref type='bibr' target='#b18'>[16]</ns0:ref>. Each has its unique characteristics, as well as its own unique methods for verifying the integrity and data security. The specific requirements of industrial networks often make protocols particularly vulnerable to interference <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. Among the above-mentioned standard network protocols, DNP3 is the most popular. Initially, DNP3 was designed for local network communication between MTU and RTU, or between RTU and IED. As most users implement DNP3 to communicate serially, the protocol was developed to work through routable protocols such as TCP/IP <ns0:ref type='bibr' target='#b20'>[18]</ns0:ref>. As a message protocol, DNP3 was developed to work over IP, thus making RTU communication more accessible via modem networks <ns0:ref type='bibr' target='#b21'>[19]</ns0:ref>. The advantages of DNP3 over other protocols include its reliability, efficiency, and real-time transference of data, as well as its implementation of several standard data formats and support for data synchronization (both of which make real-time transmission more efficient and reliable) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>, <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref>, <ns0:ref type='bibr' target='#b24'>[21]</ns0:ref>. However, the connection of SCADA systems to the internet network through the DNP3 protocol is also problematic, as its connections are potentially open to vulnerability loopholes. Such vulnerability can be used by attackers to steal the transmitted data. Furthermore, attackers may interrupt, intercept, fabricate, and modify the data, which would also hamper SCADA <ns0:ref type='bibr' target='#b9'>[8]</ns0:ref>- <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>, <ns0:ref type='bibr' target='#b24'>[21]</ns0:ref>- <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref>. 
It may therefore be concluded that data transmissions through DNP3 protocols in SCADA systems within critical industries are vulnerable to man-in-the-middle (MITM), brute force, eavesdropping, etc. This research aims to improve the security of plain data transmission through DNP3 protocols from the above-mentioned attacks.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p>Although SCADA is widely used due to its rapid development, recent studies have highlighted its vulnerabilities from a cyber-security and cyber-physical security perspective <ns0:ref type='bibr' target='#b15'>[14]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b25'>[22]</ns0:ref>, <ns0:ref type='bibr' target='#b26'>[23]</ns0:ref>, <ns0:ref type='bibr' target='#b31'>[27]</ns0:ref>, <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref>, <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref>- <ns0:ref type='bibr' target='#b42'>[37]</ns0:ref>. The threat of cyberattacks looms over SCADA systems that communicate with network protocols <ns0:ref type='bibr' target='#b25'>[22]</ns0:ref>, <ns0:ref type='bibr' target='#b26'>[23]</ns0:ref>, <ns0:ref type='bibr' target='#b31'>[27]</ns0:ref>, <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref>, <ns0:ref type='bibr' target='#b43'>[38]</ns0:ref>. As such, A. Hou et al. investigated the detection of attacks within SCADA systems and network protocols, including the possibility of detecting attacks in smart grids using Dirichlet <ns0:ref type='bibr' target='#b44'>[39]</ns0:ref>. Similarly, Mantere et al. analyzed the detection of anomalies that could breach security <ns0:ref type='bibr' target='#b25'>[22]</ns0:ref>. The analysis and detection of network anomalies must be carried out continuously and periodically, perhaps through modeling and simulation using the OPNET Modeler method <ns0:ref type='bibr' target='#b44'>[39]</ns0:ref>. Sniffing and DDOS attacks may be detected in smart grids with Novel IDS technology <ns0:ref type='bibr' target='#b29'>[26]</ns0:ref>. A simulation test was conducted using NS-2 in the Novell IEEE802.15.4 protocol, finding that security performance improved between 95.5 and 97% <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Several studies related to the security of DNP3 protocols in SCADA systems have also been conducted. Such studies have investigated how security systems can be implemented, tested, or developed in SCADA systems' DNP3 protocol. One such study tested communication protocol using DNPSec <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Shahzad, meanwhile, tested three layers of DNP3 using dynamic cryptographic buffers, showing that it could reduce the effectiveness of attacks and improve security <ns0:ref type='bibr' target='#b46'>[40]</ns0:ref>. Research about the application of security authentication using a Tamarin model in a smart grid, meanwhile, showed that a DNP3-SA protocol meets security standards <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>. The testing of the DNP3 protocol was also proposed by developing a Linux-based firewall for industrial control systems, especially in the power sector, using the U32 byte matching feature <ns0:ref type='bibr' target='#b41'>[36]</ns0:ref>. Research related to broadcast security messages from MTU to several other stations in DNP3-SAB protocol in SCADA systems showed that broadcasts can be secured from several attack vectors, such as modification, injection, spoofing, and replay <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref>. 
A vulnerability analysis of the DNP3 protocol was carried out by observing specific surface attacks within the Data Link and Application layers <ns0:ref type='bibr' target='#b42'>[37]</ns0:ref>.</ns0:p><ns0:p>Elsewhere, DNP3 safety analysis was conducted using a Colored Petri Nets model and Linear Discriminant Analysis, finding that the system can successfully detect and reduce abnormal activity <ns0:ref type='bibr' target='#b47'>[41]</ns0:ref>. The development of a security assessment framework for cyber-physical systems was conducted by investigating the DNP3 protocol's resistance to attacks, including passive network monitoring, response replay requirements, rogue interoperations, buffer flooding, and TCP veto <ns0:ref type='bibr' target='#b16'>[15]</ns0:ref>. Through vulnerability and penetration testing, MITM attacks were modeled using grid technology to evaluate cybersecurity threats to SCADA systems implementing DNP3 protocols <ns0:ref type='bibr' target='#b48'>[42]</ns0:ref>. Two attack scenarios were used: an unsolicited message attack and injection data collection <ns0:ref type='bibr' target='#b50'>[43]</ns0:ref>. Cryptography aims to ensure that the data sent is correct and accessed by the right people. Research on secure network protocols was carried out by applying encryption using the Diffie-Hellman method <ns0:ref type='bibr' target='#b51'>[44]</ns0:ref>. Premnath et al., testing the NTRU cryptographic method, found that it has a faster runtime than RSA at the same security level <ns0:ref type='bibr' target='#b52'>[45]</ns0:ref>. Testing of new encryption methods was also carried out using an i-key to implement a secure communication system <ns0:ref type='bibr' target='#b53'>[46]</ns0:ref>. The use of an i-key as a cryptographic protocol during dynamic re-locking was carried out by testing and simulating the IEEE802.1x standard protocol using WEP system encryption <ns0:ref type='bibr' target='#b54'>[47]</ns0:ref>. It is claimed that such research might prevent MITM attacks because the decryption processes can only be executed by authorized senders and recipients, i.e. those who update the i-key. The security of the DNP3 protocol in SCADA was improved using the bump-in-the-wire method, which consists of key distribution, cryptography, and intrusion detection <ns0:ref type='bibr' target='#b55'>[48]</ns0:ref>. Using IDS, Jain tested a combination of syntactic and semantic detection techniques, with the Diffie-Helman method used for locking and DNPSec used as communication protocol. Testing showed that DNP3 security can be improved through efficient management and crypto-key distribution, while DNPSec can identify other packets on the DNP3 network <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Furthermore, research into dynamic cryptographic buffers was conducted using eight remote units divided into two stations (S-bed1) and sixteen isolated groups divided into two stations (S-bed2). The study was successful in preventing an MITM attack <ns0:ref type='bibr' target='#b56'>[49]</ns0:ref>. Communication between MTU and RTU was subsequently verified using openDNP3, with Scapy equipment used for packet manipulation and penetration testing. The results showed that OpenDNP3 v1.1.0 supports the prevention of attacks that affect confidentiality, such as MITM <ns0:ref type='bibr' target='#b57'>[50]</ns0:ref>. Protecting the integrity and confidentiality of data is important for any network protocol <ns0:ref type='bibr' target='#b58'>[51]</ns0:ref>. 
Elliptic cryptography and hash functions have been developed to analyze performance, as has the 3PAKE protocol, and third-party key exchange authentication. Tests were carried out using AVISPA simulator software, showing that the protocol can efficiently prevent active or passive attacks <ns0:ref type='bibr' target='#b59'>[52]</ns0:ref>. Sankhanil Dey et al. analyzed the crypto security of four-bit and eight-bit crypto Sboxes, finding that S-box crypto security is better than DES and AES <ns0:ref type='bibr' target='#b60'>[53]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.'>Comparison to Other Hybrid Cipher Approaches</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To prevent cyberattacks, any data communication system must include strong data transmission security, perhaps using cryptography <ns0:ref type='bibr' target='#b61'>[54]</ns0:ref>. Every cryptographic scheme has its own strengths and weaknesses, and thus the application of a single cryptographic technique has severe shortcomings. To secure data, without compromising security, a cost-effective symmetric encryption method is often used. However, in such processes, proper key distribution is problematic <ns0:ref type='bibr' target='#b62'>[55]</ns0:ref>. An asymmetric scheme has potential. Unfortunately, however, the process is slower and consumes more computer resources than symmetric encryption. The integration of several cryptographic methods is therefore proposed to provide efficient data security while simultaneously addressing the problem of key distribution, thereby overcoming each scheme's security weaknesses <ns0:ref type='bibr' target='#b62'>[55]</ns0:ref>. Some previous studies have utilized asymmetric cryptography; for example, Purevjav <ns0:ref type='bibr' target='#b64'>[56]</ns0:ref> and Harba <ns0:ref type='bibr' target='#b65'>[57]</ns0:ref> employed Rivest Shamir Adleman (RSA), while N. Hong <ns0:ref type='bibr' target='#b66'>[58]</ns0:ref> and Xin <ns0:ref type='bibr' target='#b67'>[59]</ns0:ref> used Elliptic Curve Cryptography (ECC). Other studies have applied symmetric cryptography, i.e., Altigani <ns0:ref type='bibr' target='#b69'>[60]</ns0:ref>, D'souza <ns0:ref type='bibr' target='#b71'>[61]</ns0:ref>, Xin <ns0:ref type='bibr' target='#b67'>[59]</ns0:ref>, and Harba <ns0:ref type='bibr' target='#b65'>[57]</ns0:ref> used Advanced Encryption Standard (AES), while Z. Hong <ns0:ref type='bibr' target='#b73'>[62]</ns0:ref> implemented the Data Encryption Standard (DES) in combination with Rivest Code 4 (RC4). Singh <ns0:ref type='bibr' target='#b76'>[63]</ns0:ref> similarly used symmetric encoding, while Purevjav <ns0:ref type='bibr' target='#b64'>[56]</ns0:ref> combined a public key encryption system with a symmetric hash function, thereby ensuring that messages encrypted with the public key could only be decrypted reasonably quickly using the private key. Harba <ns0:ref type='bibr' target='#b65'>[57]</ns0:ref> proposed a method of protecting data transfer using a hybrid technique: to ensure secure transmission, a symmetric AES algorithm was used to encrypt files; an asymmetric RSA algorithm was used to encrypt AES passwords; HMAC was used to encrypt passwords and symmetric data. N. 
Hong <ns0:ref type='bibr' target='#b66'>[58]</ns0:ref> used the ECC password algorithm and the SM2 handshake agreement to solve security problems in the information transmission process, but failed to conduct a performance evaluation. Xin <ns0:ref type='bibr' target='#b67'>[59]</ns0:ref> proposed a mixed approach to encryption, integrating MD5 with ECC and AES, but again failed to evaluate performance results. Altigani <ns0:ref type='bibr' target='#b69'>[60]</ns0:ref> proposed combining AES with the Word Shift Coding Protocol steganography protocol, producing a model that improved the confidentiality of messages and overall system security. D'souza <ns0:ref type='bibr' target='#b71'>[61]</ns0:ref> proposed a hybrid approach, combining Dynamic Key Generation and Dynamic S-box Generation with an AES algorithm. This method used Dynamic Key Generation to add data complexity, thereby increasing confusion and diffusion in the ciphertext. Z. Hong <ns0:ref type='bibr' target='#b73'>[62]</ns0:ref> offered a hybrid crypto algorithm that used the DES and RC4 encryption algorithms to encrypt communication data, but did not perform a performance evaluation. Singh <ns0:ref type='bibr' target='#b76'>[63]</ns0:ref> proposed a hybrid encryption scheme that made it difficult for attackers to learn information from messages sent through insecure data transmissions. Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> provides a comparison of several approaches to securing data transmission offered by previous studies, including the method provided by this study. All of these studies aim to secure transmission data using hybrid cryptography and generate multiple keys to increase security. Likewise, although almost all of these studies provide a layered or graded approach to security, few provided a security analysis.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1.'>The BRC4 Super Encryption Model</ns0:head><ns0:p>This study introduces BRC4 (Beaufort RC4) super encryption, a combination of Beaufort and RC4 encryption. Beaufort encryption converts plain text to a poly-alphabetic substitution code based on the Beaufort table, and although this algorithm is simple in its calculation processes, it still generates secure random numbers <ns0:ref type='bibr' target='#b78'>[64]</ns0:ref>. RC4, meanwhile, is a stream cipher with a variablelength key algorithm, which improves the confidentiality, randomness, and security of key streams <ns0:ref type='bibr' target='#b79'>[65]</ns0:ref>. This research was conducted by simulating BRC4 super encryption on PLC program data (in the instruction list [IL] format) from industrial machines. This simulation is used to anticipate MITM attacks; in other words, BRC4 super encryption (a combination of Beaufort and RC4 encryption) is used to increase security and avoid MITM attacks (see Fig. <ns0:ref type='figure'>1</ns0:ref>). The BRC4 super-encryption model consists of two models: an encryption model, i.e. a combination of the Beaufort and the RC4 encryption processes (see Fig. <ns0:ref type='figure'>2</ns0:ref>), and a decryption model, i.e. a variety of RC4 and Beaufort decryption processes (see Fig. <ns0:ref type='figure'>3</ns0:ref>). The encryption model is installed in the data transmission section (RTU), while the decryption model is established in the data receiving section (MTU). 
This simulation model is built and tested using hardware with the following specifications: i7-6500U processor, 16GB RAM, and Windows 10 operating system (64-bit).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>Encryption Model Design</ns0:head><ns0:p>Simulation is performed using Instruction List (IL) data from the Programmable Logic Controller (PLC) of several industrial machines. This IL data is in the form of an input-or-output logic command. Several IL data are used, with the following specifications: IL1 = 188 lines, IL2 = 881 lines, and IL3 = 4,571 lines. Each line consists of approximately 15 characters; as such, IL1 consists of 1,086 characters, IL2 consists of 7,158 characters, and IL3 consists of 33,046 characters.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1.'>Data Reading</ns0:head><ns0:p>The plaintext data are in the form of IL, i.e. a PLC program containing lists of sequence logic instructions in the form of sequentially executed text. In this paper, the data used is in the IL1 format.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2.'>Converting to one-line format</ns0:head><ns0:p>The plaintext data (IL1), presented in 188 lines and 1 column, is converted into a one-line array. To separate the lines, a semicolon (;) is used. This produces the following: Next, it is necessary to convert data from string format to numeric format to perform mathematical operations.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.3.'>Initial key generation</ns0:head><ns0:p>This process aims to generate a random initial key, one that cannot easily be detected by attackers. Random initial key generation is conducted during each data transmission process. Generation is carried out in the range of 1 to 256 bytes, with the length of the key depending on data size (n) and character length of the initial key (pk, i.e. 64, 128, 256, 512, 1,024, 2,048 bits); the longer the initial key, the more secure. Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> shows that, if the length of the plaintext (H) is less than or equal to the size of the key, the length of the key will equal the length of the data (n). Otherwise, if the length of the data is greater than the length of the key, then the length of the key is equal to the length of the chosen initial key (pk, 256).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4.'>Generation of Beaufort Encryption Key</ns0:head><ns0:p>In Beaufort key generation, if the initial key is shorter than plaintext, the initial key is generated repeatedly along with the plaintext. Such generation uses a keystream generator approach, following equation ( <ns0:ref type='formula'>1</ns0:ref>).</ns0:p><ns0:p>(1) </ns0:p><ns0:formula xml:id='formula_0'>&#119896; &#119894; = ( &#119896; &#119894; -&#119898; + &#119896; &#119894; -1 )</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.2.5.'>Beaufort Encryption</ns0:head><ns0:p>Beaufort encryption uses the IL numeric format as plaintext and the Beaufort key. 
It is conducted using the following equation:</ns0:p><ns0:p>(2) Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_1'>&#119864; &#119896; (&#119875; 1, &#119875; 2, &#8230;, &#119875; &#119898; ) = (&#119870; 1 -&#119875; 1, &#119870; 2 -&#119875; 2, &#8230;, &#119870; &#119898; -&#119875; &#119898; ) &#119898;&#119900;&#119889; 256 (3) &#119863; &#119896; (&#119862; 1, &#119862; 2, ..,&#119862; &#119898; ) = (&#119870; 1 -&#119862; 1, &#119870; 2 -&#119862;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where E (encryption), D (decryption), P (plain text), C (cipher), K (key). The encryption process begins with reading the plaintext (n), namely the IL numeric format. A subtraction operation is then carried out on each plaintext character, using a Beaufort key with a base 256 modulo, thereby forming a Beaufort cipher (CB) is formed, as shown in Fig. <ns0:ref type='figure'>6</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.6.'>Generation of RC4 Encryption Key</ns0:head><ns0:p>RC4 Key Generation consists of three steps. First, keystream generation is used to form the first cipher block. Second, permutation generation is conducted with a key scheduling algorithm function. Third, a pseudo-random number generation algorithm is used.</ns0:p><ns0:p>A key scheduling algorithm is used to initialize the permutations of array S. Key length is defined as the number of bytes contained within a key, that is, between 1 and 256. The S array is initialized to the permutation identity; the S array is processed to 256 iterations. After retrieving the random S array, it is re-initialized, with the values of i and j being zero. The PRGA process subsequently generates an RC4 key, by incrementing i, adding the values S[i] and S[j], and swapping two values. An S value with an index equal to the numeric value S[i], and S[j], modulo 256, yields the RC4 key (see Fig. <ns0:ref type='figure'>7</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.7.'>RC4 Encryption</ns0:head><ns0:p>RC4 encryption begins by reading the Beaufort cipher (CB) as plain text (Fig. <ns0:ref type='figure'>8</ns0:ref>). The system checks the plaintext (n); if n&lt;256, RC4 encryption is carried out against plain text n. Otherwise, blocks will be created using (n/256), rounded up and stored in STL. Subsequently, n is checked again, if n &lt;256, the RC4 encryption process is conducted along with the plaintext characters (n). However, if n&gt; 256, the permutation process in the next block forms the following array: S[i] and S <ns0:ref type='bibr'>[j]</ns0:ref>. The values are then exchanged.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.8.'>The insertion of the initial key</ns0:head><ns0:p>This section describes the process of inserting the key and key information by including them behind the cipher through a combination of encryption processes. The key entered is the initial key, produced by random generation, and the key information is the length of the preselected key. The key and key information are used to perform decryption when the password data has been received. A randomly generated initial key is used to avoid MITM attacks. While an attacker was to obtain ciphertext data, a combination of the cipher, initial key, and key information, said attacker would not obtain any information, as the ciphertext is random. If the attacker manages to separate the initial key from the ciphertext, the attacker will still not be able to read the cipher. 
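The key material actually used by these encryption steps is derived from the random initial key through the keystream generation of equation (1); a minimal sketch of that extension is given below, using a hypothetical 256-byte demo key rather than a key from the paper's simulation.

```python
def extend_key(initial_key, n):
    """Keystream generation of equation (1): k_i = (k_{i-m} + k_{i-1}) mod 256,
    repeated until the key is as long as the plaintext (n characters).
    m is the length of the initial key."""
    m = len(initial_key)
    k = list(initial_key)
    while len(k) < n:
        k.append((k[len(k) - m] + k[-1]) % 256)
    return k

# Worked example from the paper: with a 256-character initial key in which
# k_1 = 206 and k_256 = 174, key number 257 is (206 + 174) mod 256 = 124.
demo = [206] + [0] * 254 + [174]      # hypothetical initial key, for illustration only
assert extend_key(demo, 257)[256] == 124
```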
The initial key is a different character length than the decryption key and can only be used after three generations: keystream generation, Beaufort decryption generation, and RC4 decryption generation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.'>Decryption Model Design</ns0:head></ns0:div> <ns0:div><ns0:head n='3.3.1.'>Initial Key Separation</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>After the ciphertext is received, it is separated into three components: the cipher, the initial key, and the key information (containing the key character length). The separation process is shown in Fig. <ns0:ref type='figure'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2.'>Beaufort Decryption Key Generation</ns0:head><ns0:p>This section discusses the generation of Beaufort's decryption keys, which follow the process illustrated in Fig. <ns0:ref type='figure' target='#fig_1'>5</ns0:ref>. First, the system calculates the cipher data (n) and initial key (m). If n is more than m, one is added to the variable i; otherwise, m is kept in variable i, and the original key is saved as the Beaufort key. If the length of n is greater than i, keystream generation is performed.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.3.'>RC4 Decryption Key Generation</ns0:head><ns0:p>After generating the Beaufort decryption key, the RC4 key is produced through several stages: Keystream Generation (KG), Key Scheduling Algorithm (KSA) generation, and pseudo-random generation algorithm (PRGA). Keystream generation aims to form the first block array and obtain a block length of up to 256 characters. A random block array key scheduling algorithm is generated based on the previous key (i.e. the Beaufort decryption key). Finally, the RC4 decryption key is obtained; this key is used for the final decryption process. In its process, the generation of the RC4 decryption key resembles the generation of the RC4 encryption key. This approach is described in Fig. <ns0:ref type='figure'>7</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.4.'>RC4 Decryption</ns0:head><ns0:p>RC4 decryption is conducted, using the RC4 key, to separate the cipher data. The RC4 decryption process is described in Fig. <ns0:ref type='figure'>10</ns0:ref>. This process begins with the reading of the number of cipher characters. The subsequent processes through which keystreams and key scheduling algorithms are generated are intended to randomize the block array position. This is followed by generating a pseudo-random algorithm to obtain the RC4 key. Therefore, if n &lt;256, the RC4 decryption process will produce the Beaufort cipher (CB). If n&gt;256, a block is formed to repeat the permutation of each character in the key array and S array, as well as to swap the values of S[i] and S[j]. As a result, RC4 decryption (i.e. the Beaufort cipher) is obtained.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.5.'>Beaufort Decryption</ns0:head><ns0:p>This section describes Beaufort decryption, with the Beaufort cipher (CB) used as input, using the Beaufort Key (KB). The detailed process is shown in Fig. <ns0:ref type='figure'>11</ns0:ref>. This process begins by calculating the length of the CB, as stored in variable n. The character length of the initial key is stored in variable m. 
If the length of the ciphertext is greater than the length of the initial key (n&gt;m), the initial key is reproduced until the lengths are equal. The decryption process is done by adding each key character to CB, then creating an array to store the results of decryption. This process produces the plaintext (New_IL) in numeric format.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.6.'>Design of The Final Model</ns0:head><ns0:p>This stage converts the numeric-type plaintext to the string type. It then changes the original oneline plain text format, and separator (;), into the original multi-line format. The process is detailed in Fig. <ns0:ref type='figure'>12</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Results and Discussion</ns0:head><ns0:p>This process reproduces IL data from the BRC4 super encryption system, which aims to increase the security of data transmission in SCADA systems via the DNP3 protocol. Its product is equivalent to the data sent via DNP3. As such, data can successfully be encrypted and decrypted using the BRC4 super-encryption method.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.'>Keyspace Analysis</ns0:head><ns0:p>Keyspace analysis involves the analysis of keyspace within which the encryption system secures cipher data from brute force attacks. Brute-force attacks work by counting every possible combination that can form a password, and then testing them to determine the correct password. As the lengths and combinations of passwords grow, the amount of time it takes to find the correct password increases exponentially. Encryption systems must have very large keyspaces (greater than 2 100 bits) to render brute force attacks ineffective <ns0:ref type='bibr' target='#b80'>[66]</ns0:ref>, <ns0:ref type='bibr' target='#b81'>[67]</ns0:ref>. The proposed model consists of several key generators. The initial key generator, randomly created for each session, has key lengths of 64, 128, 256, 512, 1,024, and 2,048 bits. Each initial key character is generated from 256 bytes ASCII code, and thus has a spread value of 1 to 256. A random initial key with a length of 16 characters (256 bits), using 256-byte ASCII code, would have a keyspace of 256 16 , equivalent to 2 128 . As such, an initial key that is 16 characters in length would be secure from brute force attacks, as would longer keys (i.e. 256, 512, 1,024, and 2,048 bits), as shown in Table <ns0:ref type='table'>2</ns0:ref> <ns0:ref type='bibr' target='#b82'>[68]</ns0:ref>. Table <ns0:ref type='table'>2</ns0:ref> shows that a key 64 bits in length would not be secure from brute force attacks; a minimum key size of 128 bits is necessary to guarantee a secure encryption process.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.'>Correlation Coefficient Analysis</ns0:head><ns0:p>Correlation coefficient analysis aims to determine the correlation between plaintext and ciphertext data. If the correlation value is equal to 1, it means that the two data are the same. Conversely, if the correlation value is lower than one (or close to 0), the two data are different; there is thus no relationship, and increased randomness (see Table <ns0:ref type='table'>3</ns0:ref>). The less related the text, the better, as increased randomness means increased difficulty deciphering the relationship between plain text and encoded text <ns0:ref type='bibr' target='#b81'>[67]</ns0:ref>, <ns0:ref type='bibr' target='#b83'>[69]</ns0:ref>. 
The correlation between plaintext and ciphertext data is formulated as: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In this formula, r is the correlation value, x is the plaintext data, and y is the ciphertext data. Based on equation ( <ns0:ref type='formula'>4</ns0:ref>), it can be seen that the proposed method produces a correlation value of -0.010 for IL1 data, 0.006 for IL2 data, and 0.001 for IL3 data (see Table <ns0:ref type='table'>4</ns0:ref>). Referring to Pearson (Table <ns0:ref type='table'>3</ns0:ref>), the correlation value for all three data may be categorized as 'no correlation,' meaning that the plaintext and ciphertext data are not the same (i.e. uncorrelated). Furthermore, the correlation value for the three data is closer to zero than the correlation value for data encrypted using the Beaufort cipher or RC4 in isolation. This shows that BRC4 superencryption can improve the security of data transmission.</ns0:p></ns0:div> <ns0:div><ns0:head>4.3.</ns0:head></ns0:div> <ns0:div><ns0:head>Information Entropy Analysis</ns0:head><ns0:p>In cryptographic theory, information entropy is defined as a measure of the randomness of the amount of information in a message. Entropy is expressed in units of bits to express the degree of information randomness. Under random conditions, encrypted information with ciphertext data should have an optimum entropy value close to &#8776;8; an entropy close to 8, thus, indicates that an encryption system is designed to be secure from MITM attacks <ns0:ref type='bibr' target='#b84'>[70]</ns0:ref>. The entropy value may be determined using the following equation <ns0:ref type='bibr' target='#b85'>[71]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_2'>(5) &#119867; =-&#8721; &#119899; &#119896; = 0 &#119875;(&#119896;) &#119871;&#119900;&#119892; 2 (&#119875;(&#119896;))</ns0:formula><ns0:p>With H as the entropy value, n is the number of different symbols or codes in a message, and P(k) is the probability of symbol occurrence in the ciphertext. Using the proposed method, entropy values of 7.84 (for IL1), 7.98 (for IL2), and 7.99 (for IL3) were returned. The proposed model's IL data has an entropy value close to 8, which indicates a high degree of randomness, and thus the ciphertext is secure from MITM attacks. A comparison of the entropy values for data encrypted using the Beaufort, RC4, and BRC4 models is provided in Table <ns0:ref type='table'>5</ns0:ref>. Data encrypted through BRC4 super-encryption is closest to 8.00 in value, which indicates that BRC4 super-encryption is more secure than Beaufort or RC4 encryption alone.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4.'>Visual analysis</ns0:head><ns0:p>Visual analysis aims to measure the results of IL data encryption using a histogram and compare the distribution of plaintext and ciphertext data. When a ciphertext histogram is more diverse and differently distributed than the plaintext histogram, it can be concluded that the ciphertext does not provide any clues or information that can be deciphered by MITM attacks.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>13</ns0:ref> shows the histogram results for IL data (the first 500 of 33,046 characters). It notes that data distribution between plaintext and ciphertext is very varied, and thus ciphertext data is secure from MITM attacks. The plaintext is distributed in the numeric range of 10 to 99, while the ciphertext has a distribution of 1 to 256. 
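The correlation and entropy figures reported in Sections 4.2 and 4.3 can be recomputed with a short script; the sketch below is not the paper's implementation and assumes the plaintext and ciphertext are compared as equal-length byte sequences.

```python
import math
from collections import Counter

def pearson_correlation(x, y):
    """Pearson correlation of equation (4) between plaintext values x and ciphertext values y."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sx2, sy2 = sum(a * a for a in x), sum(b * b for b in y)
    num = n * sxy - sx * sy
    den = math.sqrt((n * sx2 - sx * sx) * (n * sy2 - sy * sy))
    return num / den

def shannon_entropy(data):
    """Equation (5): H = -sum P(k) log2 P(k); a value close to 8 bits per byte
    indicates near-random ciphertext."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Correlation values near 0 and entropy values near 8 bits per byte reproduce the qualitative behaviour described above; the exact figures depend on the IL data used.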
As such, the ciphertext is more secure from MITM attacks than texts encrypted using Beaufort or RC4 in isolation. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.'>Time Complexity Analysis</ns0:head><ns0:p>Computer Science steps, procedures/functions, control steps, and loops <ns0:ref type='bibr' target='#b81'>[67]</ns0:ref>. For compatibility, the symbols Ci and &#8721; are used for calculation, as follows: C1 is used to symbolize the assignment; C2 is used to symbolize the number of arithmetic operators used; C3 is used to symbolize built-in procedures/functions such as input, output, or user-defined procedures/functions; C4 is used to symbolize the loop operation; and C5 is used to symbolize the structure of the branching conditions. Finally, &#8721; is used to represent the number of steps involved in each Ci symbol.</ns0:p><ns0:p>The results of the time complexity calculation for the encryption algorithm (see Appendix 1) are as follows: This analysis shows that the encryption and decryption algorithms may be categorized as having linear complexity, meaning that processing time corresponds positively and linearly with data size. In other words, if the algorithm requires n steps to handle data of n size, it will need 2n steps for data of 2n size.</ns0:p><ns0:formula xml:id='formula_3'>T (n) = (</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.6.'>Cryptanalysis Solutions</ns0:head><ns0:p>Super encryption BRC4 is a proposed method that combines the Beaufort and RC4 ciphers, wherein four symmetric key generators are generated dynamically every session. We must ascertain whether this proposed method can overcome the weaknesses of the Beaufort and RC4 ciphers in isolation. According to Hughes <ns0:ref type='bibr' target='#b86'>[72]</ns0:ref>, the Beaufort cipher-as with the Vigenere cipher-has several weaknesses. In their case, Hughes used neither the Vigen&#232;re nor the Vernam ciphers, as both needed to meet the same three requirements to comply with Shannon's definition of complete secrecy. According to Alallayah <ns0:ref type='bibr' target='#b87'>[73]</ns0:ref>, the Vigenere cipher offers a combination of lowercase alphabetical characters, with a maximum length of 676 <ns0:ref type='bibr'>(26*26)</ns0:ref> bytes. On the other hand, Fluhrer <ns0:ref type='bibr' target='#b88'>[74]</ns0:ref> demonstrated and proved that the RC4 cipher is completely insecure on the Wired Equivalent Privacy (WEP) protocol, with a fixed secret key combined with an initialization vector (IV) modifier for both the 24 and 128-bit modifiers (which are known to encrypt different messages). All of these weaknesses have been anticipated by the proposed method, as shown in Tables <ns0:ref type='table' target='#tab_5'>6 and 7</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Conclusions</ns0:head><ns0:p>This study shows that a key with a length of 8 characters is less secure from brute force attacks. Only keys of at least 16 characters are secure from brute force attacks. Correlation values of -0.010, 0.006, and 0.001, are produced for IL1, IL2, and IL3, respectively, indicating that the proposed method (BRC4) is better than Beaufort encryption or RC4 encryption in isolation. Meanwhile, information entropy values of 7.84, 7.98, and 7.99 are returned for IL1, IL2, and IL3, respectively, also indicating that that the proposed method (BRC4) is better than Beaufort encryption or RC4 encryption in isolation. 
Visual analysis, using a histogram, shows that the distribution of the ciphertext is significantly more varied than the plaintext, and thus it is secure from MITM attacks. Time complexity analysis shows that the proposed method is categorized as linear complexity. Moreover, previous studies of hybrid security approaches have also used multiple key generation, and almost all have applied an in-depth security system. However, not all have evaluated the performance, keyspace, correlation, information entropy, and time complexity, all of which are significant measures for evaluating data transmission performance. The proposed method uses this metric to measure performance and analyze security based on visual analysis, keyspace, entropy, correlation, and time complexity. Further research should explore other possible super-encryption algorithms for improving the security of data transmission in SCADA systems. -The key must be the same length as plaintext, so the key will be repeated until it is the same length as the plaintext.</ns0:p><ns0:p>&#61692; The system generates a key using the keystream generation equation until it has the same length as the plaintext, and thus the key is random and not easily solved.</ns0:p><ns0:p>-The keys have to be random.</ns0:p><ns0:p>&#61692; The system generates a random initial key for each session, which is always different.</ns0:p><ns0:p>-The key must not be reused.</ns0:p><ns0:p>&#61692; The system generates a random initial key for each session, which is always different.</ns0:p><ns0:p>-The equations used are based on the standard alphabet (modulo 26).</ns0:p><ns0:p>&#61692; The system uses modulo 256, resulting in increasingly random values of 256 bytes.</ns0:p><ns0:p>-Possible keys are combinations of lowercase letters, with a maximum length of 676 bytes.</ns0:p><ns0:p>&#61692; Possible key variations are derived from ASCII code, with a maximum length of 65,536 bytes. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science 1</ns0:note><ns0:p>Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref> Cryptanalysis solutions for the weaknesses of the RC4 cipher</ns0:p><ns0:p>The weaknesses of the RC4 cipher Proposed method (BRC4)</ns0:p><ns0:p>-The same key tends to be used for all blocks in the same data package.</ns0:p><ns0:p>&#61692; The system generates a random initial key (K1), which is different every session, then generates further keys (K2, K3, K4) using a keystream generation equation until it fills up the array (K). There is thus no repetition of keys. -The original RC4 key is limited to 40 bits, and the Initialization Vector (IV) is limited to 24 bits.</ns0:p><ns0:p>&#61692; The system generates a random initial key of up to 2,048 bits (256 bytes), or even larger.</ns0:p><ns0:p>-RC4 is effective with large keys, and thus attacking a PRGA appears ineffective, even when the most well-known attacks take over 2 700 seconds. It is weak for short keys, as the key is repeated until it fills the array (K) to a full 256 bytes.</ns0:p><ns0:p>&#61692; The system generates a random initial key (K1), which is different every session, then generates further keys (K2, K3, K4) using a keystream generation equation until it fills up the array (K). 
There is thus no repetition of keys.</ns0:p><ns0:p>-For each PRGA permutation, the value of the array (S) changes at two locations (at the most).</ns0:p><ns0:p>&#61692; The system performs different permutations for every block array, resulting in more varied random values for array blocks.</ns0:p><ns0:p>-Permutation is performed only once for all blocks formed, forming a pattern that can be learned by attackers.</ns0:p><ns0:p>&#61692; The system performs different permutations for every block array to achieve a random value of 256 bytes. As such, the system performs permutations in the first array block, continues the permutation in the second array block, third, and so on until the last block, and as such it generates a random value that varies for every block array.</ns0:p><ns0:p>-It is possible for the same S-Box to be used. The same pseudorandom value may be generated repeatedly, as the user key is repeated to fill the 256-byte array. If a key is used to encrypt 8 bytes, it will thus be repeated 32 times to fill the array.</ns0:p><ns0:p>&#61692; If the key used for permutation is only 8 bytes in length, the system uses the keystream generator to generate fill the key byte array without repeating the initial key.</ns0:p><ns0:p>-An attacker who manages to obtain multiple ciphertext packets can obtain several bytes of the original message by performing XOR operations on two ciphertext packets.</ns0:p><ns0:p>&#61692; To perform encryption, the system generates a random initial key (K1), generates a keystream (K2), generates a key-scheduling algorithm (K3), and generates a pseudorandom key (K4). As such, even if an attacker PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For example, if an attacker successfully intercepts two different encrypted messages that use the same key, the attacker may perform an XOR operation to remove the key sequence's effect. If the attacker manages to uncover the plaintext of one encrypted message, the attacker will easily find other plaintext messages without knowing the correct key sequence.</ns0:p><ns0:p>can obtain the first and the second ciphertext, XOR operations still cannot be used to eliminate the effects of the key sequence, as the initial keys used for the first (K1.1) and second (K1.2) ciphertexts are different. Likewise, K1.1 and K1.2 experience further generation to produce K4.1 and K4.2, which are increasingly different.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>LD M8002;SET S0;STL S0;ZRST S20 S80;RST M0;OUT T0 K1;LD T0;ANI X001;ANI X012;MPS;ANI X015;SET S20;MPP;AND X015;SET S25;SET S30;STL S20;LD X013;ANI PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021) Manuscript to be reviewed Computer Science X002;OUT Y006;LD X014;ANI X003;OUT Y007;LD X015;OR X001;SET S0;LD X012;SET S90;RST S20;STL S90;LD M8013; &#8230;.etc.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5explains the process through which the Beaufort encryption key is generated. 
This process produces the Beaufort key for encryption, with the first 256 characters equal to the initial key and the generated results used for keys 257 to 1,086.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>( 4 )</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>&#119955; = &#119951; (&#8721;&#119961;&#119962;) -(&#8721;&#119961;) (&#8721;&#119962;) [&#119951;&#8721;&#119961; &#120784; -(&#8721;&#119961;) &#120784; ] [&#119951;&#8721;&#119962; &#120784; -(&#8721;&#119962;) &#120784; ] PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Time complexity, or T(n), is measured based on the number of computational steps required to run the algorithm as a function of the input size (n). Calculations are based on multiple operator PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,156.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,263.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,333.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,293.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,232.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,277.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,418.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,306.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,308.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,305.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>&#119896; 257 = ( &#119896; 1 + &#119896; 256 ) &#119898;&#119900;&#119889; 256</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#119898;&#119900;&#119889; 256</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119896; &#119894;</ns0:cell><ns0:cell>is the key number-i,</ns0:cell><ns0:cell>&#119896; &#119894; -&#119898;</ns0:cell><ns0:cell>is the key number-i subtracted by the initial key (m), and key</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>&#119896; 257 = ( 206 + 174 ) &#119898;&#119900;&#119889; 256 &#119896; 257 = 380 &#119898;&#119900;&#119889; 256 &#119896; 257 = 124</ns0:cell></ns0:row></ns0:table><ns0:note>number-i subtracted by 1. The initial key length is 256 characters. 
Since the plaintext is 1,086 characters long, a Beaufort key is generated to be equal in length to the plaintext. Key numbers 257 to 1,086 are generated based on equation<ns0:ref type='bibr' target='#b0'>(1)</ns0:ref>. For example:</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>2, &#8230;, &#119870; &#119898; -&#119862; &#119898; ) &#119898;&#119900;&#119889; 256</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Hybrid security approach to data transmission</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Securing Data Transmission</ns0:cell><ns0:cell>Methods</ns0:cell><ns0:cell>Performance Measuring</ns0:cell><ns0:cell>Provides layered security</ns0:cell><ns0:cell>Provide security analysis</ns0:cell></ns0:row><ns0:row><ns0:cell>N. Hong [54]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>Handshake agreement (SM2) and ECC.</ns0:cell><ns0:cell>No performance evaluation.</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Altigani [56]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>AES and steganography Word Shift Coding.</ns0:cell><ns0:cell>Encryption time and extraction time.</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Key exchange time,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Xin [55]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>MD5, AES and ECDH.</ns0:cell><ns0:cell>number of signature, number of time; key length, time of signature,</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>verification time.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Symmetric</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Singh [59]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>encipherment and middle value</ns0:cell><ns0:cell>Encryption and decryption test</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>algorithm.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Symmetric cipher</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Purevjav [52]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>Ping Pong-128, RSA and hash function</ns0:cell><ns0:cell>Encryption and decryption test.</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MD5.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Z. 
Hong [58]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>DES and RC4.</ns0:cell><ns0:cell>No evaluation.</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Harba [53]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>AES, RSA and HMAC.</ns0:cell><ns0:cell>Ciphertext size, encryption time</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>AES and Dynamic</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>D'souza [57]</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>Key Generation and Dynamic S-box</ns0:cell><ns0:cell>Encryption and decryption test.</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Generation.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Keyspace analysis,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>Super Encryption BRC4, Dynamic generation. Symmetric Four-key-</ns0:cell><ns0:cell>Correlation coefficient analysis, Encryption and analysis, Time complexity analysis, Information Entropy analysis, Visual</ns0:cell><ns0:cell>&#8730;</ns0:cell><ns0:cell>&#8730;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>decryption test.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Cryptanalysis solutions for the weaknesses of the Vigenere (Beaufort) cipher</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Weaknesses of the Vigenere (Beaufort) and</ns0:cell><ns0:cell>Proposed method (BRC4)</ns0:cell></ns0:row><ns0:row><ns0:cell>Vernam ciphers</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Cryptanalysis solutions for the weaknesses of the RC4 cipher</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59198:1:0:NEW 5 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"A point by point letter showing tracked changes Comments from reviewers: Response Reviewer 1: 1. However, please check again the English. Already be revised. Please see the following lines: 18, 47, 56, 80, 88, 99, 108, 114, 131, 132, 149, 237, 257, 263, 268, 315, 339, 340, 413, 416, 425, 436. Also, in subchapter 2.1 (Comparison to Other Hybrid Cipher Approaches), subchapter 4.5 (Time Complexity Analysis), subchapter 4.6 (Cryptanalysis Solutions), and in Table 6 & 7. Reviewer 2: 1. For Table 1 Hybrid security approach to data transmission. Which filter used by the author to find the studies for the comparison? Already be revised. Please see the explanation related to Table 1 at lines 190 up to 194, and also see in Table 1. 2. They should check grammatical errors in the manuscript. Already be revised. Please see the following lines: 18, 47, 56, 80, 88, 99, 108, 114, 131, 132, 149, 237, 257, 263, 268, 315, 339, 340, 413, 416, 425, 436. Also, in subchapter 2.1 (Comparison to Other Hybrid Cipher Approaches), subchapter 4.5 (Time Complexity Analysis), subchapter 4.6 (Cryptanalysis Solutions), and in Table 6 & 7. 3. The quality of images should be improved. We cannot improve the quality of the two images because they are generated from Matlab software. Eventually, we deleted both images, and the manuscript already is adjusted. Reviewer 3: 1. This version is good in this form but most of the recent references are missing; and Already be revised. Please see Chapter References at lines 502 up to 733. 2. English of the article is also poor. Already be revised. Please see the following lines: 18, 47, 56, 80, 88, 99, 108, 114, 131, 132, 149, 237, 257, 263, 268, 315, 339, 340, 413, 416, 425, 436. Also, in subchapter 2.1 (Comparison to Other Hybrid Cipher Approaches), subchapter 4.5 (Time Complexity Analysis), subchapter 4.6 (Cryptanalysis Solutions), and in Table 6 & 7. 3. Comparisons between methods are also missing. Already be revised. Please see the explanation related to Table 1 at lines 190 up to 194, and also see in Table 1. 4. add some significance points of pros and cons of the work; and Already be revised. Please see subchapter 4.6 (Cryptanalysis Solutions) at lines 461 up to 474, including tables 6 and 7. 5. revise the formatting of the references. After the critical revision, this paper can be accepted for publication. Already be revised. Please see Chapter References at lines 502 up to 733. "
Here is a paper. Please give your review comments after reading it.
242
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Data exchange and management have been observed to be improving with the rapid growth of 5G technology, edge computing, and the Internet of Things (IoT).</ns0:p><ns0:p>Moreover, edge computing is expected to quickly serve extensive and massive data requests despite its limited storage capacity. Such a situation needs data caching and offloading capabilities for proper distribution to users. These capabilities also need to be optimized due to the experience constraints, such as data priority determination, limited storage, and execution time.</ns0:p><ns0:p>Methods. We proposed a novel framework called Genetic and Ant Colony Optimization (GenACO) to improve the performance of the cached data optimization implemented in previous research by providing a more optimum objective function value. GenACO improved the solution selection probability mechanism to ensure a more reliable balancing of the exploration and exploitation process involved in finding solutions. Moreover, the GenACO has two modes: cyclic and non-cyclic, confirmed to have the ability to increase the optimal cached data solution, improve average solution quality, and reduce the total time consumption from the previous research results.</ns0:p><ns0:p>Result. The experimental results demonstrated that the proposed GenACO outperformed the previous work by minimizing the objective function of cached data optimization from 0.4374 to 0.4350 and reducing the time consumption by up to 47%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The conduct of work and school activities from home is the new habit adopted during the Covid-19 pandemic <ns0:ref type='bibr' target='#b29'>Nimrod (2020)</ns0:ref>. It led to a massive surge in internet and digital technology users <ns0:ref type='bibr' target='#b9'>(De', Pandey, and Pal 2020)</ns0:ref>. Government and business owners are also required to maximize websites The Cyclic ACO-GA proposed by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> can solve the cached data offloading problem. However, the dominant iteration issued to ACO makes the execution time quite long. In addition, the mechanism for selecting a solution using a roulette wheel makes the resulting solution does not converge quickly. Moreover, the iteration restrictions on the GA algorithm make GA performance not optimal. Hence, the main research question is 'how can we hybrid and enhance the performance of evolutionary algorithms to optimize the cached data offloading ?' Therefore in this study, we propose a new GenACO framework to improve Cyclic Genetic Ant Colony (CGACA). This paper proposed a hybrid method of the ACO-GA algorithm to solve the cached data offloading problem. The contributions are (i) improving solution probability calculation on cached data offloading (ii) proposed a novel framework named GenACO with cyclic and non-cyclic modes to improve previous research using more optimal profit cached data.</ns0:p><ns0:p>The following section discusses the previous related works on cached data offloading. The algorithms section comprehensively describes the working principle of ACO, GA, and CGACA framework algorithms and their performance in this research. 
Moreover, the methodology section discusses the proposed performance improvement of the ACO-GA hybrid as tested on a novel GenACO framework with cyclic and non-cyclic modes, followed by the dataset, simulation setup, results, and discussion sections and concluded.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works</ns0:head><ns0:p>Cached data offloading is an extraordinary capability required to be owned by an application, workstation, server, and network to manage data storage considered to be larger than its capacity <ns0:ref type='bibr'>(Zulfa, Hartanto, and Permanasari 2020)</ns0:ref>. It is a very common method used in cloud computing <ns0:ref type='bibr' target='#b43'>(Wang et al, 2019;</ns0:ref><ns0:ref type='bibr'>C. Li et al, 2020)</ns0:ref>, mobile computing <ns0:ref type='bibr' target='#b11'>(Dutta and Vandermeer, 2017;</ns0:ref><ns0:ref type='bibr'>Zhu and Reddi, 2017)</ns0:ref>, operating system (Abraham <ns0:ref type='bibr' target='#b0'>Silberschatz, Peter B. Galvin, 2008;</ns0:ref><ns0:ref type='bibr' target='#b42'>Tian and Liebelt, 2014)</ns0:ref>, and telecommunication <ns0:ref type='bibr' target='#b33'>Prerna et al. (2020)</ns0:ref>. <ns0:ref type='bibr' target='#b24'>Luo et al. (2017)</ns0:ref> examined the energy consumption optimization in Mobile Edge Computing (MEC) by formulating objective functions based on the variables of energy consumption, backhaul capacities, and content popularity. The research utilizes the GA algorithm with the optimization function to minimize the energy consumption value and validated by measuring the system average delay and average power toward adding the MEC servers and backhaul capacity. Moreover, <ns0:ref type='bibr'>Xu et al. (2019)</ns0:ref> also proposed a data offloading optimization framework named COM, designed to optimize mobile devices execution time and energy consumption. The COM framework was used to model a multi-objective optimization solved using the NSGA-III algorithm. It was validated by calculating the maximum value of the utility and resource utilization functions achieved.</ns0:p><ns0:p>(SBS) on MEC using a multi-Long Short-Term Memory (LSTM) algorithm. The results predicted were further used for data offloading by utilizing the Cross-Entropy (CE) algorithm. The research formulated the problem with an optimization approach to determine the maximum value from the system throughput. <ns0:ref type='bibr' target='#b18'>Huang et al. (2019)</ns0:ref> also developed the Deep-Q Network (DQN) framework to solve data offloading optimization and resource allocation in MEC using energy costs, computation costs, and delay costs as variables. The optimization was modeled with mixed-integer non-linear programming solved by Reinforcement Learning (RL) algorithm, and the DQN was validated by calculating the minimum total cost generated. Another research by <ns0:ref type='bibr' target='#b12'>Elgendy et al. (2019)</ns0:ref> focused on resource allocation optimization and computation offloading with additional data security protection in MEC. The security protection was conducted by adding the Advanced Encryption Standard (AES) algorithm to prevent cyber-attacks. At the same time, the optimization problem was modeled based on the knapsack problem and solved using a branch and bound algorithm. Its implementation was further validated by determining the smallest value of time and energy consumption. <ns0:ref type='bibr' target='#b19'>Kuang et al. 
(2021)</ns0:ref> modeled data offloading and resource allocation in one cooperative scheme to optimize power allocation and CPU cycle frequency in MEC and, subsequently, to make appropriate data offloading decisions. The research was measured by calculating the smallest possible task latency value and also adopted the Convex optimization method, dual Lagrangian decomposition, and ShenJing Formula. Another study by <ns0:ref type='bibr'>Zhong et al. (2021)</ns0:ref> discussed a caching strategy framework to optimize traffic load, and QoS in Multi-access Edge Computing named GenCOSCO. The aim was to minimize the task execution time as a Mixed Integer Non-Linear Programming (MINLP) optimization problem by considering the heterogeneity of task requests, pre-storage of application data, and cooperation of the base station variables. The GenCOSCO was used to propose the FixCS algorithm and was validated by calculating the average latency.</ns0:p><ns0:p>Moreover, <ns0:ref type='bibr' target='#b30'>Peng et al. (2021)</ns0:ref> proposed an application paradigm as a service chain for detailed data offloading and location caching mechanisms. Each service chain was limited by leasing costs and designed according to the queuing theory in computer networks. The research was further validated by calculating the average response delay towards increasing cache server time and capacity. <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> also studied the cached data offloading optimization on the edge network to reduce requests made to the origin server for the same data. The focus was on the cache capacity on the limited edge network, which means each cached data needed to be prioritized. The optimization problem was proposed using a knapsack problem. This research used cyclic ACO-GA with three variables: access count, access time, and data size. Research by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> is used as the baseline in our study because, based on our manual calculations, there is an opportunity to improve the value of the formulated objective function. The previously generated objective function value is 0.4374, but this value can be even more optimal up to 0.4350. This is a strong basis for us to improve the performance of CGACA.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63564:1:0:NEW 14 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Evolutionary Algorithms</ns0:head></ns0:div> <ns0:div><ns0:head n='1.'>Genetic Algorithm (GA)</ns0:head><ns0:p>GA was first introduced by John Holland in a publication entitled Adaptation in Natural and Artificial Systems <ns0:ref type='bibr' target='#b17'>Holland (1992)</ns0:ref> and observed to have adopted several phenomena such as natural selection and mutation for survival. GA used these adaptation principles to improve solutions in each generation <ns0:ref type='bibr' target='#b34'>Purnomo (2014)</ns0:ref>. One other main principle is a crossover, a crossbreeding mechanism between two individuals (parents) to produce better quality offspring than both parents. It illustrates in Figure <ns0:ref type='figure'>2</ns0:ref>, where two individual chromosomes (parents) create new offspring working in groups with other genes to form a chromosome. 
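As a minimal sketch of the crossover and mutation operators illustrated in Figures 2 and 3 (binary chromosomes are assumed, and the mutation probability is an illustrative value rather than one fixed by the paper):

```python
import random

def crossover(parent1, parent2):
    """One-point crossover: genes after a random cut point are exchanged
    between the two parents to create two offspring."""
    point = random.randint(1, len(parent1) - 1)
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(chromosome, p_mut=0.01):
    """Flip each binary gene independently with probability p_mut."""
    return [1 - g if random.random() < p_mut else g for g in chromosome]
```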
Such a chromosome represents a solution vector for the case study being solved, and each gene can mutate with a certain probability, as illustrated in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p><ns0:p>Meanwhile, the quality of a genetic algorithm solution vector is measured by a fitness value. When the genetic algorithm searches for a maximum, the chromosome with the highest fitness value among the existing chromosomes is selected. The fitness value is calculated using equation (1). GA also uses elitism, keeping one or several of the best individuals in the next generation so that they can produce other individuals with better fitness values <ns0:ref type='bibr' target='#b38'>Santosa and Ai (2017)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>fitness = \frac{1}{f(x) + \varepsilon} \quad (1)</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.'>Ant Colony Optimization (ACO)</ns0:head><ns0:p>Dorigo first introduced ACO to find the shortest path in the Travelling Salesman Problem (TSP) case study <ns0:ref type='bibr' target='#b10'>(Dorigo, Maniezzo, and Colorni 1996)</ns0:ref>. The algorithm mimics the behavior of ant colonies travelling between the nest and a food source. Each ant leaves a pheromone trail along its path. Pheromones are substances secreted from endocrine glands that allow ants to recognize members of their colony and act as a strong signal that influences other ants to follow the same trail. At first, each ant determines its own route and leaves a pheromone trail that others can identify, as indicated in Figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>Moreover, the pheromone evaporates, so its trace is lost faster on longer routes than on shorter ones. Other ants therefore find it difficult to follow a longer route, because its pheromone trail has evaporated. By contrast, the shorter route keeps its trace: even after the first pheromone has disappeared, it is replenished by pheromone deposited by other ants. The stronger signal attracts other ants to follow the same trail, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>. An ant k at node r selects a route s(i,j) with a certain probability; ants that have completed a route deposit a pheromone trail (\tau) according to \tau_{i,j} \leftarrow \tau_{i,j} + \Delta\tau_{k}, while pheromone evaporation is computed as \tau_{i,j} \leftarrow (1 - \rho)\,\tau_{i,j}, where \rho is a predetermined evaporation constant.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Cyclic ACO-GA</ns0:head><ns0:p>The performance of the Cyclic ACO-GA (CGACA) proposed by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> is improved upon in this study to obtain a more optimal objective function value. It is important to note beforehand that CGACA performs a cyclic exchange between ACO and GA, with the ACO algorithm run in the first iteration and the subsequent solution search carried out by GA. ACO is applied again when GA stagnates for five successive iterations, and it then runs until the maximum iteration is reached.
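A sketch of this cyclic control flow is shown below; run_aco and run_ga are placeholders for one iteration of each algorithm rather than the authors' implementation, the five-iteration stagnation threshold follows the description above, and the objective Fx is minimized as in the paper.

```python
from typing import Callable, Tuple

def cgaca(run_aco: Callable[[], Tuple[list, float]],
          run_ga: Callable[[], Tuple[list, float]],
          max_iter: int, stagnation_limit: int = 5):
    """Sketch of the cyclic ACO-GA control flow: ACO produces the first solutions,
    GA continues the search, and if GA stagnates for `stagnation_limit` successive
    iterations ACO takes over for the remaining iterations."""
    best_sol, best_fx, stagnant, phase = None, float("inf"), 0, "ACO"
    for it in range(max_iter):
        sol, fx = run_aco() if phase == "ACO" else run_ga()
        if fx < best_fx:
            best_sol, best_fx, stagnant = sol, fx, 0   # improvement found
        else:
            stagnant += 1
        if phase == "ACO" and it == 0:
            phase = "GA"                               # hand the search over to GA
        elif phase == "GA" and stagnant >= stagnation_limit:
            phase = "ACO"                              # GA stagnated: ACO runs to the end
    return best_sol, best_fx
```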
Moreover, the ACO algorithm in CGACA selects solutions with a roulette wheel, which has an unfavorable effect because it can produce either purely random solutions or overly dominant individuals <ns0:ref type='bibr' target='#b27'>(Moodi, Ghazvini, and Moodi, 2021;</ns0:ref><ns0:ref type='bibr' target='#b22'>Lipowski and Lipowska, 2012)</ns0:ref>. However, the CGACA framework proposed by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> does not define any action for the situation in which the GA never stagnates, which means the GA is never given a full opportunity to exploit the solution space.</ns0:p><ns0:p>Our initial experiments showed that the ACO algorithm takes the longest time compared to GA and binary Particle Swarm Optimization (BPSO) in completing the cached data optimization. The worst case in CGACA occurs when the GA stagnates at the beginning of the iterations, so that ACO is rerun until the iterations are complete. Such a situation makes the ACO iterations dominate, and the total CGACA time consumption becomes very large. Table <ns0:ref type='table'>1</ns0:ref> summarizes the main weaknesses to be addressed in improving CGACA performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Methodology</ns0:head></ns0:div> <ns0:div><ns0:head n='1.'>GenACO</ns0:head><ns0:p>GenACO is an optimization framework that we propose to improve the performance of the CGACA introduced by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref>. GenACO introduces the constant r0 to overcome the roulette wheel weakness in the solution probabilities. The r0 value balances the exploration of candidate solutions in the early iterations against the exploitation of the optimum solution at the end of the iterations. In addition, GenACO has two execution modes, cyclic and non-cyclic, and both improve the quality of the solution.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> describes the rationale for GenACO retaining the cyclic ACO-GA hybrid method. Figure <ns0:ref type='figure'>6</ns0:ref> illustrates the steps of the proposed hybrid GenACO. The ACO algorithm is needed to generate the best initial population for the GA, and r0 is already used in this step. If the GA reaches a stagnant solution, ACO is run again until the iterations end. The role of r0 is explained in detail in the next section.</ns0:p><ns0:p>The knapsack problem is a classic optimization problem that can be solved using brute force or greedy methods. However, the greedy method does not guarantee an optimal solution, while brute force has a very high time complexity of O(2^n). Based on Figure <ns0:ref type='figure'>6</ns0:ref>, assuming n is the number of cached data items, m is the number of GenACO ants, and Nc is the number of iterations, the complexity of GenACO is O(Nc*n*m). Consequently, the time complexity of GenACO is better than that of the classical methods.</ns0:p></ns0:div> <ns0:div><ns0:head>a. Solution Probability</ns0:head><ns0:p>The ACO algorithm plays an essential role in generating the solutions required by the GA to produce the best population. However, a disproportionate use of pheromone traces can cause the best solution of equation ( <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>) not always to appear. The ACO algorithm in GenACO therefore no longer uses the roulette wheel to select a solution from all the ants.
Instead, a constant r0 is compared with a random number r in [0,1]: when r0 &gt; r, the ant selects the cached data with the largest pheromone value, and otherwise it selects cached data randomly.</ns0:p><ns0:formula xml:id='formula_1'>\tau_i(t) = w_1 \frac{ct(i)}{total(ct)} + w_2 \frac{fr(i)}{total(fr)} + w_3 \frac{sz(i)}{total(sz)} \quad (2)</ns0:formula><ns0:p>The pheromone value (\tau) is initialized following equation ( <ns0:ref type='formula'>2</ns0:ref>), using the three cached-data property values, namely the access count (ct), access time (fr), and data size (sz), each multiplied by its respective weight w1 = 0.3, w2 = 0.3, and w3 = 0.4 <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref>. Moreover, the visibility function in equation ( <ns0:ref type='formula'>3</ns0:ref>) also influences the choice of solutions made by the ant colony. Therefore, the probability that a cached data item is selected and placed into the cache server follows equation (4).</ns0:p><ns0:formula xml:id='formula_2'>\eta_i(t) = w_1 \frac{ct(i) \cdot fr(i)}{sz(i)} \quad (3)</ns0:formula><ns0:formula xml:id='formula_3'>P_i(t) = \frac{\tau_i(t)^{\alpha}\,\eta_i(t)^{\beta}}{\sum_{j} \tau_j(t)^{\alpha}\,\eta_j(t)^{\beta}}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The value of the constant r0 is very important in steering the solution selection of the ACO algorithm in GenACO. The simulations showed that r0 = 0.5 is suitable for the first ACO phase, making GenACO explore as many potential solutions as possible so that it is not easily trapped in a local optimum during the initial iterations. In addition, r0 = 0.9 is set in the second ACO phase, which runs when the GA stagnates; this phase focuses on exploiting the search for the optimum value. Since a random number r in [0,1] rarely exceeds 0.9, this setting effectively forces ACO to almost always choose the cached data with the largest probability. The use of r0 in GenACO is illustrated in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b. Cyclic and non-cyclic GenACO</ns0:head><ns0:p>GenACO provides two modes, cyclic and non-cyclic, to improve CGACA performance. The first mode is described in Table <ns0:ref type='table'>3</ns0:ref>. In contrast, the second mode runs the ACO algorithm only once, in the first iteration, after which the GA algorithm continues fully up to the maximum iteration. The non-cyclic mode does not use r0 = 0.9, to avoid a local optimum in the initial iteration.
Therefore we used r0 = 0.7, which is expected to balance the exploration and exploitation of solutions in the first iteration, as illustrated in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Cached-data representation</ns0:head><ns0:p>The ACO and GA algorithms have different solution vector representations in modeling cached data. In the GA algorithm, a chromosome creates as much array space as the total data, with each representing the 0/1 condition of one cached data. A value of 1 indicates that the selected cached data will be entered into the cache server. At the same time, 0 means it will not be included. Meanwhile, in the ACO algorithm, the array space only consists of several routes selected by the ants. One ant and another may have a different number of selected routes.</ns0:p><ns0:p>Moreover, the route chosen by this ant represents a collection of cached data to be entered into the cache server, and the ACO solution representation on GenACO duplicates how ACO works in solving TSP. Figure <ns0:ref type='figure'>7</ns0:ref> and Figure <ns0:ref type='figure'>8</ns0:ref> illustrate an example solution of GA and ACO, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Multi-objective function</ns0:head><ns0:p>GenACO can solve the cached data offloading optimization problem using the knapsack problem approach as a multi-objective optimization model. Its multi-objective optimization is built using three property values which include access count (ct), access time (fr), and data size (sz) contained in each cached data. However, not all data can enter the cache server due to the limited capacity. Therefore, their priority value was determined based on these three property values.</ns0:p><ns0:p>These three variables also have their respective objective functions. The objective function of access count is in line with equation ( <ns0:ref type='formula' target='#formula_5'>8</ns0:ref>); access time with equation ( <ns0:ref type='formula' target='#formula_6'>9</ns0:ref>); and data size with equation ( <ns0:ref type='formula'>10</ns0:ref>) <ns0:ref type='bibr' target='#b43'>Wang et al. 
(2019)</ns0:ref>, each multiplied by its respective weight w1 = w2 = 0.3 and w3 = 0.4, from which the final objective function is calculated as the profit Fx following equation (11) (Wang et al., 2019):

$$f_{CT} = \frac{1}{n}\sum_{j=1}^{n}\frac{D_{ct}(j) - D_{ct}(min)}{D_{ct}(max) - D_{ct}(min)} \qquad (8)$$

$$f_{FR} = \frac{1}{n}\sum_{j=1}^{n}\frac{D_{fr}(j) - D_{fr}(min)}{D_{fr}(max) - D_{fr}(min)} \qquad (9)$$

$$f_{SZ} = \frac{1}{n}\sum_{j=1}^{n}\frac{D_{sz}(j) - D_{sz}(min)}{D_{sz}(max) - D_{sz}(min)} \qquad (10)$$

$$F_x = w_1(1 - f_{CT}) + w_2(1 - f_{FR}) + w_3 f_{SZ} \qquad (11)$$

A smaller Fx indicates a higher priority. The dataset consists of n cached-data candidates x1, x2, x3, ..., xj that can be entered into the cache server with capacity S and profit Fx, where each xj consists of an access count (ct), access time (fr) and data size (sz). GenACO is expected to determine the solution vector: when xj = 1 the cached data is included in the objective-function calculation following (11), and when xj = 0 it is ignored and excluded from that calculation. The problem definition therefore follows equation (7).

4. Performance measurement

Evolutionary algorithms such as ACO and GA are widely used for optimization tasks. In the continuous domain, optimization performance is usually measured by the optimum value of the objective function and the time needed to reach solution convergence (Sahu, Panigrahi, and Pattnaik, 2012; Sun, Song, and Chen, 2019). The knapsack problem, however, belongs to the discrete domain, so additional performance indicators are the best and worst profit, the average profit achieved, and the total number of items in the knapsack (Rizk-Allah and Hassanien, 2018; Y. Li et al., 2020; Liu, 2020). In the cached-data offloading case study, the best solution is selected as the one with the highest number of knapsack items and the lowest objective-function value.

Experiments and Evaluation

1. Dataset

GenACO was tested on the dataset used by Wang et al. (2019), which contains three cached-data properties: access count (ct), access time (fr) and data size (sz), expressed respectively as a decimal count, in seconds and in Mbit. The maximum capacity of the cache server was 950 Mbit. The dataset can be accessed through the given GitHub repository.

2. Simulation setup

The single ACO and GA experiments were each divided into four scenarios. The CGACA scenario was repeated three times to observe the position at which the GA experiences solution stagnation. The results were compared with the proposed cyclic and non-cyclic GenACO; a small numerical sketch of the profit computation in equations (8)-(11) is given below.
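The following Python sketch evaluates equations (8)-(11) for one candidate solution vector with made-up property values. Whether the min/max normalisation in (8)-(10) is taken over the selected items or over the whole dataset is an assumption of this sketch rather than something stated in the excerpt.

```python
import numpy as np

ct = np.array([12.0, 3.0, 7.0, 1.0, 9.0])      # access count
fr = np.array([0.8, 0.2, 0.5, 0.1, 0.6])       # access time
sz = np.array([120.0, 60.0, 200.0, 30.0, 90.0])  # data size in Mbit
x = np.array([1, 0, 1, 0, 1], dtype=bool)      # solution vector: x_j = 1 keeps item j
w1, w2, w3 = 0.3, 0.3, 0.4

def norm_mean(values):
    # (1/n) * sum_j (D(j) - D_min) / (D_max - D_min), cf. equations (8)-(10)
    lo, hi = values.min(), values.max()
    return np.mean((values - lo) / (hi - lo))

f_ct, f_fr, f_sz = norm_mean(ct[x]), norm_mean(fr[x]), norm_mean(sz[x])
fx = w1 * (1 - f_ct) + w2 * (1 - f_fr) + w3 * f_sz   # equation (11)
print(round(float(fx), 4))                           # smaller Fx means higher priority
```

Among feasible candidates under the 950 Mbit capacity, the solution with more selected items and the lower Fx is preferred, as described in the performance-measurement subsection above.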
The simulation was conducted using 10 particles and 20 iterations with the parameter settings of the six scenarios presented in Table <ns0:ref type='table'>5</ns0:ref>. Both parameters were adopted from <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> then simplified because the best solution average has been found in such particles and iterations range. The simulation was conducted using PHP programming language and Mysql database to facilitate our subsequent research in implementing fog computing architecture.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Results and discussion a. Performance comparison of single AC</ns0:head><ns0:p>The complete simulation results in this paper can be seen in the Appendix at the end of this paper. Table <ns0:ref type='table'>6</ns0:ref> showed the comparison of cached data optimization results for the single ACO algorithm. According to <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref>, the best known objective function (Fx) for cached data offloading is 0.4374 with an optimum cached data (ch.dt) of 15. The first part of Table <ns0:ref type='table'>6</ns0:ref> showed the ACO algorithm from the CGACA section, and scenario-1a was observed to have relied on the roulette wheel on the cached data selection mechanism to be loaded in cache storage. However, this roulette wheel mechanism has two weaknesses (i) produce random solutions (ii) cause the emergence of superior individuals. The average of Fx and cached data generated in scenario-1a are the worst. The solution looks random and lacks convergence Scenarios: 1b, 1c, and 1d are single ACO obtained from parts of the GenACO framework and observed improvement in solution probability as described in section 3.1. The solution probability based on r0 was compared with a random number r[0,1]. If r0 &gt; r, then the solution will follow (6), otherwise it will follow (5). Scenario-1b was set using r0=0.3 based on the assumption that the random number r will be easier to be greater than this value, and the scenario is expected to have more variety of solutions. However, scenario-1b managed to obtain an PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63564:1:0:NEW 14 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science optimal solution of 15 cached 7 times out of 20 iterations. The exciting thing from these results is that the optimal solution was generated from different Fx values, 0.4374, 0.4405, and 0.4382. The best solution was the smallest was 0.4374. Scenario-1c was set using r0=0.5 and managed to obtain an optimal solution of 15 cached 5 times out of 20 iterations. Initially, r0=0.5 was assumed to have the capacity to produce more optimal cached data than r0=0.3, but it did not. It proves unreliable to have the random r value in a position more than or less than the r0 value. Meanwhile, the five best optimal solutions found in this scenario have the same Fx value of 0.4382, and this means all the 15 selected cached data have the same id_data. In this case, the best solution in scenario-1b is better than scenario-1c due to its smaller Fx value.</ns0:p><ns0:p>The last single ACO, scenario-1d, has the highest value of 0.7 than the previous two scenarios. The solution selection is expected to lead more to equation ( <ns0:ref type='formula'>6</ns0:ref>). 
Meanwhile, Table <ns0:ref type='table'>6</ns0:ref> showed this scenario-1d produces the best solution compared to the previous two scenarios by having 95% optimal solution with 19 out of the 20 iterations having 15 cached data with two dominant Fx values, which are 0.435 and 0.4405. It is important to note that the Fx value of 0.435 became the best value for the cached data offloading objective function generated from the overall single ACO simulation scenario. The results of this simulation are in accordance with our manual calculations to improve the results of previous research conducted by <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>b. Performance comparison of single GA</ns0:head><ns0:p>Table <ns0:ref type='table'>7</ns0:ref> compared the single GA algorithm with four different mutation probability scenarios and crossover compositions. The first part of the single GA simulation was observed to have used a crossover composition of Parent-1 (P1)=15% and Parent-2 (P2)=85%. It means P1 gave the first gene to the third gene, and P2 gave the fourth gene to the twentieth gene when the crossover occurred. This calculation was used in all crossover scenarios in this single GA simulation. Another critical parameter is gene mutation probability. The probability of gene mutation in the simulation was 0.5, expecting that the crossover mechanism will not create a new fitness value smaller than the previous best value.</ns0:p><ns0:p>Scenario-2a showed the worst solution among all the single GA simulation scenarios, with the best solution produced having only 11 cached data with the same Fx values of 0.4653. At first glance, the solution seems to be rapidly converging but quickly trapped in the local optimum due to the inability of the crossover mechanism to produce a variety of solutions. Meanwhile, scenario-2b managed to determine a (near) optimal solution with 14 cached data starting from the 13th to the maximum iteration, and the search for solutions was varied. Moreover, its higher mutation probability value compared with the previous scenario caused each gene in the chromosome to have more flexibility to improve the solution quality in the next generation.</ns0:p><ns0:p>Scenario-2c uses the same mutation probability value of 0.25 as scenario-2b but has a crossover composition of P1 and P2, slightly different from scenario-2b. Therefore, the results also differ significantly from the average Fx value and the cached data produced. The last scenario-2d used a mutation probability value of 0.5, with P1 and P2 having 50% composition each. This scenario is generally similar to scenario-2a by being a fast solution towards PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63564:1:0:NEW 14 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>convergence, but it was quickly trapped at the optimum local value at the beginning of the iteration. Moreover, the solution improvement with a mutation probability value of 0.5 also failed to show, and this means the parameter setting in GA greatly affected the quality of the solution produced.</ns0:p><ns0:p>Table <ns0:ref type='table'>7</ns0:ref> has shown that some of the best solutions produced by a single GA seem to be more optimal (with a smaller Fx) than the best known Fx. However, this smaller Fx value does not indicate a better solution because the cached data (ch.dt) is less than the best-known cached data. 
Based on the best solution to the knapsack problem approach shown in equation ( <ns0:ref type='formula'>7</ns0:ref>), the single GA solution has not outperformed the best-known solution.</ns0:p><ns0:p>The cached data offloading solution produced by GA is no better than ACO. None of the solutions from GA achieved the best-known value. GA does not guarantee the search for a (near) optimum solution as formulated by the pheromone function in ACO. However, GA has a significant role in accelerating the convergence of cached data offloading case study solutions. If correlated between Figure <ns0:ref type='figure'>6</ns0:ref> Table <ns0:ref type='table'>7 and Table 8</ns0:ref>, ACO and GA have a vital function in their respective positions. In scenario-1b and scenario-1d, the ACO algorithm can find the optimal solution in less than five iterations. This initial solution from ACO can be used as an outstanding initial population for the GA algorithm to continue the solution search process until the maximum iteration. This is supported by Table <ns0:ref type='table'>7 and Table 8</ns0:ref>, which show the trend of solution improvement from beginning to end.</ns0:p></ns0:div> <ns0:div><ns0:head>c. Performance comparison of CGACA</ns0:head><ns0:p>Table <ns0:ref type='table'>8</ns0:ref> showed the simulation results for the CGACA framework with the ACO algorithm run in the first iteration to generate the initial population. At the same time, the GA continued the process of determining the solution. However, the GA algorithm was expected to experience solution stagnation, which led to the application of the ACO algorithm for the second time to continue the solution search process up to the maximum iteration. In the first part, the GA solution was observed to have stagnated in the fifth iteration, and the ACO continued the search process from the sixth iteration to the maximum. Therefore, this part of the experiment was the worst case of CGACA since it required the longest execution time. It was associated with the quick stagnation of GA, which led to the use of ACO to complete the solution search process until the maximum iteration was reached.</ns0:p><ns0:p>In the second part, the GA algorithm did not experience any solution stagnation. Therefore, the required execution time was the best, but it could not determine an optimal solution since the initial population created by ACO was not optimal. This is one of the disadvantages of the roulette wheel. It is also important to note that selecting a solution based on the roulette wheel was only in one iteration, and the optimal solution was not found. Moreover, the GA algorithm has a solution stagnation at the end of the iteration, precisely in the 13th iteration. It means the time consumption at the end of the CGACA simulation is not as much as the first part.</ns0:p><ns0:p>Meanwhile, the quality of the CGACA solution in the third part was the best, and the solution improvement on the last-ACO was expected to be successfully conducted by producing a (near) optimum solution of 14 cached data. The same was found in the first part of CGACA, but the Fx value produced in the third part is better. Furthermore, the simulation results in Table <ns0:ref type='table'>8</ns0:ref> showed that the CGACA total time consumption increased along with the number of iterations completed by the ACO. A more significant portion of iterations run by ACO led to the generation of more extended time consumption. 
It means ACO required a long execution time to update the pheromone trace on each selected cached data by all ants in each iteration. Therefore, it is very inefficient if it has to complete the solution search by a single ACO.</ns0:p></ns0:div> <ns0:div><ns0:head>d. Comparison with cyclic and non-cyclic GenACO</ns0:head><ns0:p>Appendix Table <ns0:ref type='table'>8 and Table 9</ns0:ref> marks the solution from ACO with a dark color on the cell background. Table <ns0:ref type='table'>9</ns0:ref> compared the performance of cached data offloading using the CGACA and GenACO frameworks. The GenACO simulations were divided into cyclic and non-cyclic scenarios. Both utilized ACO to run once in the first iteration while the GA continue the process and the last ACO was prepared to help when the GA experiences a solution stagnation.</ns0:p><ns0:p>The first part showed the simulation result obtained from the CGACA framework, which was observed to have succeeded in determining the optimum solution based on the best value of 15 cached data with Fx=0.4374, but this was only found once in the 18th iteration. The first part of Table <ns0:ref type='table'>9</ns0:ref> showed that the GA stagnated at the 7th iteration, leading to the last ACO from the 8th iteration to the maximum. In the end, CGACA required more than 40 seconds which is not too different from the value in Table <ns0:ref type='table'>8</ns0:ref>, and this higher total time was associated with the dominance of execution by ACO.</ns0:p><ns0:p>The second part of Table <ns0:ref type='table'>9</ns0:ref> showed the results for the GenACO simulation with cyclic mode, and the working principle was found to be precisely the same as CGACA (baseline). However, the probability of a cached data selection solution was improved. The first ACO on cyclic GenACO was set using r0=0.7 with the expectation of generating more (near) optimal solutions. The results showed that the GA algorithm has succeeded in determining the optimum solution of 15 cached data with an Fx value of 0.4350 better than the best-known Fx value between the 2nd and 12th iterations. The improvement is only 0.0024 but based on the raw data we tracked, GenACO can accommodate more small cached data. The server utility is maximized because it can enter more cached data in the future.</ns0:p><ns0:p>However, the solution produced from the 13th iteration was considered stagnant since the Fx value is the exact five times in a row. It led to the application of the last ACO starting from the 14th up to the maximum iteration. The solution produced did not get better. It was discovered that all ants could not determine the optimal solution previously obtained until the maximum iteration was run. It means the last ACO was stuck at the local optimum because the 14 cached data solutions added more pheromone concentration than the previous one, the 15 cached data solutions. Therefore, the last ACO preferred to follow the solution of the 14 cached data. It was influenced by setting r0 = 0.9 on the last ACO, which directed the ants to the path with the strongest pheromone than the new path required to be explored.</ns0:p><ns0:p>The third part of Table <ns0:ref type='table'>9</ns0:ref> showed the results of the non-cyclic GenACO simulation, which only ran ACO on the first iteration and the GA on the second iteration up to the end. The GA was observed to have stagnated solutions for five consecutive iterations but continuously searched for solutions up to the maximum iteration. 
Meanwhile, r0 = 0.9 was used because ACO was used only once. The ants preferred the cached data with maximum probability, and several (near) optimum solutions are expected to be obtained by ACO ants in this non-cyclic mode to form the best initial population in the GA algorithm. The 3rd to the 5th iteration results showed that the GA immediately found the optimal solution for the best known 15 cached data with Fx=0.4350 and maintained this value convergently from the 10th to the maximum iteration. Based on table 10, the proposed hybrid GenACO method, particularly non-cyclic mode, is superior to single ACO, GA, and CGACA in obtaining an average Fx, best solution, the optimal amount of cached data during iteration. Full results can be seen in the Appendix.</ns0:p></ns0:div> <ns0:div><ns0:head>e. The impact of different r0</ns0:head><ns0:p>Based on Table <ns0:ref type='table'>11</ns0:ref>, the value of r0 has an essential role in producing a solution used as an initial parent for the GA algorithm. At r0=0.3, the solutions produced by ACO vary widely. Three experiments conducted at r0=0.3 resulted in three different Fx and ch.dt values. This is in line with the first ACO goal in the GenACO framework, which is to focus on solution exploration. However, the resulting solution with r0=0.3 is quite far from the target best-known value, so that this will make GA work difficult. The results shown are also in the use of r0=0.5. However, the quality of the solution of r0=0.5 is better than r0=0.3. In general, the two values of r0 are still not close to the best-known value, and the resulting solution does not appear to be convergent. This proved our hypothesis in the Solution Probability section: the smaller the value of r0, the more the solution chosen by the ant colony refers to equation ( <ns0:ref type='formula'>5</ns0:ref>). The use of r0=0.7 and r0=0.9 resulted in better solution quality. Both r0 solutions begin to look convergent. The Fx and ch.dt values are getting closer to the best-known values. But keep in mind that r0 = 0.7 or r0 = 0.9 is not recommended for use in the first ACO because it can reduce the opportunities for solution exploration. Both are better used in the last ACO to narrow the search space to focus on exploiting the best value towards the best-known. The impact of using r0 on the GenACO framework can be seen again, as shown in Table <ns0:ref type='table'>9</ns0:ref>. Based on Table <ns0:ref type='table'>9</ns0:ref>, the ACO solution using r0=0.7 helps GA find the optimum solution and achieve solution convergence.</ns0:p></ns0:div> <ns0:div><ns0:head>f. Comparison of stagnant solution CGACA and GenACO</ns0:head><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> showed the solution stagnation of the three ACO-GA hybrid simulations. The first ACO on CGACA obtains an initial solution that is not very good. This situation impacts GA performance which fails to improve the initial solution so that it is quickly trapped in the local optimum. Moreover, the roulette wheel mechanism and the probability of crossover and mutation cause GA to fail to improve this initial solution. In the end, GA stuck to the solution stagnation from iterations 2 to 5. The GA algorithm, which stagnated the solution far from the best known, caused the last ACO hard to improve the solution and achieve solution convergence. 
However, the roulette wheel mechanism used by the last ACO at CGACA failed to fix this until the maximum iteration.</ns0:p><ns0:p>The GA algorithm on Cyclic GenACO also suffers from a solution stagnation. However, before solution stagnation occurred, GA had a better initial solution. GA successfully maximized the initial solution of the first ACO to become an optimal solution. However, because the optimal solution is equal in five consecutive iterations, it is considered a stagnation of the solution. In the end, the last ACO on cyclic GenACO was executed, but the optimal solution previously generated could not be maintained. We suspect this situation was caused by set r0=0.9, so the solution fell out from the previous optimum. In future work, this will be our concern so r0 in the last ACO can be more adaptive to the previous solution.</ns0:p><ns0:p>The use of r0=0.7 in the non-cyclic GenACO simulation succeeded in obtaining a better initial solution than CGACA and cyclic GenACO. This situation makes GA job easier. In addition, the proper parameter setting on the probability of mutation and crossover makes GA performance more reliable. It can be seen in the performance of GA, which can maintain the optimal solution until the maximum iteration.</ns0:p><ns0:p>Based on Figure <ns0:ref type='figure'>9</ns0:ref>, non-cyclic GenACO is the best solution for average solution quality and total time consumption. However, GenACO cyclic and non-cyclic modes did not have a significant difference in total time consumption. Non-cyclic GenACO is only 4.5 seconds ahead of cyclic mode. Based on Table <ns0:ref type='table'>9</ns0:ref>, the saving time obtained by GenACO can be calculated using equation ( <ns0:ref type='formula'>12</ns0:ref>). Therefore, we calculated cyclic GenACO saving time consumption by up to 38%, while noncyclic GenACO saving it by 47%. Both GenACO modes can be accepted as a solution to the problem of cached data offloading. The ACO algorithm is very appropriate to use in the first iteration to create an excellent and reliable initial population, making it easier for GA to find the optimal solution quickly and good average quality of the solution.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>An edge computing framework must have a cached data offloading capability that needs to be optimized due to the limited storage capacity and maximum possible profit. This paper proposed improving the hybrid ACO-GA algorithm performance using a new GenACO framework with cyclic and non-cyclic modes. The simulation results show that GenACO minimizes the objective function of cached data optimization from 0.4374 to 0.4350. In addition, GenACO can also outperform the quality of the solution in terms of the average objective function generated. Moreover, the parameter setting in evolutionary algorithms plays an essential role in the overall algorithm performance.</ns0:p><ns0:p>Based on the simulation results, non-cyclic GenACO is the best mode for solving cached data offloading optimization. This mode can reduce time consumption by up to 47%. Our further research tests GenACO by measuring the hit ratio, examining the impact of latency and response time on the user side in an edge computing environment. 
Table 1 (weakness points of CGACA), continued:

2 Dominant iteration: The solution search process is completed by ACO over a very long time when the GA experiences a solution stagnation condition at the beginning of the iteration; in that condition, CGACA takes a very long time because the dominant iteration is held by the ACO.

3 GA iteration limitation: The CGACA framework limits the GA to a maximum of 20 iterations out of the total of 100 allowed for the ACO, which limits the optimal performance of the GA.

4 Solution probability: The ACO algorithm in the CGACA framework uses a roulette wheel mechanism for solution selection, which has an unfavorable impact due to its production of random solutions and superior individuals (Moodi, Ghazvini, and Moodi, 2021; Lipowski and Lipowska, 2012).
"The Rebuttal Letter Your article has received a MAJOR REVISIONS decision Please revise and resubmit before 07 Sep 2021 (27 days) No Comments of the Reviewers 1 Reviewer 1 Basic reporting The paper presents an interesting multi-objective cached data offloading optimization approach. Experimental design Following issues needs to be addressed. 1. Pls. format your paper according to the journal guideline.  Agreed. This manuscript has used a template, has complied with the instructions provided by PeerJ, and has passed staff checks. 2. Quantify your results in the abstract  Ok. The explanation of the main results has been added in the Abstract (lines: 33-35). 3. Add a gap analysis section in your introduction and also mention the research questions.  As recommended, the gap analysis was highlighted in introduction lines 73-83. Furthermore, the research question was added in line 84. 4. Fig. 1 is generic, plz add steps specific to your solution.  Fig.1 describes the general architecture of the fog network environment. But even so, we provide a little snippet by providing a paper icon marked with a question-mark to illustrate the data offloading problem. The solution we provide is at the algorithm level, presented in Fig.6. 5. Discussion section is missing.  The discussion section is under the Experiments and Evaluation Section. All phenomena and anomaly results have been discussed in this section (line: 324) Validity of the findings As mentioned above Additional comments As mentioned above 2 Reviewer 2 Basic reporting - Abstract, row 26 “GenACO” and row 38 “ACO-GA”; every non-usual acronym (for the general reader) must be explained at its first use. The extended versions of these abbreviations must be written at their first uses. - Row 73, “Z={3, 4,5} or Z={2.5}”. Correct: “Z={3,4,5} or Z={2,5}” - Row 78, “However, this method still leaves problems..”. Correct: “However, these methods still leave problems..” - Row 88, “resulting solution not converge quickly.”. Correct: “resulting solution does not converge quickly.” - Row 90, “CGACA”; every non-usual acronym (for the general reader) must be explained at its first use. The extended version of this abbreviation must be written at its first use. - Row 142, “Mobile Edge Computing (MEC)”. Correct: “MEC” - Row 227, “r0”. Correct: “r0”, It should be “r0” in the rest of the text - Row 250, “ct, fr, sz”. Correct: “ct, fr, sz”. These notations must be used in all formulas and in the rest of the text. - Row 251, “w1, w2, and w3 “. Correct: “w1, w2, and w3 “. Correct notations should be used in the rest of the text - Row 302, “Fx”. Correct: “Fx”.This notation should be used in the rest of the text and formulas. - Row 339, “Appendix at the of of this”. Correct: “Appendix at the end of this” - Row 356, “scenarios-1b”. Correct: “scenario-1b” - Row 374, “ conducted by Wang et al.”. Correct: “conducted by Wang et al.(years must be written)” - Row 452, “Table 9 compares…”; Row 459 “..in Table 9 showed the..”. This is the same for all table-related expressions. There must be time consistency. Simple present tense or simple past tense? - Row 471, “..five times in a row and this led to..”. Correct: “..five times in a row, and this led to..” - Row 472, “..to the maximum iteration but..”. Correct: “..to the maximum iteration, but..” - Row 491, “Based on table 10,”. Correct: “Based on Table 10,” - Row 536, “algorithms plays an important..”. Correct: “algorithms play an important..” - “Table 5 Parameters setting”. 
Correct: “Table 5 Parameters settings” - “Table 10 A comparison of the best result from single GA, ACO, CGACA, and GenACO”. Correct: “Table 10 A comparison of the best results from single GA, ACO, CGACA, and GenACO”.  Agreed, and I have corrected all comments about these grammatical mistakes. - And other grammatical mistakes. Grammarly software and WORD spelling&grammar tool can be used.  I have made this suggestion. All paragraphs in the manuscript have been double-checked using Grammarly. - In general, the whole paper must be proofread by a very good English speaker (and writer).  I had done this process before the manuscript was submitted to PeerJ Journal. All of these errors have been corrected.  We also have contacted copyediting@peerj.com for information on language editing services. Experimental design Research question well defined, relevant & meaningful. It is stated how research fills an identified knowledge gap. Rigorous investigation performed to a high technical & ethical standard. Validity of the findings 1) The r0 parameter was used in the study and it was stated that some r0 values gave better results. The authors should discuss the effect of different r0 values on the results in experimental studies by using specific ranges.  Agreed. We have added Table 11 and a discussion of this issue (lines 474-491). 2) It is mentioned in the text that evaporation will be more on a long path and it is difficult for ants to follow this path. Therefore, In figure5, it is said that path b is shorter, but isn't path a shorter?  I’m sorry. This is a typo while writing the caption. We have fixed it. 3) Authors should indicate the time complexity of the proposed algorithm. The degree of complexity of this problem should be described well. Otherwise, there is a perception that the problem can be solved with classical optimization problems.  Agreed. We have added time a complexity explanation (lines 227-232). 4) When comparing the results obtained, the authors show that the objective function produces more appropriate solutions than other approaches. However, statistical (Friedman's test and Wilcoxon signed-rank test etc.) methods should be used to show whether the obtained results provide a significant improvement.  Most of the previous studies in the knapsack problem and data offloading did not use statistical validation (RizkAllah et al., 2018; Y. Li et al., 2020; Liu 2020). 5) Since the proposed approach uses a single objective function, it should also be compared with the proposed state-of-the-art approaches (grey wolf optimizer, artificial algae optimization etc.) for binary optimization and discrete optimization methods.  We understand that there are limitations to our research. We can write this suggestion in the conclusion section if desired. Additional comments - 3 Reviewer 3 Basic reporting The study is about optimization of limited memory used for edge computing in the field of IoT. For this purpose, a new framework called GenACO is proposed to improve the performance of cached data optimization implemented in previous research. GenACO incorporates a solution selection probability mechanism to more reliably balance the discovery and exploitation process involved in finding solutions. Also, GenACO has two modes: cyclical and non-cyclical, it has been noted to have the ability to increase optimal cached data resolution, improve average solution quality, and reduce overall time consumption from previous research results. 
It has been noted that acyclic GenACO gives better results in solving cached data dump optimization. Experimental design The research question is about eliminating the disadvantages of the proposed ACO-GA method to solve the data dumping problem. These problems are that the execution time is quite long and the roulette wheel does not allow the solution to converge quickly. In addition, it is stated that the iteration constraints in the GA algorithm cause the GA performance to be non-optimal. For these reasons, a new hybrid method of the ACO-GA algorithm, named GenACO, framework is proposed to improve the performance of CGACA in this study. In summary, the research question was well defined and aimed to solve an emerging problem in the field of IoT. The proposed solution for the problem includes the use of known methods as a hybrid. However, the algorithms of the proposed method are explained with unclear figures. 1. Related methods should be made more descriptive by using pseudo code or block diagrams.  Agreed. Pseudo-code has been added in Table 3 and Table 4. 2. The studies conducted for the experiment and the datasets used here do not contain enough detail.  Ok. We have added an explanation about these comments (lines: 312; lines: 321-322) Validity of the findings The simulation results presented in the study show that GenACO's cached data optimization reduced the objective function from 0.4374 to 0.4350. It has also been stated that the non-cyclical GenACO reduces time consumption by up to 47%. However, it is not mentioned how this result related to time was obtained.  Eq.(12) has been added as well as an explanation of how to get this 47% saving time on lines: 520-522. Additional comments In line 85, the word “Wang et al.” is written repeatedly.  Ok. We have fixed it. The study is valuable in that it offers a solution to a current problem. However, the improvement achieved for memory space with the proposed method is very limited. The time-related gain, on the other hand, is not clearly expressed. The work is acceptable after the corrections stated in the other titles. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background: The planning and control of wind power production rely heavily on short-term wind speed forecasting. Due to the non-linearity and non-stationarity of wind, it is difficult to carry out accurate modeling and prediction through traditional wind speed forecasting models. Methods: In the paper, we combine empirical mode decomposition (EMD), feature selection (FS), support vector regression (SVR) and cross-validated lasso (LassoCV) to develop a new wind speed forecasting model, aiming to improve the prediction performance of wind speed. EMD is used to extract the intrinsic mode functions (IMFs) from the original wind speed time series to eliminate the non-stationarity in the time series. FS and SVR are combined to predict the high-frequency IMF obtained by EMD.</ns0:p><ns0:p>LassoCV is used to complete the prediction of low-frequency IMF and trend. Results: Data collected from two wind stations in Michigan, USA are adopted to test the proposed combined model. Experimental results show that in multi-step wind speed forecasting, compared with the classic individual and traditional EMD-based combined models, the proposed model has better prediction performance. Conclusions: Through the proposed combined model, the wind speed forecast can be effectively improved.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>As a sustainable and renewable energy alternative to traditional fossil fuels, wind power has attracted widespread attention and rapid development in recent years <ns0:ref type='bibr' target='#b14'>(Hu et al. 2018)</ns0:ref>. According to the statistical report of the Global Wind Energy Council, the world capacity is about 650.8 GW, of which the installed capacity in 2019 is 59.7 GW (GLOBAL 2020). However, with the increase of grid-connected wind power, the stability of the power system will be challenged <ns0:ref type='bibr' target='#b25'>(Liu et al. 2018a)</ns0:ref>. This is because wind power is closely related to the non-stationarity of wind speed.</ns0:p><ns0:p>Accurate wind speed forecasting will provide support for wind power planning and control, and even help reduce the impact of unexpected events on the stability of the power system <ns0:ref type='bibr' target='#b26'>(Liu et al. 2018b)</ns0:ref>. Due to the non-linearity and non-stationarity of wind, it is difficult to establish a satisfactory wind speed forecasting model. To this end, researchers have made great efforts to improve forecasting performance from different aspects, including basic predictive models, preprocessing methods, and combined or hybrid strategies.</ns0:p><ns0:p>For basic predictive models, a variety of methods has been presented, mainly including physical models, statistical models, and machine learning. Physical models usually use physical parameters such as temperature and pressure to build wind speed forecasting models <ns0:ref type='bibr' target='#b11'>(Heng et al. 2016)</ns0:ref>. One of the representative technologies is Numerical Weather Prediction (NWP). These models can PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science usually achieve better performance in medium and long-term wind speed forecasting, but are not suitable for short-term wind speed forecasting.</ns0:p><ns0:p>The statistical model is a method of using historical data to predict wind speed. 
Commonly used statistical models have autoregressive (AR) <ns0:ref type='bibr' target='#b30'>(Lydia et al. 2016a</ns0:ref>), autoregressive moving average (ARMA) <ns0:ref type='bibr' target='#b44'>(Torres et al. 2005</ns0:ref>) and autoregressive integrated moving average (ARIMA) <ns0:ref type='bibr'>(Wang &amp; Hu 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>Kavasseri et al. (Kavasseri &amp; Seetharaman 2009)</ns0:ref> proposed an f-ARIMA model for wind speed forecasting, and claimed that compared with the persistence model, their model has significantly improved the prediction accuracy. <ns0:ref type='bibr' target='#b0'>Maatallah et al. (Ait Maatallah et al. 2015)</ns0:ref> developed a Hammerstein autoregressive model to predict wind speed, and verified that their model has a better root mean square error (RMSE) than ARIMA and ANN. Poggi et al. <ns0:ref type='bibr' target='#b37'>(Poggi et al. 2003</ns0:ref>) developed a model to predict wind speeds of three Mediterranean sites in Corsica based on AR, and proved that the synthetic time series can retain the statistical characteristics of wind speeds. Likely, Lydia et al. <ns0:ref type='bibr' target='#b31'>(Lydia et al. 2016b</ns0:ref>) presented a short-term wind speed forecasting model by combining linear AR and non-linear AR. In general, the statistical model is based on the linear assumption of data, while the wind speed series have non-linear characteristics, which makes those methods unable to effectively deal with the non-linear characteristics of wind.</ns0:p><ns0:p>To solve the problem, machine learning is introduced by researchers to predict wind speed.</ns0:p><ns0:p>Normally, machine learning is used as a predictive model or parameter optimization, mainly includes the evolutionary algorithm, extreme learning machine (ELM) algorithm, ANN algorithm and SVM algorithm. <ns0:ref type='bibr' target='#b51'>Wang et al. (Wang 2017</ns0:ref>) presented a wind speed forecasting model by combining SVM and particle swarm optimization (PSO). Zhang et al. <ns0:ref type='bibr' target='#b54'>(Zhang et al. 2019)</ns0:ref> combined online sequential outlier robust ELM with hybrid mode decomposition (HMD) to predict wind speed. <ns0:ref type='bibr' target='#b47'>Wang et al. (Wang et al. 2018</ns0:ref>) developed an error correction-based ELM model for short-term wind speed forecasting. <ns0:ref type='bibr' target='#b27'>Liu et al. (Liu et al. 2020</ns0:ref>) introduced the Jaya-SVM (Jaya algorithm-based support vector machine) into wind speed forecasting. Krishnaveny et al. <ns0:ref type='bibr' target='#b32'>(Nair et al. 2017)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Boltzmann machines presented a wind speed forecasting. Hong et al. <ns0:ref type='bibr' target='#b12'>(Hong &amp; Satriani 2020)</ns0:ref> based on a convolutional neural network developed a day-ahead wind speed forecasting model.</ns0:p><ns0:p>Although researchers claim that deep learning can achieve better performance, these methods are computationally intensive and prone to overfitting on small data sets.</ns0:p><ns0:p>In addition to these basic forecasting models, preprocessing methods such as feature selection (FS) are also introduced in wind speed forecasting. FS is mainly used to select the best input for the basic predictive model, which usually greatly affects the accuracy of the model <ns0:ref type='bibr'>(Li et al. 2018a</ns0:ref>).</ns0:p><ns0:p>Paramasivan et al. 
<ns0:ref type='bibr' target='#b35'>(Paramasivan &amp; Lopez 2016</ns0:ref>) employed a ReliefF feature selection algorithm to identify key features, and then used a bagging neural network to predict the wind speed. Niu et al. <ns0:ref type='bibr' target='#b33'>(Niu et al. 2018</ns0:ref>) presented a multi-step wind speed forecasting model using optimal FS, modified bat algorithm and cognition strategy. Botha et al. <ns0:ref type='bibr' target='#b4'>(Botha &amp; Walt 2017)</ns0:ref> combined FS with SVM to predict short-term wind speed. <ns0:ref type='bibr' target='#b21'>Kong et al. (Kong et al. 2015)</ns0:ref> combined feature selection and reduced support vector machines (RSVM) for wind speed forecasting.</ns0:p><ns0:p>Due to the unstable nature of wind, the model of combined-or hybrid-signal processing technology has become the mainstream of wind speed forecasting. Wherein the signal processing technology is usually employed to decompose the wind speed to reduce or eliminate the instability. Commonly used signal processing techniques have empirical mode decomposition (EMD), variational mode decomposition (VMD) and wavelet transform (WT). Wang et al. <ns0:ref type='bibr' target='#b49'>(Wang et al. 2016b</ns0:ref>) decomposed wind speed into stable signals using ensemble empirical mode decomposition (EEMD). Sun et al. <ns0:ref type='bibr' target='#b40'>(Sun &amp; Wang 2018</ns0:ref>) developed a fast ensemble empirical mode decomposition model to improve the accuracy of wind speed forecasting. Tascikaraoglu et al. <ns0:ref type='bibr' target='#b41'>(Tascikaraoglu et al. 2016)</ns0:ref> based on WT proposed a wind speed forecasting model. <ns0:ref type='bibr' target='#b13'>Hu et al. (Hu &amp; Wang 2015)</ns0:ref> adopted an empirical wavelet transform (EWT) to extract key information in wind speed time series. <ns0:ref type='bibr' target='#b52'>Yu et al. (Yu et al. 2017</ns0:ref>) explored the performance of EMD, EEMD and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) in wind speed forecasting.</ns0:p><ns0:p>In the field of wind speed forecasting, there are mainly three forecast scenarios: short-term forecasting, medium-term forecasting and long-term forecasting. Among them, short-term wind speed forecasting is essential for estimating power generation, and it is difficult to predict accurately due to the nonlinearity and instability of wind speed. Therefore, in the study, we tried to develop a new model to forecast short-term wind speed. The originality of this model is to propose a combined model of EMD, FS, SVR and Cross-validated Lasso (LassoCV) for multistep wind speed forecasting. The framework of our study is as follows: (a) EMD is used to extract the intrinsic mode functions (IMFs) from the original wind speed time series; (b) FS and SVR are combined to predict high-frequency IMF; (c) LassoCV is used to complete the prediction of low-solution is to decompose different frequencies from chaotic wind data <ns0:ref type='bibr' target='#b3'>(Bokde et al. 2019)</ns0:ref>.</ns0:p><ns0:p>Common decomposition algorithms include Wavelet transform, morphology filters, EMD and many others. Wavelet transform is not adaptive and follows the prior knowledge of its mother wavelet, so somewhat limits its ability to extract nonlinear and non-stationary components from the data. Similarly, the morphology filters have to select the shape and the length of the structural element. 
There is no uniform standard and depends on human experience, whereas EMD has received great attention from researchers because of its superior performance and easy-tounderstand. Therefore, in this study, we use EMD to preprocessing the wind speed.</ns0:p><ns0:p>EMD is essentially a non-linear signal analysis method that can handle non-linear and nonstationary time series <ns0:ref type='bibr' target='#b15'>(Huang et al. 1998)</ns0:ref>. EMD uses the time-scale characteristics of the data to decompose the signal, and does not need to set any basis functions in advance. In theory, EMD can be applied to any type of signal. Since EMD was proposed, it has been rapidly applied to many different engineering fields such as marine and atmospheric research, seismic record analysis and mechanical fault diagnosis.</ns0:p><ns0:p>The basic idea of EMD is to decompose non-stationary time series signals into a series of IMFs along with a residue <ns0:ref type='bibr' target='#b15'>(Huang et al. 1998)</ns0:ref>. The IMF should meet two principles: (1) the number of extreme and zero values must be equal or differ by at most one; (2) the average value of upper envelop and lower envelope must be zero <ns0:ref type='bibr' target='#b56'>(Ziqiang &amp; Puthusserypady 2007)</ns0:ref>. Let &#119904; ( &#119905; ) , &#119905; = 1,2,&#8230;, &#119897; be a time series. EMD decomposition steps are as follows:</ns0:p><ns0:p>Step 1: Identify the local minima and maxima of the time series.</ns0:p><ns0:p>Step 2: Use cubic splines to interpolate local minima and maxima values to generate lower &#119904; &#119897; and upper .</ns0:p><ns0:p>( &#119905; ) &#119904; &#119906; ( &#119905; ) Step 3: Computer the average envelope of the upper and lower envelopes</ns0:p><ns0:formula xml:id='formula_0'>&#119898; &#119905; = &#119904; &#119906; ( &#119905; ) + &#119904; &#119897; ( &#119905; ) 2</ns0:formula><ns0:p>Step 4: Subtract the average envelope from the original time series &#8462; ( &#119905; ) = &#119904; ( &#119905; ) -&#119898; &#119905; Step 5: Check if meets the two principles of IMF. If so, treat as the new IMF and</ns0:p><ns0:formula xml:id='formula_1'>&#8462; ( &#119905; ) &#8462; ( &#119905; ) &#119888; ( &#119905; )</ns0:formula><ns0:p>calculate the residual signal . Otherwise, replace with , and then repeat</ns0:p><ns0:formula xml:id='formula_2'>&#119903; ( &#119905; ) = &#119904; ( &#119905; ) -&#8462; ( &#119905; ) &#8462; ( &#119905; ) &#119904; ( &#119905; )</ns0:formula><ns0:p>steps 1 to 5.</ns0:p><ns0:p>Step 6: Set as new and repeat steps 1 to 5 until all IMFs are obtained.</ns0:p><ns0:formula xml:id='formula_3'>&#119903; ( &#119905; ) &#119904; ( &#119905; )</ns0:formula><ns0:p>Through the whole process, a set of IMFs from high to low frequency can be extracted from the time series. Therefore, the original time series can be expressed as:</ns0:p><ns0:formula xml:id='formula_4'>&#119904; ( &#119905; ) = &#119899; &#8721; &#119894; = 1 &#119888; &#119894; ( &#119905; ) + &#119903; &#119899; ( &#119905; )</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where is the number of IMFs.</ns0:p><ns0:p>refers to the IMF, which is periodic and almost orthogonal to</ns0:p><ns0:formula xml:id='formula_5'>&#119899; &#119888; &#119894; ( &#119905; )</ns0:formula><ns0:p>each other <ns0:ref type='bibr'>(Li et al. 2018b</ns0:ref>). 
$r_n(t)$ is the final residual, representing the trend of $s(t)$.

Feature selection

After obtaining the IMF components of wind speed, we need to predict them. In the study, we use the lagged observations of each IMF component as the raw features, forecast each IMF component separately, and add all the predicted components to obtain the final wind speed. Although the raw features contain sufficient information for forecasting, some irrelevant or only partially relevant features may have a negative impact on the model. To avoid this, a common strategy is to use feature selection to remove irrelevant features. Commonly used feature selection algorithms include the filter, wrapper, heuristic search, and embedded methods (Chandrashekar & Sahin 2014). In this study, we use the filter method. To score the different variables, we use the univariate linear regression test to calculate the correlation between each feature and the output (Liu et al. 2019b), which is defined as:

$$Cor_i = \frac{(X[:,i] - \mathrm{mean}(X[:,i])) \cdot (y - \mathrm{mean}(y))}{\mathrm{std}(X[:,i]) \cdot \mathrm{std}(y)}$$

where $X$ is an $N \times M$ matrix whose columns are the features, and $y$ is the $N \times 1$ vector of the output we are interested in. Based on the rank of the correlations, the irrelevant or partially relevant features are removed.

Support vector regression

The support vector machine (SVM) is a learning method based on the structural risk minimization criterion, which minimizes the expected risk and obtains good generalization performance on unknown data. Support vector regression (SVR) is an extension of SVM to regression problems (Drucker et al. 1997). Due to the nonlinear and non-stationary nature of wind speed, SVR is widely used in short-term wind speed forecasting (Khosravi et al. 2018; Liu et al. 2019a; Santamaría-Bonfil et al. 2016). In this research, we use EMD to decompose the wind speed into IMF components, and the high-frequency IMF component contains the nonlinear and non-stationary part of the wind speed. To obtain better generalization performance, we follow existing research and use SVR to predict it.

The main idea of SVR is to perform linear regression in the high-dimensional feature space obtained by mapping the original input through a predefined function $\phi(x)$, while minimizing the structural risk (Chen et al. 2018). Given a set of samples $\{x_i, y_i\}$, $i = 1,2,\dots,N$, where $x_i$ is the input and $y_i$ is the output, the objective is:

$$f(x) = W^{T}\phi(x) + b$$
$$R[f] = \frac{1}{2}\|W\|^{2} + C\sum_{i=1}^{N} L(x_i, y_i, f(x_i))$$

where $W$ and $b$ are the regression coefficient and bias, respectively, and $C$ is the penalty coefficient. $L(x_i, y_i, f(x_i))$ represents the loss function, and $R[f]$ is the structural risk. The corresponding constrained optimization problem can be expressed as:

$$\min \ \frac{1}{2}\|W\|^{2} + C\sum_{i=1}^{N}(\xi_i + \xi_i^{*})$$
$$\text{s.t.}\quad y_i - W^{T}\phi(x_i) - b \le \epsilon + \xi_i,\qquad W^{T}\phi(x_i) + b - y_i \le \epsilon + \xi_i^{*},\qquad \xi_i, \xi_i^{*} \ge 0,\ i = 1,2,\dots,N$$

where $\xi_i$ and $\xi_i^{*}$ are the slack variables. By introducing Lagrange multipliers, the regression can be expressed as:

$$f(x) = \sum_{i=1}^{N}(\alpha_i - \alpha_i^{*})K(x_i, x) + b$$

where $\alpha_i$ and $\alpha_i^{*}$ are the Lagrange multipliers that satisfy the conditions $\alpha_i \ge 0$, $\alpha_i^{*} \ge 0$ and $\sum_{i=1}^{N}(\alpha_i - \alpha_i^{*}) = 0$, and $K(x_i, x)$ is a kernel function conforming to Mercer's theorem.

Cross-validated lasso

The Lasso algorithm is a regression model that performs feature selection and regularization at the same time. It was originally proposed by Robert Tibshirani of Stanford University and offers good prediction accuracy and interpretability (Tibshirani 1996). Normally, in regression, we want to find a coefficient vector $\beta = (\beta_1,\dots,\beta_p)$ that satisfies:

$$Y = X\beta + \epsilon, \qquad E[\epsilon|X] = 0$$

where $Y$ is the dependent variable, $X$ is the covariate, and $\epsilon$ is the unobserved noise. Lasso minimizes the objective function while forcing the sum of the absolute values of the coefficients to be less than a fixed value (Hung et al. 2016):

$$\min_{\beta_0,\beta}\ \left\{\frac{1}{N}\sum_{i=1}^{N}(y_i - \beta_0 - x_i^{T}\beta)^{2}\right\}\quad \text{s.t.}\quad \sum_{j=1}^{p}|\beta_j| \le t$$

Rewritten in the Lagrangian form:

$$\hat{\beta}_{lasso} = \mathop{\mathrm{argmin}}_{\beta \in R^{p}}\left\{\frac{1}{N}\|y - X\beta\|_{2}^{2} + \lambda\|\beta\|_{1}\right\}$$

The $L_1$-norm is used instead of the $L_2$-norm in Lasso. Since the constraint region is diamond-shaped, the solution is more likely to lie at a corner of the region. As a result, the lasso solution is sparse, with some coefficients set exactly to zero; that is, Lasso performs a straightforward feature selection.

To estimate $\hat{\beta}_{lasso}$, the value of the penalty parameter $\lambda$ is critically important. However, the optimal $\lambda$ is not given automatically. If $\lambda$ is chosen appropriately, Lasso achieves fast convergence under fairly general conditions; if it is chosen inappropriately, Lasso may be inconsistent or converge more slowly. In the paper, we adopt the cross-validated Lasso algorithm, in which the penalty parameter $\lambda$ is chosen by cross-validation, which is also the leading recommendation in the theoretical literature (Park & Casella 2008).

Prediction performance criteria

In the study, the mean absolute percentage error (MAPE), mean absolute error (MAE) and RMSE are used as performance indicators to evaluate the proposed wind forecasting model. They are defined as follows:

$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{Y_i - \hat{Y}_i}{Y_i}\right|$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|Y_i - \hat{Y}_i\right|$$
$$RMSE = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(Y_i - \hat{Y}_i)^{2}}$$

where $Y_i$ and $\hat{Y}_i$ refer to the observed and predicted wind speed of data point $i$, respectively. For MAPE, MAE and RMSE, the smaller the value, the better the performance.

Results

Wind speed data

The wind speed data used in the study are gathered from two wind stations in Michigan, USA.

Experiments and result analysis

To verify the effectiveness of the proposed model, we compare it with five classic individual models: Persistence, ELM, SVR, ANN and ARIMA. The 1- to 3-step forecasting results of these models on time series #1 and #2 are displayed in Fig. 3-4, and the corresponding error estimates are listed in Table 2-5. It is worth noting that, for a fair comparison, the parameters of the involved models are selected based on cross-validation. Based on the experimental results, we can draw the following conclusions:

(1) In the 1-step forecasting, for time series #1, the proposed model obtains the best accuracy: RMSE, MAE, and MAPE are 0.5859, 0.4426, and 21.11%, respectively. The classic individual models ranked from low to high RMSE are ELM, ANN, Persistence, SVR, and ARIMA, with MAPE values of 36.20%, 36.24%, 36.20%, 34.87%, and 34.25%, respectively. Likewise, for time series #2, compared with the classic individual models, the proposed model still obtains the best performance, with a MAPE value of 17.10%.

(2) In the 2-step forecasting, when time series #1 is used, the proposed model has the lowest error values, i.e., the RMSE, MAE, and MAPE are 0.7531, 0.5848, and 24.78%, respectively. In addition, for time series #2, the proposed model still achieves the lowest error values.
Take MAPE as an example, the value of MAPE is 22.99%, which is significantly lower than other models.</ns0:p><ns0:p>(3) In the 3-step forecasting, the proposed model is still the model with the highest prediction accuracy, and the MAPE of time series #1 and #2 are 27.55% and 24.59%, respectively. And Persistence has the worst RMSE value among these models, with MAPE of 57.64% and 47.99%, respectively.</ns0:p><ns0:p>In general, under 1-to 3-step forecasting, the proposed model can obtain the best prediction performance compared with the classic individual models.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head></ns0:div> <ns0:div><ns0:head>Compared with traditional EMD methods</ns0:head><ns0:p>As a nonlinear signal analysis method for processing nonlinear and non-stationary time series, EMD has been widely used in time series. To further verify the effectiveness of our EMD model, we compare it with four widely used EMD models, namely EMD-ELM, EMD-SVR, EMD-SP-SVR, and EMD-ANN. It is worth noting that in this study, these methods used the same way as our proposed model, using EMD to decompose the wind speed, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science methods and the proposed method are displayed in Fig. <ns0:ref type='figure'>5</ns0:ref>-6 and Table <ns0:ref type='table'>6</ns0:ref>-9. Based on Fig. <ns0:ref type='figure'>5</ns0:ref>-6 and Table <ns0:ref type='table'>6</ns0:ref>-9, it can be observed that:</ns0:p><ns0:p>(1) Compared with the above-mentioned classic individual models, the performance of the EMDbased method is significantly improved. Take time series #1 as an example, in the 1-step forecasting, the value of RMSE of the EMD-based methods is around 0.60, while the classic individual model is around 1.20. After the wind speed is decomposed by EMD, the value of RMSE is reduced almost doubled.</ns0:p><ns0:p>(2) For time series #1, except for the MAE in the 3-step forecasting, the performance indicators obtained from the proposed model are significantly better than those EMD-based combined models. For the 3-step forecasting, the performance of EMD-SVR and EMD-SVR-SP in MAE is slightly better than the proposed combined model, but in other evaluation indicators, the proposed combined model achieves a significantly better performance. Furthermore, EMD-ANN is always worse in MAPE as compared with the other three combined models, with MAPE of 23.55%, 27.67%, and 29.31% for 1-to 3-step forecasting.</ns0:p><ns0:p>(3) For time series #2, in 1-to 3-step wind speed forecasting, the proposed combined model obtains the best prediction results. The RMSE, MAE and MAPE in the 1-step forecasting are 0.5593, 0.419, and 17.10%, respectively. In comparison, among the other four EMD-based combined models, the EMD-ELM and EMD-ANN models have similar prediction performance in 1-to 3-step forecasting, with MAPE values of 21.59%, 27.49%, 27.65% and 21.83%, 25.3%, 27.86%, respectively.</ns0:p><ns0:p>In total, the EMD-based method has obvious advantages over traditional methods, and the proposed method that using EMD, FS, SVR and LassoCV can achieve better performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance of SVR-SP and LassoCV on different IMFs</ns0:head><ns0:p>According to the EMD principle, the frequency of the IMF components is from high to low. The non-linear and non-stationary information of wind speed data is mainly concentrated in the highfrequency IMF, and the low-frequency IMF presents a Sin-like function curve. 
Based on its characteristics, in this study we use SVR-SP and LassoCV to predict IMFs of different frequencies.</ns0:p><ns0:p>In order to verify the effectiveness of this hybrid EMD model, in this section, we take time series #2 as an example to analyze the performance of the two methods on different IMF components. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science (IMF2~IMF7, Trend), and its RMSE is already close to zero at IMF4. Moreover, SVR-SP has a risk of overfitting when predicting low frequencies, resulting in poor performance. In total, the proposed model that combines the EMD decomposition characteristics and the advantages of the algorithm can achieve better performance than the traditional EMD model.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>As a sustainable and renewable energy, wind power has attracted widespread attention and rapid development in recent years. Reliable and accurate wind speed forecasting will provide support for wind power planning and control. Due to the non-linearity and non-stationarity of wind, forecasting is still a difficult yet challenging problem. In the paper, we develop a new wind speed forecasting model based on EMD, FS, SVR and LassoCV. EMD is employed to extract IMFs from the original non-stationary wind speed time series. FS and SVR are combined to predict the highfrequency IMF. LassoCV is adopted to complete the prediction of low-frequency IMF and trend.</ns0:p><ns0:p>By testing in two wind speeds obtained from Michigan, USA, the experimental results show that under 1-to 3-step forecasting the proposed model can achieve better prediction performance than the classic individual and traditional EMD combined models. Although the proposed model has achieved good performance, it still has some limitations. After the new data is updated, the model needs to be retrained. In future research, we will try to integrate online learning in our proposed method.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The whole process of the proposed model. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>exploited the performance of three different models, i.e. ANN, ARIMA and hybrid model, in wind speed forecasting. Azeem et al. (Azeem et al. 2018) investigated the KNNbased and ANN-based models for wind speed forecasting. Recently, deep learning, a new branch of machine learning, has received extensive attention. It has been widely used for regression and classification problems. According to the literature, deep learning can abstract the hidden structure and inherent characteristics of data compared with shallow methods. Khodayar et al. (Khodayar &amp; Wang 2019) introduced a scalable graph convolutional deep learning (GCDLA) for wind speed forecasting. Wang et al. (Wang et al. 2016a) investigated a deep belief network model for wind speed forecasting. Khdayar et al. (Khodayar et al. 2019) combined rough set theory and restricted PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>|&#120573;</ns0:head><ns0:label /><ns0:figDesc>minimize the objective function while forcing the sum of the absolute values of the coefficients to be less than a fixed value (Hung et al. 2016): &#119895; | &#8804; &#119905; Rewritten in the Lagrangian form: PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>using a single classifier to predict each IMF component separately, and adding all the prediction results to get the final prediction wind speed. The prediction results and the error estimated results of these four EMD-based PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,349.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>In the 1-step forecasting, for time series #1, the proposed model obtains the best accuracy: RMSE, MAE, and MAPE are 0.5859, 0.4426, and 21.11%, respectively. The classic individual models from low to high based on RMSE are ELM, ANN, Persistence, SVR, and ARIMA, with MAPE values of 36.20%, 36.24%, 36.20%, 34.87%, and 34.25%, respectively. Likely, in time series #2, compared with the classic individual models, the proposed model still obtains the best performance, and the MAPE value is 17.10%.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 10</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>lists the RMSE of SVR-SP and LassoCV on different IMF components. It is worth mentioning that in multi-step prediction, the prediction accuracy of the first step is more important than the other steps, which is of great significance for the accurate estimation of wind power. It can be seen from Table10that SVR-SP can obtain significantly better performance than LassoCV at high frequency (IMF1), while LassoCV can obtain better performance at low frequencies</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Statistic indices Dataset Date Percentage Mean (m/s) Max (m/s) Min (m/s) Std. Stew. Kurt.</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Computer Science</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Models Models</ns0:cell><ns0:cell /><ns0:cell>1-step 1-step 1-step</ns0:cell><ns0:cell>2-step</ns0:cell><ns0:cell cols='2'>2-step 3-step</ns0:cell><ns0:cell>3-step 2-step</ns0:cell><ns0:cell>3-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Persistence P RMSE (%) Persistence P RMSE (%)</ns0:cell><ns0:cell>102.98 127.42</ns0:cell><ns0:cell>89.54</ns0:cell><ns0:cell cols='2'>111.04 111.11</ns0:cell><ns0:cell>122.89</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>All P MAE (%) Time series #1 P MAE (%)</ns0:cell><ns0:cell cols='4'>Sept. 2019 ~ Oct. 2019 103.24 108.97 132.24 83.49 103.08</ns0:cell><ns0:cell>100% 116.05</ns0:cell><ns0:cell>3.2729</ns0:cell><ns0:cell>14.4</ns0:cell><ns0:cell>2.398 0.916 0.949</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Training set Sept. 1, 2019 ~ Oct. 20, 2019 P MAPE (%) 71.47 100.34 P MAPE (%) 110.33 78.38 95.19</ns0:cell><ns0:cell>~83% 109.26</ns0:cell><ns0:cell>3.2975</ns0:cell><ns0:cell>14.4</ns0:cell><ns0:cell>2.378 0.871 0.865</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Testing set</ns0:cell><ns0:cell cols='4'>Oct. 21, 2019 ~ Oct. 
31, 2019</ns0:cell><ns0:cell>~17%</ns0:cell><ns0:cell>3.1614</ns0:cell><ns0:cell>13.9</ns0:cell><ns0:cell>2.486 1.108 1.312</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>All P RMSE (%) Time series #2 ARIMA ARIMA P RMSE (%)</ns0:cell><ns0:cell cols='4'>Sept. 2019 ~ Oct. 2020 100.11 101.60 107.55 75.25 92.86</ns0:cell><ns0:cell>100% 106.94</ns0:cell><ns0:cell>3.6693</ns0:cell><ns0:cell>11.3</ns0:cell><ns0:cell>2.172 0.757 0.257</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Training set Sept. 1, 2019 ~ Oct. 20, 2020 P MAE (%) 103.55 97.69 P MAE (%) 121.83 74.83 89.35</ns0:cell><ns0:cell>~83% 99.60</ns0:cell><ns0:cell>3.6919</ns0:cell><ns0:cell>11.3</ns0:cell><ns0:cell>2.183 0.807 0.353</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Testing set P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell cols='4'>Oct. 21, 2019 ~ Oct. 31, 2020 62.20 82.58 126.31 97.56 115.77</ns0:cell><ns0:cell>~17% 95.26</ns0:cell><ns0:cell>3.5667</ns0:cell><ns0:cell>9.3</ns0:cell><ns0:cell>2.118 0.500 -0.318</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>ELM ELM</ns0:cell><ns0:cell>P RMSE (%) P RMSE (%)</ns0:cell><ns0:cell>116.85 123.99</ns0:cell><ns0:cell>81.12</ns0:cell><ns0:cell cols='2'>105.83 100.59</ns0:cell><ns0:cell>112.35</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAE (%) P MAE (%)</ns0:cell><ns0:cell>119.68 142.95</ns0:cell><ns0:cell>82.96</ns0:cell><ns0:cell cols='2'>100.57 99.60</ns0:cell><ns0:cell>100.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell>71.44 161.98</ns0:cell><ns0:cell>123.54</ns0:cell><ns0:cell cols='2'>87.82 144.54</ns0:cell><ns0:cell>100.31</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVR SVR</ns0:cell><ns0:cell>P RMSE (%) P RMSE (%)</ns0:cell><ns0:cell>100.36 107.43</ns0:cell><ns0:cell>73.93</ns0:cell><ns0:cell cols='2'>108.16 89.84</ns0:cell><ns0:cell>109.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAE (%) P MAE (%)</ns0:cell><ns0:cell>103.87 119.81</ns0:cell><ns0:cell>73.66</ns0:cell><ns0:cell cols='2'>103.96 86.54</ns0:cell><ns0:cell>96.76</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell>65.15 113.43</ns0:cell><ns0:cell>89.76</ns0:cell><ns0:cell cols='2'>88.48 103.00</ns0:cell><ns0:cell>91.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ANN ANN</ns0:cell><ns0:cell>P RMSE (%) P RMSE (%)</ns0:cell><ns0:cell>104.54 112.78</ns0:cell><ns0:cell>73.95</ns0:cell><ns0:cell cols='2'>103.68 106.62</ns0:cell><ns0:cell>116.09</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAE (%) P MAE (%)</ns0:cell><ns0:cell>111.33 125.58</ns0:cell><ns0:cell>73.16</ns0:cell><ns0:cell>98.62</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>104.56</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell>71.63 137.50</ns0:cell><ns0:cell>90.12</ns0:cell><ns0:cell cols='2'>84.78 112.23</ns0:cell><ns0:cell>102.23</ns0:cell></ns0:row><ns0:row><ns0:cell>1 1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2 2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:02:57863:1:0:NEW 7 Jun 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>step Models RMSE MAE MAPE (%) RMSE MAE MAPE (%) RMSE MAE MAPE (%)</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell cols='2'>1-step RMSE MAE MAPE (%)</ns0:cell><ns0:cell cols='2'>2-step RMSE MAE MAPE (%)</ns0:cell><ns0:cell cols='2'>3-step RMSE MAE MAPE (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>EMD-ELM EMD-ELM</ns0:cell><ns0:cell>0.6400 0.5128 0.6560 0.5283</ns0:cell><ns0:cell>22.63 21.59</ns0:cell><ns0:cell>0.7854 0.6316 0.8199 0.6669</ns0:cell><ns0:cell>27.22 27.49</ns0:cell><ns0:cell>0.8746 0.6937 0.8775 0.7096</ns0:cell><ns0:cell>29.02 27.65</ns0:cell></ns0:row><ns0:row><ns0:cell>EMD-SVR EMD-SVR</ns0:cell><ns0:cell>0.6379 0.5120 0.6567 0.5233</ns0:cell><ns0:cell>23.32 24.88</ns0:cell><ns0:cell>0.7768 0.6181 0.8317 0.6736</ns0:cell><ns0:cell>27.09 29.85</ns0:cell><ns0:cell>0.8583 0.6749 0.8508 0.6986</ns0:cell><ns0:cell>28.48 30.52</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>EMD-SVR-SP 0.6310 0.4867 EMD-SVR-SP 0.6437 0.4972</ns0:cell><ns0:cell>23.03 24.06</ns0:cell><ns0:cell>0.7987 0.6141 0.8211 0.6718</ns0:cell><ns0:cell>26.30 28.53</ns0:cell><ns0:cell>0.8591 0.6762 0.8894 0.7264</ns0:cell><ns0:cell>28.66 32.31</ns0:cell></ns0:row><ns0:row><ns0:cell>EMD-ANN EMD-ANN</ns0:cell><ns0:cell>0.6342 0.5055 0.6397 0.5046</ns0:cell><ns0:cell>23.55 21.83</ns0:cell><ns0:cell>0.7879 0.6221 0.7927 0.6373</ns0:cell><ns0:cell>27.67 25.34</ns0:cell><ns0:cell>0.8987 0.7040 0.8520 0.6934</ns0:cell><ns0:cell>29.31 27.86</ns0:cell></ns0:row><ns0:row><ns0:cell>The proposed The proposed</ns0:cell><ns0:cell>0.5859 0.4426 0.5593 0.4193</ns0:cell><ns0:cell cols='2'>21.11 0.7531 0.5848 17.10 0.7540 0.5966</ns0:cell><ns0:cell cols='2'>24.78 0.8528 0.6798 22.99 0.7911 0.6437</ns0:cell><ns0:cell>27.55 24.59</ns0:cell></ns0:row><ns0:row><ns0:cell>1 1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
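To make the combined forecasting procedure described above concrete, the following is a minimal Python sketch of how the per-component modelling could be wired together with scikit-learn: the first (high-frequency) IMF is predicted with filter-based feature selection plus SVR, and the remaining IMFs and the trend with cross-validated Lasso, after which the component forecasts are summed. The sketch assumes the wind speed series has already been decomposed into equal-length components (as EMD produces); the function names, the number of lags, the selection size `k_best`, the forecasting horizon handling, and the SVR/LassoCV settings are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR


def make_lag_features(series, n_lags=6, horizon=1):
    """Build a lag matrix X and target y for one component: each row of X
    holds the previous n_lags values, y is the value `horizon` steps ahead
    (this lag construction is an assumption, not the paper's exact setup)."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return np.asarray(X), np.asarray(y)


def forecast_wind_speed(components, split, n_lags=6, horizon=1, k_best=4):
    """components: IMFs ordered from high to low frequency, with the trend last.
    The first component is modelled with SelectKBest + SVR, all the others
    with LassoCV; the per-component forecasts are summed."""
    total = None
    for idx, comp in enumerate(components):
        X, y = make_lag_features(np.asarray(comp, dtype=float), n_lags, horizon)
        X_train, X_test, y_train = X[:split], X[split:], y[:split]
        if idx == 0:
            # high-frequency IMF: filter feature selection followed by SVR
            model = make_pipeline(SelectKBest(f_regression, k=min(k_best, n_lags)), SVR())
        else:
            # lower-frequency IMFs and the trend: cross-validated Lasso
            model = LassoCV(cv=5)
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        total = pred if total is None else total + pred
    return total
```

In the paper the hyper-parameters of the actual models are selected by cross-validation, so the fixed settings above should be read only as placeholders illustrating how the EMD components, feature selection, SVR and LassoCV fit together.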
"Response to Reviewer Comments We thank the editor and reviewers for their helpful comments and suggestions. Their comments have improved the manuscript effectively. We have included almost all of their suggestions and present a point-by-point response to their comments. Editor Comment 1: Two reviewers have consistent recommendations. The current version cannot be accepted, and a major revision is suggested. Response: We thank the reviewers and editors for their hard work. Your work has greatly improved the manuscript. We have carefully revised the manuscript based on the reviewer's suggestions. Reviewer 1 Basic reporting Figures presented in the manuscript should be improved: Comment 1: From Fig.1, the reader might think that from IMF(1) (high-frequency) to IMF(n-1), the proposed hybrid approach utilizes FS+SVR combination for predicting the IMF component because of the 3 dots placed at the right of the pipeline processing IMF1. However, from the codes submitted, if the reviewer correctly understands, it seems that FS+SVR is applied only to the very first IMF, while LassoCV is used for all the remaining lower frequency components, including the residue. Is it possible to clarify this issue? If I correctly understood the proposed method, it is better to modify Fig. 1 to avoid this confusion. Response: We apologize for the confusion in Fig. 1. We modified Fig. 1 in the revised manuscript. Comment 2: 2) Fig. 2-6. It is better to add the labels and scales to the X-axis. From .csv files, it is understandable that those axis represent time expressed in hours, but it can be good to put the labels and scales for X-axis. Response: Based on the reviewer’s suggestions, we added labels and scales to the X-axis in Fig. 26. The representation of some tables can be improved: Comment 3: 3) In the opinion of this reviewer, it can be better to present tables 4-5 (the improvement comparison) in form of tables 8 and 9. Response: We thank the reviewer for the helpful suggestion. we presented Tables 4-5 in form of Tables 8-9 in the revised manuscript. Experimental design The proposed methodology explanation and description should be elaborated more by the authors of the manuscript. Specifically: Comment 4: 1) The challenges of predicting high-frequency IMF and low-frequency IMF with residue are not well stated in the text of the manuscript. Why do the authors consider different models for predicting those components and thus, construct a hybrid approach? Why not use the same FS+SVR or LassoCV for all the extracted intrinsic modes? These questions should be better explained in the text of manuscript. Response: We thank the reviewer for the question that need to be further explained in the manuscript. As far as we know, each algorithm has its specific application scenarios. SVR inherits the characteristics of SVM and has strong generalization performance, which is suitable for occasions that require strong generalization performance. The Lasso algorithm is essentially a linear regression model, suitable for curve fitting applications. According to the principles of EMD, the first IMF component of EMD decomposition contains most of the high-frequency information in the wind speed. It is usually necessary to use a more generalized algorithm for prediction, which is consistent with the application scenario of the SVR algorithm. Compared with the first IMF component, the frequency of the other components decomposed by EMD is much lower and presents a Sin-like function curve. 
As a linear regression model, the Lasso algorithm can better fit the Sin-like function curve compared to the SVR algorithm. For a better understanding, we added a detailed description in the 'Introduction' section of the revised manuscript. Comment 5: 2) Feature Selection subsection. From this subsection, the definition of raw features is not very clear. Are these features comprise the statistical feature parameters of lags created from IMF components? Or the lags (i.e., time-series) are used as features? Please, clarify this in the text of the manuscript. Response: We thank the reviewer for the helpful comment. In the study, we use the lag of IMF components to predict each IMF component, and combine all the predicted IMF components to obtain the final wind speed. In order to facilitate understanding, we have made a clear explanation in the revised manuscript. Comment 6: 3) It is recommended to add more details in the text (formulations and explanations) regarding the step 3 from Fig.1, where the predictions of each IMF are combined in the ensemble and final prediction is made. Response: We apologize for the confusing description. In fact, the process of the ensemble is the formula of the “Prediction result” box. To avoid misunderstanding, we modified step 3 in Fig. 1 based on the reviewer’s suggestion. Comment 7: 4) From experimental results the predictive performance of the proposed model looks good. But are there any limitations of the proposed model? Please, state them in the discussion section if they are any. Response: We thank the reviewer for the helpful comment. In fact, the method based on the machine learning proposed in the paper is not an online method, and the model needs to be retrained after the new data is updated. Accordingly, we make a discussion in the “Discussion” section of the revised manuscript. Validity of the findings No comment. Comments for the author The general comments can be split into the major and minor comments which are represented as below. Major comments: Comment 8: 1) How exactly the author combined EMD with the referenced models for the second comparison in the Discussion section? Do those techniques utilize EMD in the same manner as the proposed method? In the opinion of this reviewer, this explanation should be elaborated more. Response: We thank the reviewer for pointing out the unclear description. In the study, the combined EMD for comparison uses the same way as our proposed model, using EMD to decompose the wind speed, using a single classifier to predict each IMF component separately, and adding all the prediction results to get the final prediction wind speed. To avoid confusion, we have made a clear explanation in the ' Discussion' section of the revised manuscript. Comment 9: 2) Fig.1. At the bottom of this figure, Step 3 shows the process of merging the prediction results obtained for IMF components into the ensemble. Is it possible to create an additional figure which shows how the proposed method predicts individual IMF components, for example, for a single 3-step prediction, and how these predictions then form the ensemble? In the opinion of this reviewer, this additional figure can be useful for the potential readers of the manuscript since it provides a better understanding of the intermediate results. Response: Based on the reviewer’s suggestion, we added a sub-figure to illustrate the predict of an individual IMF component. For the prediction ensemble, we apologize for the confusing description. 
In fact, after obtaining the predicted results of each IMF (including residue), we only need to add all the predicted values to get the final wind speed, which is the formula in the 'Prediction result' box. To avoid misunderstandings, we have modified step 3 in Fig. 1 for better understanding in the revised manuscript. Minor comments: Comment 10: 1) In the Introduction section and along with the text of the manuscript, the author mentions short-, medium-, and long-term wind speed forecasting scenarios. In the Results section, the experimental results for the proposed and referenced models are obtained while testing 1-step, 2-step, and 3-step prediction strategies. As a reader, I am curious, whether these strategies used in the experiment cover those 3 scenarios (short-, medium-, long-term predictions) mentioned in Introduction or, for example, they all considered short-term or medium-term prediction scenarios? Is it possible to briefly clarify this moment? Response: We thank the reviewer for the question that need further clarification. The main difference between short-, medium-, and long-term forecasts is that the forecast time interval is different. Short-term forecasts are usually half an hour or a few hours, mid-term forecasts are half a day or a day, and long-term forecasts are usually days or a week. In the field of wind power, shortterm forecast is mainly used to estimate short-term power generation. Medium- and long-term forecasts are generally used to arrange future power generation plans or detect extreme weather. Multi-step forecasting refers to forecasting wind speeds at multiple consecutive time points at the same time, which is usually used in short-term forecasting. Short-term wind speed forecasts are essential for estimating power generation. Accurate power generation estimates mean that the greater the profit, and the more stable the power demand curve allocated by the power company. Comment 11: 2) In the Introduction section of the manuscript there are a lot of acronyms that are not defined in the text of the manuscript. Some of them are well-known, such as ANN, SVM, and ARIMA, but some of them are not. It is a good practice to define the acronyms before their use in the text. Response: We thank the reviewer for pointing out this issue. Based on the reviewer’s suggestion, we added a detailed description for these acronyms in the manuscript. Comment 12: 3) Does the proposed method support online learning, i.e., the update of prediction models when the newly unseen instances of data arrive? Response: Actually, the proposed combined method is not an online method, and the model needs to be retrained after the new data (hourly) is updated. We make a discussion in the “Conclusions” section of the revised manuscript. Comment 13: 4) There are some minor typos in the manuscript, such as at lines 282-283. It is recommended to double-check the manuscript and fix them. Response: According to the reviewer’s suggestions, we have carefully revised the language and grammar of the manuscript. Reviewer 2 Basic reporting Dear Author, from my own observations, Comment 1: If possible, could you please update more background and principles of Empirical Mode Decomposition (EMD), Feature Selection (FS), and Support Vector Regression (SVR) in the specific areas of wind speed forecasting based on your research topic. 
Response: Based on the reviewer’ suggestions, we have added more background and principles of Empirical Mode Decomposition (EMD), Feature Selection (FS) and Support Vector Regression (SVR) in the revised manuscript. Comment 2: The details of training data and testing data should be listed and respectively demonstrated the percentage of training data and testing data of the original dataset in terms of case studies or/and experimentations. Response: We thank the reviewer for pointing out this issue. We added the details (including percentages) of training data and testing data to Table 1 in the revised manuscript. Comment 3: Which are the most advantages of your proposed method in comparison with other existing algorithms of EMD and SVR in the areas of Short-Term Forecasting (STF) based on wind speed data? If possible, could you please have in comparison between with and without noisy signals/environments for STF system and contrast the simulation results between the two scenarios mentioned above. Alternatively, could you please provide either some Tables or update simulation results to make an explanation the advantages of your proposed methodology of EMD and SVR in the fields of wind speed STF and update some statistical criterion based on your research topic. Response: According to the characteristics of the IMF components decomposed by EMD, the first IMF component contains most of the high-frequency information in the wind speed, which requires the algorithm to have strong generalization. SVR inherits the characteristics of SVM and has strong generalization performance. Therefore, in this study, we use SVR to predict high frequencies component. Compared with the first IMF component, the frequency of other components decomposed by EMD is much lower and presents a Sin-like function curve. Compared with SVR, the linear regression model can usually get better performance. So, we introduced Lasso to predict the low-frequency components. To further explain the advantages of the combined model, we conduct a detailed analysis of the performance of SVR and Lasso in different IMF components in the “Discussion” section of the revised manuscript. Experimental design Comment 4: The proposed algorithm should be demonstrated as the form of Algorithm Environment, which can be efficaciously followed by readers. Additionally, the flowchart of proposed method mentioned in this paper should also be updated and included the dataset collections, pre-processing of dataset, post-processing of dataset, feature extractions, feature selections, and the performances of forecasting, respectively. Response: We thank the reviewer for pointing out these issues. Based on the reviewer’s suggestions, we referred to the algorithm environment to demonstrate the algorithm and modified the flowchart of the proposed method in the revised manuscript. Validity of the findings Comment 5: Alternatively, for one thing, different types of experimental samples and the number of feature selection might have either positive or negative impacts for the performances of wind speed forecasting for more complicated industrial systems to some extent. Based on this reason, could you please provide an evidence to make explanations the availability and feasibility of the proposed methodology based on wind speed data for STF system? 
For another thing, the characteristics of both healthy and faulty simulated wind speed data based on time-domain should be modified, and correspondingly the energy distribution of IMFs should also be analysed and discussed. Response: We apologize for the misunderstanding caused by the lack of a clear description. In fact, the purpose of this study is to forecast the future wind speed based on the current wind speed. It is essentially a time series analysis problem, which is different from the traditional “STF” research. In “short-term” wind speed forecasting, generally, healthy wind speed and fault wind speed are not distinguished, and only historical wind speed data are available. Moreover, the EMD used in the study is to decompose IMF components of wind speed, and perform time-series forecasts for each IMF separately. It is not necessary to use IMF energy distribution information to forecast wind speed. Comments for the author Comment 6: Recently, a new survey paper of wind turbine systems concentrates on fault diagnosis, prognosis and resilient control, which included model-based, signal-based, and knowledge-based (data-driven) techniques to demonstrate the characteristics of fault detection, classification, and isolation in various faulty scenarios. Additionally, the novel EMD or/and SVR techniques for STF have also been proposed in some papers. If possible, could you please update these References for having in comparison with other algorithms in the specific field of your research topic in terms of wind speed data for STF system, as well as making any comments? The References are shown in the following below. Reference 1: A. T. Eseye, et. al., “Short-Term Forecasting of Heat Demand of Buildings for Efficient and Optimal Energy Management Based on Integrated Machine Learning Models,” in IEEE Transactions on Industrial Informatics, vol. 16, no. 12, pp. 7743–7755, Dec. 2020. Reference 2: “An Overview on Fault Diagnosis, Prognosis and Resilient Control for Wind Turbine Systems,” Processes, vol. 9, no. 2, p. 300, Feb. 2021. Reference 3: M. Sajjad, et. al., “A Novel CNN-GRU-Based Hybrid Approach for Short-Term Residential Load Forecasting,” in IEEE Access, vol. 8, pp. 143759–143768, 2020. Reference 4: Y. Fu, et. al., “Actuator and Sensor Fault Classification for Wind Turbine Systems Based on Fast Fourier Transform and Uncorrelated Multi-Linear Principal Component Analysis Techniques,” Processes, vol. 8, no. 9, p. 1066, Sep. 2020. Response: We thank the reviewer for the comment that require further clarification. By reviewing the provided literatures, we found that these studies are mainly for detecting wind turbine system failures. However, the goal of the paper is to use current wind speed information to forecast the future wind speed. It is essentially a time series analysis problem, which is inconsistent with wind turbine system failure detection. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background: The planning and control of wind power production rely heavily on short-term wind speed forecasting. Due to the non-linearity and non-stationarity of wind, it is difficult to carry out accurate modeling and prediction through traditional wind speed forecasting models. Methods: In the paper, we combine empirical mode decomposition (EMD), feature selection (FS), support vector regression (SVR) and cross-validated lasso (LassoCV) to develop a new wind speed forecasting model, aiming to improve the prediction performance of wind speed. EMD is used to extract the intrinsic mode functions (IMFs) from the original wind speed time series to eliminate the non-stationarity in the time series. FS and SVR are combined to predict the high-frequency IMF obtained by EMD.</ns0:p><ns0:p>LassoCV is used to complete the prediction of low-frequency IMF and trend. Results: Data collected from two wind stations in Michigan, USA are adopted to test the proposed combined model. Experimental results show that in multi-step wind speed forecasting, compared with the classic individual and traditional EMD-based combined models, the proposed model has better prediction performance. Conclusions: Through the proposed combined model, the wind speed forecast can be effectively improved.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>As a sustainable and renewable energy alternative to traditional fossil fuels, wind power has attracted widespread attention and rapid development in recent years <ns0:ref type='bibr' target='#b17'>(Hu et al. 2018)</ns0:ref>. According to the statistical report of the Global Wind Energy Council, the world capacity is about 650.8 GW <ns0:ref type='bibr' target='#b9'>(Fu et al. 2020)</ns0:ref>, of which the installed capacity in 2019 is 59.7 GW (GLOBAL 2020). However, with the increase of grid-connected wind power, the stability of the power system will be challenged <ns0:ref type='bibr' target='#b29'>(Liu et al. 2018a)</ns0:ref>. This is because wind power is closely related to the non-stationarity of wind speed. Accurate wind speed forecasting will provide support for wind power planning and control, and even help reduce the impact of unexpected events on the stability of the power system <ns0:ref type='bibr' target='#b30'>(Liu et al. 2018b)</ns0:ref>. But due to the non-linearity and non-stationarity of wind, it is difficult to establish a satisfactory wind speed forecasting model. To this end, researchers have made great efforts to improve forecasting performance from different aspects, including basic predictive models, preprocessing methods, and combined or hybrid strategies.</ns0:p><ns0:p>For basic predictive models, a variety of methods has been presented, mainly including physical models, statistical models, and machine learning. The physical model usually uses physical parameters such as temperature and pressure to predict wind speed <ns0:ref type='bibr' target='#b12'>(Heng et al. 2016)</ns0:ref>. Numerical Weather Prediction (NWP) is one of the representative technologies. However, due to the weak PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science correlation between physical parameters and short-term wind speed, this type of model can only be used for medium-and long-term wind speed forecasting, not for short-term wind speed forecasting. 
In the short-term wind speed forecasting, the wind speed is generally predicted by analyzing the inherent laws of historical wind speed data <ns0:ref type='bibr' target='#b6'>(Chen et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Liu et al. 2018b</ns0:ref>).</ns0:p><ns0:p>The statistical model is a method widely used in short-term wind speed forecasting, which uses historical data to predict wind speed. Commonly used statistical models have autoregressive (AR) <ns0:ref type='bibr' target='#b33'>(Lydia et al. 2016a</ns0:ref>), autoregressive moving average (ARMA) <ns0:ref type='bibr' target='#b45'>(Torres et al. 2005</ns0:ref>) and autoregressive integrated moving average (ARIMA) <ns0:ref type='bibr'>(Wang &amp; Hu 2015)</ns0:ref>. Kavasseri et al. <ns0:ref type='bibr' target='#b20'>(Kavasseri &amp; Seetharaman 2009)</ns0:ref> proposed an f-ARIMA model for wind speed forecasting, and claimed that compared with the persistence model, their model has significantly improved the prediction accuracy. <ns0:ref type='bibr' target='#b0'>Maatallah et al. (Ait Maatallah et al. 2015)</ns0:ref> developed a Hammerstein autoregressive model to predict wind speed, and verified that their model has a better root mean square error (RMSE) than ARIMA and ANN. <ns0:ref type='bibr' target='#b40'>Poggi et al. (Poggi et al. 2003</ns0:ref>) developed a model to predict wind speeds of three Mediterranean sites in Corsica based on AR, and proved that the synthetic time series can retain the statistical characteristics of wind speeds. Likely, Lydia et al. <ns0:ref type='bibr' target='#b34'>(Lydia et al. 2016b</ns0:ref>) presented a short-term wind speed forecasting model by combining linear AR and non-linear AR. In general, the statistical model is based on the linear assumption of data, while the wind speed series have non-linear characteristics, which makes those methods unable to effectively deal with the non-linear characteristics of wind.</ns0:p><ns0:p>To solve the problem, machine learning is introduced by researchers to predict wind speed.</ns0:p><ns0:p>Normally, machine learning is used as a predictive model or parameter optimization, mainly includes the evolutionary algorithm, extreme learning machine (ELM) algorithm, ANN algorithm and SVM algorithm. <ns0:ref type='bibr' target='#b50'>Wang et al. (Wang 2017</ns0:ref>) presented a wind speed forecasting model by combining SVM and particle swarm optimization (PSO). <ns0:ref type='bibr' target='#b52'>Zhang et al. (Zhang et al. 2019)</ns0:ref> combined online sequential outlier robust ELM with hybrid mode decomposition (HMD) to predict wind speed. <ns0:ref type='bibr' target='#b48'>Wang et al. (Wang et al. 2018)</ns0:ref> Although researchers claim that deep learning can achieve better performance, these methods are computationally intensive and prone to overfitting on small data sets.</ns0:p><ns0:p>In addition to these basic forecasting models, preprocessing methods such as feature selection (FS) are also introduced in wind speed forecasting. This is because in short-term wind speed forecasting, the lag of historical wind speed is usually used as the feature, which may lead to a certain degree of redundancy. FS is used to select the best input for the basic predictive model, so that the model can obtain better generalization performance <ns0:ref type='bibr'>(Li et al. 2018a)</ns0:ref>. In the field of wind speed forecasting, there are mainly three forecast scenarios: short-term forecasting, medium-term forecasting and long-term forecasting. 
Among them, short-term wind speed forecasting is essential for estimating power generation, and it is difficult to predict accurately due to the nonlinearity and instability of wind speed. Therefore, in the study, we tried</ns0:p></ns0:div> <ns0:div><ns0:head>Empirical model decomposition</ns0:head><ns0:p>Due to the non-stationarity, intermittent and inherent nature of wind speed, it is difficult to directly predict the future wind speed. One possible solution is to decompose different frequencies from chaotic wind data <ns0:ref type='bibr' target='#b2'>(Bokde et al. 2019</ns0:ref>) and use models to predict them separately. Based on this idea, the study introduces signal processing technology to decompose wind speed. Common signal decomposition algorithms include Wavelet transform, morphology filters, EMD and many others.</ns0:p><ns0:p>Wavelet transform is not adaptive and follows the prior knowledge of its mother wavelet, so somewhat limits its ability to extract nonlinear and non-stationary components from the data.</ns0:p><ns0:p>Similarly, the morphology filters have to select the shape and the length of the structural element.</ns0:p><ns0:p>There is no uniform standard and depends on human experience, whereas EMD has received great attention from researchers because of its superior performance and easy-to-understand. Therefore, in this study, we use EMD to preprocessing the wind speed.</ns0:p><ns0:p>EMD is essentially a non-linear signal analysis method that can handle non-linear and nonstationary time series <ns0:ref type='bibr' target='#b18'>(Huang et al. 1998)</ns0:ref>. EMD uses the time-scale characteristics of the data to decompose the signal, and does not need to set any basis functions in advance. In theory, EMD can be applied to any type of signal. Since EMD was proposed, it has been rapidly applied to many different engineering fields such as marine and atmospheric research, seismic record analysis and mechanical fault diagnosis <ns0:ref type='bibr' target='#b10'>(Gao &amp; Liu 2021)</ns0:ref>.</ns0:p><ns0:p>The basic idea of EMD is to decompose non-stationary time series signals into a series of IMFs along with a residue <ns0:ref type='bibr' target='#b18'>(Huang et al. 1998)</ns0:ref>. The IMF should meet two principles: (1) the number of extreme and zero values must be equal or differ by at most one; (2) the average value of upper envelop and lower envelope must be zero <ns0:ref type='bibr' target='#b54'>(Ziqiang &amp; Puthusserypady 2007)</ns0:ref>. Let &#119904; ( &#119905; ) , &#119905; = 1,2,&#8230;, &#119897; be a time series. EMD decomposition steps are as follows:</ns0:p><ns0:p>Step 1: Identify the local minima and maxima of the time series.</ns0:p><ns0:p>Step 2: Use cubic splines to interpolate local minima and maxima values to generate lower &#119904; &#119897; and upper .</ns0:p><ns0:p>( &#119905; ) &#119904; &#119906; ( &#119905; ) Step 3: Computer the average envelope of the upper and lower envelopes</ns0:p><ns0:formula xml:id='formula_0'>&#119898; &#119905; = &#119904; &#119906; ( &#119905; ) + &#119904; &#119897; ( &#119905; ) 2</ns0:formula><ns0:p>Step 4: Subtract the average envelope from the original time series &#8462; ( &#119905; ) = &#119904; ( &#119905; ) -&#119898; &#119905; Step 5: Check if meets the two principles of IMF. If so, treat as the new IMF and</ns0:p><ns0:formula xml:id='formula_1'>&#8462; ( &#119905; ) &#8462; ( &#119905; ) &#119888; ( &#119905; )</ns0:formula><ns0:p>calculate the residual signal . 
Otherwise, replace with , and then repeat</ns0:p><ns0:formula xml:id='formula_2'>&#119903; ( &#119905; ) = &#119904; ( &#119905; ) -&#8462; ( &#119905; ) &#8462; ( &#119905; ) &#119904; ( &#119905; )</ns0:formula><ns0:p>steps 1 to 5.</ns0:p><ns0:p>Step 6: Set as new and repeat steps 1 to 5 until all IMFs are obtained.</ns0:p><ns0:formula xml:id='formula_3'>&#119903; ( &#119905; ) &#119904; ( &#119905; )</ns0:formula><ns0:p>Through the whole process, a set of IMFs from high to low frequency can be extracted from the time series. Therefore, the original time series can be expressed as:</ns0:p><ns0:formula xml:id='formula_4'>&#119904; ( &#119905; ) = &#119899; &#8721; &#119894; = 1 &#119888; &#119894; ( &#119905; ) + &#119903; &#119899; ( &#119905; )</ns0:formula><ns0:p>where is the number of IMFs. refers to the IMF, which is periodic and almost orthogonal to</ns0:p><ns0:formula xml:id='formula_5'>&#119899; &#119888; &#119894; ( &#119905; )</ns0:formula><ns0:p>each other <ns0:ref type='bibr'>(Li et al. 2018b</ns0:ref>). is the final residual representing the trend of .</ns0:p><ns0:p>&#119903; &#119899; ( &#119905; ) &#119904; ( &#119905; )</ns0:p></ns0:div> <ns0:div><ns0:head>Feature selection</ns0:head><ns0:p>After obtaining the IMF components of wind speed, we need to predict it. In the study, we use the observed and lag of the IMF components as the raw features, respectively forecast each IMF component, and add all the predicted IMF components to get the final wind speed. Despite, the raw features contain sufficient information for forecasting, some irrelevant or partially relevant features in the raw features may have a negative impact on the model. To avoid the impact, a common strategy is to use feature selection to remove irrelevant features. Commonly used feature selection algorithms include filter method, wrapper method, heuristic search algorithm, embedded method <ns0:ref type='bibr' target='#b5'>(Chandrashekar &amp; Sahin 2014)</ns0:ref>. In this study, we use the filter method. In order to obtain scores of different variables, we use the univariate linear regression test to calculate the correlation between features and output <ns0:ref type='bibr' target='#b32'>(Liu et al. 2019b)</ns0:ref>, which is defined as :</ns0:p><ns0:formula xml:id='formula_6'>&#119862;&#119900;&#119903; &#119894; = (&#119883;[ :,i ] -&#119898;&#119890;&#119886;&#119899; ( &#119883; [ :,&#119894; ])) * ( &#119910; -&#119898;&#119890;&#119886;&#119899; ( &#119910; )) &#119904;&#119905;&#119889; ( &#119883; [ :,&#119894; ]) * &#119904;&#119905;&#119889; ( &#119910; )</ns0:formula><ns0:p>where is an matrix, each column is a feature. is the vector of the output we are</ns0:p><ns0:formula xml:id='formula_7'>&#119883; &#119873; &#215; &#119872; &#119910; &#119873; &#215; 1</ns0:formula><ns0:p>interested in. Based on the rank of correlation, the irrelevant or partially relevant features are removed.</ns0:p></ns0:div> <ns0:div><ns0:head>Support vector regression</ns0:head><ns0:p>The support vector machine (SVM) is a learning method based on structural risk minimization criteria, which can minimize the expected risk and obtain better generalization performance on unknown data. The support vector regression (SVR) is an extension of SVM for regression problems <ns0:ref type='bibr' target='#b8'>(Drucker et al. 1997)</ns0:ref>. Due to the nonlinear and non-stationary nature of wind speed, SVR is widely used in short-term wind speed forecasting <ns0:ref type='bibr' target='#b23'>(Khosravi et al. 
2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Liu et al. 2019a;</ns0:ref><ns0:ref type='bibr' target='#b41'>Santamar&#237;a-Bonfil et al. 2016</ns0:ref>). In the research, we use EMD to decompose the IMF components of wind speed, and the high-frequency IMF component contains the nonlinear and non-stationary part of wind speed. In order to obtain better generalization performance, we refer to existing research and use SVR to predict it.</ns0:p><ns0:p>The main idea of SVR is to implement linear regression in the high-dimensional feature space structure risks <ns0:ref type='bibr' target='#b6'>(Chen et al. 2018)</ns0:ref>. Given a set of samples , is the output and</ns0:p><ns0:formula xml:id='formula_8'>{&#119909; &#119894; ,&#119910; &#119894; }, &#119894; = 1,2,&#8230;,&#119873; &#119910; &#119894; &#119909; &#119894;</ns0:formula><ns0:p>is the input. The objective is:</ns0:p><ns0:formula xml:id='formula_9'>&#119891; ( &#119909; ) = &#119882; &#119879; &#8709; ( &#119909; ) + &#119887; &#119877; [ &#119891; ] = 1 2 &#8214; &#119882; &#8214; 2 + &#119862; &#119873; &#8721; &#119894; = 1</ns0:formula><ns0:p>&#119871;(&#119909; &#119894; ,&#119910; &#119894; ,&#119891;(&#119909; &#119894; ))</ns0:p><ns0:p>where and are the regression coefficient and bias, respectively. is the penalty coefficient.</ns0:p><ns0:p>&#119882; &#119887; &#119862; &#119871; represents the loss function, and is the structure risk. The corresponding</ns0:p><ns0:formula xml:id='formula_10'>(&#119909; &#119894; ,&#119910; &#119894; ,&#119891;(&#119909; &#119894; )) &#119877; [ &#119891; ]</ns0:formula><ns0:p>constrained optimization problem can be expressed as:</ns0:p><ns0:formula xml:id='formula_11'>&#119898;&#119894;&#119899; 1 2 &#8214; &#119882; &#8214; 2 + &#119862; &#119873; &#8721; &#119894; = 1 (&#120585; &#119894; + &#120585; * &#119894; ) &#119904;.&#119905;. &#119910; &#119894; -&#119882; &#119879; &#120601; ( &#119909; ) -&#119887; &#8804; &#120598; + &#120585; &#119894; &#119882; &#119879; &#120601; ( &#119909; ) + &#119887; -&#119910; &#119894; &#8804; &#120598; + &#120585; * &#119894; &#120585; &#119894; ,&#120585; * &#119894; &#8805; 0, &#119894; = 1,2, &#8230;,&#119899;</ns0:formula><ns0:p>where and refer to the slack variables. By introducing the Lagrange multiplier, the regression is the kernel function conforming to Mercer's theorem.</ns0:p><ns0:formula xml:id='formula_12'>&#8721; &#119873; &#119894; = 1 (&#120572; &#119894; -&#120572; * &#119894; ) = 0 &#119870;(&#119909; &#119894; ,&#119909;)</ns0:formula></ns0:div> <ns0:div><ns0:head>Cross-validated lasso</ns0:head><ns0:p>The Lasso algorithm is a regression model that can perform feature selection and regularization at the same time. It was originally proposed by Robert Tibshirani of Stanford University, with better prediction accuracy and interpretability (Tibshirani 1996). Normally, in regression, we want to find a coefficient that satisfies the following:</ns0:p><ns0:formula xml:id='formula_13'>&#120573; = (&#120573; 1 ,&#8230;,&#120573; &#119901; ) &#119884; = &#119883;&#120573; + &#120576;, &#119864; [ &#120576;|&#119883; ] = 0</ns0:formula><ns0:p>where is the dependent variable, is the covariate, and is the unobserved noise. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_14'>&#119898;&#119894;&#119899; &#120573; 0 , &#120573; { 1 &#119873; &#119873; &#8721; &#119894; = 1 (&#119910; &#119894; -&#120573; 0 -&#119909; &#119879; &#119894; &#120573;) 2 } &#119904;.&#119905;. 
&#119901; &#8721; &#119895; = 1 |&#120573; &#119895; | &#8804; &#119905;</ns0:formula><ns0:p>Rewritten in the Lagrangian form:</ns0:p><ns0:formula xml:id='formula_15'>&#120573; &#119897;&#119886;&#119904;&#119904;&#119900; = &#119886;&#119903;&#119892;&#119898;&#119894;&#119899; &#120573; &#8712; &#119877; &#119901; { 1 &#119873; &#8214; &#119910; -&#119883;&#120573; &#8214; 2 2 + &#120582; &#8214; &#120573; &#8214; 1}</ns0:formula><ns0:p>The -norm is used instead of the -norm in Lasso. Since the constraint region is diamond-</ns0:p><ns0:formula xml:id='formula_16'>&#119871; 1 &#119871; 2</ns0:formula><ns0:p>shaped, it is more likely to pick the solution that lies at the corner of the region. As a result, the solution of the lasso is sparse, with some coefficients set to exactly equal to zero, that is, Lasso performs a straightforward feature selection.</ns0:p><ns0:p>To estimate , the value of the penalty parameter is critically important. However, the optimal &#120573; &#119897;&#119886;&#119904;&#119904;&#119900; &#120582; is not given automatically. If is chosen appropriately, Lasso achieves the fast convergence &#120582; &#120582; under fairly general conditions; On the other hand (chosen inappropriately), Lasso may be inconsistent or have a slower convergence. In the paper, we adopt the cross-validated Lasso algorithm, in which the penalty parameter is chosen based on cross-validation, and this is also &#120582; the leading recommendation way in the theoretical literature <ns0:ref type='bibr' target='#b39'>(Park &amp; Casella 2008)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Prediction performance criteria</ns0:head><ns0:p>In the study, the mean absolute percentage error (MAPE), mean absolute error (MAE) and RMSE are used as performance indicators to evaluate the proposed wind forecasting model, which are defined as follows:</ns0:p><ns0:formula xml:id='formula_17'>&#119872;&#119860;&#119875;&#119864; = 1 &#119873; &#8721; &#119873; &#119894; = 1 | (&#119884; &#119894; -&#119884; &#119894; ) &#119884; &#119894; | &#119872;&#119860;&#119864; = 1 &#119873; &#8721; &#119873; &#119894; = 1 |&#119884; &#119894; -&#119884; &#119894; | &#119877;&#119872;&#119878;&#119864; = 1 &#119873; -1 &#8721; &#119873; i = 1 (&#119884; &#119894; -&#119884; &#119894; ) 2</ns0:formula><ns0:p>where and refer to the observed and predicted wind speed of data point , respectively. For Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Wind speed data</ns0:head><ns0:p>The wind speed data used in the study is gathered from two wind stations in Michigan, USA <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments and result analysis</ns0:head><ns0:p>To verify the effectiveness of the proposed model, we compare it with five classic individual models, including Persistence, ELM, SVR and ANN, ARIMA. The 1-to 3-step forecasting results of these models under time series #1 and #2 are displayed in Fig. <ns0:ref type='figure'>3-4</ns0:ref>, and the corresponding error estimated results are listed in (2) In the 2-step forecasting, when wind station #1 is used, the proposed model has the lowest performance criteria, i.e., the values of RMSE, MAE, and MAPE are 0.7531, 0.5848, and 24.78%, respectively. In addition, for wind station #2, the proposed model still achieves the lowest performance criteria value. 
Take MAPE as an example, the value of MAPE is 22.99%, which is significantly lower than other models.</ns0:p><ns0:p>(3) In the 3-step forecasting, the proposed model is still the model with the highest prediction accuracy, and the MAPE of wind stations #1 and #2 are 27.55% and 24.59%, respectively. And Persistence has the worst RMSE value among these models, with MAPE of 57.64% and 47.99%, respectively.</ns0:p><ns0:p>In general, under 1-to 3-step forecasting, the proposed model can obtain the best prediction performance compared with the classic individual models.</ns0:p></ns0:div> <ns0:div><ns0:head>Compared with traditional EMD methods</ns0:head><ns0:p>As a nonlinear signal analysis method for processing nonlinear and non-stationary time series, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>EMD has been widely used in time series. To further verify the effectiveness of our EMD model, we compare it with four widely used EMD models, namely EMD-ELM, EMD-SVR, EMD-SP-SVR, and EMD-ANN. It is worth noting that in this study, these methods used the same way as our proposed model, using EMD to decompose the wind speed, using a single classifier to predict each IMF component separately, and adding all the prediction results to get the final prediction wind speed. The prediction results and the error estimated results of these four EMD-based methods and the proposed method are displayed in Fig. <ns0:ref type='figure'>5</ns0:ref>-6 and Table <ns0:ref type='table'>6</ns0:ref>-9. Based on Fig. <ns0:ref type='figure'>5-6</ns0:ref> and Table <ns0:ref type='table'>6</ns0:ref>-9, it can be observed that:</ns0:p><ns0:p>(1) Compared with the above-mentioned classic individual models, the performance of the EMDbased method is significantly improved. Take wind station #1 as an example, in the 1-step forecasting, the value of RMSE of the EMD-based methods is around 0.60, while the classic individual model is around 1.20. After the wind speed is decomposed by EMD, the value of RMSE is reduced almost doubled.</ns0:p><ns0:p>(2) For wind station #1, except for the MAE in the 3-step forecasting, the performance indicators obtained from the proposed model are significantly better than those EMD-based combined models. For the 3-step forecasting, the performance of EMD-SVR and EMD-SVR-SP in MAE is slightly better than the proposed combined model, but in other evaluation indicators, the proposed combined model achieves a significantly better performance. Furthermore, EMD-ANN is always worse in MAPE as compared with the other three combined models, with MAPE of 23.55%, 27.67%, and 29.31% for 1-to 3-step forecasting.</ns0:p><ns0:p>(3) For wind station #2, in 1-to 3-step wind speed forecasting, the proposed combined model obtains the best prediction results. The RMSE, MAE and MAPE in the 1-step forecasting are 0.5593, 0.419, and 17.10%, respectively. 
In comparison, among the other four EMD-based combined models, the EMD-ELM and EMD-ANN models have similar prediction performance in 1-to 3-step forecasting, with MAPE values of 21.59%, 27.49%, 27.65% and 21.83%, 25.3%, 27.86%, respectively.</ns0:p><ns0:p>In total, the EMD-based method has obvious advantages over traditional methods, and the proposed method that using EMD, FS, SVR and LassoCV can achieve better performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head></ns0:div> <ns0:div><ns0:head>Performance of SVR-SP and LassoCV on different IMFs</ns0:head><ns0:p>According to the EMD principle, the frequency of the IMF components is from high to low. The non-linear and non-stationary information of wind speed data is mainly concentrated in the highfrequency IMF, and the low-frequency IMF presents a Sin-like function curve. Based on its characteristics, in this study we use SVR-SP and LassoCV to predict IMFs of different frequencies.</ns0:p><ns0:p>In order to verify the effectiveness of this hybrid EMD model, in this section, we take wind station #2 as an example to analyze the performance of the two methods on different IMF components.</ns0:p><ns0:p>Table <ns0:ref type='table'>10</ns0:ref> lists the RMSE of SVR-SP and LassoCV on different IMF components. It is worth mentioning that in multi-step prediction, the prediction accuracy of the first step is more important than the other steps, which is of great significance for the accurate estimation of wind power. It can be seen from Table <ns0:ref type='table'>10</ns0:ref> that SVR-SP can obtain significantly better performance than LassoCV at high frequency (IMF1), while LassoCV can obtain better performance at low frequencies (IMF2~IMF7, Trend), and its RMSE is already close to zero at IMF4. Moreover, SVR-SP has a risk of overfitting when predicting low frequencies, resulting in poor performance. In total, the proposed model that combines the EMD decomposition characteristics and the advantages of the algorithm can achieve better performance than the traditional EMD model.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison of different signal decomposition techniques</ns0:head><ns0:p>Besides EMD, Variational Mode Decomposition (VMD) and Ensemble Empirical Mode Decomposition (EEMD) are also widely used in short-term wind speed forecasting. Here, we analyze the impact of different signal decomposition techniques on the performance of our proposed method. Table <ns0:ref type='table'>11</ns0:ref> shows the prediction performance of the three signal decomposition techniques on two wind stations. For wind station #1, it can be found that compared with VMD and EEMD, EMD obtains the best RMSE value in the 1-step forecasting. The performance obtained by VMD in the 1-step and 2-step forecasting is relatively close, but it drops significantly in the 3-step forecasting. EEMD inherits from EMD, similar to EMD, as the step size increases, the performance will decrease significantly. For wind station #2, EMD also obtained the best predictive performance. VMD has a similar conclusion on wind station #1, and the performance of the 1-step and 2-step forecasting is relatively close. 
It should be pointed out that in multi-step forecasting, the 1-step forecasting is usually used for wind energy estimation, and other steps are used to assist decision-making, so more attention is paid to the performance of the 1-step forecasting.</ns0:p></ns0:div> <ns0:div><ns0:head>The impact of the number of selected features on performance</ns0:head><ns0:p>Feature selection is used to remove redundant features in the study. However, the number of selected significant features will more or less affect the short-term wind speed forecasting. In order to ensure the stability in the complicated industrial system, we analyzed the performance of our proposed method under the different number of selected features. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science be pointed out that in the study based on the characteristics of EMD decomposition we use FS and SVR to predict high-frequency component (i.e., IMF 1 ), and use LassoCV to predict low-frequency components. Feature selection is mainly used in the prediction of IMF 1 component. From Figure <ns0:ref type='figure'>7</ns0:ref>, we can be seen that feature selection can slightly improve the performance of 1-step forecasting, but has little effect on 1-step and 2-step forecasting. Overall, as the number of selected features decreases, the generalization performance of the method will improve, but when the selected features are too scarce, the performance will drop sharply due to the deletion of useful features. In order to determine the appropriate number of features, by following <ns0:ref type='bibr' target='#b4'>(Bradley et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b7'>Chizi et al. 2009</ns0:ref>) , this study uses cross-validation to select.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance under different signal-to-noise ratios</ns0:head><ns0:p>In the process of collecting wind speed, it is often affected by the environment and the anemometer itself, resulting in a certain amount of noise in the data. In order to verify the reliability of the method, we analyzed the prediction performance under different signal-to-noise ratios (SNRs).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> shows the 1-step to 3-step prediction performance of the method from 30~60db SNR.</ns0:p><ns0:p>Take wind station #1 as an example, it can be seen from Figure <ns0:ref type='figure'>8</ns0:ref> that the performance of the proposed method is relatively stable under different signal-to-noise ratios. The RMSE value of 1step forecasting is about 0.6, the RMSE value of 2-step forecasting is about 0.75, and the RMSE value of 3-step forecasting is about 0.85. In general, as the signal-to-noise ratio increases, the prediction performance of the proposed method will be improved. Similar performance also exists on site #2. These experimental results show that the proposed method can accurately predict wind speed under certain noise.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>As a sustainable and renewable energy, wind power has attracted widespread attention and rapid development in recent years. Reliable and accurate wind speed forecasting will provide support Manuscript to be reviewed</ns0:p><ns0:p>Computer Science needs to be retrained. In future research, we will try to integrate online learning in our proposed method.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The whole process of the proposed model. 
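To make the workflow of Figure 1 concrete, the following minimal Python sketch (not the authors' implementation) wires the described components together: EMD decomposes the wind-speed series into IMFs, the high-frequency IMF1 is predicted with feature selection plus SVR, the remaining low-frequency IMFs and the trend are predicted with cross-validated Lasso, and the component forecasts are summed and scored with MAPE, MAE, and RMSE. The choice of PyEMD and scikit-learn, the lag-based feature construction, and all hyper-parameter values are our own illustrative assumptions.

```python
# Minimal, illustrative sketch of the pipeline in Figure 1 (not the authors' code).
# Assumed packages: numpy, PyEMD (pip install EMD-signal), scikit-learn.
# train and test are assumed to be 1-D numpy arrays of wind speed.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVR
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectKBest, f_regression


def make_lag_features(series, n_lags=8):
    """Build lagged inputs: row i holds the n_lags values preceding series[i + n_lags]."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y


def mape(y, y_hat):
    return float(np.mean(np.abs((y - y_hat) / y)) * 100)


def mae(y, y_hat):
    return float(np.mean(np.abs(y - y_hat)))


def rmse(y, y_hat):
    # 1/(N-1) normalisation, following the paper's RMSE definition.
    return float(np.sqrt(np.sum((y - y_hat) ** 2) / (len(y) - 1)))


def forecast_wind_speed(train, test, n_lags=8, k_features=5):
    """1-step forecasting: FS + SVR on the high-frequency IMF1, LassoCV on the rest."""
    emd = EMD()
    emd.emd(np.concatenate([train, test]))            # decompose the whole series
    imfs, residue = emd.get_imfs_and_residue()
    components = list(imfs) + [residue]               # high -> low frequency, then trend
    split = len(train)
    prediction = np.zeros(len(test) - n_lags)
    for idx, comp in enumerate(components):
        X_tr, y_tr = make_lag_features(comp[:split], n_lags)
        X_te, _ = make_lag_features(comp[split:], n_lags)
        if idx == 0:                                  # IMF1: feature selection + SVR
            fs = SelectKBest(f_regression, k=k_features).fit(X_tr, y_tr)
            model = SVR(C=10.0, epsilon=0.01).fit(fs.transform(X_tr), y_tr)
            prediction += model.predict(fs.transform(X_te))
        else:                                         # low-frequency IMFs and trend: LassoCV
            model = LassoCV(cv=5).fit(X_tr, y_tr)
            prediction += model.predict(X_te)
    y_true = test[n_lags:]
    return prediction, mape(y_true, prediction), mae(y_true, prediction), rmse(y_true, prediction)
```

A call such as forecast_wind_speed(train_speed, test_speed) would return the 1-step forecasts together with the three criteria; multi-step forecasting repeats the same recipe with targets shifted by the forecasting horizon.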
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>developed an error correction-based ELM model for short-term wind speed forecasting. Liu et al. (Liu et al. 2020) introduced the Jaya-SVM (Jaya algorithm-based support vector machine) into wind speed forecasting. Krishnaveny et al. (Nair et al. 2017) exploited the performance of three different models, i.e. ANN, ARIMA and hybrid model, in wind speed forecasting. Azeem et al. (Azeem et al. 2018) investigated the KNNbased and ANN-based models for wind speed forecasting. Recently, deep learning, a new branch of machine learning, has received extensive attention. It has been widely used for regression and classification problems. According to the literature, deep learning can abstract the hidden structure and inherent characteristics of data compared with shallow methods. Khodayar et al. (Khodayar &amp; Wang 2019) introduced a scalable graph convolutional deep learning (GCDLA) for wind speed forecasting. Wang et al. (Wang et al. 2016a) investigated a deep belief network model for wind speed forecasting. Khdayar et al. (Khodayar et al. 2019) combined rough set theory and restricted Boltzmann machines presented a wind speed forecasting. Hong et al. (Hong &amp; Satriani 2020) based on a convolutional neural network developed a day-ahead wind speed forecasting model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>For example: Paramasivan et al. (Paramasivan &amp; Lopez 2016) employed a ReliefF feature selection algorithm to identify key features, and then used a bagging neural network to predict the wind speed. Niu et al. (Niu et al. 2018) presented a multi-step wind speed forecasting model using optimal FS, modified bat algorithm and cognition strategy. Botha et al. (Botha &amp; Walt 2017) combined FS with SVM to predict short-term wind speed. Kong et al. (Kong et al. 2015) combined feature selection and reduced support vector machines (RSVM) for wind speed forecasting. Due to the unstable nature of wind, the model of combined-or hybrid-signal processing technology has become the mainstream of wind speed forecasting. Wherein the signal processing technology is usually employed to decompose the wind speed to reduce or eliminate the instability. Commonly used signal processing techniques have empirical mode decomposition (EMD), variational mode decomposition (VMD) and wavelet transform (WT). Wang et al. (Wang et al. 2016b) decomposed wind speed into stable signals using ensemble empirical mode decomposition (EEMD). Sun et al. (Sun &amp; Wang 2018) developed a fast ensemble empirical mode decomposition model to improve the accuracy of wind speed forecasting. Tascikaraoglu et al. (Tascikaraoglu et al. 2016) based on WT proposed a wind speed forecasting model. Hu et al. (Hu &amp; Wang 2015) adopted an empirical wavelet transform (EWT) to extract key information in wind speed time series. Yu et al. (Yu et al. 2017) explored the performance of EMD, EEMD and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) in wind speed forecasting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021) Manuscript to be reviewed Computer Science obtained by mapping the original input through a predefined function , and to minimize &#8709; ( &#119909; )</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>minimize the objective function while forcing the sum of the absolute values of the coefficients to be less than a fixed value (Hung et al. 2016): &#119905; PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>, RMSE, the smaller value, the better the performance.PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7 shows the RMSE value between the number of selected features and the performance of our proposed method. It should PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>for wind power planning and control. Due to the non-linearity and non-stationarity of wind, forecasting is still a difficult yet challenging problem. In the paper, we develop a new wind speed forecasting model based on EMD, FS, SVR and LassoCV. EMD is employed to extract IMFs from the original non-stationary wind speed time series. FS and SVR are combined to predict the highfrequency IMF. LassoCV is adopted to complete the prediction of low-frequency IMF and trend.By testing in two wind speeds obtained from Michigan, USA, the experimental results show that under 1-to 3-step forecasting the proposed model can achieve better prediction performance than the classic individual and traditional EMD combined models. Although the proposed model has achieved good performance, it still has some limitations. After the new data is updated, the model PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,349.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,420.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,420.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>-5. It is worth noting that for a fair comparison, the parameters of the involved models are selected based on cross-validation. Based on the experimental results, we can get the following conclusions:(1)In the 1-step forecasting, for wind station #1, the proposed model obtains the best accuracy: RMSE, MAE, and MAPE are 0.5859, 0.4426, and 21.11%, respectively. The classic individual models from low to high based on RMSE are ELM, ANN, Persistence, SVR, and ARIMA, with MAPE values of 36.20%, 36.24%, 36.20%, 34.87%, and 34.25%, respectively. 
Likely, in wind station #2, compared with the classic individual models, the proposed model still obtains the best performance, and the MAPE value is 17.10%.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>step Models RMSE MAE MAPE (%) RMSE MAE MAPE (%) RMSE MAE MAPE (%)</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Computer Science</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='7'>Statistical indicators 3-step 122.89 116.05 2-step 2-step RMSE MAE MAPE (%) 2-step 111.04 108.97 3-step 111.11 103.08 RMSE MAE MAPE (%) 3-step 3-EMD-ELM Wind station Dataset Models 1-step Persistence P RMSE (%) 102.98 P MAE (%) 103.24 Models 1-step 2-step Persistence P RMSE (%) 127.42 89.54 P MAE (%) 132.24 83.49 1-step 1-step Models RMSE MAE MAPE (%) 0.6400 0.5128 22.63 0.7854 0.6316 27.22 0.8746 0.6937 29.02 EMD-ELM 0.6560 0.5283 21.59 0.8199 0.6669 27.49 0.8775 0.7096 27.65 Date Mean (m/s) Max (m/s) Min (m/s) Std. Stew P MAPE (%) 71.47 100.34 109.26 P MAPE (%) 110.33 78.38 95.19 EMD-SVR 0.6379 0.5120 23.32 0.7768 0.6181 27.09 0.8583 0.6749 28.48 EMD-SVR 0.6567 0.5233 24.88 0.8317 0.6736 29.85 0.8508 0.6986 30.52 Kurt. . EMD-SVR-SP 0.6310 0.4867 23.03 0.7987 0.6141 26.30 0.8591 0.6762 28.66 EMD-SVR-SP 0.6437 0.4972 24.06 0.8211 0.6718 28.53 0.8894 0.7264 32.31</ns0:cell></ns0:row><ns0:row><ns0:cell>1 1</ns0:cell><ns0:cell>ARIMA ARIMA EMD-ANN EMD-ANN The proposed The proposed Site #1</ns0:cell><ns0:cell cols='5'>Training set P RMSE (%) P RMSE (%) 0.6342 0.5055 107.55 Sept. 1, 2019 ~ 100.11 75.25 23.55 0.6397 0.5046 21.83 Oct. 20, 2019 (~83%) P MAE (%) 103.55 P MAE (%) 121.83 74.83 0.5859 0.4426 21.11 0.7531 0.5848 3.2975 101.60 92.86 0.7879 0.6221 0.7927 0.6373 97.69 89.35 0.5593 0.4193 17.10 0.7540 0.5966 Testing set Oct. 21, 2019 ~ Oct. 31, 2019 (~17%) 3.1614 P MAPE (%) 62.20 82.58 P MAPE (%) 126.31 97.56 115.77</ns0:cell><ns0:cell>14.4 106.94 99.60 13.9 95.26</ns0:cell><ns0:cell>0 24.78 0.8528 0.6798 27.67 0.8987 0.7040 25.34 0.8520 0.6934 2.378 0.871 0.865 29.31 27.86 27.55 22.99 0.7911 0.6437 24.59 0 2.486 1.108 1.312</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ELM ELM Site #2</ns0:cell><ns0:cell>Training set P RMSE (%) P RMSE (%) P MAE (%) P MAE (%) Testing set P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell cols='2'>Sept. 1, 2019 ~ 116.85 123.99 81.12 Oct. 20, 2020 (~83%) 119.68 142.95 82.96 Oct. 21, 2019 ~ Oct. 
31, 2020 (~17%) 71.44 161.98 123.54</ns0:cell><ns0:cell cols='2'>3.6919 105.83 100.59 100.57 99.60 3.5667 87.82 144.54</ns0:cell><ns0:cell>11.3 112.35 100.11 9.3 100.31</ns0:cell><ns0:cell>0 0</ns0:cell><ns0:cell>2.183 0.807 0.353 2.118 0.500 -0.318</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>SVR SVR</ns0:cell><ns0:cell>P RMSE (%) P RMSE (%)</ns0:cell><ns0:cell>100.36 107.43</ns0:cell><ns0:cell>73.93</ns0:cell><ns0:cell cols='2'>108.16 89.84</ns0:cell><ns0:cell>109.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAE (%) P MAE (%)</ns0:cell><ns0:cell>103.87 119.81</ns0:cell><ns0:cell>73.66</ns0:cell><ns0:cell cols='2'>103.96 86.54</ns0:cell><ns0:cell>96.76</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell>65.15 113.43</ns0:cell><ns0:cell>89.76</ns0:cell><ns0:cell cols='2'>88.48 103.00</ns0:cell><ns0:cell>91.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ANN ANN</ns0:cell><ns0:cell>P RMSE (%) P RMSE (%)</ns0:cell><ns0:cell>104.54 112.78</ns0:cell><ns0:cell>73.95</ns0:cell><ns0:cell cols='2'>103.68 106.62</ns0:cell><ns0:cell>116.09</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAE (%) P MAE (%)</ns0:cell><ns0:cell>111.33 125.58</ns0:cell><ns0:cell>73.16</ns0:cell><ns0:cell>98.62</ns0:cell><ns0:cell>98.82</ns0:cell><ns0:cell>104.56</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>P MAPE (%) P MAPE (%)</ns0:cell><ns0:cell>71.63 137.50</ns0:cell><ns0:cell>90.12</ns0:cell><ns0:cell cols='2'>84.78 112.23</ns0:cell><ns0:cell>102.23</ns0:cell></ns0:row><ns0:row><ns0:cell>1 1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2 2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021) Manuscript to be reviewed PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 8 (on next page)</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The improvement rate of the proposed model relative to other combined models at wind station #1.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57863:2:0:NEW 10 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Response to Reviewer Comments We thank the editor and reviewers for their helpful comments and suggestions. Their comments have improved the manuscript effectively. We have included almost all of their suggestions and present a point-by-point response to their comments. Reviewer 2 Basic reporting No comment. Experimental design No comment. Validity of the findings No comment. Additional comments Comment 1: If possible, could the authors please update more basic concepts and fundamentals of Short-Term Wind Speed Forecasting (STWSF), Empirical Mode Decomposition (EMD), Feature Selection (FS), and Support Vector Regression (SVR) based on the research topic in this paper? Response: We thank the reviewer for the comment. Actually, in the field of wind speed forecasting, according to the forecast time interval, it can be divided into short-, medium- and long-term forecasts. Short-term forecasts (STWSF) are usually half an hour or a few hours, medium-term forecasts are half a day or a day, and long-term forecasts are usually days or a week. For mediumand long-term forecasts, weather information such as temperature, pressure, and humidity is usually used to predict wind speed. For short-term forecasts, it is difficult to make accurate forecasts using weather information [1-2]. Wind speed is generally predicted by analyzing the inherent laws of historical wind speed data. For Empirical Mode Decomposition (EMD), since wind speed is a nonlinearity and non-stationarity time series, it is difficult to directly predict it. EMD proposed by Huang et al. [7] is a nonlinear non-stationary signal decomposition method that can decompose the time series into a set of intrinsic mode functions (IMFs), which is easy to predict. For Support Vector Regression (SVR), due to the non-linearity and non-stationarity of wind speed, algorithms with better generalization performance are usually used for prediction. SVR inherits the characteristics of SVM and has strong generalization performance, making it widely used in STWSF [3-5]. In order to be consistent with existing work, this study applies it to predict the high-frequency IMF component of EMD decomposition, because it contains the non-linearity and non-stationarity parts of the wind speed. For Feature Selection (FS), unlike traditional research, the features in STWSF are constructed using the lag of IMF components, and there may be some redundancy. In order to improve the generalization performance of the model, we introduce FS to remove it. For the convenience of readers, we have added the above basic concepts and fundamentals in the corresponding sections of the revised manuscript Comment 2: In this paper, the descriptions of starting layer, convolutional layer, hidden layer, and pooling layer should be modified. How many convolutional layers and hidden layers have been determined in this paper, and what is the functionality of these layers in the experimentations and proposed methodologies? Meanwhile, if possible, could the authors please make an explanation for choosing the number of these layers mentioned above and update some descriptions and statements on the functionality of these aforementioned layers? 
Specifically, it is worthy to point out, the various numbers of convolutional layers, hidden layers, and pooling layers might have either positive or negative impacts on the performances and characteristics of Short-Term Wind Speed Forecasting (STWSF) -- Empirical Mode Decomposition (EMD) -- Feature Selection (FS) -- Support Vector Regression (SVR), namely STWSF-EMD-FSSVR, for more complicated industrial systems to some extent. Based on this reason, could the authors please provide an evidence to make some explanations of the availability, feasibility, and capability of the proposed method STWSF-EMD-FS-SVR based on the research topic in this paper? Response: We apologize for the misunderstanding caused by the unclear description. In this study, we use EMD to decompose wind speed data, and use traditional machine learning methods (feature selection (FS), SVR and LassoCV) to forecast future wind speed, without using the latest deep learning techniques such as convolutional neural network (CNN). In fact, the wind speed data used is relatively small, and the deep learning-based method often leads to model overfitting and poor generalization performance on unknown data [6]. Comment 3: The detailed information of training data and testing data should also be modified and respectively demonstrated the percentage of training data and testing data of the original or/and experimental dataset regarding case studies or/and experimentations based on the research topic in this paper for STWSF-EMD-FS-SVR. Response: We thank the reviewer for the helpful comment. According to the reviewer’s suggestion, we have listed the detailed information of training data and test data in the revised manuscript regarding case studies based on our research topic. Comment 4: Which are the most advantages of the proposed method in comparison with other existing algorithms of Empirical Mode Decomposition (EMD) in the areas of Short-Term Wind Speed Forecasting (STWSF), Feature Selection (FS), and Support Vector Regression (SVR) based on data-driven approach? A data-driven method of fault diagnosis has been demonstrated in a recent survey paper (https://doi.org/10.3390/pr9020300). Could the authors please explain which are the differences among Empirical Mode Decomposition (EMD), Variational Mode Decomposition (VMD), and Ensemble Empirical Mode Decomposition (EEMD) techniques? Meanwhile, evidence of using the proposed EMD framework rather than VMD or EEMD methods should also be supported. Alternatively, could the authors please provide either some TABLES or update Simulation or/and Experimental Results to make an explanation of the advantages of the proposed methodology EMD, as well as update some statistical criteria based on the research topic in this paper? Response: We thank the reviewer for the question that need further clarification. In fact, wind speed is an unstable and non-stationarity time series that is difficult to forecast directly. In the study, we use EMD to decompose the wind speed, and according to the characteristics of the decomposed IMF components, use FS+SVR and LassoCV to forecast it respectively. Compared with traditional direct forecast models, EMD-based models can obtain more stable and better forecast performance. For EMD, EEMD and VMD, EMD is a nonlinear non-stationary signal decomposition method proposed by Huang et al. [7]. 
The method is improved from wavelet transform and window Fourier transform, and decomposes the signal into a set of intrinsic mode functions (IMFs) according to local characteristics in the time domain [7]. EEMD is a further improvement method based on EMD by Wu et al. [3]. This method adds different white noises to the raw signal, and obtains the final IMFs by averaging the results of multiple EMD decompositions. VMD is a signal decomposition method proposed by Dragomiretskiy et al. [9] in 2014. Different from EMD and its derivative methods, VMD method is improved based on Hilbert transform and Wiener filtering method. These methods are the excellent non-linear and non-stationary signal decomposition methods. In this study, we tested the performance of these three methods in short-term wind speed prediction and found that compared with EEMD and VMD, EMD can achieve comparable or better performance in the 1-step forecasting. It should be pointed out that in multi-step forecasting, the 1-step forecasting is usually used for wind energy estimation, and other steps are used to assist decision-making, so more attention is paid to the performance of the 1-step forecasting. In the revised manuscript, we list the comparison results of these three nonlinear and non-stationary signal decomposition methods in the 'Discussion' section. Comment 5: The proposed algorithm STWSF-EMD-FS-SVR should be demonstrated as the form of Algorithm Environment, which can be efficaciously followed by readers. Additionally, the flowchart of the proposed method mentioned in this paper should also be updated and included the determined dataset collections, pre-processing of the dataset, post-processing of the dataset, feature extractions, feature selections, and the performances and characteristics of the proposed algorithm STWSFEMD-FS-SVR. A data-driven and supervised machine learning-based fault detection and fault classification has been addressed in a journal paper (https://doi.org/10.3390/pr8091066), which includes respectively the determined dataset collections, pre-processing of the dataset, postprocessing of the dataset, feature extractions, and feature selections. Response: We thank the reviewer for the helpful comment. In fact, in the short-term wind speed forecasting, the future wind speed is predicted based on historical wind speed data. The form of the data is relatively fixed, and there is almost no pre-processing and post-processing procedures. So, the paper usually pays more attention to the implementation part of the algorithm. In order to be consistent with existing works [1-2], the paper puts algorithms and their applications in the 'Method' section, and data sets and experiments in the 'Results' section. Meanwhile, in order to facilitate readers to follow our work, we have added a flowchart to Figure 1 in the revised manuscript for better understanding. Comment 6: Alternatively, both different topologies of experimental samples and the various numbers of selected significant features might also have either positive or negative impacts on the performances and characteristics of STWSF-EMD-FS-SVR for more complicated industrial systems to some extent. Based on this reason, could the authors please support an evidence to make some explanations of the availability, feasibility, and capability of the proposed methodology STWSFEMD-FS-SVR based on the research topic in this paper? Response: We thank the reviewer for the helpful comment. 
This study mainly uses historical wind speed data to forecast future wind speed, so the topology of the experimental samples is fixed. For the various number of selected significant features on performance, we add an experiment about the number of feature selection and performance in the 'Discussion' section. Experimental results show that feature selection can slightly improve the performance of 1-step forecasting, but has little effect on 1-step and 2-step forecasting. Overall, as the number of selected features decreases, the generalization performance of the method will improve, but when the selected features are too scarce, the performance will drop sharply due to the deletion of useful features. In order to determine the appropriate number of features, by following [11-12], this study uses cross-validation to select. Comment 7: If possible, could the authors please have in comparison between with and without Additive White Gaussian Noise (AWGN) signals/environments (including the different values of signal-to-noise ratio [SNR]) for the system mentioned in this paper? As well as, the authors should contrast the simulation results between these scenarios mentioned above. Alternatively, could the authors please provide either some Tables or updated simulation results to make an explanation of the advantages of the proposed methodology of STWSF-EMD-FS-SVR in the fields of data-driven short-term wind speed forecasting and update some statistical criteria based on the research topic? Response: According to the reviewer’s suggestion, we added a comparison experiment with or without additive white Gaussian noise (AWGN) signal in the 'Discussion' section. The experimental results show that the performance of the proposed method is relatively stable under different signal-to-noise ratios. Take wind station #1 as an example, the RMSE value of 1-step forecasting is about 0.6, the RMSE value of 2-step forecasting is about 0.75, and the RMSE value of 3-step forecasting is about 0.85. References: [1] Chen, J., Zeng, G. Q., Zhou, W., Du, W., & Lu, K. D. (2018). Wind speed forecasting using nonlinear-learning ensemble of deep learning time series prediction and extremal optimization. Energy conversion and management, 165, 681-695. [2] Liu, H., Mi, X., & Li, Y. (2018). Smart multi-step deep learning model for wind speed forecasting based on variational mode decomposition, singular spectrum analysis, LSTM network and ELM. Energy Conversion and Management, 159, 54-64. [3] Wang, J., Zhou, Q., Jiang, H., & Hou, R. (2015). Short-term wind speed forecasting using support vector regression optimized by cuckoo optimization algorithm. Mathematical Problems in Engineering, 2015. [4] Hu, Q., Zhang, S., Xie, Z., Mi, J., & Wan, J. (2014). Noise model based ν-support vector regression with its application to short-term wind speed forecasting. Neural Networks, 57, 111. [5] Nurunnahar, S., Talukdar, D. B., Rasel, R. I., & Sultana, N. (2017, December). A short term wind speed forcasting using svr and bp-ann: a comparative analysis. In 2017 20th International [6] [7] [8] [9] [10] [11] [12] Conference of Computer and Information Technology (ICCIT) (pp. 1-6). IEEE. Brigato, L., & Iocchi, L. (2021, January). A close look at deep learning with small data. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 2490-2497). IEEE. Huang, N. E., Shen, Z., Long, S. R., Wu, M. C., Shih, H. H., Zheng, Q., ... & Liu, H. H. (1998). 
The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London. Series A: mathematical, physical and engineering sciences, 454(1971), 903-995. Wu, Z., & Huang, N. E. (2009). Ensemble empirical mode decomposition: a noise-assisted data analysis method. Advances in adaptive data analysis, 1(01), 1-41. Dragomiretskiy, K., & Zosso, D. (2013). Variational mode decomposition. IEEE transactions on signal processing, 62(3), 531-544. Fu, Y., Gao, Z., Liu, Y., Zhang, A., & Yin, X. (2020). Actuator and sensor fault classification for wind turbine systems based on fast Fourier transform and uncorrelated multi-linear principal component analysis techniques. Processes, 8(9), 1066. Bradley, P. S., Mangasarian, O. L., & Street, W. N. (1998). Feature selection via mathematical programming. INFORMS Journal on Computing, 10(2), 209-217. Chizi, B., Rokach, L., & Maimon, O. (2009). A survey of feature selection techniques. In Encyclopedia of Data Warehousing and Mining, Second Edition (pp. 1888-1895). IGI Global. "
Here is a paper. Please give your review comments after reading it.
245
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The development of Medium Access Control (MAC) protocols for Internet of Things should consider various aspects such as energy saving, scalability for a wide number of nodes, and grouping awareness. Although numerous protocols consider these aspects in the limited view of handling the medium access, the proposed Grouping MAC (GMAC) exploits prior knowledge of geographic node distribution in the environment and their priority levels. Such awareness enables GMAC to significantly reduce the number of collisions and prolong the network lifetime. GMAC is developed on the basis of five cycles that manage data transmission between sensors and cluster head and between cluster head and sink.</ns0:p><ns0:p>These two stages of communication increase the efficiency of energy consumption for transmitting packets. In addition, GMAC contains slot decomposition and assignment based on node priority, and, therefore, is a grouping-aware protocol. Compared with standard benchmarks IEEE 802.15.4 and industrial automation standard 100.11a and userdefined grouping, GMAC protocols generate a Packet Delivery Ratio (PDR) higher than 90%, whereas the PDR of benchmark is as low as 75% in some scenarios and 30% in others. In addition, the GMAC accomplishes lower end-to-end (e2e) delay than the least e2e delay of IEEE with a difference of 3 s. Regarding energy consumption, the consumed energy is 28.1 W/h for GMAC-IEEE Energy Aware (EA) and GMAC-IEEE, which is less than that for IEEE 802.15.4 (578 W/h) in certain scenarios.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The recent development of Wireless Sensor Networks (WSNs) and the incorporation of technologies of Internet of Things (IoT) has enabled their applications in various industrial fields, particularly through IoT-based WSN (IoT-WSN) <ns0:ref type='bibr'>(Hassan et al., 2020)</ns0:ref>. Such the emergence has led to numerous applications in different sectors such as agriculture <ns0:ref type='bibr'>(Hassan, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Keswani et al., 2018)</ns0:ref>, smart cities <ns0:ref type='bibr' target='#b0'>(Al-Majhad et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Nassar et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zhang, 2020)</ns0:ref>, intelligent transportation system <ns0:ref type='bibr' target='#b15'>(Muthuramalingam et al., 2019)</ns0:ref>, medical field <ns0:ref type='bibr' target='#b17'>(Onasanya &amp; Elshakankiri, 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Yao et al., 2019)</ns0:ref>, security and surveillance <ns0:ref type='bibr' target='#b2'>(Benzerbadj et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Memon et al., 2020)</ns0:ref>, military <ns0:ref type='bibr' target='#b35'>(Zieliski, Chudzikiewicz &amp; Furtak, 2019)</ns0:ref>, forensics <ns0:ref type='bibr' target='#b32'>(Yaqoob et al., 2019)</ns0:ref>, education, and voting <ns0:ref type='bibr' target='#b26'>(Srikrishnaswetha, Kumar &amp; Mahmood, 2019)</ns0:ref>. Sensing-based applications that monitor and gather data are regarded as common applications of IoT <ns0:ref type='bibr' target='#b21'>(Sadeq, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Wu, Wu &amp; Yuce, 2018)</ns0:ref>. 
Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows a conceptual diagram of a healthcare monitoring application using IoT-WSN. Body area networks are installed on patients in hospitals, and they continuously gather data from all patients in real time. The sensors are deployed in 3D based configuration, and the sensors are located in each patients' room separately from other patients' rooms, which emphasize the assumption of clusters-based decomposition. The collected data are then used within an intelligent system to assign care algorithmically to increases the recovery ratio. For one floor, one sink connects the clusters, each representing one patient with sensors on different parts of the body. On the other end, the sink is connected to a Software Defined Network (SDN) controller connected to an application layer in the cloud to monitor patients and assign tasks to doctors.</ns0:p><ns0:p>This application requires continuous sensor data collection and transfer to the cloud. The wireless nature of the network and the limited resources of its nodes create two issues. First, the management of node access to the medium must be coordinated with consideration of the sensing rate, sensors' nature, and their relation to the application. This issue affects Quality of Service (QoS) metrics in the network. Second, the management of energy in the network affects the lifetime metric. These issues are not independent of each other, enabling Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) mode may cause failure in several sensors in accessing the medium, which eventually leads to energy waste and shorter lifetime. However, saving the node energy requires careful scheduling and management of their access to the medium.</ns0:p><ns0:p>Numerous types of Medium Access Control (MAC) protocols are available. A few of the most widely used are IEEE 802.15.4 and ISA 100.11a as IoT MAC protocols in numerous types of applications. These types have small differences in terms of enabling or ignoring packet life time or adding priority to packets or not. One critical issue of these two protocols is the scalable energy awareness solution <ns0:ref type='bibr' target='#b25'>(Sotenga, Djouani &amp; Kurien, 2020)</ns0:ref>. When an application requires installing a high number of sensor nodes, the protocol may be inefficient in terms of energysaving due to the resulting collisions. Another issue is the non-awareness of various priority and grouping aspects of the sensor nodes. Such an awareness is important to manage the medium effectively. Recent variants are developed with grouping awareness, such as User-Defined Grouping (UDG) <ns0:ref type='bibr' target='#b33'>(Yasari et al., 2017)</ns0:ref> based on ISA 100.11a. However, this variant has various limitations and necessitates developing of IoT-oriented MAC protocol with scalability, energy efficiency, and grouping awareness. The article aims to propose a novel protocol based on both IEEE 802.15.4 and ISA 100.11a to improve their CAP performance based on exploiting prior knowledge about the clustering information of the network and the priority level of the nodes. 
Therefore, this study aims to develop two novel variants of Grouping MAC (GMAC) based on current benchmarks, namely, GMAC-IEEE and GMAC-ISA.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>In addition, several improvements on IEEE 802.15.4 in various aspects have been performed, such as the Clear Channel Assessment (CCA) and its effect on the delay and overhead on the protocol. An improvement on CCA <ns0:ref type='bibr' target='#b29'>(Wang, Liu &amp; Yin, 2018)</ns0:ref> is proposed using a graded tailoring strategy, which checks the length of the original packet and modifies its original size according to the partition points. Assuming that the same back-off unit size is used, the protocol includes 20 symbols. To make the data packet tail size 8, those with lower than 8 add zeroes, and those higher than 8 subtract zeroes. This improvement is useful from the general perspective of delay and over-head, but it ignores prior knowledge of nodes or packets priorities. The mechanism of CCA of IEEE 802.15.4 is also examined in different ways. For example, the CCA has been modified to include primary and secondary stages <ns0:ref type='bibr' target='#b6'>(Gamal et al., 2020)</ns0:ref>. In addition, an optimization model is built for the delay with energy consumption as a constraint. The model is solved using linear quadratic programming, but it does not consider retransmission essential in IEEE 802.15.4.</ns0:p><ns0:p>Another modification to IEEE 802.15.4 <ns0:ref type='bibr' target='#b19'>(Patel &amp; Kumar, 2017)</ns0:ref> aims to increase the number of CCA from one to two, reducing the number of back-off periods to confirm the status decision of the channel and scarify the low energy consumption of CCA. This modification avoids highenergy consumption when a failure occurs and bandwidth loss if the channel becomes idle. The number of retransmissions and their effect on performance is also examined. The network nodes are divided into sub-groups or classes according to the number of failed retransmission <ns0:ref type='bibr' target='#b10'>(Henna &amp; Sarwar, 2018)</ns0:ref>. Specifically, the low number of failed retransmissions implies low increases in the back-off time and converts the protocol of IEEE 802.15.4 from a fixed to an adaptive back-off. However, the approach lacks an automatic means to decide to change the back-off for each class. Another issue is the neglect of energy level of each sensor that is considered a highly critical aspect in the performance. Furthermore, the approach does not embed prior knowledge regarding the sensor's class or priority related to its function in the system.</ns0:p><ns0:p>TDMA and CSMA functionalities of IEEE 802.15.4 are also combined for WSN scheduling with the support of demand, that is, profile. A proposed WSN scheduling based on the concept of network virtualization <ns0:ref type='bibr' target='#b27'>(Uchiteleva, Shami &amp; Refaey, 2017)</ns0:ref> divides the networks into profiles, each of which indicates a set of nodes sharing the same channel demand nature or characteristics. The scheduling proposes two profile categories, bursty and periodic. The super-frame in IEEE802.15.4 is then decomposed into contention access frame and contention free frame <ns0:ref type='bibr' target='#b27'>(Uchiteleva et al., 2017)</ns0:ref>, which contains a set of guaranteed time slot. Next, an optimization is conducted to maximize the utility for each profile. The algorithm uses a greedy optimization approach. 
Furthermore, as stated in <ns0:ref type='bibr' target='#b24'>(Shrestha, Hossain &amp; Choi, 2014)</ns0:ref>, the strength of CSMA/CA when it is combined with TDMA improves the scalability by preserving the performance of legacy-based CSMA/CA-based MAC scheme in congested networks.</ns0:p><ns0:p>Other approaches in the literature aim to improve ISA 100.11a, which is regarded as a common protocol for industrial wireless sensor networks owing to its wide use in MAC layer management of sensors of control systems <ns0:ref type='bibr' target='#b4'>(Florencio, Doria Neto &amp; Martins, 2020)</ns0:ref>. In a recent survey <ns0:ref type='bibr' target='#b20'>(Raptis, Passarella &amp; Conti, 2020)</ns0:ref>, a comparison between ISA 100.11a and WirelessHART has been conducted to conclude the need to optimize various aspects in ISA such as communication and energy optimization. An optimization of ISA under TDMA <ns0:ref type='bibr' target='#b22'>(Satrya &amp; Shin, 2020)</ns0:ref> proposes a solution representation that provides a code for each node according to its time slot. Next, the work of <ns0:ref type='bibr' target='#b33'>(Yasari et al., 2017)</ns0:ref> develops a genetic-based scheduling algorithm that enables flexible scheduling in ISA 100.11a. However, this work only optimizes one parameter in ISA 100.11a, the packet lifetime assigned to each group, and another parameter outside ISA 100.11a, which is the number of nodes in each group. An objective function is then used to maximize the number of nodes and the distribution of nodes in the groups according to their weights. In addition, this work ignores a direct optimization to the network performance measures such as QoS, which is used as constraint in the optimization only. Some researchers aimed to enhance ISA 100.11a in the context of application, such as adapting ISA to operate in a specific control environment. In <ns0:ref type='bibr' target='#b12'>(Herrmann &amp; Messier, 2018)</ns0:ref>, an optimization of the scheduling and the routing (cross-layer) is proposed. The goal is to minimize energy consumption and prolong the lifetime of the petroleum refinery process. The frame structure and an optimization of the scheduling and selection of the routing hops, are the elements of ISA enhancement. Despite the many developments of ISA 100.11a and IEEE 802.15.4 and other related WSN scheduling, some researchers proposed different improvement perspectives. For example, in <ns0:ref type='bibr' target='#b3'>(Farayev et al., 2020)</ns0:ref>, the pre-knowledge of the periodic nature of data generated in the network is exploited to formulate joint optimization of scheduling, power control, and rate adaptation for discrete rate transmission mode. Although this assumption is useful when it is valid, many WSNs have no pre-knowledge of the nature of data generation, such as event-based monitoring.</ns0:p><ns0:p>The literature on MAC layer scheduling in IEEE 802.15.4 and ISA 100.11a tackles various aspects of these two protocols. Several approaches focus on optimizing CSMA <ns0:ref type='bibr' target='#b6'>(Gamal et al., 2020)</ns0:ref> and others pay attention to TDMA <ns0:ref type='bibr' target='#b18'>(Osamy, El-Sawy &amp; Khedr, 2019)</ns0:ref>, but limited work concentrates on integrating both functionalities <ns0:ref type='bibr' target='#b29'>(Wang et al., 2018)</ns0:ref>. Furthermore, none of the previous work develops protocols for integrated CSMA-TDMA with grouping awareness. 
This factor affects the performance that relies heavily on prior knowledge about each node in terms of its application or role in the system or its priority of responding when packets are generated, compared with systems that consider scheduling of medium access but not the source or group of node priority.</ns0:p><ns0:p>Overall, although the handling of the MAC scheduling problem in IEEE 802.15.4 based protocols covers various development aspects, such as optimization of parameters, incorporation of adaptive approaches, and usage of TDMA and/or CSMA, none of the previous approaches has addressed the scheduling with a consideration of the nodes' geographical distribution. Considering that collisions occur more frequently when the nodes are close to each other or share the same coverage zone, relative location-aware contention is an important criterion for optimizing the scheduling and reducing the collisions. The article aims to propose a novel MAC scheduling protocol based on IEEE 802.15.4 that enables clustering, which is location-based grouping as a criterion for handling scheduling. Furthermore, the proposed protocol will exploit TDMA to exchange clusters' information with the sink and CSMA for transmitting information within the clusters to the cluster heads. With such scheduling management, the developed protocol is the first MAC scheduling that jointly enables clustering awareness and CSMA-TDMA integration in a single protocol.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>This section provides the developed protocol and the evaluation methods and metrics used to compare with state-of-the-art protocols or benchmarks. We divide the methodology into several parts. At first, we present the assumptions and symbols, and the network hierarchy. Next, the energy model and the energy-aware back-off time along with the protocol design are provided. In the protocol design, we provide the network cycle and the protocol activity diagram.</ns0:p></ns0:div> <ns0:div><ns0:head>Assumptions and Symbols</ns0:head><ns0:p>We assume that the network is represented by a graph &#119866; = (&#119881;,&#119864;), where &#119881; = {&#119899; 1 , &#119899; 2 , &#8230;&#119899; &#119870; } denotes the set of sensors and k denotes the number of sensors. Each sensor &#119899; &#119894; is located in position (&#119909; &#119894; , &#119910; &#119894; , &#119911; &#119894; ) &#1013; &#119877; 3 , and &#119864; denotes the set of links between the nodes. i. The clustering information, the number of groups in the network, and the priorities of the nodes are defined in advance based on the application.</ns0:p><ns0:p>ii. The network consists of non-overlapping clusters in the coverage of sensor nodes when using low-power transmission mode, but cluster heads connect to the sink when using high-power transmission mode.</ns0:p><ns0:p>iii. The network consists of several priority groups &#119923; &#119947; , &#119947; = &#120783;,&#120784;..&#119950;&#119938;&#119961;&#119918;&#119955;&#119952;&#119958;&#119953;, where &#119950;&#119938;&#119961;&#119918;&#119955;&#119952;&#119958;&#119953; denotes the maximum number of groups of sensors and &#119947; denotes an index of the priority level. Lower j is equivalent to higher priority according to Equation <ns0:ref type='formula'>6</ns0:ref>.</ns0:p><ns0:p>iv. 
Each node creates an array with size equal to the number of slots. The value in the array indicates the probability of selecting one of the slots for transmission. Initially, all the slots are assigned the same value, which means that no slot has higher probability than another.</ns0:p><ns0:p>v. Each sensor node is equipped with a battery, and the initial energy for all sensor nodes is the same.</ns0:p></ns0:div> <ns0:div><ns0:head>Network Hierarchy</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62075:1:1:NEW 27 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We build a GMAC protocol based on a network hierarchy depicted in Figure <ns0:ref type='figure'>2</ns0:ref>. The information on environment, sensors, locations, and sinks are provided to the application responsible for managing the network. The information is analyzed, and the optimal cluster decomposition and cluster head assignment are generated and provided to the sinks by the SDN controller through the flow table. Each sink knows its clusters, and their cluster heads become responsible for collecting the data to send to the cloud. GMAC protocol operates within this sub-network that is assumed to be non-overlapping in the coverage zone when using low transmission mode, which is only enabled within cluster communication. In the beginning, the sink broadcasts an information frame that carries the cycle specification, slots, and their definitions. The details of this protocol are presented in the following sub-section.</ns0:p></ns0:div> <ns0:div><ns0:head>Energy Model</ns0:head><ns0:p>Energy is consumed at each sensor whenever a data packet is sent or received. The consumed energy is calculated according to number of bits in the packet for transmission and receiving and the number of bits in the packet and the distance between the sender and receiver in the transmission case. We also assume that the sensors can operate in one of two modes. The first is the high energy mode for communicating between cluster heads and sinks, and the second is the low energy mode for communicating within clusters. 
This model is based on the radio energy dissipation model presented in Equations (1) and Equation ( <ns0:ref type='formula'>2</ns0:ref>) that are given in <ns0:ref type='bibr' target='#b28'>(Wang et al., 2017)</ns0:ref>.</ns0:p><ns0:p>&#119864; &#119879;&#119909; (&#119896;,&#119889;) = &#119864; &#119879;&#119909; -&#119890;&#119897;&#119890;&#119888; (&#119896;) + &#119879; &#119879;&#119909; -&#119886;&#119898;&#119901; (&#119896;,&#119889;) =</ns0:p><ns0:p>(1)</ns0:p><ns0:p>{ &#119864; &#119890;&#119897;&#119890;&#119888; * &#119896; + &#120576; &#119891;&#119904; * &#119896; * &#119889; 2 , &#119889; &#8804; &#119889; 0 &#119891;&#119900;&#119903; &#119897;&#119900;&#119908; &#119890;&#119899;&#119890;&#119903;&#119892;&#119910; &#119898;&#119900;&#119889;&#119890; &#119864; &#119890;&#119897;&#119890;&#119888; * &#119896; + &#120576; &#119898;&#119901; * &#119896; * &#119889; 4 , &#119889; &gt; &#119889; 0 &#119891;&#119900;&#119903; &#8462;&#119894;&#119892;&#8462; &#119890;&#119899;&#119890;&#119903;&#119892;&#119910; &#119898;&#119900;&#119889;&#119890;</ns0:p><ns0:p>To receive k bits' message, the energy consumption is given in Equation ( <ns0:ref type='formula'>2</ns0:ref>).</ns0:p><ns0:p>(2) &#119864; &#119877;&#119883; (&#119896;) = &#119864; &#119877;&#119883; -&#119890;&#119897;&#119890;&#119888; (&#119896;) = &#119864; &#119890;&#119897;&#119890;&#119888; * &#119896; Energy-aware back-off time One of the developed models of the protocol is the energy-aware back-off time which enables the node to consider its residuals energy. The approach uses the current energy in the node , the &#119864; minimum allowed energy , the maximum energy , the maximum and minimum values &#119864; &#119898;&#119894;&#119899; &#119864; &#119898;&#119886;&#119909; of or in a linear proportional model as it to select the best as it is given &#119862;&#119882; &#119862;&#119882; &#119898;&#119886;&#119909; , &#119862;&#119882; &#119898;&#119894;&#119899; &#119862;&#119882;&#119864; in Equation (3) which will be used in the back-off time calculation in Equation ( <ns0:ref type='formula'>4</ns0:ref>).</ns0:p><ns0:p>( Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>) &#119862;&#119882;&#119864; = ( &#119862;&#119882; &#119898;&#119886;&#119909; -&#119862;&#119882; &#119898;&#119894;&#119899; &#119864; &#119898;&#119886;&#119909; -&#119864; &#119898;&#119894;&#119899; ) (&#119864; -&#119864; &#119898;&#119894;&#119899; ) + &#119862;&#119882; &#119898;&#119894;&#119899;<ns0:label>3</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Considering that at certain point of time, the nodes will have different values of residual energies, then it is more likely when two nodes contribute in a collision when attempting to access the channel, they have different energies. Hence, the Packet Delivery Ratio (PDR) will get more chance to increase, and node will be more immune from wasting its energy in frequent collision. The suffix EA will be used to indicate to the protocol that uses this developed model.</ns0:p></ns0:div> <ns0:div><ns0:head>Protocol Design</ns0:head><ns0:p>GMAC is a MAC scheduling protocol for WSN that is partitioned into multi-clusters and six cycles. Each cycle contains one frame, except cycle 1 that includes frames. Hence, the total &#119873; &#119862; number of frames is 5+ . Next, we present the details of the cycles, the general protocol &#119873; &#119862; activity diagram and their corresponding frames.</ns0:p><ns0:p>1) Cycle 0. 
<ns0:div><ns0:head>Protocol Design</ns0:head><ns0:p>GMAC is a MAC scheduling protocol for a WSN that is partitioned into multiple clusters; the protocol operates in six cycles. Each cycle contains one frame, except cycle 1, which includes $N_C$ frames. Hence, the total number of frames is $5 + N_C$. Next, we present the details of the cycles, the general protocol activity diagram, and their corresponding frames.</ns0:p><ns0:p>1) Cycle 0. A one-time cycle that occurs only at the beginning of the network. In this cycle, the frame is sent from the sink as a broadcast frame to identify the cluster heads, the nodes, and the assignments. The frame is a broadcast frame, and the nodes are in low power mode. In case of packet loss within this cycle, the settings of the previous cycles are kept.</ns0:p><ns0:p>2) Cycle 1. A periodic cycle to send data from the nodes to the cluster head. The nodes operate in low coverage mode, and the frames in the cycle differ from one cluster to another according to the cluster size and the group information. The frame of any cluster $i$ contains a number of slots given by Equation (5) and Equation (6); each subframe $i$ has the size</ns0:p><ns0:p>$$Nf_i = m \sum_{j=1}^{L} L_j \, N_i^{j} \quad (5)$$</ns0:p><ns0:p>$$L_j = maxGroup - j + 1 \quad (6)$$</ns0:p><ns0:p>where $maxGroup$ denotes the maximum number of groups in the sensors and $j$ denotes the index of the group; a low index corresponds to a high priority level given to the group. $N_i^{j}$ denotes the number of nodes in cluster $i$ that have the priority level $j$.</ns0:p><ns0:p>Example 1: Consider a cluster with four priority groups and five nodes in each group. Applying Equation (5) gives</ns0:p><ns0:p>$$Nf = 5(1 \times m + 2 \times m + 3 \times m + 4 \times m) \quad (7)$$</ns0:p><ns0:p>where m = 1, so $Nf = 5(1 + 2 + 3 + 4) = 50$.</ns0:p><ns0:p>The equation for $Nf$ can be generalized to account for the number of nodes $N_i$ in group $i$ when the number of nodes differs across the groups, such as</ns0:p><ns0:p>$$Nf_j = \sum_{j=1}^{L} L_j \, m \times N_i^{j} \quad (8)$$</ns0:p><ns0:p>3) Cycle 2. A periodic cycle to send data from the cluster heads to the sinks. In this cycle, the cluster heads operate in high coverage mode while the other nodes are in sleep mode. As provided in Equation (9), the frame type of this cycle is long and is set up by the coordinator based on the cluster sizes and their number. On the one hand, each cluster head has a predefined number of slots according to its cluster size, so the cluster heads do not compete in this frame. On the other hand, the nodes in each cluster are in sleep mode while their cluster head is communicating with the sink, to prevent interference. The frame is decomposed into $N_c$ subframes, each assigned to one cluster head. After the cluster head selection by the controller, each cluster head knows its dedicated subframes for transmission. The sub-frame sizes are determined on the basis of the number of nodes in the corresponding cluster and their groups. The frame consists of a number of slots equal to $N_{tf}$:</ns0:p><ns0:p>$$N_{tf} = m \sum_{j=1}^{N_c} \sum_{i=1}^{L_j} L_j \times N_i = \sum_{j=1}^{N_c} Nf_j \quad (9)$$</ns0:p>
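<ns0:p>The slot accounting of Equations (5), (6) and (9) can be illustrated with a short sketch. The data layout and function names below are assumptions made for the example rather than the authors' code; the assertion reproduces the value Nf = 50 obtained in Example 1.</ns0:p>
```python
# Illustrative sketch of the slot accounting in Equations (5), (6) and (9).
# `group_counts` maps a priority level j to the number of nodes N_i^j of one
# cluster; this layout and the function names are assumptions for the example.

def level_weight(j, max_group):
    """Equation (6): L_j = maxGroup - j + 1 (lower j = higher priority)."""
    return max_group - j + 1

def subframe_size(group_counts, max_group, m=1):
    """Equation (5): Nf_i = m * sum_j L_j * N_i^j for one cluster."""
    return m * sum(level_weight(j, max_group) * n
                   for j, n in group_counts.items())

def total_frame_size(clusters, max_group, m=1):
    """Equation (9): N_tf is the sum of the per-cluster subframe sizes."""
    return sum(subframe_size(g, max_group, m) for g in clusters)

# Example 1 from the text: four priority groups with five nodes each, m = 1.
example_1 = {1: 5, 2: 5, 3: 5, 4: 5}
assert subframe_size(example_1, max_group=4, m=1) == 50   # 5 * (4 + 3 + 2 + 1)
```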
<ns0:p>Example 2: Assume a network of four clusters. Table <ns0:ref type='table'>1</ns0:ref> shows that each cluster contains different numbers of groups and of nodes inside each group. Equation (7) is applied to calculate the total size of the sub-frame for the corresponding cluster when m = 1. To explain the sequence of the network cycle in this example, we show the detailed frame and sub-frames in Figure 3.</ns0:p><ns0:p>4) Cycle 3. A periodic cycle triggered every $T_u$ to update the energy status and packet delivery status of the nodes to their cluster heads. The cycle repeats after $N_C$ cycles, a number predefined by the application. Nodes work in low coverage mode. Only one frame, update frame 1, is contained, and its length is the same as in cycle 1.</ns0:p><ns0:p>5) Cycle 4. A periodic cycle triggered every $T_u$ to update the energy and PDR status of the nodes to the sink. In this cycle, a long frame with predefined slots is sent by the cluster heads to the sink. The cluster heads operate in high coverage mode while the other nodes are in sleep mode. The frame type, update frame 2, is a long frame as provided in Equation (9).</ns0:p><ns0:p>6) Cycle 5. A periodic cycle triggered every $T_u$ and broadcasted by the sink to change the cluster heads based on the node energy consumption. Only one frame is contained, update frame 3, to identify the new head of a certain cluster. In addition, the cycle contains an indicator variable that defines one of three possible states: State 1 allows contention only for nodes that have not occupied their slot, State 2 allows contention for all nodes, and State 3 does not allow any contention.</ns0:p><ns0:p>7) Network Cycle. Figure 3 shows that the network consists of six cycles. In cycles 0, 1, 4, and 5, the nodes work in low power mode, wherein the coverage radius is regarded as RA m. This value assures no interference between the nodes of one cluster and those of another cluster. In cycles 2 and 3, the cluster head works in high power mode, in which the radius becomes RB, and it is responsible for sending data from the cluster head to the sink. In high power mode, all nodes except the cluster head go into sleep mode.</ns0:p><ns0:p>8) Protocol Activity Diagram. Figure 4 shows the activity diagram that provides a sequential description of the various cycles and tasks in the GMAC protocol. The diagram starts with a broadcast frame that informs all the nodes about the clustering and the cluster head decision. The application layer makes this decision in the cloud based on various considerations, which are beyond the scope of this study. This broadcast frame includes the information of each node's cluster, its cluster head, and its dedicated slot in the TDMA cycles. The sizes of those cycles and of the contention access periods are also defined.</ns0:p><ns0:p>Next, the nodes of each cluster repeatedly send data to their cluster head using the CAP period of the ISA or IEEE frame. Subsequently, a TDMA cycle is performed to send the data gathered by the cluster heads to the sink in a sequential manner. We define a time period $T_u$ after which the cluster head and the sink are updated on the node energy status. This awareness enables maximum utilization of the slots by updating the indicator variable that allows the nodes to increase their contention and, consequently, maximize their rewards.
Hence, three consecutive cycles are triggered every $T_u$: cycle 3, in which the nodes update their cluster head, using TDMA, about their energy and occupation status; cycle 4, in which the cluster heads update the sink about their cluster states using TDMA; and cycle 5, in which the sink updates the entire network about the clustering decision and the indicator variable. The pseudocode of the whole protocol is presented in Algorithm 1.</ns0:p><ns0:p>Algorithm 1. Pseudo code of the GMAC-ISA/IEEE protocol
Input: locations of the sensors and their initial battery levels; a fixed sink
Start
while (sink is active)
1- the sink node sends a broadcast frame to inform about the cluster heads and the cluster decomposition
2- the nodes in each cluster send data to their cluster heads using the ISA/IEEE frame with CAP
3- each cluster head sends its cluster data to the sink in its dedicated slots inside the ISA/IEEE TDMA frame
4- if (time to update cluster head)
4.1 in each cluster, the nodes update the cluster head about their status using TDMA
4.2 the cluster head updates the sink about the cluster status using TDMA
4.3 the sink updates the clustering decision and the indicator variables
else go to 2
5- end
End</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL DESIGN AND RESULTS</ns0:head><ns0:p>This section provides the evaluation scenarios and the experimental work for comparing our GMAC with the benchmarks IEEE 802.15.4, ISA 100.11a, and UDG <ns0:ref type='bibr' target='#b33'>(Yasari et al., 2017)</ns0:ref>. GMAC has two variants, namely, GMAC-IEEE and GMAC-ISA. Hence, we compare five protocols: GMAC-IEEE, GMAC-ISA, UDG (which is based on ISA 100.11a), IEEE 802.15.4, and ISA 100.11a.</ns0:p></ns0:div> <ns0:div><ns0:head>1) Experiment Design</ns0:head><ns0:p>The simulation work uses MATLAB 2019b for evaluation. The experiment design is based on changing the inter-arrival time, which indicates the offered load in the network; the inter-arrival time ranges from 0.1 to 5 sec. In each experiment, a set of scenarios is generated on the basis of changing two variables, the number of nodes in the network and the number of clusters. The number of nodes is $K$ = 10, 20, ..., 100, and the number of clusters is taken as $N_c$ = 1, 2, ..., 10. In addition, different priority groups are generated, $L_j$ = 1, 2, ..., 10. Moreover, the average number of nodes inside each cluster ranges from $K/N_c - \sigma$ to $K/N_c + \sigma$, where $\sigma$ denotes the diversity of the number of nodes in the clusters. Table <ns0:ref type='table'>2</ns0:ref> provides the simulation parameters. For channel fading, we use the Nakagami model, which is commonly used in the simulation of physical fading radio channels. Through its shape parameter, this distribution can model signal fading conditions ranging from severe to mild to no fading <ns0:ref type='bibr' target='#b1'>(Beaulieu &amp; Cheng, 2005)</ns0:ref>.</ns0:p>
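<ns0:p>As a rough illustration of the scenario generation described above (and not the authors' MATLAB code), the sketch below draws cluster sizes around K/N_c with spread sigma and assigns each node a priority group and an inter-arrival time; all parameter values and names are assumptions for the example.</ns0:p>
```python
# Rough sketch of the scenario generation described in the experiment design:
# for a given (K, Nc) pair it draws cluster sizes around K/Nc with spread
# sigma and gives each node a priority group in 1..lj_max. Hypothetical code.
import random

def generate_scenario(num_nodes, num_clusters, lj_max, sigma, seed=0):
    rng = random.Random(seed)
    avg = num_nodes / num_clusters
    sizes = [max(1, round(rng.uniform(avg - sigma, avg + sigma)))
             for _ in range(num_clusters)]
    clusters = []
    for size in sizes:
        # each node: (priority group, inter-arrival time in seconds)
        nodes = [(rng.randint(1, lj_max), rng.uniform(0.1, 5.0))
                 for _ in range(size)]
        clusters.append(nodes)
    return clusters

scenario = generate_scenario(num_nodes=150, num_clusters=3, lj_max=4, sigma=5)
print([len(c) for c in scenario])   # cluster sizes scattered around 150/3 = 50
```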
The scenario details in terms of the number of nodes and the number of clusters are shown in Table 3, with the visualization of two scenarios in Figure 5(a) and Figure 5(b).</ns0:p></ns0:div> <ns0:div><ns0:head>2) Evaluation Results and Configuration</ns0:head><ns0:p>The experimental results for the evaluation scenarios of Table 3 are provided in Figure 6 to Figure 17. First, we present the PDR of IEEE, GMAC-IEEE and GMAC-IEEE EA in Figure 6, and of ISA, GMAC-ISA and GMAC-ISA EA in Figure 7. GMAC-IEEE and GMAC-IEEE EA generate higher values of PDR compared with IEEE, with a slight superiority of GMAC-IEEE EA. This is explained by the geographical distribution awareness of the GMAC-based protocols, which enables less collision by dividing the nodes into geographical clusters with less overlap of their coverage compared with the single-cluster contention in IEEE.</ns0:p><ns0:p>• Packet Delivery Ratio. Both GMAC protocols generate a PDR between 90% and 100%, whereas the PDR of IEEE is as low as 75% in scenario 3-150 and as low as 30% in scenario 4-500. Similarly, higher values of PDR are obtained for GMAC-ISA and GMAC-ISA EA compared with basic ISA. ISA and UDG have a lower PDR for scenarios 3-400 and 5-400. This is explained by the high number of nodes distributed in physical clusters, which ISA and UDG do not take into account in their MAC scheduling; this puts pressure on the sink. In GMAC, with clusters and cluster heads, the pressure is mitigated. Moreover, a higher PDR is generated by the IEEE-based protocols in Figure 6 compared with the PDR generated by the ISA-based protocols in Figure 7. This is explained by the packet lifetime of the latter, which gives less opportunity for sending packets compared with the IEEE protocol, which does not add a lifetime to the packets.</ns0:p><ns0:p>• End-to-end Delay. The second metric is the end-to-end (e2e) delay, which is shown in Figure 8 for the IEEE-based protocols and in Figure 9 for the ISA-based protocols. In all scenarios in Figure 8, there is a slight difference between the e2e delay of GMAC-IEEE and that of IEEE, indicating the competitive performance of the approaches in terms of delay. In Figure 9, the difference in delay between the GMAC-ISA-based protocols and ISA is larger than the corresponding difference for IEEE, which indicates that the packet lifetime has more influence in reducing the delay in the ISA experiments than in the IEEE experiments. In some scenarios, GMAC has a lower e2e delay than the original IEEE. For example, the least e2e delay for GMAC-IEEE and GMAC-IEEE EA, obtained for scenario 5-150, is 10 s, which is 3 s lower than the least e2e delay of IEEE. This is explained by the capability of GMAC to enable more successful transmissions with fewer trials compared with IEEE.</ns0:p><ns0:p>• Energy Consumption. The energy consumption is depicted in Figure 10 and Figure 11 for the GMAC-based approaches and the original IEEE and ISA. In Figure 10, the energy consumption is much lower for GMAC-IEEE EA and GMAC-IEEE than for IEEE. We also observe that IEEE consumes the highest energy for all scenarios with 5 clusters, namely, 5-200, 5-300, and 5-400, with some differences caused by random changes in the geographical distribution of the nodes.
However, the GMAC-based protocols are less affected by the change in the number of nodes and the number of clusters, and their energy consumption level is below 100 W/h for all scenarios. Furthermore, we observe in scenario 5-300 that the energy consumption is 28.1 W/h for GMAC-IEEE EA and GMAC-IEEE but around 578 W/h for IEEE. Similarly, in Figure 11, the energy consumption of the ISA-based protocols, that is, ISA and UDG, is the highest for scenarios 5-150, 5-200, and 5-300, indicating the effect on energy consumption of the diverse localization area of the nodes and of the lack of grouping them within clusters. However, GMAC-ISA is less affected because of its clustering-aware scheduling. Another observation is that GMAC-ISA EA has high energy consumption in some scenarios, such as 3-400, which is explained by its behavior of exploiting the residual energy of the nodes for sending; this may cause more collisions when the nodes across the entire network reach a low energy level. Another contributing factor is the difference in the geographical positions of the nodes, which causes a non-linear energy consumption profile. More specifically, the change in node locations causes variation in the energy consumption in addition to the number of nodes, and this factor has a non-linear component in the overall energy consumption due to the piecewise nature of the energy consumption formula.</ns0:p><ns0:p>• Network Lifetime. The next metric is the lifetime, which indicates the experiment time until the first node exits the network. In Figure 12, GMAC-IEEE and GMAC-IEEE EA experience the same lifetime, which is in general longer than the lifetime of IEEE. This is similar to the behavior of GMAC-ISA and GMAC-ISA EA compared with ISA, as in Figure 13. UDG has a longer lifetime. The behavior of GMAC is explained by its pre-organizing aspect before enabling contention, compared with the other protocols, which limits the number of collisions and prolongs the lifetime of the network. The number of retransmissions, which indicates the collision level in the network, is depicted in Figure 14 and Figure 15. The retransmission numbers for IEEE and ISA are higher than those of GMAC and GMAC EA (IEEE and ISA). This is correlated with the energy consumption of the protocols. As stated earlier, this indicates the role of clustering-aware GMAC in mitigating collisions in the contention.</ns0:p><ns0:p>• Throughput. The last metric is the throughput, which indicates how much of the bandwidth is exploited in producing successful transmissions. The results of this metric for the IEEE-based and ISA-based protocols are depicted in Figure 16 and Figure 17, respectively. Both figures reveal that GMAC and GMAC EA have higher throughput than the basic protocols, including UDG, regardless of the scenario. The highest achieved throughput by GMAC-IEEE EA and GMAC-IEEE is 539 kbps and 529 kbps, respectively, compared with only 68.4 kbps by IEEE. Similarly, Figure 17 demonstrates that GMAC-ISA and GMAC-ISA EA accomplish throughputs of 352.7 kbps and 350.2 kbps, compared with 19.8 kbps and 13.8 kbps by ISA and UDG, respectively. The high throughput is achieved because of the effective management of the nodes' access to the medium and its tight relation with the nodes' geographical locations, which is a factor ignored in the original IEEE and ISA.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This study proposes a novel protocol for the MAC layer in IoT networks considering three aspects of operation: the low energy level of the nodes, the high number of nodes, and the need to group the nodes according to their priorities. This protocol is distinct from state-of-the-art MAC protocols for IoT, such as IEEE 802.15.4 and ISA 100.11a. In addition, GMAC is suitable for operation with SDN technology, where the control plane is separated from the data plane and the network is managed by an SDN controller.
This is because the protocol requires adaptability to the geographic node distribution, which it achieves by incorporating clustering awareness in the frames and cycles.</ns0:p><ns0:p>In addition, GMAC decomposes the network into cycles to enable efficient medium sharing and competition by activating channel access attempts within each cluster, which is assumed to be isolated from the other clusters in terms of communication collisions when the low energy level is used. This is sufficient to transfer packets from the nodes to the cluster heads, while transferring packets to the sink uses dedicated cycles assigned to the cluster heads. GMAC is compared with state-of-the-art protocols over various scenarios in terms of the number of nodes and priority levels. A clear superiority is observed in PDR and energy consumption, along with competitive performance in terms of e2e delay. GMAC protocols generate a PDR higher than 90%, whereas the PDR of the benchmarks is as low as 75% in some scenarios and 30% in others. In addition, the GMAC protocols have an e2e delay that is 3 s lower than the least e2e delay of IEEE. Regarding energy consumption, the consumed energy is 28.1 W/h for GMAC-IEEE EA and GMAC-IEEE, which is lower than that of IEEE 802.15.4 (578 W/h) in certain scenarios. The highest achieved throughput by GMAC-IEEE EA and GMAC-IEEE is 539 kbps and 529 kbps, respectively, compared with only 68.4 kbps by IEEE. As limitations of the work, we state, first, that it requires the nodes to be distributed over a wide geographical area in which the various clusters have no overlap in coverage, so that intra-cluster operations are executed without collisions with other clusters. Second, it assumes that the nodes are stationary, so the clustering decision does not change; however, dynamic reallocation of the cluster head is enabled. Future work can develop more energy-saving techniques and incorporate machine learning for channel access. Another direction is to implement hardware for validating the new protocol and evaluating it in real-world scenarios. Extending the work to include channel hopping is an additional direction for future work.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Application of IoT-WSN in health care using SDN technology.</ns0:p></ns0:div><ns0:figure xml:id='fig_3'><ns0:head>Algorithm 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudo code of the GMAC-ISA/IEEE protocol.</ns0:figDesc></ns0:figure> </ns0:body> "
"FAKULTI TEKNOLOGI DAN SAINS MAKLUMAT • FACULTY OF INFORMATION SCIENCE AND TECHNOLOGY August 16th, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. This manuscript has been corrected in accordance with the comments given by the reviewers. The corresponding changes and refinements made in the revised paper are summarized in section below. We believe that the manuscript is now suitable for publication in PeerJ. Associate Prof Ts Dr. Rosilah Hassan Centre for Cyber Security, Faculty of Information Science & Technology On behalf of all authors. Fakulti Teknologi dan Sains Maklumat Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor Darul Ehsan Malaysia Tel.: +603-8921 6172 Faks: +603-8925 6732 E-mel: dftsm@ukm.edu.my Web: www.ukm.my Mengilham Harapan, Mencipta Masa Depan • Inspiring Futures, Nurturing Possibilities Reviewer 1 (Anand Nayyar) Basic reporting The Research Paper is Ok. And novel protocol is being proposed, but the paper needs more technical revisions and is subject for re-review Literature review, and the proposed work needs more technical structure information to be elaborated. The literature survey is extended to 15 articles. The research gap is elaborated at the ending paragraph of the literature survey. Experimental design Experimental design is Ok. But needs more technical layout. More technical elaboration of the experiments configuration and parameters is presented Validity of the findings Results are Ok. But analysis part is weak. And needs to be discussed with more detailed description. The analysis is extended and more interpretation and justification are added. Additional comments The Paper needs the following Strong Revisions and is subject for re-review, and after re-review, the final decision for the paper will be taken: 1. Introduction should be added with more information with regard to the Problem Definition, scope and even with some Background. Add Objectives of the paper at the end of Introduction. Add Organization of the paper. Done 2. Min 15-25 papers should be cited under literature review, and every paper should be elaborated with what is being observed, what is the novelty aspect and what experimental results are observed. At the end of literature review, highlight what overall technical gaps are observed that led to the design of the proposed methodology. The literature survey is extended to 15 articles. The research gap is elaborated at the ending paragraph of the literature survey. 3. Give a Novel name of the proposed protocol---And add this in Title, Abstract and other sections uniformly in the paper. And under proposed protocol- Give the heading of the proposed protocol--Start with System Model. Then highlight the Architecture- Give Steps, Flowchart and Algorithm of the proposed protocol. The name of the protocol GMAC is added to the title, abstract and keywords, and it is mentioned in the article for 72 times. Also, it is highlighted in the article: protocol design, activity diagram and pseudocode. 4. Under experimentation- Add Simulation Parameters in Table. Simulation parameters already exist in Table 2. Add assumptions of simulation. The assumptions of the simulation are provided in the section named assumptions and symbols. Give the detailed description of every point of testing under seperate head. The testing scenarios are presented in Table 3. In addition, we inserted each metric of testing under separate head as requested. 
Give the Data based table of the values of experimentation. The data-based values are added to the graphs. 5. Add some Analysis section and do performance evaluation with more existing proposed protocols/. The approach is compared with three existing protocols, namely, IEEE 802.15.4, ISA 100.11a and UDG and more analysis is added. 6. Add future scope to this paper. Future work already exists, we highlighted them Future work can develop more energy-saving techniques and incorporate machine learning for channel access. Another future work is to implement hardware for validating the new protocol and evaluate it in real-world scenarios. Developing the work to include channel hopping is an additional future work. 7. Add some more latest references to the paper. We extended the number of references. Reviewer 2 Basic reporting Topic selection is good and basic impression of paper falls positive. Experimental design Proposed algorithm must be properly explained in text. A more elaboration of the pseudocode is provided. Validity of the findings Fine Additional comments Highlight all assumptions and limitations of your work. There is a section for the assumptions of the work. For the limitation, we add in in the lines --. Mention time complexity of entire pipeline. The protocol has decentralized nature and it is event based which means that no central algorithm is running with certain complexity to be analyzed. For the local algorithm at each node, the only operation before selecting a slot is a roulette wheel and after selecting it is an addition operation. Mention all figures properly in text. There was numbering error and it is fixed. Reviewer 3 Basic reporting Professional Proofread and English are required. References and literature review is sufficient. As requested, another round of English proof is done. Experimental design The rationale, aim, and objectives may need to be clearly defined in the Abstract and other sections. The rationale, aim, and objectives are highlighted. Validity of the findings Consistency is missing in the paper. Conclusion, Limitation, and Future scope needs revision for more clarity and enhancing the readability of the paper. The conclusion, limitation and future scope of the paper is updated. In addition, various changes are done according to the other reviewers’ comments. Additional comments The authors are working on an important area of research. Few observations: 1. The main rationale/objective of the paper needs to be added in the abstract and Introduction. The objective of the article is added to the last section of the introduction. 2. The writing is ambiguous and more clarity is required to enhance the readability of the paper. More elaboration of the concepts and algorithm is added as requested. 3. Consistency is missing in the paper. Conclusion, Limitation, and Future scope needs revision and elaborations for more clarity and enhancing the readability of the paper. Conclusion, Limitation, and Future scope are updated. 4. Mathematical Equations and paper require thorough proofread. Mathematical equations are double checked and another round of English proof is done. 5. Figure 1 is talking about its application in healthcare, though in the paper authors mentioned having its applications in various domains, then rationale of providing only for healthcare is missing. The health care was added as a motivational example. 
This example is selected because the sensors are deployed in a 3D-based configuration, and the sensors in each patient's room are located separately from those in other patients' rooms, which supports the assumption of cluster-based decomposition. "
Here is a paper. Please give your review comments after reading it.
246
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The development of Medium Access Control (MAC) protocols for Internet of Things should consider various aspects such as energy saving, scalability for a wide number of nodes, and grouping awareness. Although numerous protocols consider these aspects in the limited view of handling the medium access, the proposed Grouping MAC (GMAC) exploits prior knowledge of geographic node distribution in the environment and their priority levels. Such awareness enables GMAC to significantly reduce the number of collisions and prolong the network lifetime. GMAC is developed on the basis of five cycles that manage data transmission between sensors and cluster head and between cluster head and sink.</ns0:p><ns0:p>These two stages of communication increase the efficiency of energy consumption for transmitting packets. In addition, GMAC contains slot decomposition and assignment based on node priority, and, therefore, is a grouping-aware protocol. Compared with standard benchmarks IEEE 802.15.4 and industrial automation standard 100.11a and userdefined grouping, GMAC protocols generate a Packet Delivery Ratio (PDR) higher than 90%, whereas the PDR of benchmark is as low as 75% in some scenarios and 30% in others. In addition, the GMAC accomplishes lower end-to-end (e2e) delay than the least e2e delay of IEEE with a difference of 3 s. Regarding energy consumption, the consumed energy is 28.1 W/h for GMAC-IEEE Energy Aware (EA) and GMAC-IEEE, which is less than that for IEEE 802.15.4 (578 W/h) in certain scenarios.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The recent development of Wireless Sensor Networks (WSNs) and the incorporation of technologies of Internet of Things (IoT) has enabled their applications in various industrial fields, particularly through IoT-based WSN (IoT-WSN) <ns0:ref type='bibr'>(Hassan et al., 2020)</ns0:ref>. Such the emergence has led to numerous applications in different sectors such as agriculture <ns0:ref type='bibr'>(Hassan, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Keswani et al., 2018)</ns0:ref>, smart cities <ns0:ref type='bibr' target='#b0'>(Al-Majhad et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Nassar et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zhang, 2020)</ns0:ref>, intelligent transportation system <ns0:ref type='bibr' target='#b15'>(Muthuramalingam et al., 2019)</ns0:ref>, medical field <ns0:ref type='bibr' target='#b17'>(Onasanya &amp; Elshakankiri, 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Yao et al., 2019)</ns0:ref>, security and surveillance <ns0:ref type='bibr' target='#b2'>(Benzerbadj et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Memon et al., 2020)</ns0:ref>, military <ns0:ref type='bibr' target='#b35'>(Zieliski, Chudzikiewicz &amp; Furtak, 2019)</ns0:ref>, forensics <ns0:ref type='bibr' target='#b32'>(Yaqoob et al., 2019)</ns0:ref>, education, and voting <ns0:ref type='bibr' target='#b26'>(Srikrishnaswetha, Kumar &amp; Mahmood, 2019)</ns0:ref>. Sensing-based applications that monitor and gather data are regarded as common applications of IoT <ns0:ref type='bibr' target='#b21'>(Sadeq, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Wu, Wu &amp; Yuce, 2018)</ns0:ref>. 
Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows a conceptual diagram of a healthcare monitoring application using IoT-WSN. Body area networks are installed on patients in hospitals, and they continuously gather data from all patients in real time. The sensors are deployed in 3D based configuration, and the sensors are located in each patients' room separately from other patients' rooms, which emphasize the assumption of clusters-based decomposition. The collected data are then used within an intelligent system to assign care algorithmically to increases the recovery ratio. For one floor, one sink connects the clusters, each representing one patient with sensors on different parts of the body. On the other end, the sink is connected to a Software Defined Network (SDN) controller connected to an application layer in the cloud to monitor patients and assign tasks to doctors.</ns0:p><ns0:p>This application requires continuous sensor data collection and transfer to the cloud. The wireless nature of the network and the limited resources of its nodes create two issues. First, the management of node access to the medium must be coordinated with consideration of the sensing rate, sensors' nature, and their relation to the application. This issue affects Quality of Service (QoS) metrics in the network. Second, the management of energy in the network affects the lifetime metric. These issues are not independent of each other, enabling Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) mode may cause failure in several sensors in accessing the medium, which eventually leads to energy waste and shorter lifetime. However, saving the node energy requires careful scheduling and management of their access to the medium.</ns0:p><ns0:p>Numerous types of Medium Access Control (MAC) protocols are available. A few of the most widely used are IEEE 802.15.4 and ISA 100.11a as IoT MAC protocols in numerous types of applications. These types have small differences in terms of enabling or ignoring packet life time or adding priority to packets or not. One critical issue of these two protocols is the scalable energy awareness solution <ns0:ref type='bibr' target='#b25'>(Sotenga, Djouani &amp; Kurien, 2020)</ns0:ref>. When an application requires installing a high number of sensor nodes, the protocol may be inefficient in terms of energysaving due to the resulting collisions. Another issue is the non-awareness of various priority and grouping aspects of the sensor nodes. Such an awareness is important to manage the medium effectively. Recent variants are developed with grouping awareness, such as User-Defined Grouping (UDG) <ns0:ref type='bibr' target='#b33'>(Yasari et al., 2017)</ns0:ref> based on ISA 100.11a. However, this variant has various limitations and necessitates developing of IoT-oriented MAC protocol with scalability, energy efficiency, and grouping awareness. The article aims to propose a novel protocol based on both IEEE 802.15.4 and ISA 100.11a to improve their CAP performance based on exploiting prior knowledge about the clustering information of the network and the priority level of the nodes. 
Therefore, this study aims to develop two novel variants of Grouping MAC (GMAC) based on current benchmarks, namely, GMAC-IEEE and GMAC-ISA.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>In addition, several improvements on IEEE 802.15.4 in various aspects have been performed, such as the Clear Channel Assessment (CCA) and its effect on the delay and overhead on the protocol. An improvement on CCA <ns0:ref type='bibr' target='#b29'>(Wang, Liu &amp; Yin, 2018)</ns0:ref> is proposed using a graded tailoring strategy, which checks the length of the original packet and modifies its original size according to the partition points. Assuming that the same back-off unit size is used, the protocol includes 20 symbols. To make the data packet tail size 8, those with lower than 8 add zeroes, and those higher than 8 subtract zeroes. This improvement is useful from the general perspective of delay and over-head, but it ignores prior knowledge of nodes or packets priorities. The mechanism of CCA of IEEE 802.15.4 is also examined in different ways. For example, the CCA has been modified to include primary and secondary stages <ns0:ref type='bibr' target='#b6'>(Gamal et al., 2020)</ns0:ref>. In addition, an optimization model is built for the delay with energy consumption as a constraint. The model is solved using linear quadratic programming, but it does not consider retransmission essential in IEEE 802.15.4.</ns0:p><ns0:p>Another modification to IEEE 802.15.4 <ns0:ref type='bibr' target='#b19'>(Patel &amp; Kumar, 2017)</ns0:ref> aims to increase the number of CCA from one to two, reducing the number of back-off periods to confirm the status decision of the channel and scarify the low energy consumption of CCA. This modification avoids highenergy consumption when a failure occurs and bandwidth loss if the channel becomes idle. The number of retransmissions and their effect on performance is also examined. The network nodes are divided into sub-groups or classes according to the number of failed retransmission <ns0:ref type='bibr' target='#b10'>(Henna &amp; Sarwar, 2018)</ns0:ref>. Specifically, the low number of failed retransmissions implies low increases in the back-off time and converts the protocol of IEEE 802.15.4 from a fixed to an adaptive back-off. However, the approach lacks an automatic means to decide to change the back-off for each class. Another issue is the neglect of energy level of each sensor that is considered a highly critical aspect in the performance. Furthermore, the approach does not embed prior knowledge regarding the sensor's class or priority related to its function in the system.</ns0:p><ns0:p>TDMA and CSMA functionalities of IEEE 802.15.4 are also combined for WSN scheduling with the support of demand, that is, profile. A proposed WSN scheduling based on the concept of network virtualization <ns0:ref type='bibr' target='#b27'>(Uchiteleva, Shami &amp; Refaey, 2017)</ns0:ref> divides the networks into profiles, each of which indicates a set of nodes sharing the same channel demand nature or characteristics. The scheduling proposes two profile categories, bursty and periodic. The super-frame in IEEE802.15.4 is then decomposed into contention access frame and contention free frame <ns0:ref type='bibr' target='#b27'>(Uchiteleva et al., 2017)</ns0:ref>, which contains a set of guaranteed time slot. Next, an optimization is conducted to maximize the utility for each profile. The algorithm uses a greedy optimization approach. 
Furthermore, as stated in <ns0:ref type='bibr' target='#b24'>(Shrestha, Hossain &amp; Choi, 2014)</ns0:ref>, the strength of CSMA/CA when it is combined with TDMA improves the scalability by preserving the performance of legacy-based CSMA/CA-based MAC scheme in congested networks.</ns0:p><ns0:p>Other approaches in the literature aim to improve ISA 100.11a, which is regarded as a common protocol for industrial wireless sensor networks owing to its wide use in MAC layer management of sensors of control systems <ns0:ref type='bibr' target='#b4'>(Florencio, Doria Neto &amp; Martins, 2020)</ns0:ref>. In a recent survey <ns0:ref type='bibr' target='#b20'>(Raptis, Passarella &amp; Conti, 2020)</ns0:ref>, a comparison between ISA 100.11a and WirelessHART has been conducted to conclude the need to optimize various aspects in ISA such as communication and energy optimization. An optimization of ISA under TDMA <ns0:ref type='bibr' target='#b22'>(Satrya &amp; Shin, 2020)</ns0:ref> proposes a solution representation that provides a code for each node according to its time slot. Next, the work of <ns0:ref type='bibr' target='#b33'>(Yasari et al., 2017)</ns0:ref> develops a genetic-based scheduling algorithm that enables flexible scheduling in ISA 100.11a. However, this work only optimizes one parameter in ISA 100.11a, the packet lifetime assigned to each group, and another parameter outside ISA 100.11a, which is the number of nodes in each group. An objective function is then used to maximize the number of nodes and the distribution of nodes in the groups according to their weights. In addition, this work ignores a direct optimization to the network performance measures such as QoS, which is used as constraint in the optimization only. Some researchers aimed to enhance ISA 100.11a in the context of application, such as adapting ISA to operate in a specific control environment. In <ns0:ref type='bibr' target='#b12'>(Herrmann &amp; Messier, 2018)</ns0:ref>, an optimization of the scheduling and the routing (cross-layer) is proposed. The goal is to minimize energy consumption and prolong the lifetime of the petroleum refinery process. The frame structure and an optimization of the scheduling and selection of the routing hops, are the elements of ISA enhancement. Despite the many developments of ISA 100.11a and IEEE 802.15.4 and other related WSN scheduling, some researchers proposed different improvement perspectives. For example, in <ns0:ref type='bibr' target='#b3'>(Farayev et al., 2020)</ns0:ref>, the pre-knowledge of the periodic nature of data generated in the network is exploited to formulate joint optimization of scheduling, power control, and rate adaptation for discrete rate transmission mode. Although this assumption is useful when it is valid, many WSNs have no pre-knowledge of the nature of data generation, such as event-based monitoring.</ns0:p><ns0:p>The literature on MAC layer scheduling in IEEE 802.15.4 and ISA 100.11a tackles various aspects of these two protocols. Several approaches focus on optimizing CSMA <ns0:ref type='bibr' target='#b6'>(Gamal et al., 2020)</ns0:ref> and others pay attention to TDMA <ns0:ref type='bibr' target='#b18'>(Osamy, El-Sawy &amp; Khedr, 2019)</ns0:ref>, but limited work concentrates on integrating both functionalities <ns0:ref type='bibr' target='#b29'>(Wang et al., 2018)</ns0:ref>. Furthermore, none of the previous work develops protocols for integrated CSMA-TDMA with grouping awareness. 
This factor affects performance, which relies heavily on prior knowledge about each node in terms of its application or role in the system, or its priority of responding when packets are generated, compared with systems that consider the scheduling of medium access but not the source or priority group of the node.</ns0:p><ns0:p>Overall, although the handling of the MAC scheduling problem in IEEE 802.15.4-based protocols covers various development aspects, such as optimization of parameters, incorporation of adaptive approaches, and usage of TDMA and/or CSMA, none of the previous approaches has addressed scheduling with consideration of the nodes' geographical distribution. Considering that collisions occur more frequently when the nodes are close to each other or share the same coverage zone, location-aware contention is an important criterion for optimizing the scheduling and reducing collisions. The article aims to propose a novel MAC scheduling protocol based on IEEE 802.15.4 that enables clustering, that is, location-based grouping, as a criterion for handling scheduling. Furthermore, the proposed protocol exploits TDMA to exchange the clusters' information with the sink and CSMA for transmitting information within the clusters to the cluster heads. With such scheduling management, the developed protocol is the first MAC scheduling protocol that jointly enables clustering awareness and CSMA-TDMA integration.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>This section provides the developed protocol and the evaluation methods and metrics used to compare it with state-of-the-art protocols or benchmarks. We divide the methodology into several parts. First, we present the assumptions and symbols and the network hierarchy. Next, the energy model and the energy-aware back-off time, along with the protocol design, are provided. In the protocol design, we provide the network cycle and the protocol activity diagram.</ns0:p></ns0:div> <ns0:div><ns0:head>Assumptions and Symbols</ns0:head><ns0:p>We assume that the network is represented by a graph $G = (V, E)$ with $V = \{n_1, n_2, \dots, n_K\}$, where $n_i$ denotes sensor $i$ and $K$ denotes the number of sensors. Each sensor $i$ is located at position $(x_i, y_i, z_i) \in R^3$, and $E$ denotes the set of links between the nodes.</ns0:p><ns0:p>i. The clustering information, the number of groups in the network, and the priorities of the nodes are defined in advance based on the application.</ns0:p><ns0:p>ii. The network consists of non-overlapping clusters in the coverage of the sensor nodes when using the low power transmission mode, but the cluster heads connect to the sink when using the high power transmission mode.</ns0:p><ns0:p>iii. The network consists of several priority groups, where $L_j = 1, 2, \dots, maxGroup$; $maxGroup$ denotes the maximum number of groups of sensors, and $j$ denotes the index of the priority level. A lower $j$ is equivalent to a higher priority according to Equation (6).</ns0:p>
<ns0:p>iv. Each node creates an array with a size equal to the number of slots. The value in the array indicates the probability of selecting one of the slots for transmission. Initially, all the slots are assigned the same value, which means that no slot has a higher probability than another; a minimal sketch of this per-node state is given after the list.</ns0:p><ns0:p>v. Each sensor node is equipped with a battery, and the initial energy is the same for all sensor nodes.</ns0:p></ns0:div>
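<ns0:p>A minimal sketch of the per-node state implied by assumptions iv and v is given below; the field names and the dataclass layout are illustrative assumptions, not part of the protocol specification.</ns0:p>
```python
# Minimal sketch of the per-node state implied by assumptions iv and v:
# a uniform slot-selection probability array and an initial battery level.
# The field names and layout are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorNode:
    node_id: int
    position: Tuple[float, float, float]   # (x, y, z) per the network model
    priority_group: int                    # j = 1 .. maxGroup, lower = higher priority
    energy: float                          # initial energy, identical for all nodes
    num_slots: int
    slot_probs: List[float] = field(default_factory=list)

    def __post_init__(self):
        # assumption iv: all slots start with the same selection probability
        self.slot_probs = [1.0 / self.num_slots] * self.num_slots

node = SensorNode(node_id=1, position=(0.0, 2.5, 1.2),
                  priority_group=2, energy=2.0, num_slots=10)
assert abs(sum(node.slot_probs) - 1.0) < 1e-9
```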
<ns0:div><ns0:head>Network Hierarchy</ns0:head><ns0:p>We build the GMAC protocol on the network hierarchy depicted in Figure <ns0:ref type='figure'>2</ns0:ref>. The information on the environment, sensors, locations, and sinks is provided to the application responsible for managing the network. The information is analyzed, and the optimal cluster decomposition and cluster head assignment are generated and provided to the sinks by the SDN controller through the flow table. Each sink knows its clusters, and the cluster heads become responsible for collecting the data to be sent to the cloud. The GMAC protocol operates within this sub-network, which is assumed to be non-overlapping in the coverage zone when the low transmission mode is used; this mode is enabled only for intra-cluster communication. In the beginning, the sink broadcasts an information frame that carries the cycle specification, the slots, and their definitions. The details of this protocol are presented in the following sub-section.</ns0:p></ns0:div> <ns0:div><ns0:head>Energy Model</ns0:head><ns0:p>Energy is consumed at each sensor whenever a data packet is sent or received. The consumed energy is calculated from the number of bits in the packet when receiving, and from the number of bits in the packet together with the distance between the sender and the receiver when transmitting. We also assume that the sensors can operate in one of two modes: the high energy mode for communication between cluster heads and sinks, and the low energy mode for communication within clusters. This model is based on the radio energy dissipation model presented in Equation (1) and Equation (2), which are given in <ns0:ref type='bibr' target='#b28'>(Wang et al., 2017)</ns0:ref>.</ns0:p><ns0:p>$$E_{Tx}(k,d) = E_{Tx\text{-}elec}(k) + E_{Tx\text{-}amp}(k,d) = \begin{cases} E_{elec}\, k + \varepsilon_{fs}\, k\, d^{2}, & d \le d_{0} \quad \text{(low energy mode)} \\ E_{elec}\, k + \varepsilon_{mp}\, k\, d^{4}, & d > d_{0} \quad \text{(high energy mode)} \end{cases} \quad (1)$$</ns0:p><ns0:p>To receive a k-bit message, the energy consumption is given in Equation (2).</ns0:p><ns0:p>$$E_{Rx}(k) = E_{Rx\text{-}elec}(k) = E_{elec}\, k \quad (2)$$</ns0:p><ns0:p>Energy-aware back-off time</ns0:p><ns0:p>One of the developed models of the protocol is the energy-aware back-off time, which enables a node to take its residual energy into account. The approach uses the current energy of the node $E$, the minimum allowed energy $E_{min}$, the maximum energy $E_{max}$, and the maximum and minimum contention-window values $CW_{max}$ and $CW_{min}$ in a linear proportional model to select the best $CWE$, as given in Equation (3), which is then used in the back-off time calculation of Equation (4).</ns0:p><ns0:p>$$CWE = \left( \frac{CW_{max} - CW_{min}}{E_{max} - E_{min}} \right) (E - E_{min}) + CW_{min} \quad (3)$$</ns0:p><ns0:p>$$Backoff\_Time = Random(0, CWE) \times SlotTime \quad (4)$$</ns0:p><ns0:p>Considering that, at a certain point in time, the nodes will have different residual energies, two nodes that collide when attempting to access the channel are likely to hold different energies and therefore to draw different back-off times. Hence, the Packet Delivery Ratio (PDR) gets a better chance to increase, and a node is more immune from wasting its energy in frequent collisions. The suffix EA is used to indicate the protocol variant that uses this model.</ns0:p></ns0:div>
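<ns0:p>As a small, self-contained illustration of Equation (3) (with assumed parameter values, not taken from the simulation settings), the snippet below shows how two colliding nodes with different residual energies end up with clearly different contention windows, which is the effect argued above.</ns0:p>
```python
# Small self-contained illustration (assumed parameter values) of how
# Equation (3) separates the contention windows of two colliding nodes
# that hold different residual energies.
def energy_aware_cw(E, E_min=0.0, E_max=2.0, CW_min=8, CW_max=64):
    return (CW_max - CW_min) / (E_max - E_min) * (E - E_min) + CW_min

high_energy_node = energy_aware_cw(E=1.8)   # close to a fresh battery
low_energy_node = energy_aware_cw(E=0.4)    # nearly depleted battery
# CWE = 58.4 vs 19.2: per the linear model, the higher-energy node draws
# its back-off from a wider window, so the two nodes are unlikely to collide again.
print(high_energy_node, low_energy_node)
```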
<ns0:div><ns0:head>Protocol Design</ns0:head><ns0:p>GMAC is a MAC scheduling protocol for a WSN that is partitioned into multiple clusters; the protocol operates in six cycles. Each cycle contains one frame, except cycle 1, which includes $N_C$ frames. Hence, the total number of frames is $5 + N_C$. Next, we present the details of the cycles, the general protocol activity diagram, and their corresponding frames.</ns0:p><ns0:p>1) Cycle 0. A one-time cycle that occurs only at the beginning of the network. In this cycle, the frame is sent from the sink as a broadcast frame to identify the cluster heads, the nodes, and the assignments. The frame is a broadcast frame, and the nodes are in low power mode. In case of packet loss within this cycle, the settings of the previous cycles are kept.</ns0:p><ns0:p>2) Cycle 1. A periodic cycle to send data from the nodes to the cluster head. The nodes operate in low coverage mode, and the frames in the cycle differ from one cluster to another according to the cluster size and the group information. The frame of any cluster $i$ contains a number of slots given by Equation (5) and Equation (6); each subframe $i$ has the size</ns0:p><ns0:p>$$Nf_i = m \sum_{j=1}^{L} L_j \, N_i^{j} \quad (5)$$</ns0:p><ns0:p>$$L_j = maxGroup - j + 1 \quad (6)$$</ns0:p><ns0:p>where $maxGroup$ denotes the maximum number of groups in the sensors and $j$ denotes the index of the group; a low index corresponds to a high priority level given to the group. $N_i^{j}$ denotes the number of nodes in cluster $i$ that have the priority level $j$.</ns0:p><ns0:p>Example 1: Consider a cluster with four priority groups and five nodes in each group. Applying Equation (5) gives</ns0:p><ns0:p>$$Nf = 5(1 \times m + 2 \times m + 3 \times m + 4 \times m) \quad (7)$$</ns0:p><ns0:p>where m = 1, so $Nf = 5(1 + 2 + 3 + 4) = 50$.</ns0:p><ns0:p>The equation for $Nf$ can be generalized to account for the number of nodes $N_i$ in group $i$ when the number of nodes differs across the groups, such as</ns0:p><ns0:p>$$Nf_j = \sum_{j=1}^{L} L_j \, m \times N_i^{j} \quad (8)$$</ns0:p><ns0:p>3) Cycle 2. A periodic cycle to send data from the cluster heads to the sinks. In this cycle, the cluster heads operate in high coverage mode while the other nodes are in sleep mode. As provided in Equation (9), the frame type of this cycle is long and is set up by the coordinator based on the cluster sizes and their number. On the one hand, each cluster head has a predefined number of slots according to its cluster size, so the cluster heads do not compete in this frame. On the other hand, the nodes in each cluster are in sleep mode while their cluster head is communicating with the sink, to prevent interference. The frame is decomposed into $N_c$ subframes, each assigned to one cluster head. After the cluster head selection by the controller, each cluster head knows its dedicated subframes for transmission. The sub-frame sizes are determined on the basis of the number of nodes in the corresponding cluster and their groups. The frame consists of a number of slots equal to $N_{tf}$:</ns0:p><ns0:p>$$N_{tf} = m \sum_{j=1}^{N_c} \sum_{i=1}^{L_j} L_j \times N_i = \sum_{j=1}^{N_c} Nf_j \quad (9)$$</ns0:p>
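<ns0:p>The following sketch illustrates Equation (9) by summing the per-cluster subframe sizes of a hypothetical four-cluster network; the group counts below are invented for the example and do not reproduce Table 1.</ns0:p>
```python
# Illustrative check of Equation (9): the sink's long TDMA frame is the sum
# of the per-cluster subframe sizes. The four clusters below are hypothetical
# (they do not reproduce Table 1's values), with m = 1.
def subframe_size(group_counts, max_group, m=1):
    # Equation (5): Nf = m * sum_j (maxGroup - j + 1) * N^j
    return m * sum((max_group - j + 1) * n for j, n in group_counts.items())

clusters = [
    {1: 2, 2: 3},         # cluster with two priority groups
    {1: 4},               # a single high-priority group
    {1: 1, 2: 2, 3: 3},   # three groups
    {2: 5, 3: 1},         # no group-1 nodes
]
max_group = 3
n_tf = sum(subframe_size(c, max_group) for c in clusters)
print(n_tf)   # 45 slots in the cluster-head-to-sink frame (12 + 12 + 10 + 11)
```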
<ns0:p>Example 2: Assume a network of four clusters. Table <ns0:ref type='table'>1</ns0:ref> shows that each cluster contains different numbers of groups and of nodes inside each group. Equation (7) is applied to calculate the total size of the sub-frame for the corresponding cluster when m = 1. To explain the sequence of the network cycle in this example, we show the detailed frame and sub-frames in Figure 3.</ns0:p><ns0:p>4) Cycle 3. A periodic cycle triggered every $T_u$ to update the energy status and packet delivery status of the nodes to their cluster heads. The cycle repeats after $N_C$ cycles, a number predefined by the application. Nodes work in low coverage mode. Only one frame, update frame 1, is contained, and its length is the same as in cycle 1.</ns0:p><ns0:p>5) Cycle 4. A periodic cycle triggered every $T_u$ to update the energy and PDR status of the nodes to the sink. In this cycle, a long frame with predefined slots is sent by the cluster heads to the sink. The cluster heads operate in high coverage mode while the other nodes are in sleep mode. The frame type, update frame 2, is a long frame as provided in Equation (9).</ns0:p><ns0:p>6) Cycle 5. A periodic cycle triggered every $T_u$ and broadcasted by the sink to change the cluster heads based on the node energy consumption. Only one frame is contained, update frame 3, to identify the new head of a certain cluster. In addition, the cycle contains an indicator variable that defines one of three possible states: State 1 allows contention only for nodes that have not occupied their slot, State 2 allows contention for all nodes, and State 3 does not allow any contention.</ns0:p><ns0:p>7) Network Cycle. Figure 3 shows that the network consists of six cycles. In cycles 0, 1, 4, and 5, the nodes work in low power mode, wherein the coverage radius is regarded as RA m. This value assures no interference between the nodes of one cluster and those of another cluster. In cycles 2 and 3, the cluster head works in high power mode, in which the radius becomes RB, and it is responsible for sending data from the cluster head to the sink. In high power mode, all nodes except the cluster head go into sleep mode.</ns0:p><ns0:p>8) Protocol Activity Diagram. Figure 4 shows the activity diagram that provides a sequential description of the various cycles and tasks in the GMAC protocol. The diagram starts with a broadcast frame that informs all the nodes about the clustering and the cluster head decision. The application layer makes this decision in the cloud based on various considerations, which are beyond the scope of this study. This broadcast frame includes the information of each node's cluster, its cluster head, and its dedicated slot in the TDMA cycles. The sizes of those cycles and of the contention access periods are also defined.</ns0:p><ns0:p>Next, the nodes of each cluster repeatedly send data to their cluster head using the CAP period of the ISA or IEEE frame. Subsequently, a TDMA cycle is performed to send the data gathered by the cluster heads to the sink in a sequential manner. We define a time period $T_u$ after which the cluster head and the sink are updated on the node energy status. This awareness enables maximum utilization of the slots by updating the indicator variable that allows the nodes to increase their contention and, consequently, maximize their rewards.
As noted above, three consecutive cycles are therefore triggered every T_u: cycle 3, in which the nodes update their cluster head, using TDMA, about their energy and slot occupation status; cycle 4, in which the cluster heads update the sink about their cluster states using TDMA; and cycle 5, in which the sink updates the entire network about the clustering decision and the indicator variable. The pseudocode of the whole protocol is presented in Algorithm 1.

Algorithm 1: Pseudocode of the GMAC-ISA/IEEE protocol
Input: locations of the sensors and their initial battery levels; a fixed sink
Start
while (sink is active)
1- the sink node sends a broadcast frame to inform about the cluster heads and the cluster decomposition
2- the nodes in each cluster send data to their cluster heads using the ISA/IEEE frame with CAP
3- each cluster head sends its cluster data to the sink in its dedicated slots inside the ISA/IEEE TDMA frame
4- if (time to update cluster head)
4.1 in each cluster, the nodes update the cluster head about their status using TDMA
4.2 the cluster head updates the sink about the cluster status using TDMA
4.3 the sink updates the clustering decision and the indicator variables
else go to 2
5- end
End

EXPERIMENTAL DESIGN AND RESULTS

This section provides the evaluation scenarios and experimental work for comparing our GMAC with the benchmarks IEEE 802.15.4, ISA 100.11a, and UDG (Yasari et al., 2017). GMAC has two variants, namely GMAC-IEEE and GMAC-ISA. Hence, five protocols are compared: GMAC-IEEE, GMAC-ISA, UDG (based on ISA 100.11a), IEEE 802.15.4, and ISA 100.11a.

1) Experiment Design

The simulation work uses MATLAB 2019b for evaluation. The experiment design is based on changing the inter-arrival time, which indicates the offered load in the network. The inter-arrival time ranges from 0.1 to 5 sec. In each experiment, a set of scenarios is generated on the basis of changing two variables, the number of nodes in the network and the number of clusters. The number of nodes is K = 10, 20, ..., 100, and the number of clusters is taken as Nc = 1, 2, ..., 10. In addition, different priority groups are generated, L_j = 1, 2, ..., 10. Moreover, the average number of nodes inside each cluster ranges from K/Nc − σ to K/Nc + σ, where σ denotes the diversity of the number of nodes in the clusters. Table 2 provides the simulation parameters. For channel fading, we use Nakagami, which is commonly used in the simulation of physical fading radio channels. Using its shape parameter, this distribution can model signal fading conditions ranging from severe, to mild or light, to no fading (Beaulieu & Cheng, 2005).
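As a rough illustration of this design (not the exact generator used in the paper), the sketch below builds one scenario with cluster sizes around K/Nc ± σ and per-node priority groups, and samples Nakagami-m fading amplitudes through the standard Gamma relation; σ, m, and Ω below are placeholder values rather than the settings of Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scenario(K, Nc, max_group, sigma=2):
    """One evaluation scenario: Nc cluster sizes around K/Nc +/- sigma and a
    priority group (1 = highest priority) assigned to every node."""
    sizes = np.clip(rng.integers(K // Nc - sigma, K // Nc + sigma + 1, size=Nc), 1, None)
    groups = [rng.integers(1, max_group + 1, size=int(s)) for s in sizes]
    return sizes, groups

def nakagami_gain(m, omega, size):
    """Nakagami-m fading amplitudes via the Gamma relation:
    if X ~ Gamma(shape=m, scale=omega/m), then sqrt(X) ~ Nakagami(m, omega)."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

sizes, groups = make_scenario(K=100, Nc=5, max_group=4, sigma=2)
inter_arrival = rng.uniform(0.1, 5.0, size=int(sizes.sum()))   # offered load, 0.1-5 s
fading = nakagami_gain(m=1.5, omega=1.0, size=int(sizes.sum()))
print(sizes, inter_arrival[:3].round(2), fading[:3].round(2))
```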
The scenario details in terms of the number of nodes and the number of clusters are shown in Table 3, with a visualization of two scenarios in Figure 5(a) and Figure 5(b).

2) Evaluation Results and Configuration

The experimental results for the evaluation scenarios of Table 3 are provided in Figure 6 to Figure 17.

• Packet Delivery Ratio. First, we present the PDR of IEEE, GMAC-IEEE and GMAC-IEEE EA in Figure 6, and of ISA, GMAC-ISA and GMAC-ISA EA in Figure 7. GMAC-IEEE and GMAC-IEEE EA generate higher values of PDR compared with IEEE, with a slight superiority of GMAC-IEEE EA. This is interpreted by the geographical distribution awareness of the GMAC-based protocols, which enables less collision due to dividing the nodes into geographical clusters with less overlap of their coverage, compared with the one-cluster contention in IEEE. Both GMAC protocols generate a PDR between 90% and 100%, whereas the PDR of IEEE is as low as 75% in scenario 3-150 and as low as 30% in scenario 4-500. Similarly, higher values of PDR are obtained for GMAC-ISA and GMAC-ISA EA compared with basic ISA. ISA and UDG have a lower PDR for scenarios 3-400 and 5-400. This is interpreted by the high number of nodes distributed in physical clusters, which ISA and UDG do not consider when scheduling the MAC; this puts pressure on the sink. In GMAC, with its clusters and cluster heads, the pressure is mitigated. Moreover, a higher PDR is generated by the IEEE-based protocols in Figure 6 compared with the PDR generated by the ISA-based protocols in Figure 7. This is interpreted by the packet lifetime of the latter, which gives less opportunity to send packets compared with the IEEE protocol, which does not add a lifetime to the packets.

• End-to-end Delay. The second metric is the end-to-end (e2e) delay, which is shown in Figure 8 for the IEEE-based protocols and in Figure 9 for the ISA-based protocols. In all scenarios in Figure 8, there is a slight difference between GMAC-IEEE's e2e delay and IEEE's, indicating the competitive performance of the approaches in terms of delay. In Figure 9, the difference in delay between the ISA GMAC-based protocols and ISA is higher than the corresponding difference for IEEE, which is interpreted as the packet lifetime having more influence on reducing the delay in the ISA experiment than in IEEE. In some scenarios, GMAC has a lower e2e delay than the original IEEE. For example, the least e2e delay for GMAC-IEEE and GMAC-IEEE EA occurs in scenario 5-150, namely 10 s, which is 3 s lower than the least e2e delay of IEEE. This is interpreted by the capability of GMAC to enable more successful transmissions with fewer trials compared with IEEE.

• Energy Consumption. The energy consumption is depicted in Figure 10 and Figure 11 for the GMAC-based approaches and the original IEEE and ISA. In Figure 10, the energy consumption is much lower for GMAC-IEEE EA and GMAC-IEEE than for IEEE. We also observe that IEEE consumes the highest energy for all scenarios of 5 clusters, namely 5-200, 5-300, and 5-400, with some differences caused by the random changes in the geographical distribution of the nodes.
However, the GMAC-based protocols are less affected by the change in the number of nodes and the number of clusters, and their energy consumption level is below 100 W/h for all scenarios. Furthermore, we observe in scenario 5-300 that the energy consumption is 28.1 W/h for GMAC-IEEE EA and GMAC-IEEE but around 578 W/h for IEEE. Similarly, in Figure 11, the energy consumption of the ISA-based protocols, that is, ISA and UDG, is the highest for scenarios 5-150, 5-200, and 5-300, indicating the effect of the diverse localization area of the nodes, and of the lack of grouping them within clusters, on the energy consumption. However, GMAC-ISA is less affected because of its clustering-aware scheduling. Another observation is that GMAC-ISA EA has a high energy consumption in some scenarios, such as 3-400, which is interpreted by its behavior of exploiting the residual energy of the nodes for sending, which may cause more collisions when the nodes reach a low energy level across the entire network. Another interpretation factor is the difference in the geographical positions of the nodes, which causes a non-linear energy consumption profile. More specifically, the change in the node locations is another factor that causes variation in the energy consumption, in addition to the number of nodes. Furthermore, the latter factor has a non-linear component in the overall energy consumption due to the piecewise nature of the energy consumption formula.

• Network Lifetime. Another metric is the lifetime, which indicates the experiment time until the first node exits the network. In Figure 12, GMAC-IEEE and GMAC-IEEE EA experience the same lifetime, which is in general longer than the lifetime of IEEE. This is similar to the behavior of GMAC-ISA and GMAC-ISA EA compared with ISA, as shown in Figure 13. UDG has a longer lifetime. This behavior is interpreted by the pre-organizing aspect of GMAC before enabling contention, which, compared with the other protocols, limits the number of collisions and prolongs the lifetime of the network. The number of retransmissions, which indicates the collision level in the network, is depicted in Figure 14 and Figure 15. The retransmission numbers for IEEE and ISA are higher than those of GMAC and GMAC EA (IEEE and ISA). This is correlated with the energy consumption of the protocols. As stated earlier, it indicates the role of the clustering-aware GMAC in mitigating collisions during contention.

• Throughput. The last metric is the throughput, which indicates how much of the bandwidth is exploited in producing successful transmissions. The results of this metric for the IEEE-based and ISA-based protocols are depicted in Figure 16 and Figure 17, respectively. Both figures reveal that GMAC and GMAC EA have a higher throughput than the basic protocols, including UDG, regardless of the scenario. The highest throughput achieved by GMAC-IEEE EA and GMAC-IEEE is 539 kbps and 529 kbps, respectively, compared with only 68.4 kbps by IEEE. Similarly, Figure 17 demonstrates that GMAC-ISA and GMAC-ISA EA accomplish throughputs of 352.7 kbps and 350.2 kbps, compared with 19.8 kbps and 13.8 kbps by ISA and UDG, respectively. The high throughput is achieved because of the effective management of node access to the medium and its tight relation with the geographical location of the nodes, which is a factor ignored in the original IEEE and ISA.

Conclusions

This study proposes a novel protocol for the MAC layer in IoT networks considering three aspects of operation: a low energy level of the nodes, a high number of nodes, and the need to group the nodes according to their priorities. This protocol is distinct from state-of-the-art MAC protocols for IoT, such as IEEE 802.15.4 and ISA 100.11a. In addition, GMAC is suitable to work with SDN technology, where a control plane is separated from the data plane and the network is managed by an SDN controller.
This is because the protocol requires adaptability to the geographic node distribution by incorporating clustering awareness in the frames and cycles.

In addition, GMAC decomposes the network into cycles to enable efficient medium sharing and competition by activating channel access attempts within each cluster, which is supposed to be isolated from the other clusters in terms of communication collisions when a low power level is used. This is sufficient to transfer packets from the nodes to the cluster heads, while transferring packets to the sink uses cycles dedicated to the cluster heads. GMAC is compared with state-of-the-art protocols over various scenarios in terms of number of nodes and priority levels. A clear superiority is observed in PDR and energy consumption, together with a competitive performance in terms of e2e delay. The GMAC protocols generate a PDR higher than 90%, whereas the PDR of the benchmarks is as low as 75% in some scenarios and 30% in others. In addition, the GMAC protocols have a lower e2e delay than the least e2e delay of IEEE, with a difference of 3 s. Regarding energy consumption, the consumed energy is 28.1 W/h for GMAC-IEEE EA and GMAC-IEEE, which is lower than that of IEEE 802.15.4 (578 W/h) in certain scenarios. The highest throughput achieved by GMAC-IEEE EA and GMAC-IEEE is 539 kbps and 529 kbps, respectively, compared with only 68.4 kbps by IEEE. As limitations of the work, we state that it requires the nodes to be distributed over a wide geographical area where the various clusters have no overlap in coverage, so that the intra-cluster operations are executed without collisions with other clusters. Second, it assumes that the nodes are stationary, so the clustering decision does not change; however, it enables dynamic reallocation of the cluster head. Future work can develop more energy-saving techniques and incorporate machine learning for channel access. Another future work is to implement hardware for validating the new protocol and to evaluate it in real-world scenarios. Developing the work to include channel hopping is an additional future direction.

Figure 1: Application of IoT-WSN in health care using SDN technology.

(4) Back_off_Time = Random(0, CWE) × SlotTime

Algorithm 1: Pseudocode of the GMAC-ISA/IEEE protocol (given in full in the text above).
Figure 11: Energy consumption of the ISA-based protocols.

Figure 17: Throughput of the ISA-based protocols.
"
"FAKULTI TEKNOLOGI DAN SAINS MAKLUMAT • FACULTY OF INFORMATION SCIENCE AND TECHNOLOGY September 2th, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. This manuscript has been corrected in accordance with the comments given by the reviewers. The corresponding changes and refinements made in the revised paper are summarized in section below. We believe that the manuscript is now suitable for publication in PeerJ. Associate Prof Ts Dr. Rosilah Hassan Centre for Cyber Security, Faculty of Information Science & Technology On behalf of all authors. Fakulti Teknologi dan Sains Maklumat Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor Darul Ehsan Malaysia Tel.: +603-8921 6172 Faks: +603-8925 6732 E-mel: dftsm@ukm.edu.my Web: www.ukm.my Mengilham Harapan, Mencipta Masa Depan • Inspiring Futures, Nurturing Possibilities Editor comments (Noor Jhanjhi) Minor Revisions Please pay close attention to your figures and ensure they are clear, with all axes labelled appropriately. Ensure the number of significant figures included is consistent throughout your figures. Axes should be labelled with the variable, with units in brackets (where appropriate). Fig. 15 has overlapping text. Figures have been labelled with variables and units have been putted in brackets. Figure 15 overlapping text has been corrected. Reviewer 1 (Anand Nayyar) Basic reporting The Revised Paper has presented the Literature review, structure and results in better manner. Experimental design Yes, the results are satisfactory. Validity of the findings The findings are Ok. Reviewer 2 Basic reporting All the changes have been done. Current form of paper is suitable for publication. Experimental design All the changes have been done. Current form of paper is suitable for publication. Validity of the findings All the changes have been done. Current form of paper is suitable for publication. Additional comments Accepted Reviewer 3 Basic reporting OK Experimental design OK Validity of the findings OK Additional comments Suggested changes are incorporated. We may proceed. "
Here is a paper. Please give your review comments after reading it.
247
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Different fields such as linguistics, teaching, and computing have demonstrated special interest in the study of sign languages (SL). However, the processes of teaching and learning these languages turn complex since it is unusual to find people teaching these languages that are fluent in both SL and the native language of the students. The teachings from deaf individuals become unique. Nonetheless, it is important for the student to lean on supportive mechanisms while being in the process of learning an SL.</ns0:p><ns0:p>Bidirectional communication between deaf and hearing people through SL is a hot topic to achieve a higher level of inclusion. However, all the processes that convey teaching and learning SL turn difficult and complex since it is unusual to find SL teachers that are fluent also in the native language of the students, making it harder to provide computer teaching tools for different SL. Moreover, the main aspects that a second language learner of an SL finds difficult are phonology, non-manual components, and the use of space (the latter two are specific to SL, not to spoken languages). This proposal appears to be the first of the kind to favor the Costa Rican Sign Language (LESCO, for its Spanish acronym), as well as any other SL. Our research focus stands on reinforcing the learning process of final-user hearing people through a modular architectural design of a learning environment, relying on the concept of phonological proximity within a graphical tool with a high degree of usability. The aim of incorporating phonological proximity is to assist individuals in learning signs with similar handshapes. This architecture separates the logic and processing aspects from those associated with the access and generation of data, which makes it portable to other SL in the future. The methodology used consisted of defining 26 phonological parameters (13 for each hand), thus characterizing each sign appropriately.</ns0:p><ns0:p>Then, a similarity formula was applied to compare each pair of signs. With these precalculations, the tool displays each sign and its top ten most similar signs. A SUS usability test and an open qualitative question were applied, as well as a numerical evaluation to a group of learners, to validate the proposal. In order to reach our research aims, we have</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>It is important to lean on learning reinforcement tools to enhance the learning process of a sign language (SL), the knowledge obtained from class work seems to be insufficient to meet this purpose. The use of technology becomes a great option to achieve the objective of learning a sign language because the student faces a new language based on his visual abilities rather than the spoken word. The use of technology can offer easy-to-use visual interfaces permitting the interested party to compare and associate the meaning of signs more accurately. The interested party has the possibility to access these applications if necessary.</ns0:p><ns0:p>A previous research <ns0:ref type='bibr' target='#b29'>(Naranjo-Zeled&#243;n et al. 2020)</ns0:ref>, showed that phonological proximity upgrades different areas within the study of a Sign Language. 
In relation to this particular research, the fundamental characteristics of the Costa Rican Sign Language (LESCO, for its acronym in Spanish) were disclosed with regard to phonological proximity for clustering and learning reinforcement purposes.

The main contributions of our proposal are:
1. The design of a modular and portable architecture of a learning reinforcement environment for sign languages (it can be applied to different sign languages).
2. The inclusion of a graphical software tool, including a signing avatar and a phonological proximity component meant to enrich the learning process.
3. The identification of homonym and paronym signs that illustrate the different degrees of proximity between signs.
4. The evaluation of the phonological proximity module at the implementation level, as well as of the usability and usefulness of this implementation in a software tool.

The architecture design includes the aforementioned software tool, which contains a phonological component to make the learning process more complete. This tool was developed by the Costa Rica Institute of Technology. At first, the tool showed concepts classified into different categories (colors, alphabet, numbers, etc.). These classifications are grouped into three levels: basic, intermediate, and advanced. If a sign is chosen by the user, an avatar reproduces it, interfacing at the same time with PIELS (International Platform for Sign Language Edition, for its acronym in Spanish). So far, it displays content from LESCO, but the design of the tool allows for adaptation to any SL (Serrato-Romero and Chacón-Rivas, 2016).

Our phonological proximity study analyzes homonyms (signs with the same starting handshape), paronyms (similar signs with different meanings), and polysemy (the same sign with different meanings), which we have determined is very rare in LESCO. Some examples of these phenomena are easy to find in spoken languages, such as the paronyms 'affect' and 'effect', or the homonyms of 'book' ('something to read' or 'make a reservation'). Sign languages rely on visual variables, such as handshapes, the location where they are articulated, and facial gestures. The software tool was broadened to examine these correlations and their impact on reinforcing learning.

Conventional techniques in use, as will be seen in the Background section, have drawbacks because they are designed for one specific sign language. This is very possibly because they do not use, or at least do not make explicit, a formalization of the grammar of that sign language. This is in direct contrast to our approach, which takes as a starting point the formalization of phonological parameters mapped to integer numbers and their subsequent use by applying similarity measures.

We emphasize the differences with other methods to better position this work. In this way, the research community can formalize their own parameters and adopt our approach without the need for any change at the architectural level.
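To give a concrete flavor of this approach, the sketch below maps a few signs to integer vectors of phonological parameters and ranks them by cosine similarity to obtain a top-ten list. The vectors shown are made-up placeholders rather than actual LESCO parameter values, so the sketch only illustrates the kind of computation performed by the phonological proximity component described later.

```python
import numpy as np

# Hypothetical 26-entry parameter vectors (13 per hand); the real values
# would come from the sign database, not from this sketch.
signs = {
    "LETTER-D": np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 3,
                          3, 3, 2, 3, 3, 2, 4, 15, 3, 2, 4, 15, 3]),
    "WHERE":    np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 3,
                          3, 3, 2, 3, 3, 2, 4, 15, 3, 2, 4, 15, 3]),
    "SUNDAY":   np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 2, 3,
                          3, 3, 2, 3, 3, 2, 4, 12, 3, 2, 4, 15, 3]),
}

def cosine(x, y):
    """Cosine similarity between two numeric parameter vectors."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def top_similar(query, n=10):
    """Rank every other sign by cosine similarity to the query sign."""
    scores = {s: cosine(signs[query], v) for s, v in signs.items() if s != query}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_similar("LETTER-D"))   # e.g. [('WHERE', 1.0), ('SUNDAY', 0.99...)]
```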
In the field at hand, which is education, the proposals revolve around the use of various techniques or approaches, each with its own rationale and justification, but at the same time revealing clear disadvantages, as explained below:

• Self-assessment open-source software, with web-based tests for adult learners. These Yes/No tests require important improvements to offer a more complete service, such as providing the option to see translations directly after indicating whether the user knows or does not know a sign.
• Using hardware devices (wearables, usually Kinect) with recognition of hand movements and guidance to learners. As a general rule, the use of wearables is preferably left as a last option, because it is unnatural, it is expensive, and the equipment can be lost or damaged, hindering the process.
• Educational games (which also use wearables). The same downsides identified in the previous point apply here.
• Incorporation of computer vision to indicate unnatural movements in the novice. While it is true that computer vision is a very fertile field of research today, it also requires the use of additional equipment.
• Teaching fingerspelling in the form of quizzes. In general, fingerspelling is a communication technique that is too limited and should only be used when the signing person has no other resources at all to communicate their message.
• Lexicon teaching proposals, which can be incomplete when the desired outcome is to produce real communication, with syntactic connections that make sense to the other party.

Our main objective is to demonstrate that SL learning can be reinforced by technological means incorporating the concept of phonological proximity. In turn, we will explain how the phonological components of the SL in question should be parameterized, in order to incorporate them into an architecture that provides a suitable interface with a signing avatar.

In conclusion, in this paper we provide elements that clarify how to deal with issues of central importance in this type of technology, hence making clear the advantages of the proposed system, specifically:

• Modular architecture, to simplify maintenance and the incorporation of new functionalities.
• Applicability to several sign languages, to take advantage of the conceptual power of our contribution in other languages and other types of projects.
• Enrichment of learning environments, to take advantage of opportunities to accelerate and improve the experience.
• Differentiation between homonyms and paronyms, to contrast the different degrees of proximity between signs and, therefore, the need to emphasize practice where it is most necessary.
• Applicability to other environments, offering a portable concept that researchers can incorporate and take advantage of.
• Usability and usefulness validated through a standard test, to have a high degree of certainty that the concept of proximity is properly supported by a real tool, which in turn is easy to use.

The next section provides a background of previous work on the subject matter.
Then, we present the proposed architecture of the learning reinforcement environment ('Architecture for SL learning reinforcement environment' section). After that, we illustrate details of its deployment in a software tool showing the experiments carried out with the phonological proximity module and the users' validation ('The SL Learning Reinforcement Tool module' section). Finally, we draw our conclusions, depicting our contribution to the sign language learning process and the future work ('Conclusions' section).</ns0:p></ns0:div> <ns0:div><ns0:head>Background</ns0:head><ns0:p>This section presents the antecedents of the object of study. They have been classified into three groups of great significance, each one explained in the following subsections. First, the importance of phonological proximity is addressed, which is a crucial matter in this study. Then, the similarity measures for phonological proximity are examined, which are the computational mechanism to determine similarity between objects to be compared. Finally, we go over the proposals for teaching tools that have been made for the student to practice sign languages. We believe that based on the exhaustive search performed in scholarly repositories, this literature reflects an up-to-date state of the research on these topics.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:1:1:NEW 9 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The importance of phonological proximity According to <ns0:ref type='bibr' target='#b2'>Baker et al. (2016)</ns0:ref> the main aspects that a second language learner of a sign language finds difficult are phonology, non-manual components, and the grammatical use of space. These features are specific to sign languages and do not take place in spoken languages. Moreover, the phonological inventory of sign languages is completely different from that of spoken languages. Another salient observation of the authors has to do with how among the different phonological parameters, the handshape has the largest number of distinctive possibilities. In the sign languages that have been studied, the number of different handshapes appears to be larger than the number of locations and movements. Iconicity is also to be considered since signs commonly portray iconic features, which means that the handshape resembles parts of the meaning, a phenomenon that is extremely rare to find in spoken languages. <ns0:ref type='bibr' target='#b35'>Williams, Stone &amp; Newman (2017)</ns0:ref> have studied the importance of phonological similarity to facilitate lexical access, that is, the process by which individuals produce a specific word from their mental lexicon or recognize it when it is used by others (American Psychological <ns0:ref type='bibr'>Association, n.d.)</ns0:ref>. This study, rooted in this psycholinguistic aspect, has determined that lexical access in sign language is facilitated by the phonological similarity of the lexical representations in memory. <ns0:ref type='bibr' target='#b22'>Keane et al. (2017)</ns0:ref> have demonstrated that for fingerspelled words in American Sign Language (ASL), the positional similarity score is the description of handshape similarity that best matches the signer perception when asked to rate the phonological proximity. 
The positional similarity approach is superior when compared to the contour difference approach, so in order to define similarity when fingerspelling, it is more important to look at the positional configuration of the handshapes than to concentrate on the transitions.

An experiment conducted by Hildebrandt & Corina (2002) revealed that all subjects, regardless of their previous exposure to ASL, categorize signs that share location and movement (and differ in handshape) as highly similar. However, a further examination of additional parameter contrasts revealed that different degrees of previous linguistic knowledge influenced the way the signers perceived similarity. So, for instance, the combination of location and handshape is recognized as carrying a higher level of similarity by native signers than by late deaf learners or by hearing signers.

Similarity measures for phonological proximity

With regard to measuring phonological proximity in a sign language, different similarity measures can be used. They fall into five categories: Edit-based, Token-based, Hybrid, Structural (Domain-dependent), and Phonetic (Naumann & Herschel, 2010; Bisandu, Prasad & Liman, 2019).

Any of these measures can be used, as long as the data used as phonological parameters have been properly characterized. For instance, if the parameters are strings of characters, then edit-based similarity measures can satisfy the objective. If the parameters are token sets (this is our case), then token-based measures can be used successfully. (A small sketch contrasting these families of measures is given below.)

Hybrid approaches strive for a balance between the response speed of other known measures and the robustness of comparing all the tokens so as to find the best matches, both to deal with named entities and to solve misspelling problems in big data contexts (which does not make them good candidates for sign languages). Phonetic measures, due to their very nature, have been extensively used for spoken languages, so they are not a good choice for sign languages. Finally, domain-dependent measures use particularities of the data that do not fit well in corpora of sign languages.

The current similarity measures have been widely welcomed by the research community, and many of them are long-standing, hence the literature on the matter can be classified as classical. Coletti & Bouchon-Meunier (2019) note that a complete review or even a simple listing of all the uses of similarity is impossible. They are used in various tasks ranging from the management of data or information, such as content-based information retrieval, text summarization, and recommendation systems, to user profile exploitation and decision-making, to cite only a few.
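To make the contrast between these families tangible, here is a minimal, self-contained sketch (with made-up toy inputs) comparing an edit-based measure with two token-based ones; it is only meant to show why token-based measures are the natural fit once phonological parameters are expressed as sets or numeric vectors.

```python
from math import sqrt

def levenshtein(a, b):
    """Edit-based: number of insertions/deletions/substitutions between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def jaccard(s, t):
    """Token-based: overlap between two token sets."""
    s, t = set(s), set(t)
    return len(s & t) / len(s | t)

def cosine(x, y):
    """Token-based: angle between two numeric parameter vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

print(levenshtein("feelings", "fillings"))                        # 2 edits
print(jaccard({"loc:chin", "shape:D"}, {"loc:chin", "shape:K"}))  # 1/3
print(cosine([1, 3, 2, 2], [1, 3, 2, 4]))                         # close to 1
```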
Among the many similarity measures proposed, a broad classification may be of use: the classical crisp context (Choi et al., 2010; Lesot et al., 2009) and the fuzzy context (Bhutani & Rosenfeld, 2003; Bouchon-Meunier et al., 1996; Li et al., 2014; Couso et al., 2013). Due to the nature of our research, we concentrate on the classical crisp context, since the fuzzy scenario does not apply to our object of study. The main characteristics of these predominantly classic measures are:

• Edit-based measures focus on the calculation of the changes necessary to produce one string from another, weighing the number of necessary changes (insertions, deletions, or modifications) to produce the new string. The Hamming distance (Hamming, 1950) and the Levenshtein distance (Levenshtein, 1966) are the best known.
• Token-based approaches measure the number of matches between two sets of parameters (n-gram tokens), where the tokens are words or numbers. In this category we can mention the Jaccard distance (Jaccard, 1912) and the Cosine distance (Singhal, 2001).
• Hybrid strategies compare strings using an internal similarity function (Jaro or Levenshtein, for instance). Monge-Elkan (Monge & Elkan, 1997) and Soft TF-IDF (Cohen, Ravikumar & Fienberg, 2003) are examples of these techniques.
• Structural proposals focus on data particularities (they are domain-dependent). Dates (Naumann & Herschel, 2010) are the best-known example.
• Phonetic measures match similar sounds in spoken languages (for example, they give the maximum score to pairs of words such as 'feelings' and 'fillings'), applying pre-established rules of similar sounds. Soundex (Russell & Odell, 1918) and Kölner-Phonetik (Postel, 1969) follow this strategy.

Proposals for teaching tools

Currently, studies for LESCO seem to be insufficient. That is why this section reviews software-based proposals, despite the fact that they do not consider this foremost linguistic concept. Also, some researchers focus on the study of the sign languages they can relate to more easily, be it the sign language of their own country or the one in use at their workplace; this situation is very common in the research community. Because of this, it seems relevant to mention these authors' findings and proposals in this section. The education of deaf people or of sign language interpreters is out of the scope of this research; our main focus is on reinforcing the learning process of hearing people as final users.

In (Haug & Ebling, 2019), a report on the use of open-source software for sign language learning and self-assessment was made.
Another finding has been the web-based test for Swiss German Sign Language (DSGS, for its acronym in Swiss German), designed for adult learners. They gave important feedback on the appropriateness of the DSGS vocabulary self-assessment instrument. This feedback arose inputs for the system improvement. The innovation of this study relies on the fact about using existing open-source software as a starting point to develop and evaluate a DSGS test for self-assessment purposes.</ns0:p><ns0:p>The target of the study in (&#193;lvarez-Robles, &#193;lvarez &amp; Carre&#241;o-Le&#243;n, 2020) is to bring forward an interactive software system (ISS) making use of a hardware device (Leap Motion) so that average users had fluid communication with deaf people. The objective is to allow a natural recognition of hand movements by helping the average users learning Mexican Sign Language (MSL) at the same time. Also, through gamification techniques, it would permit the user to learn and communicate with deaf people.</ns0:p><ns0:p>Related to British Sign Language (BSL), there is a lack of games available in the marketplace for teaching purposes indicating that minimal efforts have been made to meet this objective <ns0:ref type='bibr' target='#b20'>(Kale, 2014)</ns0:ref>. The intention of the study is to develop a prototype using the Microsoft Kinect, to help teachers educate young students. This prototype would teach basic BSL, by using JavaScript and HTML5 in a web browser. Positive feedback coming from interviews and playtests among 10 sign language experts unfamiliar with games technology to teach sign language, was also collected. They indicated that this prototype could be used as complement for those conventional teaching methods.</ns0:p><ns0:p>Another research <ns0:ref type='bibr' target='#b16'>(Huenerfauth et al, 2016)</ns0:ref> revealed there is a lack of interactive tools for those students learning ASL. These tools might provide them feedback on their signing accuracy, whenever their ASL teacher is not available for them. A software system project was also performed by utilizing a Kinect camera. By incorporating computer vision, this software can identify aspects of signing by showing non-natural movements so to provide feedback to the students in their won practice. This tool is not supposed to replace feedback from ASL teachers. However, the tool can detect errors. Students state it is better for them to have tools able to provide feedback like videos helping with error minimization, mainly time-aligned with their signing.</ns0:p><ns0:p>Learning sign language is a task, commonly performed in peer groups, with few study materials <ns0:ref type='bibr' target='#b19'>(Joy, 2019)</ns0:ref>. According to their opinion, fingerspelled sign learning turns into the initial stage of sign learning used when there is no corresponding sign, or the signer is not aware of it. Since most of the existing tools are costly because of the external sensors they use, they suggested SignQuiz, a low-cost web-based fingerspelling learning application for Indian Sign PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:1:1:NEW 9 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Language (ISL), with automatic sign language recognition. This application has been the first endeavor in ISL for learning signs using a deep neural network. 
The results reveal that SignQuiz is a better option than printed medium.</ns0:p><ns0:p>There is another available proposal for Chinese Sign Language, by <ns0:ref type='bibr' target='#b7'>Chai et al. (2017)</ns0:ref>. They indicate that using a computer-aided tool known as SignInstructor can offer an effective and efficient learning means. Even they go far beyond stating that the intervention of human teachers is no longer needed, and that the sign language learning is highly effective showing an outstanding score, even higher than the one obtained with face-to-face learning. The system has three modules: 1) a multimodal player for standard materials, including videos, postures, figures, and text; 2) online sign recognition by means of Kinect; 3) an automatic evaluation module.</ns0:p><ns0:p>There is a proposal for a Ghanaian Interactive Sign Language (GISL) Tutor <ns0:ref type='bibr' target='#b31'>(Osei, 2012)</ns0:ref>. This interactive tutor becomes the first computer-based for this sign language. It was specifically designed to teach vocabulary of Ghanaian-specific signs. Those Ghanaians who were involved with this tutor and tested it, said they would even like to have more available signs. The GISL&#180;s Tutor main purpose is letting Ghanaian-specific signs be accessible to anyone interested in using this tool, by displaying pre-recorded lessons with the help of a computerized avatar.</ns0:p><ns0:p>We can conclude this section affirming that, to the best of our knowledge, there are no documented proposals regarding software-assisted learning of sign languages that exploit the concept of phonological proximity. After having studied the background of this topic, it is clear that this research and its corresponding proposal is pertinent, insofar it explains in detail the mechanisms that can be used to incorporate a component of phonological proximity to reinforce the learning of any sign language.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture for SL learning reinforcement environment</ns0:head><ns0:p>The architecture of the tool has been conceived to show a high level of modularity, as well as to separate the logic and processing aspects from those associated with the access and generation of data. Obviously, by relying on several existing components, the design must show all the interdependencies that this implies.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the general architecture of the SL learning reinforcement environment, which consists of four layers, ranging from those interacting with the users to those providing signs and processing modules reflecting logically derived similitude relations. These layers are: 1) A graphical user interface, both for web browsers and mobile devices.</ns0:p><ns0:p>2) The SL Learning Reinforcement Tool.</ns0:p><ns0:p>3) An interface with a Phonological Proximity Submodule and a Signs and Discourses repository. 4) A semantic disambiguation module. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The user interface provides access through any device that allows running a conventional web browser or running an application on a commonly used mobile device. The users can play different roles, which is relevant in the next layer, to determine the views and actions they can carry out with the tool.</ns0:p><ns0:p>The SL Learning Reinforcement Tool can be used in web or mobile environments. 
It consists of a sublayer called the learning module, which in turn classifies users into three roles: user (a learner), administrator, and instructional designer. These roles exist to strike a balance between separation of duties and flexibility. The user role of the tool provides access to the practice, assessment and statistics modules. The administrator role also has access to these modules, while the instructional designer role can use the practice, assessment and lesson modules. This lesson module is where the instructional design is carried out, that is, the design of each lesson, practice and evaluation mechanisms.</ns0:p><ns0:p>The other sublayer is the Phonological Proximity Component. This sublayer is in charge of interfacing with the following architectural layers, sending requests for signs in phonological close proximity every time that any of the users from any of the mentioned roles require it, as well as reproducing signs through the avatar.</ns0:p><ns0:p>Then we find two parallel layers, actually based on the already operational PIELS platform: first the Phonological Proximity Submodule and aligned to this same level the Sign and Discourses Repository. These layers interact with each other and also with the layer of the SL Learning Reinforcement Tool, previously explained.</ns0:p><ns0:p>The Phonological Proximity Submodule is responsible for receiving requests of similar signs, using a unique sign identifier. The Top-ten Petitioner Component processes these requests and returns the ten signs with the greatest phonological proximity with respect to the one received as a parameter. To do this, a repository called PIELS Similitude Matrix is queried, in which all the signs that make up the LESCO lexicon have been pre-processed. To keep this repository updated, there is a New Signs Similitude Evaluation Component, which receives each new sign included in the PIELS Sign Database from the parallel layer and applies a similarity measure between all the signs (the measure that has worked best is the cosine formula).</ns0:p><ns0:p>The Signs and Discourses Repository contains the database with the LESCO lexicon upto-date and a collection of discourses built through the use of the PIELS platform. It also contains the signing avatar, which is in charge of visually reproducing the signs that it receives by parameter, as well as complete discourses. Naturally, to build these discourses the previously existing signs are used or new signs may be created in its built-in editor as needed. This layer provides a mechanism to embed the visual display of the avatar in the upper layer of the SL Learning Reinforcement Tool, in the modules that so demand it: lessons, practice and instructional design.</ns0:p><ns0:p>Finally, there is the layer called Semantic Disambiguation, which is intended to be used in future work. The Disambiguation Container works through a big data Web corpus and a cognitive computing module <ns0:ref type='bibr' target='#b28'>(Naranjo-Zeled&#243;n et al., 2019)</ns0:ref>. 
It will be used to provide additional functionality to the tool, consisting of determining semantic proximity, that is, signs whose meanings are similar, regardless of whether or not they are similar in shape.</ns0:p></ns0:div> <ns0:div><ns0:head>The SL Learning Reinforcement Tool module</ns0:head><ns0:p>In this paper, we focus on describing the architecture and carefully detailing the SL Learning Reinforcement Tool module, to emphasize the feasibility of our proposal from a practical point of view and to ensure that this has been validated by a group of users with a suitable profile for this task. The use of the tool seeks to demonstrate in a tangible way that it is feasible to incorporate the concept of phonological proximity in a learning reinforcement tool serving as the basis for validating this concept with sign language learners. The tool description and the performed experiments are explained in the two following subsections.</ns0:p></ns0:div> <ns0:div><ns0:head>Tool description</ns0:head><ns0:p>The interface classifies signs into five categories (alphabet, numbers, greetings, Costa Rican geography and colors), displaying a new screen when choosing one of them with a list of available signs. Sequentially, the set of signs pertain to learning levels, ranging from the simplest to the most complex ones. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> exhibits the interface once logged into the system. The adaptation of the software tool is achieved by adding a functionality when choosing a signal. Whenever the user clicks on it, a list of its paronyms is shown, so the avatar reproduces the sign, then the user can request to reproduce the similar signs. In this way, the small differences can easily be determined, fact that alerts the user about how careful they should be when having a conversation with a deaf person.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> exhibits the graphical display once the alphabet lesson module is chosen and clicked on 'Letter-D'. The system shows the top-ten similar signs, the first sign displays a homonym 'Where' (from the starting handshape), and the rest of them are paronyms: 'Letter-K', 'Desamparados' (the name of a crowded city), 'Sunday', 'Dangerous', 'Dog', 'Mouse', 'Nineteen', 'And', 'Ministry/Minister'. If the user clicks on the suggested signs, then the avatar reproduces them. Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> shows the avatar pointing at 'Letter-D', while Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref> shows paronym 'Sunday'. When practicing, the system displays the avatar making a sign. This task can be repeated at will. Then, the student chooses the right word corresponding to the four listed options. Figure <ns0:ref type='figure'>6</ns0:ref> illustrates this stage.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>: Avatar signing 'Letter-U' for practice sessions.</ns0:p><ns0:p>The system guides the student&#180;s learning process by following a sequence of steps. This feature makes possible for the student to take lessons, save his progress status and, have access to new levels within the application to learn new concepts. The system also allows the theme to be changed, so to increase contrast levels facilitating accessibility for low-vision individuals. 
Besides, the student can access statistics to measure their daily progress.

Experimentation

In the experiments carried out, we have evaluated the similarity measure used in our phonological proximity module, and we have conducted a validation with different users regarding the usability and usefulness of the tool.

Phonological proximity module

A similarity measure, applied to the phonological parameters of the signs, was implemented. For each sign, it produces a list of the other signs ordered from highest to lowest similarity. To interact with the students, whenever they select a sign the system displays only the ten most similar results, to avoid overwhelming them with huge outcomes. Table 1 shows the similarity scores obtained for the sign 'Letter-D'. As per the standard cosine formula, a higher number stands for higher similarity; in fact, a score of 100 means that the signs are homonyms. We also present the 26 phonological parameters for the sign 'LETTER-D' and its top-ten similar signs; the matching parameters are highlighted in bold. As depicted in previous research (Naranjo-Zeledón et al., 2020), the results achieved by the cosine formula are based on mapping the phonological parameters of each sign to an array of numbers. These numbers are predefined and represent different phonological characteristics of the signs, such as the hand orientation, the handshape, and the hand spatial location. Therefore, this standard formula is used to measure proximity between such arrays over the n-dimensional space. The formula is shown in Eq. (1):

(1)  cos(x, y) = (x · y) / (||x|| ||y||)

Here x and y are two arrays, both containing 26 entries that map the 13 phonological parameters identified for each hand. If the cosine value is 1, then the two arrays are identical, while a value of 0 means that they do not have anything in common.

Table 1: Phonological parameters for sign 'LETRA-D' and top-ten similar signs.

User validation

This section first presents the importance of user research, which leads to the choice of subjects. Next, we explain what these subjects were asked to do. After that, we provide a rationale for the concept of phonological proximity as the central axis of the validation process. Finally, we explain in detail how usability and utility were validated.

User research is becoming more and more relevant in the field of education and learning, which is why this research adheres to its principles, particularly for validation purposes. As Kao et al. (2018) indicate, iterative user research for products has been conducted in over 50 educational technology companies at different stages of development.
User research has been used as a collaborative and interdisciplinary process gathering together experts from academic fields, teaching and learning sciences, and human-computer interaction, along with software developers, since developing effective educational products require an understanding of many expertise fields.</ns0:p><ns0:p>The profile of the subjects to whom the survey was administered consisted of 12 regular users of technological tools being involved in a process of learning a sign language, 10 of them with a basic knowledge of LESCO and the other 2 in a more advanced level, not experts though. Their ages range from 21 to 52 years, although the average is 31 years of age, with 8 men and 4 women. As for men, the average age is 29 and that of women is 35. A total of 10 individuals out of 12 are novices, while the other 2 have a little more advanced knowledge of LS. Regarding their professions and academic degrees, they are classified into a Doctor of Computer Science, a Bachelor of Administration, an Industrial Design Engineer, four Computer Science undergraduates, four Computer Engineers and an Industrial Production Engineer. All of them used the web version of the tool, in order to facilitate remote interaction between the subjects and the researchers, and to more expeditiously clarify any doubts that may have arisen. We have found that when applying our proposal to these users, who are mostly novice sign language learners, the results are very satisfactory, as will be discussed later.</ns0:p><ns0:p>The subjects had to do two tests, in preparation for which they interacted directly with the tool, indicating in which cases they detected similarity between base signs and signs proposed by the system as highly similar. After performing this exercise with 3 base signs and their 10 similar signs, they were asked to complete the SUS test, as well as answer an open question about their perception of usefulness. To complement the above, they were asked to also give a numerical rating to the utility. Although the tests were carried out remotely, it was possible to observe how the subjects decided relatively quickly if the suggested signs seemed similar or not, without having to ask the avatar to reproduce them several times.</ns0:p><ns0:p>The results obtained in the evaluation of the phonological proximity module are deeply analyzed in our previous paper <ns0:ref type='bibr' target='#b29'>(Naranjo-Zeled&#243;n et al., 2020.</ns0:ref> In our database, we have already mapped each sign to a vector of 26 numerical parameters, each one with a precise phonological meaning. 
The parameters follow this order: left index, left middle, left ring finger, left pinky, left finger separation, left thumb, right index, right middle, right ring finger, right pinky, right finger separation, right thumb, left rotation, left wrist posture, left interiority, right rotation, right wrist posture, right interiority, left laterality, left height, left depth, contact with the left arm, right laterality, right height, right depth, contact with the right arm.</ns0:p><ns0:p>For example, the array <ns0:ref type='table' target='#tab_0'>[1,3,2,2,1,3,1,3,2,2,1,3,3,3,3,2,3,3,2,4,15,3, 2,4,15,3,2</ns0:ref>] contains the 26 parameters that phonologically describe the sign for 'PROTECTION', while the subarray <ns0:ref type='bibr'>[1,</ns0:ref><ns0:ref type='bibr'>3,</ns0:ref><ns0:ref type='bibr'>2,</ns0:ref><ns0:ref type='bibr'>2,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>3,</ns0:ref><ns0:ref type='bibr'>3,</ns0:ref><ns0:ref type='bibr'>3,</ns0:ref><ns0:ref type='bibr'>2,</ns0:ref><ns0:ref type='bibr'>4,</ns0:ref><ns0:ref type='bibr'>15,</ns0:ref><ns0:ref type='bibr'>3,</ns0:ref><ns0:ref type='bibr'>2]</ns0:ref> represents only the left-hand parameters. As explained above, the cosine is a token-based approach, and it was selected because it is the most natural way to handle numerical arrays of parameters, as is the case in this research. An edit-based measure of similarity would force the arrays to be converted to strings, taking care that some parameters have one digit while others have two. On the other hand, domain-dependent measures do not apply to our data, and a hybrid approach presents unnecessary complications. Obviously, phonetic similarity measures are specifically designed for spoken languages, so they are left out in this discussion.</ns0:p><ns0:p>With regard to the user evaluation, we have conducted an extrinsic evaluation through the SUS test, consisting of a questionnaire with ten items and a five Likert scale response for each option, ranging from 'Strongly agree' to 'Strongly disagree' <ns0:ref type='bibr' target='#b6'>(Brooke, 1986)</ns0:ref>. Among its benefits, we can identify that it has become an industry standard, widely referenced in articles and publications. By using SUS, one can make sure of these very desirable characteristics: &#61623; It is extremely easy to administer to participants. &#61623; It can be used on small sample sizes and yet attain reliable results. &#61623; It is valid in effectively differentiating between usable and unusable systems.</ns0:p><ns0:p>In addition to the SUS standardized test, we have considered it appropriate to include an open question of a qualitative nature, where the participants had to answer in a mandatory manner. The question is 'How do you think comparing similar signs has made learning easier or more difficult for you?'. The objective of this question is to evaluate the usefulness of our tool. Then, we proceeded with a last question, to assign a numerical rating to the previous question, on a scale from 1 to 100, worded as follows: 'Based on your answer to the previous question, how would you numerically rate the improvement in learning using similar signs? (1 is the lowest, 100 is the highest)'.</ns0:p><ns0:p>To carry out the test, each participant was summoned individually and given instructions to enter the system so to become familiar with it. 
Then, they were asked to choose 3 signs corresponding to the 'alphabet' group and to analyze their similarity with their corresponding 10 most similar signs, in order to determine the precision threshold of the similarity formula used, as well as the possible need to refine the initial configuration of the signs before doing validations. After corroborating the levels of similarity, they proceeded to answer the SUS questionnaire and the qualitative question, and to assign the numerical rating. The SUS survey format used can be seen in Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>. Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> shows the results of applying the SUS test to the participants, with the 12 participants in the rows and the 10 standard questions in the columns. There is an additional column on the far right, which corresponds to the numerical evaluation of the usefulness of the phonological proximity as perceived by the participants. Regarding the tone of the responses to the qualitative question, Table <ns0:ref type='table'>3</ns0:ref> shows the opinions and their tone for each participant in the study. This tone has been established by the authors as negative, mainly negative, neutral, mainly positive, or positive. The next section provides a broader discussion of the findings presented in both tables. </ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The SUS usability standard test gives a score of 89, which indicates that the tool has an extremely high level of usability, since the average over a large number of studies of this nature is 68, and from 84.1 to 100 the usability falls in the 'A+' percentile, which is the highest <ns0:ref type='bibr' target='#b25'>(Lewis &amp; Sauro, 2018)</ns0:ref>. Such a high ranking also has the desirable characteristic that the product is more likely to be recommended by users to their peers. This characteristic is of particular relevance when it comes to an innovative product, which presents an important differentiation compared to the existing options, in this case in the educational field.</ns0:p><ns0:p>If the extreme ratings, that is, the highest and the lowest, are removed and the score is averaged over the remaining 10 ratings, the result is still very similar, increasing slightly from 89 to 91, which indicates that neither of them has a disproportionate weight in the overall result. On the other hand, looking for an interpretation of these extremes, it should be noted that the lowest score was 60, and it was awarded by a subject who experienced problems during the reproduction of some signs by the avatar, due to a momentary synchronization problem on the platform, which may have negatively affected her perception. The highest score, 97.5, was given by one of the three subjects with the least experience in sign languages. In both cases the extreme ratings seem to have a fairly predictable explanation.</ns0:p><ns0:p>On the other hand, the phonological proximity score, which reflects a numerical evaluation by each participant on a scale from 1 to 100, yields a very satisfactory average of 91.0. The obtained score demonstrates that the tool is useful for our objective of reinforcing sign language learning.</ns0:p><ns0:p>Carrying out the exercise of eliminating the extreme ratings again, it can be seen that the majority grant a 90 or 100 and that only two subjects gave an overall rating of 70. These are the two people who are not experts but have a slightly more advanced knowledge of sign languages.
Again, the results make sense.</ns0:p><ns0:p>The open question showed a positive tone in practically all the answers, which can be synthesized in concepts such as 'usefulness', 'detection of differences' and 'reduce confusion'. These opinions are clearly favorable and reflect satisfaction with the improvement that students perceived when using a computer tool that incorporates the concept of phonological proximity.</ns0:p><ns0:p>The graphic display of the tool is well suited to the visual nature of sign languages. Although it is not the central focus of this research, it is important to highlight that an appropriate graphic design, together with an avatar that reproduces the signs as closely as possible to what the students have learned in class, is decisive for the proposal as a whole to be successful.</ns0:p><ns0:p>We believe there is room for improvement regarding the similarity of some signs that did not seem to represent a contribution for the majority of subjects. The phonological parameterization and the formula used work well in most cases, but in some particular cases the rotation and location of the hand may account for most of the similarity, leaving aside the handshape, which is precisely what the novice student looks at first. Only one sign caused this problem repeatedly, but it deserves attention in future work. Validation was helpful in raising this possibility.</ns0:p><ns0:p>The general conclusion we draw from this analysis of results is that both the tool used and the concept of phonological proximity itself, applied to detecting slight differences, have received the endorsement of the subjects of this study. From both a quantitative and a qualitative point of view, the subjects who collaborated in the validation show a clear acceptance of phonological proximity as a valuable concept to help reinforce their learning.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper presents an architecture to strengthen sign language learning, making use of the phonological proximity concept to improve results. As far as we know, this is the first time this approach has been suggested to reinforce the learning of sign languages, and in particular applied to the Costa Rican Sign Language (LESCO, for its Spanish acronym). The main contributions of our proposal are: (1) a modular architecture of a learning reinforcement environment for sign language that can be applied to different sign languages and environments;</ns0:p><ns0:p>(2) the inclusion of a software tool with a phonological proximity component to enrich the learning process; (3) the identification of homonyms and paronyms to contrast the different degrees of proximity between the signs; (4) the evaluation of the phonological proximity module and the usability and usefulness of the tool.</ns0:p><ns0:p>We describe the operation of our software tool with a graphical interface that classifies concepts and reproduces the selected signs, broadening its current functionality through phonologically similar signs, in other words, signs with similar handshapes. To meet this purpose, we explore the incorporation of homonyms and paronyms.</ns0:p><ns0:p>By allowing the students to compare highly similar signs, we have included the phonological proximity component in an existing tool with a suitable interface. The value of this improvement is to reduce the number of mistakes with those similar signs in a real conversation with a deaf person, since such mistakes can seriously impact the communication and hinder mutual understanding.
A mapping of each sign against the other signs becomes an essential requirement, so that the available lexicon is already characterized in advance. To ease the use, we decided that the interface should list only the ten most similar signs, so that the user is not confused, a situation which would be counterproductive.</ns0:p><ns0:p>We evaluated both the phonological proximity module and the usability and usefulness of the tool. Thanks to the practical application of the concept of phonological proximity, learning is reinforced, as has been validated through our experimentation with sign language students in this research. We conclude that incorporating the phonological proximity concept into this software tool can upgrade the reinforcement of LESCO learning, offering the possibility of applying the same approach of our line of research to other sign languages.</ns0:p><ns0:p>For future work, the use of standardized questionnaires for user experience, such as AttrakDiff or UEQ (User-Experience Questionnaire), along with the SUS test, is to be considered, encompassing a comprehensive evaluation of the tool. We will also study the effect of applying the similarity measure to separate phonological components of signs. This would consist of validating the handshape, orientation and location of the hand as separate components, to determine whether this improves the results. Additionally, we will consider the inclusion of facial gestures, head and trunk movements as possible elements that improve the accuracy of the similarity. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: SL learning reinforcement environment architecture.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: The software basic level interface with signs classified into categories.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Top-ten homonyms/paronyms for 'Letter-D'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Avatar signing homonym 'Letter-D'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Avatar signing paronym 'Sunday'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: SUS (System Usability Scale) standard test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 SL</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,70.87,327.50,672.95' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 : SUS Test and Phonological Proximity Scores. Table 3: Opinion/Judgment of learning improvement by using phonological proximity.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> 
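Table 2 reports the raw SUS answers and the overall score of 89 discussed above. For readers who wish to see how a 0-100 SUS score is obtained from a ten-item answer sheet, the sketch below applies the standard SUS scoring rule (odd items contribute answer minus 1, even items contribute 5 minus answer, and the sum is multiplied by 2.5). The answer sheets in the example are hypothetical placeholders, since the individual responses from Table 2 are not reproduced here.

```python
def sus_score(responses):
    """Standard SUS scoring for one participant.

    `responses` is a list of ten answers on the 1..5 Likert scale, in
    questionnaire order. Odd-numbered items contribute (answer - 1),
    even-numbered items contribute (5 - answer); the sum of the ten
    contributions is multiplied by 2.5, giving a score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten answers in the range 1..5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical answer sheets standing in for the rows of Table 2.
participants = [
    [5, 1, 5, 2, 4, 1, 5, 1, 5, 2],
    [4, 2, 4, 1, 5, 2, 4, 2, 5, 1],
    [3, 3, 4, 2, 3, 2, 4, 3, 3, 2],
]
scores = [sus_score(r) for r in participants]
print(scores, "average:", sum(scores) / len(scores))
```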
"June 18, 2021 Dear Editors: The authors would like to thank the editors and the reviewers for their useful comments that have enabled us to improve the paper. In the present report, all the reviewers' comments have been boxed; they are followed by the changes we have made in the paper. We trust that the manuscript is already suitable for publication in PeerJ. Luis Naranjo-Zeledón University of Alicante On behalf of all authors. Revision Notes with the list of changes we have made on “Architecture design of a reinforcement environment for learning sign languages” (#CS-2021:04:60570:0:1:REVIEW) Reviewer 1 (Anonymous): Basic reporting The abstract is NOT satisfactory because it didn't contain the following parts: i. The importance of or motivation for the research. ii. The issue/argument of the research. iii. The methodology. iv. The result/findings. v. The implications of the result/findings. Answer: The recommendations have been addressed, specifying the requested parts in the abstract. The changes are reflected in the following lines: i. The importance of or motivation for the research: Lines 20-28. ii. The issue / argument of the research: Lines 29-35. iii. The methodology: Lines 35-40. iv. The result / findings: Lines 40-47. v. The implications of the result / findings: Lines 47-49. The modified abstract has been rewritten as follows: Different fields such as linguistics, teaching, and computing have demonstrated special interest in the study of sign languages (SL). However, the processes of teaching and learning these languages turn complex since it is unusual to find people teaching these languages that are fluent in both SL and the native language of the students. The teachings from deaf individuals become unique. Nonetheless, it is important for the student to lean on supportive mechanisms while being in the process of learning an SL. Bidirectional communication between deaf and hearing people through SL is a hot topic to achieve a higher level of inclusion. However, all the processes that convey teaching and learning SL turn difficult and complex since it is unusual to find SL teachers that are fluent also in the native language of the students, making it harder to provide computer teaching tools for different SL. Moreover, the main aspects that a second language learner of an SL finds difficult are phonology, non-manual components, and the use of space (the latter two are specific to SL, not to spoken languages). This proposal appears to be the first of the kind to favor the Costa Rican Sign Language (LESCO, for its Spanish acronym), as well as any other SL. Our research focus stands on reinforcing the learning process of final-user hearing people through a modular architectural design of a learning environment, relying on the concept of phonological proximity within a graphical tool with a high degree of usability. The aim of incorporating phonological proximity is to assist individuals in learning signs with similar handshapes. This architecture separates the logic and processing aspects from those associated with the access and generation of data, which makes it portable to other SL in the future. The methodology used consisted of defining 26 phonological parameters (13 for each hand), thus characterizing each sign appropriately. Then, a similarity formula was applied to compare each pair of signs. With these pre-calculations, the tool displays each sign and its top ten most similar signs. 
A SUS usability test and an open qualitative question were applied, as well as a numerical evaluation to a group of learners, to validate the proposal. In order to reach our research aims, we have analyzed previous work on proposals for teaching tools meant for the student to practice SL, as well as previous work on the importance of phonological proximity in this teaching process. This previous work justifies the necessity of our proposal, whose benefits have been proved through the experimentation conducted by different users on the usability and usefulness of the tool. To meet these needs, homonymous words (signs with the same starting handshape) and paronyms (signs with highly similar handshape), have been included to explore their impact on learning. It allows the possibility to apply the same perspective of our existing line of research to other SL in the future. Basic reporting - Relevant literature review of latest similar research studies on the topic at hand must be discussed Answer: In the Background section, the antecedents of the object of study have been discussed. They have been classified into three groups of great significance, each one explained in separated subsections. First, the importance of phonological proximity is addressed, which is a crucial matter in this study. Then, the similarity measures for phonological proximity are examined, which are the computational mechanism to determine similarity between objects to be compared. Finally, the proposals for teaching tools that have been made for the student to practice sign languages. We believe that based on the exhaustive search performed in scholarly repositories, this literature reflects an up-to-date state of the research on these topics. In lines 207-217 we have added the following paragraph: The current similarity measures have been widely welcomed by the research community and many of them are long-standing, hence the literature on the matter can be classified as classical. Coletti & Bouchon-Meunier (2019) note that a complete review or even a simple listing of all the uses of similarity is impossible. They are used in various tasks ranging from management of data or information, such as content-based information retrieval, text summarization, recommendation systems, to user profile exploitation, and decision-making, to cite only a few. Among the many similarity measures proposed, a broad classification may be of use: the classical crisp context (Choi et al., 2010; Lesot et al., 2009) and the fuzzy context (Bhutani & Rosenfeld, 2003; Bouchon-Meunier et al., 1996; Li et al., 2014; Couso et al., 2013. Due to the nature of our research, we concentrate on the classical crisp context, since the fuzzy scenario does not apply to our object of study. Experimental design - Please cite each equation and clearly explain its terms. Answer: We understand that with this valuable observation, the reviewer refers to clarifying the cosine formula. To accomplish this, the following explanation has been carried out, in lines 428-432: The formula is shown in Eq. (1). (1) Here x and y are two arrays, both of them containing 26 entries that map 13 phonological parameters identified for each hand. If the cosine value is 1, then the two arrays are identical, while a value of 0 means that they do not have anything in common. Experimental design - What are the evaluations used for the verification of results? 
Answer: In lines 473-481 we have added the following text: The results obtained in the evaluation of the phonological proximity module are deeply analyzed in our previous paper (Naranjo-Zeledón et al., 2020. In our database, we have already mapped each sign to a vector of 26 numerical parameters, each one with a precise phonological meaning. The parameters follow this order: left index, left middle, left ring finger, left pinky, left finger separation, left thumb, right index, right middle, right ring finger, right pinky, right finger separation, right thumb, left rotation, left wrist posture, left interiority, right rotation, right wrist posture, right interiority, left laterality, left height, left depth, contact with the left arm, right laterality, right height, right depth, contact with the right arm. Then, in lines 492-514 we establish the following: With regard to the user evaluation, we have conducted an extrinsic evaluation through the SUS test, consisting of a questionnaire with ten items and a five Likert scale response for each option, ranging from ‘Strongly agree’ to ‘Strongly disagree’ (Brooke, 1986). Among its benefits, we can identify that it has become an industry standard, widely referenced in articles and publications. By using SUS, one can make sure of these very desirable characteristics: • It is extremely easy to administer to participants. • It can be used on small sample sizes and yet attain reliable results. • It is valid in effectively differentiating between usable and unusable systems. In addition to the SUS standardized test, we have considered it appropriate to include an open question of a qualitative nature, where the participants had to answer in a mandatory manner. The question is ‘How do you think comparing similar signs has made learning easier or more difficult for you?’. The objective of this question is to evaluate the usefulness of our tool. Then, we proceeded with a last question, to assign a numerical rating to the previous question, on a scale from 1 to 100, worded as follows: ‘Based on your answer to the previous question, how would you numerically rate the improvement in learning using similar signs? (1 is the lowest, 100 is the highest)’. To carry out the test, each participant was summoned individually and given instructions to enter the system so to become familiar with it. Then, they were asked to choose 3 signs corresponding to the ‘alphabet’ group and analyze the similarity with their corresponding 10 most similar signs, to determine the precision threshold of the similarity formula used, as well as the possible need to refine the initial configuration of the signs before doing validations. After corroborating the levels of similarity, they proceeded to answer the SUS questionnaire, the qualitative question, and assign the numerical rating. Validity of the findings - The procedures and analysis of the data is seen to be unclear. Answer: In lines 454-462 we provide more detail on the subjects’ profile, when we state the following text: Their ages range from 21 to 52 years, although the average is 31 years of age, with 8 men and 4 women. As for men, the average age is 29 and that of women is 35. A total of 10 individuals out of 12 are novices, while the other 2 have a little more advanced knowledge of LS. 
Regarding their professions and academic degrees, they are classified into a Doctor of Computer Science, a Bachelor of Administration, an Industrial Design Engineer, four Computer Science undergraduates, four Computer Engineers and an Industrial Production Engineer. All of them used the web version of the tool, in order to facilitate remote interaction between the subjects and the researchers, and to more expeditiously clarify any doubts that may have arisen. Then, in lines 462-464 we emphasize that the results are valuable, particularly for novice users (people just starting to learn sign language) by writing the following text: We have found that when applying our proposal to these users, who are mostly novice sign language learners, the results are very satisfactory, as will be discussed later. Validity of the findings - The discussion is very important in research paper. Nevertheless, this section is short and should be presented completely. Answer: In the lines 533-581 we have written a new section, entirely dedicated to the discussion that the reviewer has so wisely requested. It contains the following text: The SUS usability standard test gives a score of 89, which indicates that the tool has an extremely high level of usability, since the average of a large number of studies of this nature is 68 and from 84.1 to 100 the usability is located in the ‘A+’ percentile, which is the highest (Lewis & Sauro, 2018). Such a high ranking also has the desirable characteristic that the product is more likely to be recommended by users to their peers. This characteristic is of particular relevance when it comes to an innovative product, which presents an important differentiation compared to the existing options, in this case in the educational field. If we examine the extreme ratings, that is, the highest and the lowest, are removed and averaged based on the remaining 10 ratings, the result is still very similar, increasing slightly from 89 to 91, which is indicative that none of them have a greater weight in the overall result. On the other hand, looking for an interpretation of these extremes, it should be noted that the lowest score was 60, and it was awarded by a subject who experienced problems during the reproduction of some signs by the avatar, due to a momentary synchronization problem on the platform, which may have negatively affected her perception. The highest score, 97.5, was given by one of the three subjects with the least experience in sign languages. In both cases it seems that the extreme ratings have a fairly predictable explanation. On the other hand, the phonological proximity score, which reflects a numerical evaluation by each participant in a scale from 1 to 100, yields a very satisfactory average of 91.0. The obtained score demonstrates that the tool is useful for our objective of reinforcing sign language learning. Carrying out the exercise of eliminating extreme ratings again, it can be seen the majority grant a 90 or 100 and that only two subjects gave an overall rating of 70. These are the two people who are not experts but have a little more advanced knowledge in sign languages. Again, the results make sense. The open question showed a positive tone in practically all the answers, which can be synthesized in concepts such as 'usefulness', 'detection of differences' and 'reduce confusion'. 
It is evident that these opinions are favorable and that they in fact reflect satisfaction with respect to the improvement that students perceived when using a computer tool that incorporates the concept of phonological proximity. Obviously, the graphic display of the tool is very useful to match the visual nature of sign languages. Although it is not the central focus of this research, it is important to highlight the fact that the appropriate graphic design and an avatar that reproduces the signs as similar as possible to what the students have learned in class is decisive for the proposal as a whole to be successful. We are of the opinion that there is room for improvement in terms of the similarity of some signs that did not seem to represent a contribution for the majority of subjects. The phonological parameterization and the formula used work well for most cases, but in some particular cases it may be that the rotation and location of the hand account for most of the similarity, leaving aside the handshape, which is precisely what the novice student looks at first. There was only one sign that gave this problem repeatedly, but it deserves attention in future work. Validation was helpful in raising this possibility. The general appreciation that we obtain from what it is stated in this analysis of results is that both the tool used and the concept itself of phonological proximity to detect slight differences have received the endorsement of the subjects of this study. Both from a quantitative and qualitative point of view, the subjects who collaborated in the validation show a clear acceptance of phonological proximity as a valuable concept to help reinforce their learning. Comments for the author - Please improve overall readability of the paper. Answer: Thanks to the reviewer’s comments, the paper has greatly improved its readability, both by proofreading and rearranging considerable portions of text. Reviewer 2 (Luis Quesada): Basic reporting Authors explain why they used SUS. However, they do not indicate this in the abstract. They must add it to the abstract (mentioning SUS and the open question). Answer: The use of SUS and the open question are now indicated in the abstract, in lines 38-40, stating the following: A SUS usability test and an open qualitative question were applied, as well as a numerical evaluation to a group of learners, to validate the proposal. Experimental design The user validation section (Line 360) should be reorganized for better understanding. First, indicate who the users are (377-380). Second, indicate what was done by users (393-399). Finally, explain how usability was evaluated (368-376 + 385-392). Answer: The timely recommendation of the reviewer has been followed, in the order suggested. Additionally, an additional introductory paragraph has been added explaining the new order of the section, on lines 441 to 444, as follows: This section is structured to first present the importance of user research, which leads to the choice of subjects. Then, we proceed to explain what these subjects were asked to do. Then, we provide a rationale for the concept of phonological proximity as the central axis of the validation process. Finally, we explain in detail how usability and utility were validated. 
Experimental design Although the process is well detailed, I suggest: + It would be good to know if participants tried the web or mobile version (Lines 393-399) + A more extensive description of the users profile should be added (i.e., age, sex, or any information to know better those who filled out the SUS). (Lines 377-380) Answer: In lines 460-462, the use of the web version is now explicit: All of them used the web version of the tool, in order to facilitate remote interaction between the subjects and the researchers, in order to more expeditiously clarify any doubts that may have arisen. On the other hand, a more extensive description of the users’ profile is now provided in lines 454-464: Their ages range from 21 to 52 years, although the average is 31 years of age, with 8 men and 4 women. As for men, the average age is 29 and that of women is 35. A total of 10 individuals out of 12 are novices, while the other 2 have a little more advanced knowledge of LS. Regarding their professions and academic degrees, they are classified into a Doctor of Computer Science, a Bachelor of Administration, an Industrial Design Engineer, four Computer Science undergraduates, four Computer Engineers and an Industrial Production Engineer. All of them used the web version of the tool, in order to facilitate remote interaction between the subjects and the researchers, and to more expeditiously clarify any doubts that may have arisen. We have found that when applying our proposal to these users, who are mostly novice sign language learners, the results are very satisfactory, as will be discussed later. Validity of the findings I suggest a more in-depth analysis of the evaluation results, i.e., + is there room for improvement? + Was there a difference between the users? (novices vs advanced), + Participant 2 sus score was the lowest, why could it happen? + Were the participants observed? If yes, then what details did the researchers observe? Answer: + In lines 572-578 we have incorporated this additional text, related to room for improvement: We are of the opinion that there is room for improvement in terms of the similarity of some signs that did not seem to represent a contribution for the majority of subjects. The phonological parameterization and the formula used work well for most cases, but in some particular cases it may be that the rotation and location of the hand account for most of the similarity, leaving aside the handshape, which is precisely what the novice student looks at first. There was only one sign that gave this problem repeatedly, but it deserves attention in future work. Validation was helpful in raising this possibility. + In lines 545-553 we now have stated the differences between the users (the situation with user 2 is addressed herein): If we examine the extreme ratings, that is, the highest and the lowest, are removed and averaged based on the remaining 10 ratings, the result is still very similar, increasing slightly from 89 to 91, which is indicative that none of them have a greater weight in the overall result. On the other hand, looking for an interpretation of these extremes, it should be noted that the lowest score was 60, and it was awarded by a subject who experienced problems during the reproduction of some signs by the avatar, due to a momentary synchronization problem on the platform, which may have negatively affected her perception. The highest score, 97.5, was given by one of the three subjects with the least experience in sign languages. 
In both cases it seems that the extreme ratings have a fairly predictable explanation. Additionally, we added the following explanation in lines 558-561: Carrying out the exercise of eliminating extreme ratings again, it can be seen the majority grant a 90 or 100 and that only two subjects gave an overall rating of 70. These are the two people who are not experts but have a little more advanced knowledge in sign languages. Again, the results make sense. + In lines 470-472 we have added the following remarks about observing the subjects: Although the tests were carried out remotely, it was possible to observe how the subjects decided relatively quickly if the suggested signs seemed similar or not, without having to ask the avatar to reproduce them several times. Comments for the author Authors present the architecture of a reinforcement environment for learning sign languages. They evaluate the usability of an application (based on the proposed architecture) using SUS. The app is for LESCO. The proposal is interesting. The contributions are well detailed. Introduction and backgroud sections are fine. In future studies, I suggest applying questionnaires for UX evaluation (i.e., attrakdiff or UEQ) combined with SUS for a comprehensive evaluation of the tool/app. Answer: Regarding future studies, we appreciate the reviewer’s suggestion and have incorporated the following text in lines 616-618: For future work, the use of standardized questionnaires for user experience, such as AttrakDiff or UEQ (User-Experience Questionnaire), along with the SUS test, are to be considered, encompassing a comprehensive evaluation of the tool. Reviewer 3 (Anonymous): Basic reporting No recommendations were made in this category. Answer: The authors appreciate the review and note that this category is correct. Experimental design No recommendations were made in this category. Answer: The authors appreciate the review and note that this category is correct. Validity of the findings No recommendations were made in this category. Answer: The authors appreciate the review and note that this category is correct. Comments for the author • In the Introduction section, the drawbacks of each conventional technique should be described clearly. • Introduction needs to explain the main contributions of the work more clearly. • The authors should emphasize the difference between other methods to clarify the position of this work further. • The Wide ranges of applications need to be addressed in Introductions • The objective of the research should be clearly defined in the last paragraph of the introduction section. • Add the advantages of the proposed system in one quoted line for justifying the proposed approach in the Introduction section. Answer: In lines 92–121 of the Introduction we now explain in detail the drawbacks of each conventional technique in the following text: Conventional techniques in use, as will be seen in the Background section, have drawbacks because they are specifically designed for some specific sign language. This situation is very possibly since they do not use, or at least do not make explicit, a formalization of the grammar of the specific sign language. This is in direct contrast to our approach, which takes as a starting point the formalization of phonological parameters mapped to integer numbers and their subsequent use by applying similarity measures. We emphasize the difference between other methods to better understand the position of this work. 
In this way, the research community can formalize their own parameters and adopt our approach without the need for any change at the architectural level. In the field at hand, which is education, the proposals revolve around the use of various techniques or approaches, each with its rationale and justification, but at the same time revealing clear disadvantages, as explained below: • Self-assessment open-source software, with web-based tests for adult learners. These Yes/No tests require important improvements to offer a more complete service, such as providing the option to see translations directly after indicating whether the user knows or does not know a sign. • Using hardware devices (wearables, usually Kinect) with recognition of hand movements and guidance to learners. As a general rule, the use of wearables is preferable to be left as the last option, because it is unnatural, it is expensive and the equipment can be lost or damaged, hindering the process. • Educational games (who also use wearables). The same downsides identified in the previous point are faced here. • Incorporation of computer vision to indicate unnatural movements in the novice. While it is true that computer vision is a very fertile field of research today, but it also requires the use of additional equipment. • Teaching fingerspelling in the form of quizzes. In general, fingerspelling presents a communication technique that is too limited and should only be used when the signaling person does not have any other resources to communicate their message at all. • Lexicon teaching proposals. can be incomplete when the desired outcome is to produce real communications, with syntactic connections that make sense to the other party. In lines 66-76 we now provide an explanation of the main contributions: The main contributions of our proposal are: 1. The design of a modular and portable architecture of a learning reinforcement environment for sign languages (it can be applied to different sign languages). 2. The inclusion of a graphical software tool, including a signing avatar and a phonological proximity component meant to enrich the learning process. 3. The identification of homonym and paronym signs that illustrate the different degrees of proximity between the signs. 4. The evaluation of the phonological proximity module at the implementation level, as well as the usability and usefulness of this implementation in a software tool. The architecture design includes the aforementioned software tool, containing a phonological component so to make the learning process more complete. In lines 98-100 we emphasize and clarify the position of our work further, by writing the following text: We emphasize the difference between other methods to better understand the position of this work. In this way, the research community can formalize their own parameters and adopt our approach without the need for any change at the architectural level. The ranges of applications are addressed in the Introduction, in lines 104-121: • Self-assessment open-source software, with web-based tests for adult learners. These Yes/No tests require important improvements to offer a more complete service, such as providing the option to see translations directly after indicating whether the user knows or does not know a sign. • Using hardware devices (wearables, usually Kinect) with recognition of hand movements and guidance to learners. 
As a general rule, the use of wearables is preferable to be left as the last option, because it is unnatural, it is expensive and the equipment can be lost or damaged, hindering the process. • Educational games (who also use wearables). The same downsides identified in the previous point are faced here. • Incorporation of computer vision to indicate unnatural movements in the novice. While it is true that computer vision is a very fertile field of research today, but it also requires the use of additional equipment. • Teaching fingerspelling in the form of quizzes. In general, fingerspelling presents a communication technique that is too limited and should only be used when the signaling person does not have any other resources to communicate their message at all. • Lexicon teaching proposals. can be incomplete when the desired outcome is to produce real communications, with syntactic connections that make sense to the other party. In lines 122-123 of the Introduction, we define the objective of the research. For the sake of clarity, we have kept the last paragraph to explain the structure of the remainder of the paper. The objective has been worded in the following way: Our main objective is to demonstrate that SL learning can be reinforced by technological means incorporating the concept of phonological proximity. The advantages of the proposed system that justify our approach appear in the Introduction section, in lines 126-142: In conclusion, in this paper we provide elements that clarify how to deal with issues of central importance in this type of technologies, hence making clear the advantages of the proposed system, specifically: • Modular architecture, to simplify the maintenance and the incorporation of new functionalities. • Applicable to several sign languages, to take advantage of the conceptual power of our contribution in other languages and other types of projects. • Enrichment of learning environments, to take advantage of opportunities to accelerate and improve the experience. • Differentiation between homonyms and paronyms, to contrast the different degrees of proximity between the signs and, therefore, the need to emphasize the practice where it is most necessary. • Applicable to other environments, offering a portable concept where researchers can incorporate it and take advantage of it. • Usability and usefulness validated through a standard test, to have a high degree of certainty that the concept of proximity is properly supported by a real tool, which in turn is easy to use. "
Here is a paper. Please give your review comments after reading it.
248
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Different fields such as linguistics, teaching, and computing have demonstrated special interest in the study of sign languages (SL). However, the processes of teaching and learning these languages turn complex since it is unusual to find people teaching these languages that are fluent in both SL and the native language of the students. The teachings from deaf individuals become unique. Nonetheless, it is important for the student to lean on supportive mechanisms while being in the process of learning an SL.</ns0:p><ns0:p>Bidirectional communication between deaf and hearing people through SL is a hot topic to achieve a higher level of inclusion. However, all the processes that convey teaching and learning SL turn difficult and complex since it is unusual to find SL teachers that are fluent also in the native language of the students, making it harder to provide computer teaching tools for different SL. Moreover, the main aspects that a second language learner of an SL finds difficult are phonology, non-manual components, and the use of space (the latter two are specific to SL, not to spoken languages). This proposal appears to be the first of the kind to favor the Costa Rican Sign Language (LESCO, for its Spanish acronym), as well as any other SL. Our research focus stands on reinforcing the learning process of final-user hearing people through a modular architectural design of a learning environment, relying on the concept of phonological proximity within a graphical tool with a high degree of usability. The aim of incorporating phonological proximity is to assist individuals in learning signs with similar handshapes. This architecture separates the logic and processing aspects from those associated with the access and generation of data, which makes it portable to other SL in the future. The methodology used consisted of defining 26 phonological parameters (13 for each hand), thus characterizing each sign appropriately.</ns0:p><ns0:p>Then, a similarity formula was applied to compare each pair of signs. With these precalculations, the tool displays each sign and its top ten most similar signs. A SUS usability test and an open qualitative question were applied, as well as a numerical evaluation to a group of learners, to validate the proposal. In order to reach our research aims, we have</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>It is important to lean on learning reinforcement tools to enhance the learning process of a sign language (SL), the knowledge obtained from class work seems to be insufficient to meet this purpose. The use of technology becomes a great option to achieve the objective of learning a sign language because the student faces a new language based on his visual abilities rather than the spoken word. The use of technology can offer easy-to-use visual interfaces permitting the interested party to compare and associate the meaning of signs more accurately. The interested party has the possibility to access these applications if necessary.</ns0:p><ns0:p>A previous research <ns0:ref type='bibr' target='#b29'>(Naranjo-Zeled&#243;n et al. 2020)</ns0:ref>, showed that phonological proximity upgrades different areas within the study of a Sign Language. 
In relation to this particular research, the fundamental characteristics of the Costa Rican Sign Language (LESCO, for its acronym in Spanish) were disclosed regarding phonological proximity for clustering and learning reinforcement purposes.</ns0:p><ns0:p>The main contributions of our proposal are: 1. The design of a modular and portable architecture of a learning reinforcement environment for sign languages (it can be applied to different sign languages).</ns0:p><ns0:p>2. The inclusion of a graphical software tool, including a signing avatar and a phonological proximity component meant to enrich the learning process.</ns0:p><ns0:p>3. The identification of homonym and paronym signs that illustrate the different degrees of proximity between the signs.</ns0:p><ns0:p>4. The evaluation of the phonological proximity module at the implementation level, as well as the usability and usefulness of this implementation in a software tool.</ns0:p><ns0:p>The architecture design includes the aforementioned software tool, containing a phonological component so to make the learning process more complete. This tool was developed by the Costa Rica Institute of Technology. At first, the tool showed concepts classified into different categories <ns0:ref type='bibr'>(colors, alphabet, numbers, etc.)</ns0:ref>. These classifications are grouped into three levels: basic, intermediate, and advanced. If a sign is chosen by the user, an avatar will reproduce it, interfacing at the same time with the PIELS (International Platform for Sign Language Edition, for its acronym in Spanish). So far, it displays content from LESCO, but the design of the tool allows for adaptation to any SL <ns0:ref type='bibr' target='#b33'>(Serrato-Romero and Chac&#243;n-Rivas, 2016)</ns0:ref>.</ns0:p><ns0:p>Our phonological proximity study analyzes homonyms (signs with the same starting handshape), paronyms (similar signs with different meanings), and polysemy which we have determined is very rare in LESCO (same sign with different meanings). Some examples of these phenomena are easy to find in spoken languages such as the paronyms 'affect' and 'effect', or the homonyms of 'book' ('something to read' or 'make a reservation'). Sign languages require to employ visual variables, like handshapes, their location at the pointing spot and facial gestures. The software tool was broadened to examine these correlations and their impact on reinforcing learning.</ns0:p><ns0:p>Conventional techniques in use, as will be seen in the Background section, have drawbacks because they are specifically designed for some specific sign language. This situation is very possibly since they do not use, or at least do not make explicit, a formalization of the grammar of the specific sign language. This is in direct contrast to our approach, which takes as a starting point the formalization of phonological parameters mapped to integer numbers and their subsequent use by applying similarity measures.</ns0:p><ns0:p>We emphasize the difference between other methods to better understand the position of this work. In this way, the research community can formalize their own parameters and adopt our approach without the need for any change at the architectural level. 
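Since our approach takes as a starting point the formalization of phonological parameters mapped to integer numbers, the following sketch illustrates one possible way to express that formalization for a single hand and to flatten two hands into the 26-entry vector used for similarity. The type, field names and example codes are ours and purely illustrative, not the actual PIELS data model; the thirteen per-hand parameters follow the listing given in the validation of the tool (index, middle, ring finger, pinky, finger separation, thumb, rotation, wrist posture, interiority, laterality, height, depth, contact with the arm).

```python
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class HandParameters:
    """Thirteen integer-coded phonological parameters for one hand.

    The concrete integer codes (e.g. which number encodes which handshape
    or height) are illustrative; each sign language project can define its
    own coding without changing the architecture.
    """
    index: int
    middle: int
    ring: int
    pinky: int
    finger_separation: int
    thumb: int
    rotation: int
    wrist_posture: int
    interiority: int
    laterality: int
    height: int
    depth: int
    arm_contact: int

def sign_vector(left: HandParameters, right: HandParameters) -> list[int]:
    """Flatten both hands into a 26-entry vector for the similarity measure.

    The production database orders the 26 entries in interleaved left/right
    blocks, as described later in the paper; a simple concatenation is used
    here only for brevity and does not change the cosine value as long as
    every sign is encoded with the same ordering.
    """
    return list(astuple(left)) + list(astuple(right))

# Hypothetical coding for a one-handed sign: the non-dominant hand at rest.
rest = HandParameters(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
dominant = HandParameters(1, 3, 2, 2, 1, 3, 3, 3, 3, 2, 4, 15, 3)
print(sign_vector(rest, dominant))
```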
In the field at hand, which is education, the proposals revolve around the use of various techniques or approaches, each with its rationale and justification, but at the same time revealing clear disadvantages, as explained below:</ns0:p><ns0:p>&#61623; Self-assessment open-source software, with web-based tests for adult learners. These Yes/No tests require important improvements to offer a more complete service, such as providing the option to see translations directly after indicating whether the user knows or does not know a sign. &#61623; Use of hardware devices (wearables, usually Kinect) with recognition of hand movements and guidance for learners. As a general rule, the use of wearables is preferably left as a last option, because it is unnatural, it is expensive, and the equipment can be lost or damaged, hindering the process. &#61623; Educational games (which also use wearables). The same downsides identified in the previous point apply here. &#61623; Incorporation of computer vision to point out unnatural movements by the novice. While it is true that computer vision is a very fertile field of research today, it also requires the use of additional equipment. &#61623; Teaching fingerspelling in the form of quizzes. In general, fingerspelling is too limited a communication technique and should only be used when the signing person has no other resources at all to communicate their message. &#61623; Lexicon teaching proposals, which can be incomplete when the desired outcome is to produce real communication, with syntactic connections that make sense to the other party.</ns0:p><ns0:p>Our main objective is to demonstrate that SL learning can be reinforced by technological means incorporating the concept of phonological proximity. In turn, we will explain how the phonological components of the SL in question should be parameterized, to incorporate them into an architecture that provides a suitable interface with a signing avatar.</ns0:p><ns0:p>In conclusion, in this paper we provide elements that clarify how to deal with issues of central importance in this type of technology, hence making clear the advantages of the proposed system, specifically: &#61623; Modular architecture, to simplify maintenance and the incorporation of new functionalities. &#61623; Applicability to several sign languages, to take advantage of the conceptual power of our contribution in other languages and other types of projects. &#61623; Enrichment of learning environments, to take advantage of opportunities to accelerate and improve the experience. &#61623; Differentiation between homonyms and paronyms, to contrast the different degrees of proximity between the signs and, therefore, the need to emphasize practice where it is most necessary. &#61623; Applicability to other environments, offering a portable concept that researchers can incorporate and take advantage of. &#61623; Usability and usefulness validated through a standard test, to have a high degree of certainty that the concept of proximity is properly supported by a real tool, which in turn is easy to use.</ns0:p><ns0:p>The next section provides a background of previous work on the subject matter. Then, we present the proposed architecture of the learning reinforcement environment ('Architecture for SL learning reinforcement environment' section).
After that, we illustrate details of its deployment in a software tool showing the experiments carried out with the phonological proximity module and the users' validation ('The SL Learning Reinforcement Tool module' section). Finally, we draw our conclusions, depicting our contribution to the sign language learning process and the future work ('Conclusions' section).</ns0:p></ns0:div> <ns0:div><ns0:head>Background</ns0:head><ns0:p>This section presents the antecedents of the object of study. They have been classified into three groups of great significance, each one explained in the following subsections. First, the importance of phonological proximity is addressed, which is a crucial matter in this study. Then, the similarity measures for phonological proximity are examined, which are the computational mechanism to determine similarity between objects to be compared. Finally, we go over the proposals for teaching tools that have been made for the student to practice sign languages. We believe that based on the exhaustive search performed in scholarly repositories, this literature reflects an up-to-date state of the research on these topics. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The importance of phonological proximity According to <ns0:ref type='bibr' target='#b2'>Baker et al. (2016)</ns0:ref> the main aspects that a second language learner of a sign language finds difficult are phonology, non-manual components, and the grammatical use of space. These features are specific to sign languages and do not take place in spoken languages. Moreover, the phonological inventory of sign languages is completely different from that of spoken languages. Another salient observation of the authors has to do with how among the different phonological parameters, the handshape has the largest number of distinctive possibilities. In the sign languages that have been studied, the number of different handshapes appears to be larger than the number of locations and movements. Iconicity is also to be considered since signs commonly portray iconic features, which means that the handshape resembles parts of the meaning, a phenomenon that is extremely rare to find in spoken languages.</ns0:p><ns0:p>Williams, Stone &amp; Newman (2017) have studied the importance of phonological similarity to facilitate lexical access, that is, the process by which individuals produce a specific word from their mental lexicon or recognize it when it is used by others (American Psychological <ns0:ref type='bibr'>Association, n.d.)</ns0:ref>. This study, rooted in this psycholinguistic aspect, has determined that lexical access in sign language is facilitated by the phonological similarity of the lexical representations in memory. <ns0:ref type='bibr' target='#b22'>Keane et al. (2017)</ns0:ref> have demonstrated that for fingerspelled words in American Sign Language (ASL), the positional similarity score is the description of handshape similarity that best matches the signer perception when asked to rate the phonological proximity. 
The positional similarity approach is superior when compared to the contour difference approach, so in order to define similarity when fingerspelling, it is more important to look at the positional configuration of the handshapes than concentrating on the transitions.</ns0:p><ns0:p>An eye-tracking study on German Sign Language <ns0:ref type='bibr' target='#b35'>(Wienholz et al., 2021)</ns0:ref> recorded eye movements of the participants while watching videos of sentences containing related or unrelated sign pairs, and pictures of a target and some unrelated distractor. The authors concluded that there is a phonological priming effect for sign pairs that share both the handshape and movement while they differ in their location. The results suggest a difference in the contribution of parameters to sign recognition and that sub-lexical features can influence sign language processing.</ns0:p><ns0:p>An experiment conducted by <ns0:ref type='bibr' target='#b15'>Hildebrandt &amp; Corina (2002)</ns0:ref>, revealed that all subjects, regardless of their previous exposure to ASL, categorize signs that share location and movement (and differ in handshape) as highly similar. However, an ulterior examination of additional parameter contrasts revealed that different degrees of previous linguistic knowledge of the signers influenced the way they perceived similarity. So, for instance, the combination of location and handshape is recognized as carrying a higher level of similarity by native signers than by late deaf learners or by hearing signers.</ns0:p></ns0:div> <ns0:div><ns0:head>Similarity measures for phonological proximity</ns0:head><ns0:p>With regard to measuring the phonological proximity in a sign language, different similarity measures can be used. They can be categorized into five categories: Edit-based, Token-based, Hybrid, Structural (Domain-dependent), and Phonetic <ns0:ref type='bibr' target='#b30'>(Naumann &amp; Herschel, 2010;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bisandu, Prasad &amp; Liman, 2019)</ns0:ref>.</ns0:p><ns0:p>Similarity measures can be used, as long as the characteristics of the data used as phonological parameters have been properly characterized. For instance, if the parameters are strings of characters, then the edit-based similarity measurements can satisfy the objective. If the parameters are token sets (this is our case), then token-based measures can be used successfully. Hybrid approaches strive for a balance regarding the response speed of other known measures and the robustness of comparison between all the tokens, so as to find the best matches, both to deal with named entities and to solve problems of misspelling in big data contexts (this fact does not point them as good candidates for sign languages). Phonetic measures, due to their very nature, have been extensively used for spoken languages, so they are not a good choice for sign languages. Finally, the domain-dependent measures use particularities of the data, which do not fit well in corpora of sign languages.</ns0:p><ns0:p>The current similarity measures have been widely welcomed by the research community and many of them are long-standing, hence the literature on the matter can be classified as classical. <ns0:ref type='bibr' target='#b11'>Coletti &amp; Bouchon-Meunier (2019)</ns0:ref> note that a complete review or even a simple listing of all the uses of similarity is impossible. 
They are used in various tasks ranging from management of data or information, such as content-based information retrieval, text summarization, recommendation systems, to user profile exploitation, and decision-making, to cite only a few. Among the many similarity measures proposed, a broad classification may be of use: the classical crisp context <ns0:ref type='bibr' target='#b9'>(Choi et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b23'>Lesot et al., 2009)</ns0:ref> and the fuzzy context <ns0:ref type='bibr' target='#b3'>(Bhutani &amp; Rosenfeld, 2003;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bouchon-Meunier et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Couso et al., 2013)</ns0:ref>. Due to the nature of our research, we concentrate on the classical crisp context, since the fuzzy scenario does not apply to our object of study. The main characteristics of these predominantly classic measures are:</ns0:p><ns0:p>&#61623; Edit-based focusing on the calculation of the changes necessary to produce one string from another, weighing the number of necessary changes (insertions, deletions or modifications) to produce the new string. Hamming distance <ns0:ref type='bibr' target='#b13'>(Hamming, 1950)</ns0:ref> and Levenshtein distance <ns0:ref type='bibr' target='#b24'>(Levenshtein, 1966)</ns0:ref> are the best known. &#61623; Token-based approaches measuring the number of matches between two sets of parameters, (n-gram tokens), where tokens are words or numbers. In this category we can mention Jaccard distance <ns0:ref type='bibr' target='#b17'>(Jaccard, 1912)</ns0:ref> and Cosine distance <ns0:ref type='bibr' target='#b34'>(Singhal, 2001)</ns0:ref>. &#61623; Hybrid strategies comparing strings, using an internal similarity function (Jaro or Levenshtein, for instance). Monge-Elkan <ns0:ref type='bibr' target='#b27'>(Monge &amp; Elkan, 1997)</ns0:ref> and Soft TF-IDF <ns0:ref type='bibr' target='#b10'>(Cohen, Ravikumar &amp; Fienberg, 2003)</ns0:ref> are examples of these techniques. &#61623; Structural proposals focusing on data particularities (Domain-dependent). Dates <ns0:ref type='bibr' target='#b30'>(Naumann &amp; Herschel, 2010)</ns0:ref> is the best-known example.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_2'>2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61623; Phonetic measures matching similar sounds in spoken languages (for example, they give maximum qualification to pairs of words such as 'feelings' and 'fillings'), applying preestablished rules of similar sounds. Soundex <ns0:ref type='bibr'>(Russell &amp; Odell, 1918)</ns0:ref> and K&#246;lner-Phonetik <ns0:ref type='bibr' target='#b32'>(Postel, 1969)</ns0:ref> follow this strategy.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposals for teaching tools</ns0:head><ns0:p>Currently, studies for LESCO seem to be insufficient. That is why this section goes into software-based proposals, despite the fact they do not consider this foremost linguistic concept. Also, some researchers focus on the study of sign languages they can relate to more easily, be it their own country or workplace sign language in use; this situation is very common in the research community. Because of this situation, it seems relevant to mention the authors findings and proposals in this section. 
The education for deaf people or interpreters of sign language is out of the scope of this research. Our main focus stands for reinforcing the learning process for hearing people as final users.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b14'>(Haug &amp; Ebling, 2019</ns0:ref>) a report about the use of open-source software for sign language learning and self-assessment has been made. Another finding has been the web-based test for Swiss German Sign Language (DSGS, for its acronym in Swiss German), designed for adult learners. They gave important feedback on the appropriateness of the DSGS vocabulary self-assessment instrument. This feedback arose inputs for the system improvement. The innovation of this study relies on the fact about using existing open-source software as a starting point to develop and evaluate a DSGS test for self-assessment purposes.</ns0:p><ns0:p>The target of the study in (&#193;lvarez-Robles, &#193;lvarez &amp; Carre&#241;o-Le&#243;n, 2020) is to bring forward an interactive software system (ISS) making use of a hardware device (Leap Motion) so that average users had fluid communication with deaf people. The objective is to allow a natural recognition of hand movements by helping the average users learning Mexican Sign Language (MSL) at the same time. Also, through gamification techniques, it would permit the user to learn and communicate with deaf people. Related to British Sign Language (BSL), there is a lack of games available in the marketplace for teaching purposes indicating that minimal efforts have been made to meet this objective <ns0:ref type='bibr' target='#b20'>(Kale, 2014)</ns0:ref>. The intention of the study is to develop a prototype using the Microsoft Kinect, to help teachers educate young students. This prototype would teach basic BSL, by using JavaScript and HTML5 in a web browser. Positive feedback coming from interviews and playtests among 10 sign language experts unfamiliar with games technology to teach sign language, was also collected. They indicated that this prototype could be used as complement for those conventional teaching methods.</ns0:p><ns0:p>Another research <ns0:ref type='bibr' target='#b16'>(Huenerfauth et al, 2016)</ns0:ref> revealed there is a lack of interactive tools for those students learning ASL. These tools might provide them feedback on their signing accuracy, whenever their ASL teacher is not available for them. A software system project was also performed by utilizing a Kinect camera. By incorporating computer vision, this software can identify aspects of signing by showing non-natural movements so to provide feedback to the students in their won practice. This tool is not supposed to replace feedback from ASL teachers. However, the tool can detect errors. Students state it is better for them to have tools able to provide feedback like videos helping with error minimization, mainly time-aligned with their signing.</ns0:p><ns0:p>Learning sign language is a task, commonly performed in peer groups, with few study materials <ns0:ref type='bibr' target='#b19'>(Joy, 2019)</ns0:ref>. According to their opinion, fingerspelled sign learning turns into the initial stage of sign learning used when there is no corresponding sign, or the signer is not aware of it. Since most of the existing tools are costly because of the external sensors they use, they suggested SignQuiz, a low-cost web-based fingerspelling learning application for Indian Sign Language (ISL), with automatic sign language recognition. 
This application has been the first endeavor in ISL for learning signs using a deep neural network. The results reveal that SignQuiz is a better option than printed medium.</ns0:p><ns0:p>There is another available proposal for Chinese Sign Language, by <ns0:ref type='bibr' target='#b7'>Chai et al. (2017)</ns0:ref>. They indicate that using a computer-aided tool known as SignInstructor can offer an effective and efficient learning means. Even they go far beyond stating that the intervention of human teachers is no longer needed, and that the sign language learning is highly effective showing an outstanding score, even higher than the one obtained with face-to-face learning. The system has three modules: 1) a multimodal player for standard materials, including videos, postures, figures, and text; 2) online sign recognition by means of Kinect; 3) an automatic evaluation module.</ns0:p><ns0:p>There is a proposal for a Ghanaian Interactive Sign Language (GISL) Tutor <ns0:ref type='bibr' target='#b31'>(Osei, 2012)</ns0:ref>. This interactive tutor becomes the first computer-based for this sign language. It was specifically designed to teach vocabulary of Ghanaian-specific signs. Those Ghanaians who were involved with this tutor and tested it, said they would even like to have more available signs. The GISL&#180;s Tutor main purpose is letting Ghanaian-specific signs be accessible to anyone interested in using this tool, by displaying pre-recorded lessons with the help of a computerized avatar.</ns0:p><ns0:p>We can conclude this section affirming that, to the best of our knowledge, there are no documented proposals regarding software-assisted learning of sign languages that exploit the concept of phonological proximity. After having studied the background of this topic, it is clear that this research and its corresponding proposal is pertinent, insofar it explains in detail the mechanisms that can be used to incorporate a component of phonological proximity to reinforce the learning of any sign language.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture for SL learning reinforcement environment</ns0:head><ns0:p>The architecture of the tool has been conceived to show a high level of modularity, as well as to separate the logic and processing aspects from those associated with the access and generation of data. Obviously, by relying on several existing components, the design must show all the interdependencies that this implies.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows the general architecture of the SL learning reinforcement environment, which consists of four layers, ranging from those interacting with the users to those providing signs and processing modules reflecting logically derived similitude relations. These layers are:</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>1) A graphical user interface, both for web browsers and mobile devices.</ns0:p><ns0:p>2) The SL Learning Reinforcement Tool.</ns0:p><ns0:p>3) An interface with a Phonological Proximity Submodule and a Signs and Discourses repository. 4) A semantic disambiguation module.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>: SL learning reinforcement environment architecture.</ns0:p><ns0:p>The user interface provides access through any device that allows running a conventional web browser or running an application on a commonly used mobile device. 
The users can play different roles, which is relevant in the next layer, to determine the views and actions they can carry out with the tool.</ns0:p><ns0:p>The SL Learning Reinforcement Tool can be used in web or mobile environments. It consists of a sublayer called the learning module, which in turn classifies users into three roles: user (a learner), administrator, and instructional designer. These roles exist to strike a balance between separation of duties and flexibility. The user role of the tool provides access to the practice, assessment and statistics modules. The administrator role also has access to these modules, while the instructional designer role can use the practice, assessment and lesson modules. This lesson module is where the instructional design is carried out, that is, the design of each lesson, practice and evaluation mechanisms.</ns0:p><ns0:p>The other sublayer is the Phonological Proximity Component. This sublayer is in charge of interfacing with the following architectural layers, sending requests for signs in phonological close proximity every time that any of the users from any of the mentioned roles require it, as well as reproducing signs through the avatar.</ns0:p><ns0:p>Then we find two parallel layers, actually based on the already operational PIELS platform: first the Phonological Proximity Submodule and aligned to this same level the Sign and Discourses Repository. These layers interact with each other and also with the layer of the SL Learning Reinforcement Tool, previously explained.</ns0:p><ns0:p>The Phonological Proximity Submodule is responsible for receiving requests of similar signs, using a unique sign identifier. The Top-ten Petitioner Component processes these requests and returns the ten signs with the greatest phonological proximity with respect to the one received as a parameter. To do this, a repository called PIELS Similitude Matrix is queried, in which all the signs that make up the LESCO lexicon have been pre-processed. To keep this repository updated, there is a New Signs Similitude Evaluation Component, which receives each new sign included in the PIELS Sign Database from the parallel layer and applies a similarity measure between all the signs (the measure that has worked best is the cosine formula).</ns0:p><ns0:p>The Signs and Discourses Repository contains the database with the LESCO lexicon upto-date and a collection of discourses built through the use of the PIELS platform. It also contains the signing avatar, which is in charge of visually reproducing the signs that it receives by parameter, as well as complete discourses. Naturally, to build these discourses the previously existing signs are used or new signs may be created in its built-in editor as needed. This layer PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>Computer Science provides a mechanism to embed the visual display of the avatar in the upper layer of the SL Learning Reinforcement Tool, in the modules that so demand it: lessons, practice and instructional design.</ns0:p><ns0:p>Finally, there is the layer called Semantic Disambiguation, which is intended to be used in future work. The Disambiguation Container works through a big data Web corpus and a cognitive computing module <ns0:ref type='bibr' target='#b28'>(Naranjo-Zeled&#243;n et al., 2019)</ns0:ref>. 
It will be used to provide additional functionality to the tool, consisting of determining semantic proximity, that is, signs whose meanings are similar, regardless of whether or not they are similar in shape.</ns0:p></ns0:div> <ns0:div><ns0:head>The SL Learning Reinforcement Tool module</ns0:head><ns0:p>In this paper, we focus on describing the architecture and carefully detailing the SL Learning Reinforcement Tool module, to emphasize the feasibility of our proposal from a practical point of view and to ensure that this has been validated by a group of users with a suitable profile for this task. The use of the tool seeks to demonstrate in a tangible way that it is feasible to incorporate the concept of phonological proximity in a learning reinforcement tool serving as the basis for validating this concept with sign language learners. The tool description and the performed experiments are explained in the two following subsections.</ns0:p></ns0:div> <ns0:div><ns0:head>Tool description</ns0:head><ns0:p>The interface classifies signs into five categories (alphabet, numbers, greetings, Costa Rican geography and colors), displaying a new screen when choosing one of them with a list of available signs. Sequentially, the set of signs pertain to learning levels, ranging from the simplest to the most complex ones. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> exhibits the interface once logged into the system. The adaptation of the software tool is achieved by adding a functionality when choosing a signal. Whenever the user clicks on it, a list of its paronyms is shown, so the avatar reproduces the sign, then the user can request to reproduce the similar signs. In this way, the small differences can easily be determined, fact that alerts the user about how careful they should be when having a conversation with a deaf person.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> exhibits the graphical display once the alphabet lesson module is chosen and clicked on 'Letter-D'. The system shows the top-ten similar signs, the first sign displays a homonym 'Where' (from the starting handshape), and the rest of them are paronyms: 'Letter-K', 'Desamparados' (the name of a crowded city), 'Sunday', 'Dangerous', 'Dog', 'Mouse', 'Nineteen', 'And', 'Ministry/Minister'. If the user clicks on the suggested signs, then the avatar reproduces them. Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> shows the avatar pointing at 'Letter-D', while Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> Manuscript to be reviewed When practicing, the system displays the avatar making a sign. This task can be repeated at will. Then, the student chooses the right word corresponding to the four listed options. Figure <ns0:ref type='figure'>6</ns0:ref> illustrates this stage.</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>: Avatar signing 'Letter-U' for practice sessions.</ns0:p><ns0:p>The system guides the student&#180;s learning process by following a sequence of steps. This feature makes possible for the student to take lessons, save his progress status and, have access to new levels within the application to learn new concepts. The system also allows the theme to be changed, so to increase contrast levels facilitating accessibility for low-vision individuals. 
In addition, the student can access statistics to track their daily progress.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimentation</ns0:head><ns0:p>The objectives of our experimentation were to evaluate the similarity measure used in the proposal and to validate the tool. The methodology was therefore oriented towards these objectives. First, each sign was defined using 26 phonological parameters (13 for each hand), and then a similarity formula was applied to compare each pair of signs. These precomputed scores were used to display each sign together with its top-ten similar signs. Finally, the SUS usability standard test, an open qualitative question and a numerical rating were administered to a group of students to validate the proposal.</ns0:p><ns0:p>As will be seen later, in the user validation subsection, user research is a method that naturally fits the type of research we are presenting. User research is meant to represent a strong foundation for design decisions and general strategy. It helps in creating high-quality products for end-users, with the necessary data to back the strategy and design decisions. It also helps to identify early adopters of the product, hence discovering people who can give contextual feedback from the early stages of development.</ns0:p><ns0:p>In the two following subsections, we explain the experiments carried out to evaluate the similarity measure used in our phonological proximity module, and the validation with different users regarding the usability and usefulness of the tool.</ns0:p></ns0:div> <ns0:div><ns0:head>Phonological proximity module</ns0:head><ns0:p>A similarity measure, applied to the phonological parameters of the signs, was implemented. For each sign, it produces a list of the remaining signs ordered from highest to lowest similarity. Whenever a student selects a sign, the system displays only the ten most similar results, to avoid overwhelming the user with a long list of outcomes. Table <ns0:ref type='table'>1</ns0:ref> shows the similarity scores obtained for the sign 'Letter-D'. With the standard cosine formula, a higher value indicates higher similarity; a score of 100 means that the signs are homonyms. We also present the 26 phonological parameters for the sign 'LETTER-D' and its top-ten similar signs, with matching parameters highlighted in bold. As described in previous research <ns0:ref type='bibr' target='#b29'>(Naranjo-Zeledón et al., 2020)</ns0:ref>, the results achieved with the cosine formula are based on mapping the phonological parameters of each sign to an array of numbers. These numbers are predefined and encode different phonological characteristics of the signs, such as the hand orientation, the handshape, and the spatial location of the hand. This standard formula is therefore used to measure proximity between signs in the n-dimensional parameter space. The formula is shown in Eq. (<ns0:ref type='formula'>1</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_0'>cos(x, y) = (x · y) / (||x|| ||y||)    (1)</ns0:formula><ns0:p>Here x and y are two arrays, each containing 26 entries that map the 13 phonological parameters identified for each hand. If the cosine value is 1, then the two arrays are identical, while a value of 0 means that they have nothing in common.
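The paper does not include the code that applies Eq. (1); the following minimal Python sketch illustrates the idea under stated assumptions: the glosses and 26-entry parameter vectors below are invented for illustration and are not taken from the PIELS database, and the score is scaled to 0-100 as in Table 1.

```python
import numpy as np

# Hypothetical parameter vectors: 26 entries per sign (13 per hand), as described above.
# The glosses and values are illustrative only, not taken from the PIELS database.
signs = {
    "LETTER-D": np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 3,
                          3, 3, 2, 3, 3, 2, 4, 15, 3, 2, 4, 15, 3]),
    "SUNDAY":   np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 2, 3,
                          3, 3, 2, 3, 3, 2, 4, 14, 3, 2, 4, 15, 3]),
    "WHERE":    np.array([1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 3,
                          3, 3, 2, 3, 3, 2, 4, 15, 3, 2, 4, 15, 3]),
}

def cosine_similarity(x, y):
    """Eq. (1): cos(x, y) = (x . y) / (||x|| ||y||), scaled to the 0-100 range."""
    return 100.0 * float(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))

def top_similar(query_gloss, k=10):
    """Return the k signs most similar to the query, ordered by decreasing score."""
    query = signs[query_gloss]
    scores = [(gloss, cosine_similarity(query, vec))
              for gloss, vec in signs.items() if gloss != query_gloss]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:k]

print(top_similar("LETTER-D"))  # e.g. [('WHERE', 100.0), ('SUNDAY', ...)]
```

In the deployed environment, such scores would be pre-computed and stored in the PIELS Similitude Matrix, so the Top-ten Petitioner Component only needs to look them up.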
Table <ns0:ref type='table'>1</ns0:ref>: Phonological parameters for sign 'LETRA-D' and top-ten similar signs.</ns0:p></ns0:div> <ns0:div><ns0:head>User validation</ns0:head><ns0:p>This section is structured to first present the importance of user research, which leads to the choice of subjects. Then, we proceed to explain what these subjects were asked to do. Then, we provide a rationale for the concept of phonological proximity as the central axis of the validation process. Finally, we explain in detail how usability and utility were validated.</ns0:p><ns0:p>User research is becoming more and more relevant in the field of education and learning, which is why this research adheres to its principles, particularly for validation purposes. As <ns0:ref type='bibr' target='#b21'>Kao et al. (2018)</ns0:ref> indicate, iterative user research for products has been conducted in over 50 educational technology companies at different stages of development. User research has been used as a collaborative and interdisciplinary process gathering together experts from academic fields, teaching and learning sciences, and human-computer interaction, along with software developers, since developing effective educational products require an understanding of many expertise fields.</ns0:p><ns0:p>The profile of the subjects to whom the survey was administered consisted of 12 regular users of technological tools being involved in a process of learning a sign language, 10 of them with a basic knowledge of LESCO and the other 2 in a more advanced level, not experts though. Their ages range from 21 to 52 years, although the average is 31 years of age, with 8 men and 4 women. As for men, the average age is 29 and that of women is 35. A total of 10 individuals out of 12 are novices, while the other 2 have a little more advanced knowledge of LS. Regarding their professions and academic degrees, they are classified into a Doctor of Computer Science, a PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Bachelor of Administration, an Industrial Design Engineer, four Computer Science undergraduates, four Computer Engineers and an Industrial Production Engineer. All of them used the web version of the tool, in order to facilitate remote interaction between the subjects and the researchers, and to more expeditiously clarify any doubts that may have arisen. We have found that when applying our proposal to these users, who are mostly novice sign language learners, the results are very satisfactory, as will be discussed later.</ns0:p><ns0:p>The subjects had to do two tests, in preparation for which they interacted directly with the tool, indicating in which cases they detected similarity between base signs and signs proposed by the system as highly similar. After performing this exercise with 3 base signs and their 10 similar signs, they were asked to complete the SUS test, as well as answer an open question about their perception of usefulness. To complement the above, they were asked to also give a numerical rating to the utility. 
Although the tests were carried out remotely, it was possible to observe that the subjects decided relatively quickly whether the suggested signs seemed similar or not, without having to ask the avatar to reproduce them several times.</ns0:p><ns0:p>The results obtained in the evaluation of the phonological proximity module are analyzed in depth in our previous paper <ns0:ref type='bibr' target='#b29'>(Naranjo-Zeledón et al., 2020)</ns0:ref>. In our database, we have already mapped each sign to a vector of 26 numerical parameters, each one with a precise phonological meaning. The parameters follow this order: left index, left middle, left ring finger, left pinky, left finger separation, left thumb, right index, right middle, right ring finger, right pinky, right finger separation, right thumb, left rotation, left wrist posture, left interiority, right rotation, right wrist posture, right interiority, left laterality, left height, left depth, contact with the left arm, right laterality, right height, right depth, contact with the right arm.</ns0:p><ns0:p>For example, the array [1, 3, 2, 2, 1, 3, 1, 3, 2, 2, 1, 3, 3, 3, 3, 2, 3, 3, 2, 4, 15, 3, 2, 4, 15, 3, 2] contains the 26 parameters that phonologically describe the sign for 'PROTECTION', while the subarray [1, 3, 2, 2, 1, 3, 3, 3, 2, 4, 15, 3, 2] represents only the left-hand parameters. As explained above, the cosine is a token-based approach, and it was selected because it is the most natural way to handle numerical arrays of parameters, as is the case in this research. An edit-based similarity measure would force the arrays to be converted to strings, taking care that some parameters have one digit while others have two. On the other hand, domain-dependent measures do not apply to our data, and a hybrid approach presents unnecessary complications. Obviously, phonetic similarity measures are specifically designed for spoken languages, so they are left out of this discussion.</ns0:p><ns0:p>With regard to the user evaluation, we have conducted an extrinsic evaluation through the SUS test, consisting of a questionnaire with ten items and a five-point Likert scale response for each item, ranging from 'Strongly agree' to 'Strongly disagree' <ns0:ref type='bibr' target='#b6'>(Brooke, 1986)</ns0:ref>.
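The paper reports per-participant SUS scores and an overall score of 89 later in the Discussion, but does not show the scoring computation. As a reference, the sketch below applies the standard published SUS scoring rule (this is the generic formula for the questionnaire, not code taken from the authors' tool): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    """Compute the standard SUS score from ten Likert responses (1-5).

    Odd items (1st, 3rd, ...) are positively worded: they contribute response - 1.
    Even items are negatively worded: they contribute 5 - response.
    The sum of contributions (0-40) is scaled by 2.5 to the 0-100 range.
    """
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical answers from one participant (not actual study data):
example = [5, 1, 5, 2, 4, 1, 5, 1, 5, 2]
print(sus_score(example))  # 92.5
```

Averaging the per-participant scores obtained in this way yields the overall value of 89 discussed below.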
Among its benefits, we can identify that it has become an industry standard, widely referenced in articles and publications. By using SUS, one can make sure of these very desirable characteristics: &#61623; It is extremely easy to administer to participants. &#61623; It can be used on small sample sizes and yet attain reliable results. &#61623; It is valid in effectively differentiating between usable and unusable systems. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In addition to the SUS standardized test, we have considered it appropriate to include an open question of a qualitative nature, where the participants had to answer in a mandatory manner. The question is 'How do you think comparing similar signs has made learning easier or more difficult for you?'. The objective of this question is to evaluate the usefulness of our tool. Then, we proceeded with a last question, to assign a numerical rating to the previous question, on a scale from 1 to 100, worded as follows: 'Based on your answer to the previous question, how would you numerically rate the improvement in learning using similar signs? (1 is the lowest, 100 is the highest)'.</ns0:p><ns0:p>To carry out the test, each participant was summoned individually and given instructions to enter the system so to become familiar with it. Then, they were asked to choose 3 signs corresponding to the 'alphabet' group and analyze the similarity with their corresponding 10 most similar signs, to determine the precision threshold of the similarity formula used, as well as the possible need to refine the initial configuration of the signs before doing validations. After corroborating the levels of similarity, they proceeded to answer the SUS questionnaire, the qualitative question, and assign the numerical rating. The SUS survey format used can be seen in Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> shows the results of applying the SUS test to the participants, that is, the 12 participants in the rows and the 10 standard questions in the columns. There is an additional column on the far right, which corresponds to the numerical evaluation of the usefulness of the phonological proximity as perceived by the participants. Regarding the tone of the responses to the qualitative question, Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> shows the opinions and their tone for each participant in the study. This tone has been established by the authors as negative, mainly negative, neutral, mainly positive, or positive. The next section provides a broader discussion of the findings presented in both tables. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The SUS usability standard test gives a score of 89, which indicates that the tool has an extremely high level of usability, since the average of a large number of studies of this nature is 68 and from 84.1 to 100 the usability is located in the 'A+' percentile, which is the highest <ns0:ref type='bibr' target='#b25'>(Lewis &amp; Sauro, 2018)</ns0:ref>.</ns0:p><ns0:p>Such a high ranking also has the desirable characteristic that the product is more likely to be recommended by users to their peers. 
This characteristic is of particular relevance when it comes to an innovative product, which presents an important differentiation compared to the existing options, in this case in the educational field.</ns0:p><ns0:p>If we examine the extreme ratings, that is, the highest and the lowest, are removed and averaged based on the remaining 10 ratings, the result is still very similar, increasing slightly from 89 to 91, which is indicative that none of them have a greater weight in the overall result. On the other hand, looking for an interpretation of these extremes, it should be noted that the lowest score was 60, and it was awarded by a subject who experienced problems during the reproduction of some signs by the avatar, due to a momentary synchronization problem on the platform, which may have negatively affected her perception. The highest score, 97.5, was given by one of the three subjects with the least experience in sign languages. In both cases it seems that the extreme ratings have a fairly predictable explanation.</ns0:p><ns0:p>On the other hand, the phonological proximity score, which reflects a numerical evaluation by each participant in a scale from 1 to 100, yields a very satisfactory average of 91.0. The obtained score demonstrates that the tool is useful for our objective of reinforcing sign language learning.</ns0:p><ns0:p>Carrying out the exercise of eliminating extreme ratings again, it can be seen the majority grant a 90 or 100 and that only two subjects gave an overall rating of 70. These are the two people who are not experts but have a little more advanced knowledge in sign languages. Again, the results make sense.</ns0:p><ns0:p>The open question showed a positive tone in practically all the answers, which can be synthesized in concepts such as 'usefulness', 'detection of differences' and 'reduce confusion'. It is evident that these opinions are favorable and that they in fact reflect satisfaction with respect to the improvement that students perceived when using a computer tool that incorporates the concept of phonological proximity.</ns0:p><ns0:p>Obviously, the graphic display of the tool is very useful to match the visual nature of sign languages. Although it is not the central focus of this research, it is important to highlight the fact that the appropriate graphic design and an avatar that reproduces the signs as similar as possible to what the students have learned in class is decisive for the proposal as a whole to be successful.</ns0:p><ns0:p>We are of the opinion that there is room for improvement in terms of the similarity of some signs that did not seem to represent a contribution for the majority of subjects. The phonological parameterization and the formula used work well for most cases, but in some particular cases it may be that the rotation and location of the hand account for most of the The general appreciation that we obtain from what it is stated in this analysis of results is that both the tool used and the concept itself of phonological proximity to detect slight differences have received the endorsement of the subjects of this study. Both from a quantitative and qualitative point of view, the subjects who collaborated in the validation show a clear acceptance of phonological proximity as a valuable concept to help reinforce their learning.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper presents an architecture to strengthen the sign language learning making use of phonological proximity concept to improve results. 
As far as we know, this is the first time this approach has been suggested in relation to reinforce the learning of sign languages and, particularly applied to the Costa Rican Sign Language (LESCO, for its Spanish acronym). The main contributions of our proposal are: (1) a modular architecture meant to reinforce different sign languages learning; (2) the inclusion of a software tool with a phonological proximity component to assist in the learning process; (3) the identification of homonyms and paronyms to contrast the proximity levels between the signs; (4) a thorough evaluation of our phonological proximity module and the usability and usefulness of the tool.</ns0:p><ns0:p>We describe the operation of our software tool with a graphical interface that classifies concepts and reproduces those selected signs, broadening its current functionality through phonologically similar signs, in other words, with similar handshapes. To meet this purpose, we explore the incorporation of homonymous and paronyms.</ns0:p><ns0:p>By allowing the students to compare pretty similar signs, we have included the phonological proximity component into an existing tool with a suitable interface. The value of this improvement is to reduce the number of mistakes of those similar signs in a real conversation with a deaf person since this can seriously impact the communication and hinder the understanding between each other. A mapping of the signs with the other signs becomes an essential requirement to dispose of the available lexicon in advance. To ease the use, we decided that the interface should list only ten similar signs, so the user might not be confused, situation which would be counterproductive.</ns0:p><ns0:p>We evaluated both the phonological proximity module and the usability and usefulness of the tool. Thanks to the practical application of the concept of phonological proximity, learning is reinforced, as has been validated through our experimentation with sign language students in this research. We conclude that the incorporation of the phonological proximity concept to this software tool can upgrade the reinforcement of LESCO learning, offering the possibility of using the same approach of our line of research in other sign languages.</ns0:p><ns0:p>For future work, the use of standardized questionnaires for user experience, such as AttrakDiff or UEQ (User-Experience Questionnaire), along with the SUS test, are to be considered, encompassing a comprehensive evaluation of the tool. We will also study the effect of using the similarity measure on separate phonological components of signs. This would consist of validating the handshape, orientation and location of the hand as separate components, to determine if this produces improvement in the results. Additionally, we will consider the inclusion of facial gestures, head and trunk movements as possible elements that improve the accuracy of the similarity.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2: The software basic level interface with signs classified into categories.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure3exhibits the graphical display once the alphabet lesson module is chosen and clicked on 'Letter-D'. 
The system shows the top-ten similar signs, the first sign displays a homonym 'Where' (from the starting handshape), and the rest of them are paronyms: 'Letter-K', 'Desamparados' (the name of a crowded city), 'Sunday', 'Dangerous', 'Dog', 'Mouse', 'Nineteen', 'And', 'Ministry/Minister'. If the user clicks on the suggested signs, then the avatar reproduces them. Figure4shows the avatar pointing at 'Letter-D', while Figure5shows paronym 'Sunday'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Top-ten homonyms/paronyms for 'Letter-D'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Avatar signing homonym 'Letter-D'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Avatar signing paronym 'Sunday'.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: SUS (System Usability Scale) standard test.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021) Manuscript to be reviewed Computer Science similarity, leaving aside the handshape, which is precisely what the novice student looks at first. There was only one sign that gave this problem repeatedly, but it deserves attention in future work. Validation was helpful in raising this possibility.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,70.87,327.50,672.95' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 : SUS Test and Phonological Proximity Scores.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 : Opinion/Judgment of learning improvement by using phonological proximity.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60570:2:0:NEW 30 Aug 2021)</ns0:note></ns0:figure> </ns0:body> "
"August 30, 2021 Dear Editors: The authors would like to thank the editors and the reviewers for their useful comments that have enabled us to improve the paper. In the present report, all the reviewers' comments have been boxed; they are followed by the changes we have made in the paper. We trust that the manuscript is already suitable for publication in PeerJ. Luis Naranjo-Zeledón University of Alicante On behalf of all authors. Revision Notes with the list of changes we have made on “Architecture design of a reinforcement environment for learning sign languages” (#CS-2021:04:60570:0:1:REVIEW) Reviewer 1 (Anonymous): Basic reporting - Make sure the Conclusion succinctly summarizes the paper. It should not repeat phrases from the Introduction! Answer: The recommendation has been addressed. The changes to avoid repeated phrases from the Introduction are reflected in lines 611-615 as follows: The main contributions of our proposal are: (1) a modular architecture meant to reinforce different sign languages learning; (2) the inclusion of a software tool with a phonological proximity component to assist in the learning process; (3) the identification of homonyms and paronyms to contrast the proximity levels between the signs; (4) a thorough evaluation of our phonological proximity module and the usability and usefulness of the tool. Experimental design - The authors should further add an explanation about the research method. Answer: The recommendation has been addressed. The changes are reflected in lines 419-434 as follows: The objectives of our experimentation consisted of evaluating the similarity measure used in the proposal and the validation of the tool. Therefore, the methodology was oriented to achieve these objectives. First, each sign was defined using 26 phonological parameters (13 for each hand), and then a similarity formula was applied to compare each pair of signs. These previous calculations were used to display each sign and its top-ten similar signs. Finally, the SUS usability standard test and an open qualitative question were applied, as well as a numerical evaluation to a group of students, to validate the proposal. As will be seen later, in the user validation subsection, user research is a method that naturally fits the type of research we are presenting. User research is meant to represent a strong foundation for design decisions and general strategy. It helps in creating high-quality products for end-users, with the necessary data to back the strategy and design decisions. It also helps to identify early adopters of the product, hence discovering people who can give contextual feedback from the early stages of development. In the two following subsections, we explain the experiments carried out to evaluate the similarity measure we have used in our phonological proximity module, and the validation with different users regarding the usability and usefulness of the tool. Validity of the findings NA Answer: The authors acknowledge this appreciation and assume that the reviewer has no comments on this. Additional comments - There are only 2 studies from 2020 referred in this paper. - Authors should add the most recent reference: - Improved VGG Model for Road Traffic Sign Recognition. Answer: Regarding the inclusion of the study mentioned by the reviewer, the authors consider that it is not relevant or useful for our proposal. Thanks to the reviewer's recommendation, a more recent reference (from 2021) directly related to the object of study has been added. 
This can be seen in the lines 184-190 as follows: An eye-tracking study on German Sign Language (Wienholz et al., 2021) recorded eye movements of the participants while watching videos of sentences containing related or unrelated sign pairs, and pictures of a target and some unrelated distractor. The authors concluded that there is a phonological priming effect for sign pairs that share both the handshape and movement while they differ in their location. The results suggest a difference in the contribution of parameters to sign recognition and that sub-lexical features can influence sign language processing. In the References section, the new entry has been added (lines 751-753): Wienholz, A., Nuhbalaoglu, D., Steinbach, M., Herrmann, A., Mani, N. 2021. Phonological priming in German Sign Language: An eye tracking study using the Visual World Paradigm. Sign Language & Linguistics, 24(1), 4-35. "
Here is a paper. Please give your review comments after reading it.
249
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Sign language is a common language that deaf people around the world use to communicate with others. However, normal people are generally not familiar with sign language (SL) and they do not need to learn their language to communicate with them in everyday life. Several technologies offer possibilities for overcoming these barriers to assisting deaf people and facilitating their active lives, including natural language processing (NLP), text understanding, machine translation, and sign language simulation.</ns0:p><ns0:p>In this paper, we mainly focus on the problem faced by the deaf community in Saudi Arabia as an important member of the society that needs assistance in communicating with others, especially in the field of work as a driver. Therefore, this community needs a system that facilitates the mechanism of communication with the users using NLP that allows translating Arabic Sign Language (ArSL) into voice and vice versa. Thus, this paper aims to purplish our created dataset dictionary and ArSL corpus videos that were done in our previous work. Furthermore, we illustrate our corpus, data determination (deaf driver terminologies), dataset creation and processing in order to implement the proposed future system. Therefore, the evaluation of the dataset will be presented and simulated using two methods. First, using the evaluation of four expert signers, where the result was 10.23% WER. The second method, using Cohen's Kappa in order to evaluate the corpus of ArSL videos that was made by three signers from different regions of Saudi Arabia. We found that the agreement between signer 2 and signer 3 is 61%, which is a good agreement. In our future direction, we will use the ArSL video corpus of signer 2 and signer 3 to implement ML techniques for our deaf driver system.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Deaf people use sign language to communicate with their peers and normal people who know their sign language. Therefore, sign language is the only way to communicate with deaf people, although it receives even less attention from normal people. Furthermore, sign language is not a unified language among deaf people around the world, as each country has its own sign language. For example, American Sign Language (ASL), British Sign Language (BSL), Australian Sign Language (Auslan), Indian Sign Language (ISL) and Arabic Sign Language (ArSL). In particular, the Arab countries that use ArSL, such as the Gulf, Al-Sham, and some North African Arab countries, have similarities and differences in sign language. The reason for this is the differences between in dialects. As a result, deaf people face problems communicating with the community in many aspects while they are working or practicing their daily lives, such as health, education, and transportation. One of the solutions for these problems is using a sign language interpreter, of their work or daily life, such as health, education, and transportation. One solution to these problems is the use of a sign language interpreter, a person who knows sign language and can interpret it to normal people. However, this solution is not optimal because of loss of privacy and professional independence <ns0:ref type='bibr'>(Forestal,2001;</ns0:ref><ns0:ref type='bibr'>Broecker,1986)</ns0:ref>. 
Researchers have therefore developed certain technologies that allow for the provision of a computer interpreter instead of a human. For example, sign language recognition technology, machine translation (MT). In addition, some Arab researchers and organizations, such as the Arab League Educational, Cultural and Scientific Organization (ALECSO), have made efforts to unify ArSL by introducing the first dictionary in 1999 <ns0:ref type='bibr'>(Al-Binali&amp; Samareen,2009)</ns0:ref>.</ns0:p><ns0:p>In Saudi Arabia, deaf people have some difficulty communicating with others. In particular, when deaf people are driving a vehicle and non-deaf people are sitting as passengers. The nondeaf passenger cannot understand the deaf driver. Also, a non-deaf passenger cannot describe the location needed by the deaf driver. However, there are many mobile applications that can be used to facilitate communication, such as Tawasol and Turjuman, but they still do not provide real-time translation <ns0:ref type='bibr'>(Team Mind Rockets,2017;</ns0:ref><ns0:ref type='bibr'>Al-Nafjan, Al-Arifi &amp; Al-Wabil,2015)</ns0:ref>. In addition, non-deaf passengers or drivers do not necessarily download the sign language translation application and use it only once. In the area of deaf drivers, Saudi Arabia faces a lack of technology that can improve communication between deaf drivers and non-deaf passengers. To our knowledge, no research has yet been conducted to introduce a solution in the area of deaf drivers in Saudi Arabia. In addition, there is a lack of corpus of ArSL videos being constructed in the transportation domain, especially for deaf drivers.</ns0:p><ns0:p>The purpose of this research is to publish our dataset as a continuation of previous work <ns0:ref type='bibr' target='#b0'>(Abbas, 2020)</ns0:ref>. Also, we aim to publish our corpus in order to implement ML as future work. This paper is organized as follows: The second section illustrates some research work done to create videos and images of the standard corpus of sign language in each country as a database. In order to conduct their experiments to implement some systems that help deaf people in specific areas. We have also mentioned some research done in the Arab countries. In order to create an ArSL corpus by focusing on certain areas. The third section illustrates the linguistic background of ArSL, speech recognition and machine translation (MT) systems, while the fourth section focuses on the design architecture of ArSL. In the fifth section, we discuss the processing and data collection modules and evaluation videos of the deaf driver corpus that necessitated the implementation of ArSL in the deaf driving context. Finally, we illustrate future research directions.</ns0:p><ns0:p>Our research aims to answer these two questions:</ns0:p><ns0:p>1. Can we create a data corpus and video corpus for deaf drivers?</ns0:p><ns0:p>2. Can we evaluate the video corpus created of deaf drivers?</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review of ArSL Recognition and MT System</ns0:head><ns0:p>This literature review illustrates various standard sign language corpora for some countries of the world. In fact, when we start designing and developing a system for deaf people, we need the essential corpora of videos or images. In addition, these corpora are considered as a reference database that must be rich with a large and complete volume of standard sign language. 
They will help researchers to implement machine learning techniques and develop specific systems for deaf people. Starting with ASL, researchers developed approximately 2576 videos that included movements, hand shapes, and sentences <ns0:ref type='bibr'>(Mart&#237;nez et al.,2002)</ns0:ref>. Another researcher collected and annotated videos around a corpus of 843 sentences from Boston University and RWTH Aachen University <ns0:ref type='bibr' target='#b11'>(Dreuw et al.,2008)</ns0:ref>. In Britain, approximately 249 participants completed a conversational dataset of the BSL video corpus <ns0:ref type='bibr'>(Schembri et al.,2018)</ns0:ref>. In Indian sign language, researchers implement 1440 gestural videos of IPSL by 9 Indian signers <ns0:ref type='bibr' target='#b15'>(Kishore et al.,2011)</ns0:ref>. In Brazil, they have created the LIBRAS-HCRGBDS database of the Libras video corpus which has approximately 610 videos by 5 signers <ns0:ref type='bibr' target='#b20'>(Porfirio et al.,2013)</ns0:ref>. In Korean Sign Language (KSL), the researchers developed 6 thousand vocabulary words in the KSL corpus database <ns0:ref type='bibr' target='#b14'>(Kim,Jang &amp; Bien, 1996)</ns0:ref>. In Arabic Sign Language (ArSL), 80 Arabic signers implemented a sensor-based dataset for 40 sentences <ns0:ref type='bibr' target='#b7'>(Assaleh, Shanableh &amp; Zourob, 2012)</ns0:ref>. In addition, some researchers have produced a corpus of 500 videos and images. It contains hand shapes, facial expressions, alphabet, numbers. In addition, the movement in simple signs, continuous sentences and sentences with lip movement were performed by 10 signers. These corpuses are called SignsWorld Atlas. However, still the performed corpus does not cover all words and phrases as a database <ns0:ref type='bibr'>(Shohieb, Elminir &amp; Riad,2015)</ns0:ref>. Recently, the researchers led their efforts to build the KArSL database as a comprehensive ArSL reference database for numbers, letters and words. They used Kinect devices to make these ArSL videos available to all researchers <ns0:ref type='bibr' target='#b27'>(Sidig et al., 2021)</ns0:ref>. Specifically, in ArSL, researchers are dedicating their efforts to improving the focus of accuracy in a specific domain, such as medicine and education. Some researchers used 150 video signs from the Java programming domain in their experiment <ns0:ref type='bibr' target='#b1'>(Al-Barhamtoshy et al.,2019)</ns0:ref>. In the same educational field, two researchers introduced an intelligent system for deaf students based on the image. This system helped the deaf student in the educational environment. Therefore, they created the dictionary that needs it in life activities and academic environments <ns0:ref type='bibr' target='#b19'>(Mohammdi &amp; Elbourhamy, 2021)</ns0:ref>. Some of them performed some sentences in Arabic language without focusing on a specific field. Some focused on the jurisprudence of prayer and their symbols as datasets <ns0:ref type='bibr'>(Almasoud &amp; Al-Khalifa, 2012)</ns0:ref>. Other researchers used 600 phrases in the health domain <ns0:ref type='bibr' target='#b16'>(Luqman &amp; Mahmoud, 2019)</ns0:ref>. Similarly, some researchers have implemented their research using the data created in the field of education <ns0:ref type='bibr'>(El, El&amp;El Atawy,2014;</ns0:ref><ns0:ref type='bibr'>Almohimeed, Wald &amp; Damper,2011)</ns0:ref>. In our research, we focus on a different domain that can help deaf people in the work environment, such as driving a cab. 
The corpus we created will help deaf drivers to communicate with non-deaf passengers while deaf people drive their cars.</ns0:p></ns0:div> <ns0:div><ns0:head>ArSL and Linguistic Background</ns0:head><ns0:p>ArSL shows huge complexities in phonology, morphology, and structure, which is not the case for other sign languages. These complexities are explained below.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:06:62836:1:1:NEW 26 Aug 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Phonology of ArSL</ns0:head><ns0:p>Phonemes are mental representations, which is just a way to empty what is inside the brain. Phonemes are made up of four elements: 1) The shape of the hand. 2) Orientation of the hand. 3) Position of the hand. 4) Direction of the hand in motion <ns0:ref type='bibr'>(Schechter, 2014)</ns0:ref>. In phonology, these four elements are known as hand features (MFs). In particular, in Sign Language, there are MFs and also Non-Manual Features (NMFs) are involved. NMFs refers to emotional parts of the body, for example, lip motion, facial expression, shoulder movement, head movement, eyelids and eyebrows. In general, in ArSL, we use both MFs and NMFs to give the correct meaning, called essential NMFs. On the other hand, if the signer uses just only the MFs, the meaning will turn into another meaning that the signer did not mean to express <ns0:ref type='bibr'>(Johnston&amp; Schembri, 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Morphology and Structure of ArSL</ns0:head><ns0:p>The rules of grammar in ArSL are not the same as those in Arabic. The differences are: verb tenses, differences between singular and plural, rules for prepositional and adverb and gender signs. Regarding sentence structure, ArSL uses only the Subject-Verb-Object (SVO) structure instead of the SVO, OVS and VOS structures <ns0:ref type='bibr' target='#b16'>(Luqman&amp; Mahmoud, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>ArSL and Design Architecture</ns0:head><ns0:p>New technologies that support communication have a significant impact on human life. For deaf people, the developers and researchers have tried to use some new technologies to make the life of deaf people easier by developing automated systems that can help them to communicate in various aspects of their life with others. In this section, we will illustrate the brief explanation of some of the techniques used in order to implement the automated system for better communication between the community and among themselves.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Translation</ns0:head><ns0:p>Machine Translation (MT) is a standardized name for the system that relies on computer analysis to translate between two languages. It is used for text and speech using artificial intelligence (AI) and natural language processing (NLP). For example, translating from source 'English' to target 'Arabic' <ns0:ref type='bibr' target='#b21'>(Pradhan, Mishra&amp; Nayak, 2020;</ns0:ref><ns0:ref type='bibr'>Verma, Srivastava&amp; Kumar, 2015)</ns0:ref>. MT is also used to translate text or speech into video sign language or avatar. There are several approaches to MT that can be used depending on what we need to translate and what constitutes a better translation. One of these approaches is direct translation, without regard to grammar rules. 
To improve the quality of MT, the rule-based approach was introduced, which consists of parsing the source and target language. Another approach is corpus-based, which deals with massive data containing sentences. There are also knowledge-based approaches, which take into account the understanding of the source and target text in the context of linguistic and semantic knowledge. Finally, there is google translation which is developed by Google <ns0:ref type='bibr'>(Verma, Srivastava&amp; Kumar, 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Arabic Speech Recognition</ns0:head><ns0:p>Speech Recognition (SR) is a computerized system that converts speech into text or signs. This system is used to communicate between humans and machines. It is also known as automatic speech recognition. Machine-based SR is a complicated task due to differences in dialects, contexts and speech styles. To reduce this complexity in SR, the system can exploit the repetition and structure of the token speech signal as multiple sources of knowledge. SR sources are constructed based on knowledge of phonology, syntax, phonemes, grammar and phrases <ns0:ref type='bibr' target='#b13'>(Katyal, Kaur&amp; Gill, 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chow et al., 1987)</ns0:ref>. In addition, the SR system has several classifiers. 1) The speech utterance, which is composed of separate, connected, continuous and spontaneous speech.</ns0:p><ns0:p>2) The speaker model, which contains one of two dependents that was designed for a specific speaker or independently for different speakers. 3) The size of vocabulary (Radha&amp; Vimala).</ns0:p><ns0:p>In terms of the SR process, there are four steps in which SR can be implemented. Analyze the speech signal, then extract the feature using different techniques to identify the vector, such as MFCC (Mel-frequency cepstral coefficient). Then, we build a model using different techniques like HMMs with the training dataset. In the last step, we test the model in the matching setting, taking the dissection and measuring the performance based on the error rate (Saksamudre&amp; Deshmukh, 2015; Yankayi&#351;, Ensari&amp; Aydin) as shown in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>ArSL Gestures Recognitions</ns0:head><ns0:p>Gesture recognition is defined as the ability of the computer to understand gestures and execute commands based on the gestures made. The first gesture recognition system was introduced in 1993 as a kind of user interface for perceptual computing that helps capture human gestures and transfer them into commands using a computerized system. It is used with many technologies, especially in the field of games, such as X-box, PlayStation, and Wii Fit. These games use Just Dance and Kinect Sports, which recognizes the hand and certain body parts <ns0:ref type='bibr'>(Schechter, 2014;</ns0:ref><ns0:ref type='bibr'>Darrell&amp; Pentland, 1993)</ns0:ref>.</ns0:p><ns0:p>In the sign language domain, the gesture recognition system uses the following processes: 1) Recognize the deaf signs. 2) Analyze the sign. 3) Converting that sign into meaningful text (words or phrases), voice, or expressions that non-deaf can understand. In addition, there are two main methods for gesture recognition in ArSL, namely vision-based devices (video or image) and wearable-based devices. Each of them has advantages and disadvantages. One advantage of wearable devices is that there is no need to search for background and lighting. 
The disadvantage is that they interfere with movement. On the other hand, the advantage of vision-based technology is the ease of movement, while the disadvantage is the effect of changing the background and lighting <ns0:ref type='bibr' target='#b5'>(Ambar et al., 2018;</ns0:ref><ns0:ref type='bibr'>Paudyal et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In addition, each has different processes and techniques. In the vision-based process, one or more cameras are the main tools that must be available to use this method. On the other hand, the wearable-based method depends on certain types of equipment and computers mainly. In terms of process, the wearable-based method spells the alphabet by reading the specific information in each sensor of the finger or glove joint. However, the vision-based method has certain steps, which are the follows: &#61623; Image capture, which consists in using a camera to collect data (building the corpus) and analyze the collected images. &#61623; Pre-processing, which consists of preparing the images and identifying the information according to color (segmentation).</ns0:p><ns0:p>&#61623; Feature extraction uses some techniques like root mean square (RMS) to identify the feature vector.</ns0:p><ns0:p>Classification that classifies based on the feature vector to build the model <ns0:ref type='bibr' target='#b17'>(Mahmood&amp; Abdulazeez, 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Derpanis, 2004)</ns0:ref>. The vision-based gesture recognition process is shown in the Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Deaf Driver Corpus</ns0:head><ns0:p>For creating a deaf driver corpus, we divided the processes that we will use for this creation into 4 modules. The four modules are preprocessing, recording, assessment, and validation modules. These high-level and low-level approaches are represented in (Figure <ns0:ref type='figure'>3</ns0:ref>) and (Figure <ns0:ref type='figure'>4</ns0:ref>).</ns0:p><ns0:p>In the preprocessing module, we first collected data at two levels: a word or phrase level dataset and a sentence level dataset to create a small Arabic dictionary. This dictionary is divided into eight sections (categories). 1) Welcoming 'Salam Alaikum and How are you? '. 2) Directions 'Left and Right'. 3) Place 'School and Deaf Association '. 4) Traffic and Transportation ' Driver's license and Traffic lights'. 5) Sentences used by deaf drivers when they need to talk with their passengers, e.g., 'We have arrived' 6) Sentences used by passengers when they need to talk with their deaf drivers, e.g., 'I do not have cash to pay the amount '. 7) General Words ' No and Yes and In and On '. 8) Amount 'Dollar, Riyal and 2 Riyals until 100 Riyals ' <ns0:ref type='bibr' target='#b0'>(Abbas, 2020)</ns0:ref>.</ns0:p><ns0:p>The total words and phrases (sentences) is 215. Some of these words and sentences fall under the general communication, collected from the Saudi Sign Language Dictionary, 2018 edition (Saudi Association for Hearing Impairment, 2018). Some of them are collected in the contextual domain of the normal conversation that was done between Saudi cab drivers and passengers. We mean in the contextual domain is each country has its own contextual domain in the payment process. For example, Saudi Arabia has its own curranty which is Riyal. 
Table <ns0:ref type='table'>1</ns0:ref> shows a part of the Arabic dictionary we created.</ns0:p><ns0:p>In the recording module, we made our videos at a rate of about 30 FPS (frames per second), by a camera for our ArSL corpus. These videos were taken by one of the ArSL expert signers who is not deaf. Figure <ns0:ref type='figure'>5</ns0:ref> shows an ArSL corpus captured by a non-deaf expert signer. In addition to that, we made video captures with three expert signers from different occasions in Saudi Arabia. Two of them are totally deaf, while the third is hard of hearing. Figure <ns0:ref type='figure'>6</ns0:ref> represents an ArSL corpus captured for each deaf expert. To record 215 words containing simple phrases and signs, we took about forty-five minutes continuously with the non-deaf expert signer. However, the three deaf expert signers took approximately fifty minutes. The expert signers used only one hand if it was appropriate for the context of the deaf driver, unless the sign required the use of both hands. Next, we segmented the video using VEGAS (Video Production Software -Unleash Your Creativity | VEGAS, n.d.) video editing software (we segmented each sign containing words or phrases that matched our dictionary into a single video, with a total number of 215 videos). To support future work, we added Arabic audio to each video and labeled it with Arabic text that refers to the same recorded ArSL.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Evaluation</ns0:head><ns0:p>These collected and created data will be used for the evaluation and validation of our ArSL corpus. In this section, we have explained the evaluation module while the validation will be done in the future work.</ns0:p><ns0:p>In the evaluation module, we evaluated the video of each corpus generated based on our created dictionary using two techniques. First, we used a human expert evaluation technique. Second, we used a statistical method of measuring the agreement between three deaf experts signing in ArSL that we recorded.</ns0:p><ns0:p>In the first technique, which is a human expert evaluation, we made the evaluation based on the views of four participants who are experts in ArSL. One of them is completely deaf and the second is hard of hearing. Two other participants are not deaf, they work in a deaf club and are experts. We used the quantitative approach (questionnaire) and divided it into two sections. The first section was a demographic questionnaire (gender -age -education level -whether he/she is deaf or not). In the second section, we introduced an evaluation of the video based on the word (phrase) or related sentence. We attached each video to each related phrase or word. Participants were asked to evaluate the 215 sign videos to see if the video recorded for each word or phrase was correct or not. If not correct, it meant that the video was not related to that particular word or phrase. We asked the four participants to choose one of the types of correction they had to make for each video (add -replace -delete). The evaluation method is explained in (Table <ns0:ref type='table'>2</ns0:ref>) using some videos evaluated by one of the experts.</ns0:p><ns0:p>The evaluation results of the deaf experts based on the word error rate (WER) for each category (section) such as welcome, directions, and place are explained in Figure <ns0:ref type='figure'>7</ns0:ref>. This means that for each category, the evaluation result of these videos was wrong. 
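For reference, the per-category word error rate reported in Figure 7 can be reproduced with a short script. The following is a minimal illustrative sketch in Python, not the authors' code; the category names and the list of (category, judged-correct) pairs are hypothetical stand-ins for the expert judgments described above.

from collections import defaultdict

def category_wer(judgments):
    # judgments: iterable of (category, is_correct) pairs, one per evaluated sign video.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for category, is_correct in judgments:
        totals[category] += 1
        if not is_correct:
            errors[category] += 1
    per_category = {c: errors[c] / totals[c] for c in totals}
    overall = sum(errors.values()) / sum(totals.values())
    return per_category, overall

# Hypothetical judgments for two of the dictionary categories.
judgments = [("Welcoming", False), ("Welcoming", True),
             ("Place", True), ("Place", True), ("Place", False)]
per_category, overall = category_wer(judgments)
print(per_category)   # e.g. {'Welcoming': 0.5, 'Place': 0.333...}
print(overall)        # error rate over all evaluated videos

Here the error rate of a category is simply the fraction of its videos judged incorrect by the raters.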
These incorrectly signed videos had to be replaced with correct recordings.</ns0:p>
<ns0:p>As can be seen in Figure <ns0:ref type='figure'>7</ns0:ref>, the welcoming (hospitality) category has the highest WER, at 50%, and the place (location) category has a WER of about 36%. Thus, we needed to reduce the WER of the videos in these two categories (sections) in order to improve communication between deaf drivers and their passengers and to describe the passenger's destination correctly.</ns0:p>
<ns0:p>The total WER of our video corpus was 10.23%, as shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p>
<ns0:p>To correct the erroneous ArSL videos, we re-recorded them based on the corrected signs that the raters explained to us.</ns0:p>
<ns0:p>In the second technique, which measures the agreement between the signs of each pair of expert signers, we used Cohen's Kappa coefficient, Eq. ( <ns0:ref type='formula'>1</ns0:ref>):</ns0:p><ns0:p>K = (P_0 - P_e) / (1 - P_e),   (1)</ns0:p><ns0:p>where P_0 is the observed proportional agreement between the two raters, given by Eq. (2), and P_e is the agreement expected by chance, computed from the row totals (f_{i+}) and the column totals (f_{+i}) of the agreement table <ns0:ref type='bibr'>(Viera &amp; Garrett, 2005)</ns0:ref>.</ns0:p><ns0:p>The value of Cohen's Kappa (K) falls into one of the six agreement levels presented in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p><ns0:p>In more detail, the evaluation was done by a non-deaf coder who independently annotated each signer's ArSL video corpus as true or false. First, we measured the agreement between signer 1 and signer 2. Next, we measured the agreement between signer 1 and signer 3. Finally, we measured the agreement between signer 2 and signer 3. By applying Cohen's Kappa statistical method to measure the agreement between each pair of ArSL video corpora, we found the results presented in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p><ns0:p>As can be seen in Table <ns0:ref type='table'>5</ns0:ref>, the agreement between signer 2 and signer 3 reached about 60%, which is higher than for the other pairs. The reason is that they are from the same school, the Deaf Association, whereas signer 1 is a volunteer who is not from their school. It should be noted that two of the signers are completely deaf, while the third is hard of hearing. In the validation module, as future work, we will implement the ML technique using Python as the programming language. To do this, we will use the ArSL video corpus made by signer 2 or signer 3, whichever achieved the best agreement according to the Cohen's Kappa evaluation. We will then divide our data into training and testing datasets to measure accuracy and error rate.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions and Future Work</ns0:head><ns0:p>Through this research, we have reviewed previous studies conducted on the generation of sign language corpora for the standard sign languages used by different countries. We have also illustrated the different areas in which researchers are directing their efforts to create data dictionaries and annotated ArSL corpora, especially in Saudi Arabia. We have clarified the difficulties encountered in translating ArSL from a grammatical, semantic, and syntactic perspective and how they affect the accuracy of translation and recognition.
Finally, we described the ArSL dictionary for deaf drivers and explained the data collection processes to construct the videos in our corpus. These were recorded using a single camera and then verified using two methods. First, using the evaluation of 4 participants, experts in ArSL, two of whom were deaf. Second, by using Cohen's Kappa statistical method to measure the agreement between each of the two videos in the ArSL corpus recorded by three signers.</ns0:p><ns0:p>The created ArSL corpus provides opportunities to test various feature extraction methods and recognition techniques. Extending and validating the dataset using machine learning (ML) will be the focus of future work. In addition, this corpus will be used to design our proposed system to facilitate communication with deaf drivers.</ns0:p></ns0:div> <ns0:div><ns0:head>Kappa (K)</ns0:head><ns0:p>Type </ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Number of agreements that expected by chance represented by Pe and the formula represents as Eq. &#119891;&#119894; + &#119891; + &#119894;</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62836:1:1:NEW 26 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,42.52,199.12,525.00,246.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,178.87,525.00,210.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,42.52,178.87,525.00,208.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,42.52,178.87,525.00,279.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,178.87,525.00,372.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,223.50' type='bitmap' /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62836:1:1:NEW 26 Aug 2021)</ns0:note> </ns0:body> "
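As a companion to the agreement analysis in the dataset evaluation section above, the following is a minimal Python sketch of Cohen's Kappa computed from two raters' true/false annotations of the same set of videos. It is illustrative only and is not the authors' code; the annotation lists are hypothetical placeholders rather than the recorded corpus data.

def cohens_kappa(rater_a, rater_b):
    # Cohen's Kappa for two equal-length lists of categorical labels.
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportional agreement P0.
    p0 = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance Pe, from the row and column totals (f_i+ and f_+i).
    pe = sum((sum(a == lab for a in rater_a) * sum(b == lab for b in rater_b)) / (n * n)
             for lab in labels)
    return (p0 - pe) / (1 - pe)

# Hypothetical true/false annotations of the same ten videos from two signers' corpora.
signer_2 = [True, True, False, True, True, True, False, True, True, True]
signer_3 = [True, True, True, True, False, True, False, True, True, True]
print(round(cohens_kappa(signer_2, signer_3), 2))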
"Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular, all the required modifications have been done and we answered for all the reviewers' questions. We believe that the manuscript is now suitable for publication in PeerJ. Ms. Samah A. Abbas Lecture of Management Information System On behalf of all authors. Samah Reviewer 1 (Luis Naranjo-Zeledón) Basic reporting • English needs to be improved significantly. I suggest a professional proofread, so that your writing can reach and interest a wider audience. Here are some examples: Lines 53-55: “…there are some researchers…provided their effort” Line 139: “These complicities are explained below” Line 286: “Feature Work ” Table 2: “‫”أو‬. • Considerate language should be used when dealing with inclusiveness issues. Avoid, for example, the word 'normal' as opposed to 'deaf' (lines 23, 38, 40, 49, 58, 59, 60, 63). ...could be changed to: American Sign Language (ASL), British Sign Language (BSL), Australian Sign Language (Auslan), Indian Sign Language (ISL), and Arabic Sign Language (ArSL). Note that the correct acronym for Australian Sign Language is Auslan. • Avoid paragraphs that are too long, as it makes reading cumbersome (example: lines 81-118). We improved the English language as you suggested. Also, we modified all paragraphs and lines that you mentioned. ------------------ • The introduction and the background show context. However, lines 92-94 state that 'For instance, Support Vector Machine (SVM) with a camera that gained 98.8 93% (Luqman & Mahmoud, 2017) (PCA) Principal Component Analysis, and (HMM) Hidden 94 Markov Model that achieved 99.9% accuracy (Ahmed & Aly, 2014)'. Should these percentages be considered baselines? If so, this should be set from the abstract. In fact, the title should be reconsidered, as it indicates that it is an 'Implementation', but then it becomes clear that it is a 'Proposal'. It should be clear from the outset whether the proposal is by any means compared against a baseline. It should be clear whether the proposal is by any means compared against a baseline. If the authors consider that this is not relevant, given that they focus on a new domain (public transport drivers and passengers), they should make it explicit. In the same way, whether it is an implementation or a proposal, it is good to say what it is. Eg: 'implementation of a prototype of machine translation' or 'proposal of a corpus with linguistic support'. Yes, I agree with you that we should focus deaf driver domain and therefore the proposed the corpus is created and evaluated. We modified the introduction and literature review based on what we are focus on. Many thanks for giving us these notes. ------------------ • The authorship of the figures must be clear. In some places, the author Abba is quoted and in others Abbas is quoted, causing confusion as to whether this paper is a continuation of earlier research. If so, you should make this clarification from the beginning and explain what is new with respect to the previous work. This is part of the context and it is recommended to place it at the beginning of the abstract. This article is for completing previous work that was done in my master thesis. Also, I need to submit this paper to publish our created dataset. In order to implement a new method with Machine learning as future work. 
-----------------• Although it is not a review article, it is good practice to briefly explain how the results presented in the Background have been produced. These search strings can enrich the Background section and it is left to the discretion of the authors to use them: https://scholar.google.com/scholar?hl=es&as_sdt=1%2C5&as_ylo=2017&as_vis=1&q=intitle%3A %22sign+language%22+intitle%3A%22systematic%22&btnG= https://scholar.google.com/scholar?hl=es&as_sdt=1%2C5&as_ylo=2017&as_vis=1&q=intitle%3A %22sign+languages%22+intitle%3A%22systematic%22&btnG= https://scholar.google.com/scholar?hl=es&as_sdt=1%2C5&as_ylo=2017&as_vis=1&q=intitle%3A %22sign+language%22+intitle%3A%22state-of-the-art%22&btnG= Thank you for your suggestion, but we changed our literature work to describe the standard sign languages for different countries. Also, some research is reviewed in creating ArSL for general domains (without deaf driver case study). -----------------• The authorship of the figures must be clear. In some places, the author Abba is quoted and in others Abbas is quoted, causing confusion as to whether this paper is a continuation of earlier research. If so, you should make this clarification from the beginning and explain what is new with respect to the previous work. This is part of the context and it is recommended to place it at the beginning of the abstract. The purpose of this research is to publish our dataset as a continuation of our previous thesis work. I added an explanation for that in the abstract. ------------------ Experimental design • Methods are described with sufficient detail, in order to replicate. • The journal requests in its guidelines that the research questions be well defined, relevant and meaningful. Please state in the Introduction what the research question is, which could be something like 'How can a corpus be built by computer means for ArSL that is valid from a linguistic perspective?'. The answer to this question must be made explicit in the Conclusions. We added two researches’ questions, and we used two methods to evaluate our ArSL corpus. ------------------ Validity of the findings • The research findings appear to be valid. In any case, it is recommended to clearly indicate in line 279 why a WER of 10.23% is considered good (refer to the related literature). We added more details about the evaluation section. ------------------ • Underlying data have been provided. However, in 'Chapter 8' concerning Amount, it is not clear why the word 'Riyals' is omitted from number 5 onwards. According to our culture, we minimize the repeated word “Riyals” in the ArSL sign and also when the normal Saudi people talk and we say just a number. Implicitly, the driver and passenger understand that it is a Riyal currency. ------------------ Additional comments • The authors' research is extremely valuable since it addresses a problem of social inclusion in an adequate paradigm, which is that of providing tools for work and incorporation into the productive scheme. Innovation is evident within the public transport domain in Saudi Arabia. Changes are required to achieve the high publishing standards of PeerJ Computer Science, which is why this reviewer has attempted to do a thorough and explicit review. Thank you for your comments and we appreciate your effort. Reviewer 2 (Dinesh Nagumothu) Basic reporting The introduction is clear and easy to follow, however, there are a few shortcomings. 
Line 57, “deaf people have some difficulties in communicating with others while they are driving a vehicle like a car and the deaf or normal person sitting as a passenger”. This statement is confusing if this work aims at deaf people driving or deaf people sitting next to a driver? Authors can rephrase this for better understanding to the reader. We modified the sentences to clarify what we mean. --------Literature survey on the ArSL recognition and machine translation is fine, but the novelty of this work is in the corpus. A discussion on existing ArSL corpora like An Arabic Sign Language Corpus for Instructional Language in School (https://eprints.soton.ac.uk/271106/) and their limitations in the deaf driver domain can make the reader understand why this corpus is useful Also, it would be good if authors can provide information related to deaf driver corpus in another sign language if there are any. Actually, it is good notes, I completely agree, I changed the literature to be more in corpus domain and standard SL. We illustrated some researches done in Arab countries for implementing ArSL corpus in some domains. In fact, Arab countries have not done any research in deaf driver corpus. Also, we haven’t dictionary and dataset for deaf driver. Also, we do not have any exciting ArSL new sign and new lexicon. The article that you mentioned is too old and it is focus on educational domain and dealing with students only. --------- Fig. 6 didn’t have information on the y-axis. Assuming this as Word Error Rate (WER). Adding axis labels in the charts can ease the readers. We added the required information on the figure --------- Experimental design What are the problems with current approaches? Why deaf drivers can’t use the existing ArSL for communication during driving? In fact, Arab countries haven’t done any research in deaf driver corpus. Also, we haven’t dictionary and dataset for the deaf driver. Also, we do not have any exciting ArSL with deaf driver. --------It has been mentioned that some of the words are derived from Saudi sign language and the rest of them are contextual to the domain. How are those words considered contextual to the domain? It has been mentioned that “Some of them are collected from the contextual domain of the normal conversation that done between taxi driver and passengers.”. This process needs to be more elaborate. The corpus should contain all the relevant words in the deaf driver domain without leaving any important information. We mean in the contextual domain is each country has its own contextual domain in the payment process. For example, Saudi Arabia has its own currency which is Riyal. Also, each country has its own flag like Saudi flag, Emirate flag, and so on. --------The corpus has 50% WER in the “Welcome” category but what’s the share of this “welcome” category in the whole corpus. Information about class distribution is important and needs to be included. We added Welcome as a category because Arabic is a very rich language. Therefore, we grouped many like (good morning, evening, afternoon, hi, hello... so on) by welcome class. --------- Validity of the findings Good to see that corpus files made available for the community. However, annotations can’t be found. Any reason for this? We did a first step which is done to create a unified ArSL sign deaf driver corpus. Also, the annotation process actually is done (labeling each Arabic sign is labeled to generate the equivaling as described in Figure 5 and 6, and also in Table 1. 
In section with title “Data Collection, Creation and Annotation “). --------- Additional comments Line 282-285, please check the font style and make it uniform. Corrected. Add a citation or a URL as a footnote for the VEGAS video editor Corrected. There are a few grammatical errors in the English language used -For instance: In line 254, “For future work supporting, we added in…”. Corrected, we did proofread, thank you for your useful comments. "
Here is a paper. Please give your review comments after reading it.
250
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Epilepsy is a common neurological disease that affects a wide range of the world population and is not limited by age. Moreover, seizures can occur anytime and anywhere because of the sudden abnormal discharge of brain neurons, leading to malfunction. The seizures of approximately 30% of epilepsy patients cannot be treated with medicines or surgery; hence these patients would benefit from a seizure prediction system to live normal lives. Thus, a system that can predict a seizure before its onset could improve not only these patients' social lives but also their safety. Numerous seizure prediction methods have already been proposed, but the performance measures of these methods are still inadequate for a complete prediction system. Here, a seizure prediction system is proposed by exploring the advantages of multivariate entropy, which can reflect the complexity of multivariate time series over multiple scales (frequencies), called multivariate multiscale modified-distribution entropy (MM-mDistEn), with an artificial neural network (ANN). The phase-space reconstruction and estimation of the probability density between vectors provide hidden complex information. The multivariate time series property of MM-mDistEn provides more understandable information within the multichannel data and makes it possible to predict of epilepsy. Moreover, the proposed method is tested with two different analyses: simulation data analysis proves that the proposed method has strong consistency over the different parameter selections, and the results from experimental data analysis show that the proposed entropy combined with an ANN obtains performance measures of 98.66% accuracy, 91.82% sensitivity, 99.11% specificity, and 0.84 area under the curve (AUC) value. In addition, the seizure alarm system was applied as a postprocessing step for prediction purposes, and a false alarm rate of 0.014 per hour and an average prediction time of 26.73 minutes before seizure onset were achieved by the proposed method. Thus, the proposed entropy as a feature extraction method combined with an ANN can predict the ictal state of epilepsy, and the results show great potential for all epilepsy patients.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Abstract</ns0:head><ns0:p>Epilepsy is a common neurological disease that affects a wide range of the world population and is not limited by age. Moreover, seizures can occur anytime and anywhere because of the sudden abnormal discharge of brain neurons, leading to malfunction. The seizures of approximately 30% of epilepsy patients cannot be treated with medicines or surgery; hence these patients would benefit from a seizure prediction system to live normal lives. Thus, a system that can predict a seizure before its onset could improve not only these patients' social lives but also their safety. Numerous seizure prediction methods have already been proposed, but the performance measures of these methods are still inadequate for a complete prediction system. Here, a seizure prediction system is proposed by exploring the advantages of multivariate entropy, which can reflect the complexity of multivariate time series over multiple scales (frequencies), called multivariate multiscale modified-distribution entropy (MM-mDistEn), with an artificial neural network (ANN). 
The phase-space reconstruction and estimation of the probability density between vectors provide hidden complex information. The multivariate time series property of MM-mDistEn provides more understandable information within the multichannel data and makes it possible to predict of epilepsy. Moreover, the proposed method is tested with two different analyses: simulation data analysis proves that the proposed method has strong consistency over the different parameter selections, and the results from experimental data analysis show that the proposed entropy combined with an ANN obtains performance measures of 98.66% accuracy, 91.82% sensitivity, 99.11% specificity, and 0.84 area under the curve (AUC) value. In addition, the seizure alarm system was applied as a postprocessing step for prediction purposes, and a false alarm rate of 0.014 per hour and an average prediction time of 26.73 minutes before seizure onset were achieved by the proposed method. Thus, the proposed</ns0:p></ns0:div> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Epilepsy is one of the most common neurological disorders of the nervous system, affecting approximately 50 million people worldwide, and approximately 5 million people are diagnosed with epilepsy each year <ns0:ref type='bibr'>(WHO, 2019)</ns0:ref>. Therefore, the social and economic impacts on patients with epilepsy are becoming increasingly concerning. Although the seizures of 70% of epileptic patients can be controlled by antiseizure medicines, the seizures of 30% of patients with epilepsy cannot be treated by either medicines or surgery; therefore, these patients must live their whole lives with epilepsy, and their seizures can occur anytime and anywhere <ns0:ref type='bibr' target='#b13'>(Fujiwara et al., 2015)</ns0:ref>. Electroencephalogram (EEG) can record the patients' brain activities and be used as a tool for diagnosing and analyzing epilepsy <ns0:ref type='bibr' target='#b34'>(Wang et al., 2010)</ns0:ref>. Thirty percent of epileptic patients whose seizures cannot be controlled urgently need a system that can improve their lives, by successfully predicting a seizure before it begins. However, epilepsy prediction remains one of the competitive challenges for researchers, and numerous methods have already been proposed to address this problem. Scholars approach this issue in various ways, e.g., linear methods <ns0:ref type='bibr' target='#b30'>(Salant et al., 1998)</ns0:ref> and nonlinear dynamics <ns0:ref type='bibr' target='#b19'>(Iasemidis et al., 1990)</ns0:ref>. For linear measurement, statistical measures, including the calculation of variance, skewness, and kurtosis, are used for several seizure prediction tools, and researchers have described that kurtosis increase but variance decrease during the state of preictal activity <ns0:ref type='bibr' target='#b0'>(Aarabi et al., 2009)</ns0:ref>. The mean phase coherence (MPC) <ns0:ref type='bibr' target='#b25'>(Mormann et al., 2003)</ns0:ref>, Shannon entropy index <ns0:ref type='bibr' target='#b28'>(Rosenblum et al., 2000)</ns0:ref>, and conditional probability index are the best nonlinear measures compared to other nonlinear features <ns0:ref type='bibr' target='#b26'>(Mormann et al., 2005)</ns0:ref>. Moreover, the differential entropy with the cumulative sum (CUSUM) procedure has been applied to predict seizures and shows 87.5% sensitivity, a 0.28 per hour false prediction rate and a 25 minute average prediction time <ns0:ref type='bibr'>(Zandi et al., 2009)</ns0:ref>. 
In other studies, the permutation entropy (PE) method has been used to extract features, and combined with a support vector machine (SVM) classification method, 94% sensitivity, a 0.111 per hour false prediction rate, and 63.93 minutes of the average prediction time were shown <ns0:ref type='bibr'>(Yang et al., 2018)</ns0:ref>. There are different types of methods for measuring time series complexity, e.g., entropies <ns0:ref type='bibr' target='#b10'>(Coifman &amp; Wickerhauser, 1992)</ns0:ref>, fractal dimensions <ns0:ref type='bibr' target='#b24'>(Mashiah et al., 2008)</ns0:ref>, and Lyapunov exponents <ns0:ref type='bibr' target='#b29'>(Rosenstein et al., 1993)</ns0:ref>. However, entropy calculation becomes more interesting in the neuroscience field because of the nonstationary features of the EEG signals. Entropy is a method that can be used to distinguish the regular, chaotic, and random behavior of a time series by measuring complexity <ns0:ref type='bibr' target='#b27'>(Palu&#353;, 1998)</ns0:ref>. Moreover, the use of entropy combined with a Monte Carlo tree search (MCTS) process is the most effective method to addredd the container loading problem <ns0:ref type='bibr' target='#b6'>(Cant et al., 2018)</ns0:ref>; therefore, entropy is a method that can be used to measure disorder or irregularities in a wide range of applications <ns0:ref type='bibr' target='#b18'>(Howedi et al., 2020)</ns0:ref>. Additionally, EEG signals from epileptic patients can be classified into three different states: interictal state, preictal state, and ictal state (see Fig. <ns0:ref type='figure'>1</ns0:ref>). The first state refers to the time between seizures, the second state is the time period just before the seizure arrives, and the last state is the seizure period <ns0:ref type='bibr' target='#b9'>(Chiang et al., 2011)</ns0:ref>. In previous work, a new entropy method called modified-distribution entropy (mDistEn) was proposed, and this method successfully detects the different states of epileptic EEG signals by calculating the complexity of the signals <ns0:ref type='bibr' target='#b4'>(Aung &amp; Wongsawat, 2020)</ns0:ref>. Moreover, an effective coarse-grained calculation was added to the entropy method, which becomes multiscale modified-distribution entropy (M-mDistEn) <ns0:ref type='bibr' target='#b5'>(Aung &amp; Wongsawat, 2021)</ns0:ref>. The purpose of this multiscale method is to detect the various scales (frequencies) of the EEG signals; therefore, this method is usable for detecting motion artifacts. The main difference between the common entropy and the multiscale entropy is the evaluation of time series coarsegrained entropy to quantify the interdependency between entropy and scales <ns0:ref type='bibr' target='#b11'>(Costa et al., 2002)</ns0:ref>. However, there are some limitations to multiscale entropy because it is designed for scalar time series analysis, and it is not suitable for accurately reflecting the complexity of multivariate time series in complex systems <ns0:ref type='bibr'>(Zhang &amp; Shang, 2019)</ns0:ref>. The advantages of multivariate entropy can overcome the shortcomings of multiscale entropy, including evaluating within-and cross-channel dependencies in multiple data channels, assessing of the underlying dynamical richness of multichannel observations, and more degrees of freedom in the analysis than those of standard multiscale entropy <ns0:ref type='bibr' target='#b3'>(Ahmed &amp; Mandic, 2011)</ns0:ref>. 
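To make the coarse-graining step behind multiscale (and multivariate multiscale) entropy concrete, a small illustrative sketch is given below. It is not taken from the cited works: each channel of a multivariate series is simply averaged over non-overlapping windows whose length equals the scale factor, and an entropy measure is then evaluated on the coarse-grained series.

import numpy as np

def coarse_grain(x, scale):
    # Average a (channels x samples) array over non-overlapping windows of length `scale`.
    x = np.asarray(x, dtype=float)
    n_channels, n_samples = x.shape
    n_windows = n_samples // scale
    trimmed = x[:, :n_windows * scale]
    return trimmed.reshape(n_channels, n_windows, scale).mean(axis=2)

# Example: a 3-channel random series coarse-grained at scale 5.
rng = np.random.default_rng(0)
signal = rng.standard_normal((3, 400))
print(coarse_grain(signal, 5).shape)   # (3, 80)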
For the reasons outlined above, entropy can distinguish the different states of epileptic EEG signals, and therefore, the prediction of epilepsy is possible according to numerous experiments <ns0:ref type='bibr'>(Yang et al., 2018)</ns0:ref>. By exploring the advantages of multivariate methods and the previous entropy methods (mDistEn and M-mDistEn), a new method called MM-mDistEn is proposed; this new method provides the crucial features extracted from epileptic EEG signals and feeds these features to ANNs <ns0:ref type='bibr' target='#b32'>(Siddique &amp; Adeli, 2013)</ns0:ref> in a seizure prediction system. The proposed system also shows improved results in all performance measures; thus, it may be an alternative method for helping epileptic patients predict seizures before they start.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>A detailed explanation of the calculation of the proposed entropy method, MM-mDistEn, is given in this section. The classification of epilepsy is performed using an ANN, and the step-by-step procedure is described after the explanation of the parameter selection. In this paper, a public dataset is used for the experimental data analysis and is introduced in the next subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset description</ns0:head><ns0:p>The signals used for analysis in this paper are from the public PhysioNet database <ns0:ref type='bibr' target='#b14'>(Goldberger et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b31'>Shoeb, 2009)</ns0:ref>, where the CHB-MIT dataset of EEG signals with seizure events is used for data analysis. The dataset is freely downloadable via the link provided in the reference section <ns0:ref type='bibr' target='#b7'>(CHB-MIT, 2009)</ns0:ref>. Data were collected as previously described in <ns0:ref type='bibr'>(Daoud &amp; Bayoumi, 2019)</ns0:ref> and include long-term scalp EEG recordings from pediatric subjects with intractable seizures. Recordings, grouped into 23 cases, were collected from 22 subjects (5 males, ages 3-22; and 17 females, ages 1.5-19), and these subjects were monitored for up to several days following withdrawal of antiseizure medication to characterize their seizures and evaluate their candidacy for surgical intervention <ns0:ref type='bibr' target='#b31'>(Shoeb, 2009)</ns0:ref>. Most files contain 23 EEG signals (24 or 26 in a few cases). The international 10-20 system of EEG electrode positions was used, and the sampling rate is 256 samples per second with 16-bit resolution <ns0:ref type='bibr' target='#b14'>(Goldberger et al., 2000)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Multivariate multiscale modified-distribution entropy</ns0:head><ns0:p>The new method, MM-mDistEn, is used to calculate the entropy of the data multidimensionally. Three steps are required to calculate the entropy values, as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. The algorithm is as follows:</ns0:p><ns0:p>Step 1. Multivariate time series. First, the multivariate time series is constructed from the given time series data. The input multichannel EEG is denoted as x_{c,i}, where c is the channel index (1, 2, ..., C) and i is the sample index in each channel (1, 2, ..., N).</ns0:p><ns0:p>Step 2.
Coarse-graining process. The coarse-grained time series can be calculated from the multivariate time series according to the scale factor, and the equation can be expressed as</ns0:p><ns0:formula xml:id='formula_0'>g^{s}_{c,j} = \frac{1}{s} \sum_{i=(j-1)s+1}^{j \cdot s} x_{c,i}, \qquad (1 \le j \le N_s),</ns0:formula><ns0:p>where g^{s}_{c,j} is the multivariate coarse-grained time series, s is the scale factor, and N_s = N/s.</ns0:p><ns0:p>Step 3. Calculate MM-mDistEn. Phase-space reconstruction is performed before the calculation of the entropy values, and the reconstruction is as follows:</ns0:p><ns0:formula xml:id='formula_2'>M^{s}(j) = \big[\, g^{s}_{c,\, j+(k-1)\tau} \,\big]_{\, c = 1,\ldots,C;\; k = 1,\ldots,m}, \qquad (1 \le j \le N_s),</ns0:formula><ns0:p>i.e., the C x m matrix whose c-th row is (g^{s}_{c,j}, g^{s}_{c,j+\tau}, \ldots, g^{s}_{c,j+(m-1)\tau}), where m is the embedding dimension and τ is the time delay. For the current study, m = 3 and τ = 1 are used (more information is available in the parameter selection subsection below). The proposed method, MM-mDistEn, which is implemented based on distribution entropy, adds two threshold parameters, r and n, to the existing parameters; r is set by multiplying the standard deviation of all data values by 0.2, and n is set to 2 <ns0:ref type='bibr' target='#b4'>(Aung &amp; Wongsawat, 2020)</ns0:ref>. For a given multivariate coarse-grained time series:</ns0:p><ns0:p>(i) Create the matrix X^{s}(j) in terms of M^{s}(j) by</ns0:p><ns0:formula xml:id='formula_3'>X^{s}(j) = \left[\, M^{s}(j),\; M^{s}(j+1),\; \ldots,\; M^{s}(j+(m-1)\tau) \,\right], \qquad (1 \le j \le N_s - (m-1)).</ns0:formula><ns0:p>(ii) A distance matrix D^{s}_{ij} between X^{s}(i) and X^{s}(j), (1 \le i, j \le N_s - (m-1), i \ne j), is computed using the Euclidean method; each entry is then divided by r and raised to the power n (with n = 2):</ns0:p><ns0:formula xml:id='formula_4'>D^{s}_{ij} = \left( \frac{D^{s}_{ij}}{r} \right)^{n}.</ns0:formula><ns0:p>(iii) After obtaining D^{s}_{ij}, the empirical probability density function (ePDF) is calculated from D^{s}_{ij} of the previous step using the histogram approach with bin number B. The probability of bin t is denoted P_t, where t = 1, 2, 3, ..., B.</ns0:p><ns0:p>(iv) MM-mDistEn is then calculated from the distance matrix D^{s}_{ij} by the following equation:</ns0:p><ns0:formula xml:id='formula_7'>\mathrm{MM\mbox{-}mDistEn}(m, \tau, r, n, B, s) = -\frac{1}{\log_2(B)} \sum_{t=1}^{B} P_t(D^{s}_{ij}) \log_2\!\left[ P_t(D^{s}_{ij}) \right], \qquad (1 \le i, j \le N_s - (m-1),\; i \ne j).</ns0:formula>
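The steps above can be summarized in a compact numerical sketch. This is an illustrative reimplementation written in Python/NumPy from the description in this section, not the authors' code; details such as the stacking of the embedding matrices (written here for τ = 1), the histogram binning, and the handling of N_s are assumptions.

import numpy as np
from itertools import combinations

def mm_mdist_en(x, m=3, tau=1, n=2, B=64, scale=1, r=None):
    # x: array of shape (channels, samples); returns the MM-mDistEn value at one scale.
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)                      # tolerance: 0.2 x standard deviation of all values

    # Step 2: coarse-grain each channel over non-overlapping windows of length `scale`.
    C, N = x.shape
    Ns = N // scale
    g = x[:, :Ns * scale].reshape(C, Ns, scale).mean(axis=2)

    # Step 3: phase-space reconstruction M^s(j), then X^s(j) built from m successive M^s matrices.
    last = Ns - (m - 1) * tau
    M = np.stack([g[:, j:j + (m - 1) * tau + 1:tau] for j in range(last)])   # (last, C, m)
    J = last - (m - 1)
    X = np.stack([M[j:j + m].reshape(-1) for j in range(J)])                 # flattened X^s(j)

    # Euclidean distances between all pairs, normalized by r and raised to the power n.
    d = np.array([np.linalg.norm(X[i] - X[j]) for i, j in combinations(range(J), 2)])
    d = (d / r) ** n

    # Empirical probability density of the distances with B bins, then normalized entropy.
    hist, _ = np.histogram(d, bins=B)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(B)

# Example on random 3-channel data at scale 2.
rng = np.random.default_rng(1)
print(mm_mdist_en(rng.standard_normal((3, 400)), scale=2))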
<ns0:div><ns0:head>Parameter selection</ns0:head><ns0:p>MM-mDistEn uses predefined values for its parameters; six parameters are required to compute the entropy values. First, the time delay τ and the embedding dimension m are used for the phase-space reconstruction, with values of 1 and 3, respectively <ns0:ref type='bibr' target='#b23'>(Li et al., 2015b)</ns0:ref>. Next, the distance matrix D^{s}_{ij} is calculated using the parameters r and n, where r is the tolerance and n is the order of the function; both parameters enter the calculation of the proposed entropy. Large values of r and n can lead to noise influence, whereas small values can cause information loss; therefore, r is set equal to the standard deviation of the series multiplied by 0.2, and n is set to 2 <ns0:ref type='bibr' target='#b4'>(Aung &amp; Wongsawat, 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Chen et al., 2007)</ns0:ref>. When the ePDF is calculated, another parameter, the bin number B, is needed, and B is set to 64 for this estimation <ns0:ref type='bibr' target='#b20'>(Li et al., 2016)</ns0:ref>. The scale factor s is also needed for calculating the multivariate multiscale entropy values, and the scale values used in the data analysis range from 1 to 15 <ns0:ref type='bibr' target='#b2'>(Acharya et al., 2015)</ns0:ref>. Additionally, different values of these parameters are explored and shown in the figures in the results section.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification of epileptic seizures from the extracted features</ns0:head><ns0:p>In this paper, a multilayer perceptron (MLP), which is an ANN, is used for training and testing the data. First, feature calculation is performed using the MM-mDistEn method before these features are input to the ANN. The neural network is then implemented using TensorFlow. The features are imported into the environment for calculation and separated into input data and target data, and these data are split into two sets: the preictal period for the test set and the interictal period for training. Twelve units are used in each of the first and second hidden layers of the neural network. The model was trained with backpropagation and optimized with the RMSprop algorithm <ns0:ref type='bibr' target='#b12'>(Daoud &amp; Bayoumi, 2019)</ns0:ref>.</ns0:p><ns0:p>The loss function used in this model is the binary cross entropy <ns0:ref type='bibr' target='#b12'>(Daoud &amp; Bayoumi, 2019)</ns0:ref>. The rectified linear unit (ReLU) activation function <ns0:ref type='bibr' target='#b17'>(Hahnloser et al., 2000)</ns0:ref> is used for the hidden layers to add nonlinearity and improve robustness to noise in the input data. The softmax activation function is selected for the output layer to classify the multiclass outputs: the interictal, preictal, and ictal states of the epileptic EEG signals <ns0:ref type='bibr' target='#b33'>(Usman et al., 2020)</ns0:ref>. The networks are trained individually for each of the 24 subjects. Finally, the results are shown in the results section.</ns0:p></ns0:div> <ns0:div><ns0:head>Seizure alarm system</ns0:head><ns0:p>Mean values are calculated once the predicted features (P_F) are generated by the ANN, and those values are used in the decision-making process of the seizure alarm system. The seizure features (S_F) are selected from the ictal periods of the EEG signals, and the mean values of those periods are calculated. The mean values of S_F and P_F are then compared, i.e., if the former are greater than or equal to the latter, the alarm signal is triggered for an upcoming seizure event. The flow chart for the seizure alarm system is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
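A minimal sketch of a classifier and alarm rule along these lines is given below. It follows the architecture described above (two 12-unit ReLU hidden layers, RMSprop optimization, a softmax output over the three states) but is not the authors' implementation: the feature matrix and labels are hypothetical, and categorical cross entropy is used here for the three-class softmax output, whereas the text cites binary cross entropy.

import numpy as np
import tensorflow as tf

# Hypothetical MM-mDistEn feature matrix (windows x features) and state labels
# (0 = interictal, 1 = preictal, 2 = ictal).
rng = np.random.default_rng(2)
features = rng.random((500, 15)).astype("float32")    # e.g. one feature per scale factor
labels = rng.integers(0, 3, size=500)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(12, activation="relu", input_shape=(features.shape[1],)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=10, batch_size=32, verbose=0)

# Alarm rule sketched from Fig. 3: compare the mean of the features predicted for new
# windows (P_F) against the mean of the seizure features (S_F) taken from ictal periods.
# Using the predicted ictal-class score as P_F and a fixed S_F mean are assumptions here.
p_f = model.predict(features[:32], verbose=0)[:, 2]
s_f_mean = 0.5
if p_f.mean() >= s_f_mean:
    print("Alarm: upcoming seizure predicted")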
<ns0:div><ns0:head>Results</ns0:head><ns0:p>In this section, two different analyses are conducted with two different datasets. The simulation data are used to test the consistency of the proposed entropy under different parameter values, while the experimental dataset is used for the subsequent classification and epilepsy prediction. A detailed explanation of the results of these two analyses is given in the following subsections.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of simulation signals</ns0:head><ns0:p>The proposed entropy method is analyzed with two different datasets: simulation data and experimental data. For the first dataset, three different signal types, a sine wave (50 Hz frequency), a chaotic series, and a Gaussian series, are used <ns0:ref type='bibr' target='#b22'>(Li et al., 2015a)</ns0:ref>, and the length of each signal is 400 samples. These three series are then evaluated as a function of the scale factor for different embedding dimensions, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>. In Fig. <ns0:ref type='figure'>4</ns0:ref> (a) and (c), the three series are plotted according to their entropy values, but the chaotic series and the Gaussian series overlap with each other for embedding parameter values of 2 and 4. However, the entropy values of the three series are well separated over all scale factors (1 to 15) for an embedding parameter value of m = 3, as visualized in <ns0:ref type='bibr'>Fig. 4 (b)</ns0:ref>. Therefore, the value of 3 is chosen for the embedding parameter, and a detailed explanation of the parameters used in the calculation of MM-mDistEn is given in the parameter selection section. Although MM-mDistEn depends on the bin number B and the time delay τ, the three series remain distinguishable over bin values from 2^0 to 2^9 (Fig. <ns0:ref type='figure'>5</ns0:ref> (a)), and the entropy values decrease as τ increases from 1 to 10 (Fig. <ns0:ref type='figure'>5</ns0:ref> (b)). In Fig. <ns0:ref type='figure'>5</ns0:ref> (c), the tolerance values r used in the entropy method are multiplied by the standard deviation of the series and range from 0.05 to 0.9 in steps of 0.05; the plotted MM-mDistEn values are ordered from highest to lowest as the chaotic, Gaussian, and sine series.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of the experimental data</ns0:head><ns0:p>The values of MM-mDistEn are plotted against the scale factors (1 to 15) and are shown in Fig. <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>Instead of plotting all datasets, the entropy values of the epileptic EEG signals from four subjects are plotted; the entropy values differ across states, with the highest values for the ictal state and the lowest values for the preictal state (see Fig. <ns0:ref type='figure'>6</ns0:ref> (a), (c), and (d)). The highest complexity is found for ictal EEG signals, which exhibit deterministic chaotic dynamics, compared with normal EEG signals, which exhibit stochastic dynamics <ns0:ref type='bibr' target='#b23'>(Li et al., 2015b)</ns0:ref>. Although a different order is seen in Fig. <ns0:ref type='figure'>6</ns0:ref> (b), MM-mDistEn can still distinguish the three states of the epileptic EEG signals.</ns0:p><ns0:p>The performance is measured by calculating the accuracy, sensitivity, specificity, and AUC for all the subjects from the CHB-MIT dataset <ns0:ref type='bibr' target='#b21'>(Li et al., 2018)</ns0:ref>. It is clearly shown that the proposed extraction method combined with an ANN achieves an average performance of 98.66% accuracy, 91.82% sensitivity, 99.11% specificity, and an AUC value of 0.84 (see Table <ns0:ref type='table'>1</ns0:ref>).
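For completeness, the classification measures reported here follow the usual confusion-matrix definitions; a brief illustrative sketch with hypothetical counts is given below. The AUC is computed separately from the classifier's continuous scores, e.g., with sklearn.metrics.roc_auc_score.

def classification_measures(tp, tn, fp, fn):
    # Accuracy, sensitivity, and specificity from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)    # true positive rate
    specificity = tn / (tn + fp)    # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for one subject's test windows.
print(classification_measures(tp=90, tn=880, fp=8, fn=10))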
Moreover, the minimum scores of the performance measures are still effective, with an accuracy of 95.2%, a sensitivity of 83%, a specificity of 93.67%, and an AUC value of 0.75 (among all 24 subjects, in Fig. <ns0:ref type='figure'>7</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Performance measures for predicting epileptic EEG signals</ns0:head><ns0:p>The performance of the proposed prediction algorithm is calculated based on three factors: the false alarm rate, the prediction time, and the prediction rate <ns0:ref type='bibr' target='#b1'>(Aarabi &amp; He, 2014)</ns0:ref>. The performance values for all cases of the 24 subjects are shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. The prediction time is defined as the time between the instant at which a seizure can be predicted and the actual beginning of the seizure; 1-hour (3600 second) long epileptic EEG signals are used for the prediction system (see Fig. <ns0:ref type='figure'>9</ns0:ref>). An average prediction time (T_avg) of 26.73 minutes is achieved among all cases from the 24 subjects. The proposed method with an ANN achieves an average false alarm rate of under 0.25 per hour, an average prediction rate of over 70%, and an average training time of fewer than 3.5 minutes (see Fig. <ns0:ref type='figure'>8</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>According to previous works <ns0:ref type='bibr' target='#b4'>(Aung &amp; Wongsawat, 2020)</ns0:ref>, there are some limitations to multivariate time series analysis, and MM-mDistEn is therefore proposed to overcome these limitations. First, the phase-space reconstruction and the estimation of the probability density between vectors provide hidden complex information. The multivariate time series property of MM-mDistEn gives more understandable information within the multichannel data. Moreover, the data are also analyzed at different scales (frequencies), so that information can be observed over different scale ranges. As described in the results section, the proposed method was tested with two different analyses: simulation data and experimental data. Testing with the simulation data is used to check whether the proposed entropy has strong consistency and low dependency on the preset parameters. Regarding the experimental data, the different performance measures are provided for the proposed entropy combined with an ANN to classify the three states of the seizure from epileptic EEG signals. Additionally, the postprocessing of the seizure alarm system helps patients predict upcoming seizures before they occur. The different performance measures of the proposed MM-mDistEn are illustrated in Fig. <ns0:ref type='figure'>7</ns0:ref>, and the performance evaluation for the prediction of epileptic EEG signals is described in Fig. <ns0:ref type='figure'>8</ns0:ref>. A summary of the performance comparison between the proposed method and existing prediction methods that used the same dataset is shown in Table <ns0:ref type='table'>1</ns0:ref>. The proposed method obtains a higher accuracy score than these methods, excluding the method using a convolutional neural network (CNN) <ns0:ref type='bibr' target='#b15'>(Gómez et al., 2020)</ns0:ref>, while its training time is shorter than that of the other methods, such as M-mDistEn with an ANN, PE with an ANN, a deep convolutional neural network (DCNN) with MLP, and MLP <ns0:ref type='bibr' target='#b12'>(Daoud &amp; Bayoumi, 2019)</ns0:ref>.
The sensitivity of the proposed method obtains a better score than other methods, but it is slightly lower than DCNN with MLP and CNN with SVM. Although the specificity of the proposed entropy is marginally lower than that of the method using CNN, the best rate for false alarms is obtained by the proposed method. The false alarm rate is also crucial for the prediction of epilepsy, and it is the smallest rate among these methods. An important factor in the prediction of epilepsy is the prediction time because it enables the delivery of warning signals to patients in a timely manner. The proposed combined system of MM-mDistEn and an ANN can send an alarm on average 26.73 minutes before the actual seizure starts according to the results from the experiments in all 24 subjects; therefore, the prediction time of the proposed method is earlier than that with the method using a CNN with SVM <ns0:ref type='bibr' target='#b33'>(Usman et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, a new feature extraction method, called MM-mDistEn, was proposed for predicting of seizures through combination with an ANN. The proposed method efficiently explores the information from multiple variables with multiple time scales and analyzes the complexity of that time series. Two different analyses were performed: a simulation dataset is used to prove the existence of consistency, and an experimental dataset is applied to distinguish of the different states of epileptic EEG signals. The performance measures of the proposed method were provided for the classification of the interictal, preictal, and ictal states. The advantages of multivariate robust entropy provide an efficient method for extracting features from multichannel EEG recordings. Moreover, the seizure alarm system was added as postprocessing step, which can warn patients about an oncoming seizure before its onset by providing an adequate prediction of the time between the preictal and ictal states. The proposed combination method will only require an EEG acquisition system for real-time usage, and it can become useful not only for clinical applications but also for usage outside of the hospital for epilepsy patients. Therefore, a portable version for seizure prediction can become a reality by using the proposed method. Future studies are needed for real-time applications to detect more complex behaviors from the different EEG datasets. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 1 shows a one-hour recording of epileptic EEG signals from subject no.1. The different states of interictal, preictal, and ictal of EEG signals from the frontal area, occipital area, and different area of the brain are clearly shown in Fig. 1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>three different series have upward trends regarding the bin values, B (2 0 to 2 9 ), and distinguish these three series, as shown in Fig.5 (a). The time delay, &#964; values range from 1 to 10, and the MM-mDistEn values decrease with increasing parameter values (seeFig. 
5 (b)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>TN are the number of true positives and the number of true negatives, i.e., the classifier correctly labels the actual number of ictal and normal EEG signals and FP and FN are the number of false positive and false negatives, these two values indicate the number of ictal and normal signals that are incorrectly categorized by the classifier</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>epileptiform activity in EEG by combining PCA with ApEn. Cognitive neurodynamics 4:233-240. World Health Organization (WHO). 2019, Epilepsy. Available at https://www.who.int/newsroom/fact-sheets/detail/epilepsy (accessed 9 May 2021) PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63740:1:1:NEW 2 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,42.52,199.12,525.00,204.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,42.52,178.87,525.00,262.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,178.87,525.00,252.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,199.12,525.00,192.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,226.12,525.00,196.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,199.12,525.00,432.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63740:1:1:NEW 2 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
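For reference, the accuracy, sensitivity, and specificity reported above follow directly from the confusion-matrix counts (TP, TN, FP, FN) described in the figure captions. The short R sketch below is a minimal illustration of these formulas only; it is not the authors' code, and the example counts are hypothetical.

```r
# Minimal sketch: classification performance measures from confusion-matrix
# counts, following the TP/TN/FP/FN definitions given in the figure caption.
perf_measures <- function(TP, TN, FP, FN) {
  c(accuracy    = (TP + TN) / (TP + TN + FP + FN),
    sensitivity = TP / (TP + FN),   # true positive rate
    specificity = TN / (TN + FP))   # true negative rate
}

# Hypothetical counts, for illustration only.
round(perf_measures(TP = 83, TN = 148, FP = 10, FN = 17), 4)
```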
" Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, 25/25 Phuttamonthon 4 Road, Salaya, Nakhon Pathom 73170, Thailand. September 2, 2021 Dear Editors, We thank the reviewers for their generous comments on the manuscript and allowing a revision of our manuscript, with an opportunity to address the reviewers’ comments. We are uploading our point-by-point response to the comments (below) from the reviewers, an updated manuscript with tracked changes, and a clean updated manuscript without tracked changes. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Yodchanan Wongsawat Associate Professor Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Thailand. On behalf of all authors. Reviewer: Sujatha Radhakrishnan 1. Basic reporting Give the abbreviation for all the terms. Contribution and future work to be detailed. Check the modules of Figures 2 and 3. Include recent papers in the references. Response: We would like to express to our appreciation for the Reviewer’s careful reading of the manuscript. We have corrected the abbreviation of all terms. In addition, we have also revised our English with AJE. The English Editing Certificate is also attached. According to the reviewer’s suggestion, we have added the contribution and future work in the Abstract and Conclusion. We have corrected and changed with high-resolution figures. Please see Figure 2 and 3 in the tracked changes manuscript file. The three recent papers are also included in the Introduction and Discussion. 2. Experimental design Table 1. Comparison between different methods - For the top 3 references not given, the first one is the proposed work but the second (M-mDistEn) is published, and the accuracy mentioned in this table mismatch with the real paper. Also, give the reference for the PE. Response: Thank you for the comments. The M-mDistEn from our previous paper used different dataset with this current paper, and therefore the performance measures including accuracy, specificity, and sensitivity are no longer the same. We also re-implement the permutation entropy (PE) and ANN by ourselves to further compare with our proposed algorithm. One more reference from Gómez et al. 2020 is also added according to the reviewer’s recommendation. Please see Table 1 and the discussion in the tracked changes manuscript file. 3. Figure 3, Seizure alarm system - Features extraction using module should be MM-mDistEN - Check and give the optimized flow diagram. Response: We have corrected and improved the flow diagram (Figure 3) in the tracked changes manuscript file. 4. Validity of the findings Justify - Performance measures given in the recent paper is higher. Gómez, C., Arbeláez, P., Navarrete, M., Alvarado-Rojas, C., Le van Quyen, M., & Valderrama, M. (2020). Automatic seizure detection based on imaged-EEG signals through fully convolutional networks. Scientific reports, 10(1), 1-13. Response: Thank you for your suggestions for improving our manuscript. We updated the manuscript by adding the reviewer’s recommended paper and please see Table 1 in the tracked changes manuscript file and the additional discussion in the Discussion Section. 
“The proposed method obtains a higher accuracy score among these methods, excluding the method using a convolutional neural network (CNN) (Gómez et al., 2020), while the training time is shorter than that of the other methods such as M-mDistEn with an ANN, PE with an ANN, a deep convolutional neural network (DCNN) with MLP and MLP (Daoud & Bayoumi, 2019). The sensitivity of the proposed method obtains a better score than other methods, but it is slightly lower than DCNN with MLP and CNN with SVM. Although the specificity of the proposed entropy is marginally lower than that of the method using CNN, the best rate for false alarms is obtained by the proposed method.” Reviewer 2 1. Basic reporting This paper seems interesting. The study of this work is missing in literature survey. The paper is well organized, and the work is quite interesting and significant. (Weak accept). Technically Paper is satisfactory. Response: We would like to express to our appreciation for the Reviewer’s careful reading of the manuscript. The three recent papers are also included in the Introduction and Discussion. 2. Experimental design -It would be better if authors can do some more analysis with discussion. Response: Thank you for your suggestions for improving our manuscript. One more reference from Gómez et al. 2020 is also added for additional analysis according to the reviewer’s recommendation. Please see Table 1 and the discussion in the tracked changes manuscript file (lines 334-336 and 339-343). 3. Validity of the findings - The abstract is not well written, author is suggested to detail more in abstract, and must mention the novelty of the work with key findings. Response: Thank you for your suggestions to improve our manuscript. We have modified the abstract of the manuscript and please see the lines 28-36 in the tracked changes manuscript file. In addition, we have also revised our English with AJE. The English Editing Certificate is also attached. 4. Additional comments no comments Reviewer: Ahmad Lotfi 1. Basic reporting This paper is addressing an important issue. The research question and why this research is conducted is very clear. However, the choice of the research method is unclear. There are various entropy measures and all of them perform differently. Depends on the choice of entropy the parameters chosen for a specific entropy; the results will vary. How is this justified? The results are not compared with any other techniques. A public data set is used; hence, it should be possible to compare the results with other prediction techniques. Response: Thank you for your suggestions for improving our manuscript. Clarification on parameter selection is added in the Subsection Parameter Selection. To justify the proposed algorithm (MM-mDistEn+ANN) and compare the performance with other methods, we used our previous methods (M-mDistEn+ANN). We also re-implement the permutation entropy (PE) and ANN by ourselves to further compare with our proposed algorithm. Additional performances from (Daoud & Bayoumi 2019) and (Usman et al. 2020) are also used to justify the proposed performance. In addition to the reviewer’s comment, one more recent reference from Gómez et al. 2020 is also added. Please see Table 1 and the discussion in the revised manuscript file. 2. Experimental design As mentioned above, the project aims, and objectives are very clear, and the research question is a valid question. My concern is that the paper is not very rigorous. The literature review is rather short. 
For example, a comprehensive review of entropy measures is represented in 'An entropy-based approach for anomaly detection in activities of daily living in the presence of a visitor' or application of Monte Carlo is described in 'An entropy-guided Monte Carlo tree search approach for generating optimal container loading layouts.'. The paper should have a Related work section where a comprehensive review is presented. Response: Thank you for your suggestions for improving our manuscript. We have improved the literature review in the Introduction by including the suggested reference. Please see the lines 77-80 in the tracked changes manuscript file. 3. Validity of the findings The results are valid. Although, as mentioned above, some forms of comparison are required. The suggestion is to include some of the papers that used the same dataset and present their results briefly. Response: Thank you for your suggestions for improving our manuscript. According to the reviewer’s comment, one more recent reference from Gómez et al. 2020 is also added. Please see Table 1 and the discussion in the revised manuscript file. 4. The selection of six parameters for the entropy measure must be expanded. Response: Thank you for your suggestions for improving our manuscript. We have expanded the selection of six parameters for our proposed entropy and please see the lines 192-195 in the tracked changes manuscript file. 5. Additional comments Remove all references to 'we'. All sentences should be in a passive format. Response: The manuscript was rewritten to take into consideration Reviewer’s comment and please check the revised manuscript file. In addition, we have also revised our English with AJE. The English Editing Certificate is also attached. 6. Line 82, Likewise. I do not believe this is comparable. EEG information should be represented separately. Response: We have improved the manuscript according to the reviewer’s comments and please see the lines 81-82 in the tracked changes manuscript file. 7. Line 112, provide a brief statement before section 'Dataset description' is presented. Response: We have improved the manuscript according to the reviewer’s comments and please see the lines 115-119 in the tracked changes manuscript file. 8. Line 167-168, the selection of six parameters must be expanded. Response: We have improved the manuscript according to the reviewer’s comments and please see the lines 192-195 in the tracked changes manuscript file. 9. Line 167-175, USE: Line 167-168, ...First, time delay, \tau, and dimension, m, ... (use a comma instead of brackets to introduce the parameters. Response: We have improved the manuscript according to the reviewer’s comments and please see the lines 189-201 in the tracked changes manuscript file. 10. Line 211, In Figure 5(a) and Fig 5(c) ... Response: We have improved the manuscript according to the reviewer’s comments and please see Figure 5 in the revised manuscript file. 11. Line 251, Tavg is not included in the definition. Response: We have improved the manuscript according to the reviewer’s comments and please see the line 302 in the tracked changes manuscript file. 12. Figure 2 is not readable. Figure 3 is not readable. Figure 4 does not add value and should be removed. Response: We have improved the manuscript according to the reviewer’s comment and changed with the high-resolution figures. Please see Figures 2 and 3 in the revised manuscript file and we also removed the Figure 4 from our manuscript file. "
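To make the alarm-based evaluation discussed in the paper and the responses above more concrete, the sketch below illustrates a generic thresholding rule for a seizure alarm together with the derived prediction time and false alarm rate. This is a hypothetical toy example, not the authors' implementation: the window length, probability threshold, persistence rule, and all numeric values are assumptions for illustration only.

```r
# Toy alarm postprocessing (hypothetical rule, not the authors' method):
# raise an alarm when the classifier's preictal probability stays above
# `threshold` for `persist` consecutive analysis windows.
raise_alarms <- function(p_preictal, threshold = 0.8, persist = 3) {
  run <- stats::filter(as.numeric(p_preictal > threshold), rep(1, persist), sides = 1)
  which(!is.na(run) & run == persist)
}

# Prediction time: gap between the first alarm before seizure onset and the onset.
# False alarm rate: alarms raised during the interictal part, per recorded hour.
evaluate_alarms <- function(alarms, onset, win_sec, interictal_end, hours) {
  early <- alarms[alarms <= onset]
  list(prediction_time_min = if (length(early)) (onset - min(early)) * win_sec / 60 else NA,
       false_alarm_rate    = sum(alarms <= interictal_end) / hours)
}

# Hypothetical example: 5-second windows, interictal up to window 480, onset at window 660.
p <- c(rep(0.2, 600), rep(0.9, 120))
evaluate_alarms(raise_alarms(p), onset = 660, win_sec = 5, interictal_end = 480, hours = 1)
```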
Here is a paper. Please give your review comments after reading it.
251
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Forecasting the time of forthcoming pandemic reduces the impact of diseases by taking precautionary steps such as public health messaging and raising the consciousness of doctors. With the continuous and rapid increase in the cumulative incidence of COVID-19, statistical and outbreak prediction models including various machine learning (ML) models are being used by the research community to track and predict the trend of the epidemic, and also in developing appropriate strategies to combat and manage its spread. Methods. In this paper, we present a comparative analysis of various ML approaches including Support Vector Machine, Random Forest, K-Nearest Neighbor and Artificial Neural Network in predicting the COVID-19 outbreak in the epidemiological domain. We first apply autoregressive distributed lag (ARDL) method to identify and model the short and long-run relationships of the time-series COVID-19 datasets. That is, we determine the lags between a response variable and its respective explanatory time series variables as independent variables. Then, the resulting significant variables concerning their lags are used in the regression model selected by the ARDL for predicting and forecasting the trend of the epidemic. Results. Statistical measures that are, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) are used for model accuracy. The values of MAPE for the best-selected models for confirmed, recovered and deaths cases are 0.003, 0.006 and 0.115 respectively, which falls under the category of highly accurate forecasts. In addition, we computed fifteen days ahead forecast for the daily deaths, recovered, and confirm patients and the cases fluctuated across time in all aspects. Besides, the results reveal the advantages of ML algorithms for supporting the decision-making of evolving short-term policies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The outbreak of the novel coronavirus disease in 2019 (COVID-19) has emerged as one of the most devastating respiratory diseases since the 1918 HIN1 influenzas pandemic, infecting millions of people globally <ns0:ref type='bibr' target='#b24'>(Tuli et al. 2020)</ns0:ref>. The cumulative incidence of the virus is continually and rapidly increasing globally. At the early stage of the outbreak, it is important to have a clear understanding of the disease transmission and its dynamic progression, so that relevant agencies and organizations can make informed decisions and enforce appropriate control measures. Generally, capturing the transmission dynamics of a disease over time can provide insights into its progression, and show whether the outbreak control measures are effective and able to reduce the impact of the disease on a community <ns0:ref type='bibr'>(Kucharski et al. 2020</ns0:ref>). Access to real-time data and effective application of outbreak prediction or forecasting models are central to obtaining insightful information regarding the transmission dynamics of the disease and its consequences. Moreover, every outbreak has its unique transmission characteristics that are different from the other outbreaks, which raises the question of how standards prediction models would perform in delivering accurate results. 
In addition, various factors including the number of known and unknown variables, differences in population/behavioural complexity in various geopolitical areas, and the variations in containment strategies increase the uncertainty of prediction models <ns0:ref type='bibr' target='#b2'>(Ardabili et al. 2020</ns0:ref>). As a result, it is challenging for standard epidemiological models such as Susceptible-Infected-Recovered (SIR) to provide reliable results for long-term predictions. Therefore, it is important to not only study the relationship between the components of the outbreak datasets but also evaluate the effectiveness of the common disease prediction models. In recent months, there have been a handful of works that try to understand the spread of COVID-19, especially using statistical approaches. For instance, Kucharski et al. explored a combination of stochastic transmission model and four datasets that captured the daily number of new cases, the daily number of new internationally exported cases, the proportion of infected passengers on evacuation flight and the number of new confirmed cases, to estimate the transmission dynamics of the disease over some time <ns0:ref type='bibr'>(Kucharski et al. 2020)</ns0:ref>. In another study, a machine learning-based model is applied to analyse and predict the growth of COVID-19 <ns0:ref type='bibr' target='#b24'>(Tuli et al. 2020)</ns0:ref>. The authors demonstrated the effectiveness of using iterative weighting for fitting Generalized Inverse Weibull distribution when developing a prediction solution. <ns0:ref type='bibr'>Lin et al.</ns0:ref>, presented a conceptual model designed for the COVID-19 epidemic with consideration of individual behavioural responses and engagements with the government, including the extension in holidays, restriction on travel, quarantine, and hospitalization <ns0:ref type='bibr' target='#b17'>(Lin et al. 2020)</ns0:ref>. This work combined zoonotic transmission with the emigration pattern, and then estimate the future trends and the reporting proportion. The model gives promising insight into the trend of the COVID-19 outbreak, especially the impact of individual and government reactions or responses to the epidemic. The authors <ns0:ref type='bibr' target='#b1'>(Anastassopoulou et al. 2020)</ns0:ref> estimated the average values of the key epidemiological parameters including the per day case mortality, recovery ratios, and method to identify and model the short and the long-run relationships of the time-series COVID-19 datasets (confirmed, recovered and death cases). That is, we determine the lags between a response variable and its respective explanatory time series variables as independent variables. Then, the resulting significant variables concerning their lags are used in the regression model selected by the ARDL model for predicting and forecasting the trend and dynamics of the COVID-19. We evaluated the models using relevant accuracy and error metrics including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE).</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Data Source</ns0:head><ns0:p>We conducted our study based on the publicly accessible data of daily deaths, recovered and confirmed cases 5071, 85879 and 138053 respectively reported for all over the world from 22 nd January 2020 to 9 th June 2021 in Fig. <ns0:ref type='figure'>1</ns0:ref>. 
The data is available in the online repository GitHub (https://github.com/CSSEGISandData/COVID-19). We perform data processing, including the conversion of the data from a cumulative to a daily basis. This repository is for the COVID-19 visual dashboard operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE). They have aggregated data from sources such as the WHO, WorldoMeters, BNO News, the Washington State Department of Health and many more. The data contain the number of confirmed cases, recovered cases, and death cases for the globe. On this data, we attempted to forecast the key epidemiological parameters, that is, the number of upcoming daily new confirmed cases, deaths, and recoveries. The quantity of deaths, recovered and confirmed cases of individuals is expected to be much higher over time. Therefore, we have also derived the relationships between these variables and their past records (lags) by using the ARDL model.</ns0:p></ns0:div> <ns0:div><ns0:head>Autoregressive Distributive Lag Models</ns0:head><ns0:p>ARDL models relate a regressed series to k regressor series in a regression analysis. The inclusion of lags of the dependent series makes the model autoregressive. The lag order of the j-th independent series (the daily recovered and confirmed cases) is denoted by $p_j$, $j = 1, \ldots, n$, and the lag order of the dependent series is denoted by $q$, with lags $i = 0, 1, \ldots, m$. The ARDL model can be expressed as:</ns0:p><ns0:formula xml:id='formula_2'>y_t = \alpha_0 + \beta_1 y_{t-1} + \beta_2 y_{t-2} + \cdots + \beta_m y_{t-m} + \gamma_0 x_t + \gamma_1 x_{t-1} + \cdots + \gamma_n x_{t-n} + \delta_0 w_t + \delta_1 w_{t-1} + \cdots + \delta_n w_{t-n} + \varepsilon_t \quad (1)</ns0:formula><ns0:p>where $y_t$ denotes the number of daily deaths at time $t$, $\alpha_0$ represents the intercept term, $x_t$ and $w_t$ denote the two explanatory series (the daily recovered and confirmed cases), $\beta$, $\gamma$ and $\delta$ are the coefficients of the death, recovered, and confirmed cases, respectively, and $\varepsilon_t$ denotes the error term.</ns0:p><ns0:p>Eq. 1 can be further simplified and presented in Eq. 2:</ns0:p><ns0:formula xml:id='formula_4'>y_t = \alpha_0 + \sum_{i=1}^{m} \beta_i y_{t-i} + \sum_{j=0}^{n} \gamma_j x_{t-j} + \sum_{j=0}^{n} \delta_j w_{t-j} + \varepsilon_t \quad (2)</ns0:formula><ns0:p>The number of deaths, confirmed, and recovered cases of people is likely to be much higher with time. Therefore, the ARDL model relating the recovered and confirmed cases is shown in Eq. 3:</ns0:p><ns0:formula xml:id='formula_5'>x_t = \theta_0 + \gamma_1 x_{t-1} + \cdots + \gamma_p x_{t-p} + \delta_0 w_t + \delta_1 w_{t-1} + \cdots + \delta_n w_{t-n} + \varepsilon_t \quad (3)</ns0:formula><ns0:p>Similarly, the ARDL model for the confirmed and recovered cases is shown in Eq. 4:</ns0:p><ns0:formula xml:id='formula_6'>w_t = \vartheta_0 + \delta_1 w_{t-1} + \cdots + \delta_p w_{t-p} + \gamma_1 x_{t-1} + \cdots + \gamma_n x_{t-n} + \varepsilon_t \quad (4)</ns0:formula><ns0:p>Different criteria are used to select an optimal lag length. The authors in <ns0:ref type='bibr' target='#b5'>(Chandio et al. 2020)</ns0:ref> use the Akaike Information Criterion (AIC), and the authors in <ns0:ref type='bibr' target='#b9'>(Gayawan &amp; Ipinyomi 2009)</ns0:ref> compare AIC, SIC and the adjusted R-squared to select the optimal lag length. In this study, we use the adjusted R-squared and model parsimony criteria to select the optimal lag length. The call to the fitting function is simpler when the lag orders are the same; however, when the lag order differs between the dependent series and each independent series, the remove argument is used, which removes the lags that do not contribute to the model. Once the ARDL model specifies the significant coefficients of the dependent and independent variables, the RF, SVM, KNN, and ANN models are used to assess their accuracy and error rates. RF <ns0:ref type='bibr' target='#b4'>(Biau &amp; Scornet 2016)</ns0:ref>, SVM <ns0:ref type='bibr' target='#b16'>(Liang et al. 2018)</ns0:ref>, KNN <ns0:ref type='bibr' target='#b18'>(Mart&#237;nez et al. 2019)</ns0:ref> and ANN <ns0:ref type='bibr' target='#b11'>(Hu et al. 2018)</ns0:ref> time series models were applied to predict COVID-19.</ns0:p><ns0:p>To overcome the overfitting problem, we use 80% training and 20% testing parts. Random forest is one of the best-performing learning algorithms, and it requires only a small amount of parameter tuning. Generally, in time series analysis, Support Vector Regression (SVR) is used. In SVM, various kernel functions, such as the Gaussian radial basis function (GRBF), sigmoid and polynomial kernels, are used to map the input space into a higher-dimensional feature space. For SVM, we use the radial basis function (RBF) kernel. 
In the SVM model, the RBF kernel is defined as</ns0:p><ns0:formula xml:id='formula_7'>k_\gamma(y_i, y_j) = \exp(-\gamma \lVert y_i - y_j \rVert^2)</ns0:formula><ns0:p>When using RBF kernels, it is necessary to tune the model parameters to find their optimal values and reduce the overfitting problem. Therefore, we use a grid search with tenfold cross-validation on the training and testing parts, and their results are averaged. k-nearest neighbor (k-NN) predicts the response variable based on the nearest training points. It stores the training dataset instead of learning a discriminative function from the training data. k-NN is used for both classification and regression problems. Various techniques are used to select an optimal value of k and improve model accuracy, such as the maximum percentage accuracy graph, the elbow method, and loops over candidate values of k. Generally, the square root of n is used, and we utilized $k = \sqrt{n}$.</ns0:p><ns0:p>An ANN is a mathematical tool that has been widely used for classification and forecasting problems; it contains a predictor (input) layer, a response (output) layer, and hidden layers. Different combinations of hidden layers are evaluated to choose a better MLP network architecture. It is the hidden layers in ANN models that play an important role in many successful applications of neural networks. The ANN model is widely used in economic and financial studies <ns0:ref type='bibr' target='#b13'>(Huang et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b21'>Qi 1996)</ns0:ref>. The number of hidden layers depends upon the nature of the problem. The authors in <ns0:ref type='bibr' target='#b27'>(Zhang et al. 1998)</ns0:ref> used two hidden layers and found better model prediction accuracy. In the same way, the authors in <ns0:ref type='bibr' target='#b25'>(Xu et al. 2020)</ns0:ref> used $(2 \times k + 1)$ hidden nodes, where $k$ is the number of predictors (inputs).</ns0:p><ns0:p>For an optimal ANN result, a trial-and-error method is usually used to determine the number of hidden nodes, that is, searching for the architecture with the smallest MAPE among the candidate models <ns0:ref type='bibr' target='#b10'>(G&#252;ler &amp; &#220;beyli 2005)</ns0:ref>. Using this trial-and-error procedure with 10,000 iterations, we use 4 hidden layers with 8 neurons in the hidden layers for the daily death cases, and 2 hidden layers with 4 neurons in the hidden layers for the daily recovered cases.</ns0:p></ns0:div> <ns0:div><ns0:head>Forecast Evaluation Criterions</ns0:head><ns0:p>In this study, as the response variable is continuous, the forecasting capacity of the different machine learning approaches is evaluated using several criteria, including the mean error (ME), root mean square error (RMSE), mean absolute error (MAE), mean percentage error (MPE), mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (SMAPE).</ns0:p></ns0:div> <ns0:div><ns0:head>Software and packages</ns0:head><ns0:p>In the current study, we use R version 4.0.4 and the dLagM package, which implements the ARDL method <ns0:ref type='bibr' target='#b20'>(Pesaran et al. 2001)</ns0:ref>. dLagM takes the lag orders, the dataset, and the overall method, and creates the prerequisite lags and transformations for the specified models. One of the benefits of this approach is that the users are not required to specify the variation for the applied models. 
Which brings efficacy and value to researchers in various areas.</ns0:p><ns0:p>In this study, we used tseries, timeseries, zoo and window packages for the data.</ns0:p><ns0:p>In the same way, dLagM package in R for ARDL model. An orders and of the ARDL lag &#119901; &#119902; model are denoted by ARDL ( ), which has independent lags series and dependent lags series.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119901;,&#119902; &#119901; &#119902;</ns0:head><ns0:p>We use the packages, randomForest, forecast, caret, tiyverse, tsibble and purr for RF. the ntree is 500, mtry is p/3, where p is the number of features, sampsize is 70% and type is 'regression' utilized in the function. The other parameters are kept as default.</ns0:p><ns0:p>In this study, the library e1071 is used for SVM, the parameters cost=10 2 , gamma , and Manuscript to be reviewed The coefficient related to confirm cases and its first lag is highly significant at 0.5% level,</ns0:p><ns0:formula xml:id='formula_8'>(&#7527;) = 0.</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>&#119862; &#119905;</ns0:head><ns0:p>respectively. Similarly, the current death of the response variable (daily deaths of COVID-19),</ns0:p><ns0:p>&#119910; &#119905; are significant at the 0.5% level. In addition, the coefficient of recover cases (current) are also significant at 0.5% level, respectively. Overall, the model is highly significant at the 0.5% level with a p-value smaller than 2.2&#215;10^ (-15) with the adjusted R-squared equal to 85.2% and the alpha value <ns0:ref type='bibr' target='#b3'>(Benjamin et al. 2018</ns0:ref>). The fitted model can be written as: &#119910; &#119905; ( &#119863;&#119886;&#119894;&#119897;&#119910; &#119863;&#119890;&#119886;&#119905;&#8462;&#119904; ) = 5.85 &#215; 10^(2) + 0.734 &#215; 10^( -1)&#119910; &#119905; + 1.09 &#215; 10^( -2)&#119909; &#119905; -3.99 &#215; (5)</ns0:p><ns0:p>10^( -3)&#119909; &#119905; -1 -1.07 &#215; 10^( -2)&#119909; &#119905; -2 + 6.72 &#215; 10^( -3)&#119908; &#119905; + &#120576; &#119905;</ns0:p><ns0:p>In the second scenario, we examine the relationship between the number of recover cases and confirm cases. We fit the ARDL model for recover cases of COVID-19 series with confirm</ns0:p><ns0:formula xml:id='formula_9'>&#119909; &#119905; &#119908; &#119905;</ns0:formula><ns0:p>cases. We take using adj-R square and parsimony of the model and fitting the</ns0:p><ns0:formula xml:id='formula_10'>&#119901; 1 = 4, &#119886;&#119899;&#119889; &#119902; = 3</ns0:formula><ns0:p>ARDL model to the datasets. The results obtained from the ARDL model are presented in Tab. 3. Tab. 3 shows the summary of the ARDL model, the confirmed cases recorded in the current day.</ns0:p><ns0:p>The daily recover cases of the first day have a significant impact on the number of daily confirm cases from the COVID-19 on that particular day. The model is significant at the 0.5% level (&#119875; &lt; the adjusted R-squared value is 91.46%. The fitted model can be written as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_11'>2.16 &#215; 10^( -16)),<ns0:label>(6)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Tab. 4 shows the summary of the ARDL model, the confirmed cases recorded on the first day, second day and the third days. The daily recover cases of current, one day, two days and three days before have a significant impact on the number of daily recover cases from the COVID-19 on that particular day. 
The model is significant at the 0.5% level the adjusted R-</ns0:p><ns0:formula xml:id='formula_12'>(&#119875; &lt; 2.16 &#215; 10^( -16)),</ns0:formula><ns0:p>squared value is 85.23%. We select the model using adjusted R-squared value and alpha value <ns0:ref type='bibr' target='#b3'>(Benjamin et al. 2018</ns0:ref>). The fitted model can be written as:</ns0:p><ns0:p>&#119908; &#119905; ( &#119877;&#119890;&#119888;&#119900;&#119907;&#119890;&#119903;&#119890;&#119889; ) = 146.24 + 0.368&#119909; &#119905; -1 + 0.144&#119909; &#119905; -3 + 0.145&#119908; &#119905; -0.226&#119908; &#119905; -1 + 0.710&#119908; <ns0:ref type='bibr' target='#b8'>(Gao et al. 2019)</ns0:ref>. We highlighted the results for the ANN model indicating the smallest value among all models. In most cases, the ANN method shows significant performance compares to the rest of the method's base on training parts. In Tab. 6, the value of ME for the ANN model is lower than the other models. The results indicate that ANN shows the lowest value among the other methods. In addition, the ANN predicted value is close to the actual value. The ME value for KNN is negative and it reveals that the predicted value is less than the actual value. Similarly, the RMSE and MAE values of ANN are smaller than the rest of the methods and show that ANN achieved better performance compared to the other methods. Moreover, the MPE values are also smaller than the other methods. This shows ANN is better as compared to the other methods. While the values of MAPE and SMAPE of the ANN model are better than the three methods. Thus, the value of the MAPE and SMAPE for ANN is less than 1 which indicates that the selected model falls in the range of the perfect model <ns0:ref type='bibr' target='#b8'>(Gao et al. 2019)</ns0:ref>. We highlighted the results for the ANN model indicating the smallest value among all models. The ANN method shows significant performance compares to the rest of the method's base on 20% testing parts in most of the cases. Fig. <ns0:ref type='figure'>2</ns0:ref> shows the plot of the forecasting accuracy measures for the models. It is clear from the above plot that on average, ANN is the best model for forecasting the daily deaths of the COVID-19 outbreak. Tab. 7 summarizes the RF, SVM, KNN, and ANN forecasting accuracy measures of the COVID-19 confirm patient's on the training dataset.</ns0:p><ns0:p>In Tab. 7, the value of ME of the ANN model is smaller than the rest of the methods. This indicates that the ANN predicted value is near to the actual value. KNN has the lowest (the best) value among the other methods with the highest accuracy. Similarly, the RMSE values of the ANN have shown the lowest RMSE value as compared to the rest of the methods. While the MAE and MPE values of the ANN model have the smallest value among the other methods. The values of MAPE and SMAPE of the ANN and KNN models are smaller than the other methods. Thus, the value of the MAPE for KNN is in the range of 1 to 10 which reveals that the selected model falls in the category very good model. Overall, the ANN method achieved significant performance better than the other methods based on training parts. This indicates that ANN results are more consistent with RF, SVM, and KNN. In Tab. 8, the value of ME for the ANN model has the lowest (the best) value among the other methods with the highest accuracy. In the same way, the RMSE values and MAE values of the ANN model indicate that it predicted value close to the actual value. 
The MPE value of the ANN model revealed that the ANN has the smallest value among the other methods. While the MAPE and SMAPE values tell that the ANN has the smallest value among the other methods and it is in the range of 1 to 10 which revealed that the selected model falls in the category very good model. On average, the ANN method achieved significant performance better than the other methods based on 20% testing parts. This indicates that ANN results are more consistent with RF, SVM, and KNN. Fig. <ns0:ref type='figure'>3</ns0:ref> shows the plot of the forecasting accuracy measures for different models. In Tab. 9, the ME and RMSE values of the ANN model have the lowest (the best) value among the other methods with the highest accuracy and it reveals that the predicted value is very close and captured the original data. Similarly, the MAE and MPE values of the ANN model have the smallest value among the other methods and reveals that ANN has better power to capture the real data as compared to the other methods. Similarly, the values of MAPE and SMAPE of ANN and RF models are better than the other methods respectively. Thus, the value of the MAPE for ANN is in the range of 1 to 10 which showed that the selected model falls in the category very good model. On average, the ANN method achieved significant performance better than the other methods based on 20% testing parts. This indicates that ANN results are more consistent with RF, SVM, and KNN. Fig. <ns0:ref type='figure'>3</ns0:ref> shows the plot of the forecasting accuracy measures for different models.</ns0:p><ns0:p>In Tab. 10, the ME and RMSE values for the ANN model have the lowest value among the other methods with the highest accuracy. The MAE value and MPE value indicate that ANN has the smallest value among the other methods. Moreover, ANN follows the real data pattern with the smallest error as compared to the other methods. Similarly, the values of MAPE and SMAPE for ANN are in the range of 1 to 10 which revealed that the selected model falls in the category very good model. On average, the ANN method achieved significant performance better than the other methods based on 20% testing parts. This indicates that ANN results are more consistent with RF, SVM, and KNN. Fig. <ns0:ref type='figure'>4</ns0:ref> shows the plot of the forecasting accuracy measures for different models.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The performance of the neural network model can be assessed once trained the network employing the performance function as a prediction. All the methods are capable of capturing the pattern of the data effectively. Moreover, ANN performed well and almost capture the whole pattern of the testing part of the data when compared to RF, SVM, and KNN methods. Fig. <ns0:ref type='figure'>3</ns0:ref> shows the prediction accuracy of the number of daily Covid-19 recovered cases of RF, SVM, KNN, and ANN methods. The world daily deaths original testing data of COVID-19 and the forecasted data for RF, SVM, KNN and ANN models are plotted in Fig. <ns0:ref type='figure'>5</ns0:ref>. Fig. <ns0:ref type='figure'>5</ns0:ref> displays the prediction accuracy of RF, SVM, KNN, and ANN models. All the models are capable of capturing competently the pattern of the daily death cases of COVID-19. Fig. <ns0:ref type='figure'>5</ns0:ref> clearly shows that ANN captured the pattern of the test set of the data better than RF, SVM, and KNN methods. Also, Fig. 
<ns0:ref type='figure'>5</ns0:ref> displays the prediction accuracy of RF, SVM, KNN, and ANN models for COVID-19's daily recovered cases. Similar to death cases accuracy results, all the models effectively captured the pattern of the daily recovered cases of COVID-19. In the same way, in Fig. <ns0:ref type='figure'>6</ns0:ref> and Fig. <ns0:ref type='figure'>7</ns0:ref>, the ANN captured the pattern on the test part of the data. While the rest of the methods first follow the pattern up to some extent and then insensitive to the original data. Fig. <ns0:ref type='figure'>6</ns0:ref> and Fig. <ns0:ref type='figure'>7</ns0:ref> are shown below.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>8</ns0:ref>, the original COVID-19 number of deaths data points and the resulting forecast of ANN were plotted for the next fifteen days from (10 June 2021 to 25 June 2021). As shown in the figure, the ANN forecast captures and follows the pattern of the original death cases of COVID-19. The subsequent fifteen days forecasted line fluctuated near 10,000. In addition, the forecasted number of deaths tends to gradually decline over time. This is an indication that the number of daily deaths decreases over time.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>9</ns0:ref> Manuscript to be reviewed Computer Science forecasted drift going in the downward direction. This reveals that the number of daily confirm is decreasing over time.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>10</ns0:ref>, the original COVID-19 recovered patient's data and forecast of ANN exhibited for the next fifteen days from (10 June 2021 to 25 June 2021). The ANN model forecast captured the pattern of the original COVID-19 recover patient's data. In addition, the next fifteen days forecasted drift going in the downward direction. This reveals that the number of daily recoveries is decreasing over time.</ns0:p><ns0:p>The key findings of this work as follows:</ns0:p><ns0:p>&#61623; The machine learning approaches are compared in this study to predict the Covid -19 cases. &#61623; The ANN results on average are better than the other methods using the performance metrics and used to forecast the next fifteen days' values. &#61623; The forecast shows that in the next fifteen days the total number of death cases will decrease using ANN. &#61623; The confirmed cases forecast for the next fifteen days revealed that the number of recovered cases will decrease. &#61623; The recovered cases forecast for the next fifteen days revealed that the number of recovered cases will increase. &#61623; From this study, it is revealed that the ANN provides the best forecast for the short term.</ns0:p><ns0:p>Therefore, policymakers can use this technique to take up-to-date decisions for the shortterm plan.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations:</ns0:head><ns0:p>&#61623; In this study, we do not consider and measured the other parameters like the number of lockdowns, social distancing, and measure of self-isolation. &#61623; The current study did not measure the association of vaccinated people and the number of new daily cases.</ns0:p><ns0:p>Future Work: In this study, the RF, SVM, KNN and ANN algorithms are used, though all the algorithms captured the original track almost in all cases, i.e. for the daily confirmed cases, deaths, and the number of recovered cases of the four countries. However, the performance metrics suggested the ANN model. 
Moreover, it can be possible to consider the other parameters like the number of lockdowns in the country, the number of vaccinations to the people, treatment procedures, etc. that can help for government to make and adjust their policies according to the various cases that are forecasted.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper proposed four predicting models for the COVID-19 outbreak. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>SMAPE. The results for the daily deaths cases are based on 80% training and 20% testing parts. Among the four methods using these performance metrics, the ANN achieved better results in every aspect. In the same way, the results obtained for the daily recovered cases using 80% training and 20% testing parts and ANN have attained better results than the other methods. Moreover, daily confirm cases results obtained using the same training and testing parts and in most of the cases, ANN performed better than the other methods. Therefore, the major findings of this study reveal that ANNs outperform the rest of the methods for both models. In addition, ANN suggests consistent prediction performance compared to RF, SVM, and KNN models and hence preferable as a robust forecast model. The AI-based method's accuracy for predicting the trajectory of the COVID-19 is high. For this specific application in predicting the disease, the authors consider the results are reliable. In this study, ANN generates the fastest convergence and good forecast ability in most cases. The AI results can help in short-term plans for the disease occurrences. The estimate models will help the public authority and medical staff to be prepared for the coming situation and take further timelines in medical care structures. The forecasted figures were calculated for the next fifteen days (that is, 10 June 2021 to 25 June 2021) for COVID-19 data. Predicting an event is a difficult, and some customized models probably would not be generalized to the cultural and financial conditions of various countries. In this study, the proposed models do not considers the factors like area and other government strategies. Therefore, it is to be noted, while take to mean these predictions.</ns0:p></ns0:div> <ns0:div><ns0:head>Disclosures</ns0:head><ns0:p>No conflicts of interest, financial or otherwise, are declared by the authors. Manuscript to be reviewed </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>ResultsA</ns0:head><ns0:label /><ns0:figDesc>total of three data sets of COVID-19 (confirm, recover and death) are used to evaluate the performance of the different ML approaches and suggested the best model for forecasting the COVID-19 outbreak. All data sets consisting of the world's daily confirm, recovery and death cases. Every time series divided into training and testing sets of observations. The original data divided into 80% training and 20% testing parts and the first 80% of the total observations in every time series used as a training set whereas the rest 20% used as the testing set. To overcome the overfitting problem, we use 10-fold cross-validation for each of the models and then their results are averaged. In addition, we also used prediction accuracy for training parts. 
Each time series containing a total of 505 observations spanning (22 January 2020, to 09 June 2021), the first 404 observations spanning (22 January 2020, to 28 February 2021) belong to the training series and the rest 100 observations spanning (29 February 2021, to 09 June 2021) part of the testing series. We use death, recover, and confirm cases from the COVID-19 dataset. The COVID-19 dataset is loaded into the R package environment, and then, we fit the ARDL model to the Daily Deaths series with recover and confirm cases. We choose using adj-R &#119910; &#119905; &#119877; &#119905; &#119862; &#119905; &#119901; 1 = 3, &#119901; 2 = 3, &#119886;&#119899;&#119889; &#119902; = 2 square and parsimony of the model. The insignificant variables are removed and fit the ARDL model. The results obtained from the ARDL model are presented in (Tab.2).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>&#119909; &#119905; ( &#119862;&#119900;&#119899;&#119891;&#119894;&#119903;&#119898; ) = 1.45 &#215; 10^(4) + 1.86 &#215; 10^( -1)&#119909; &#119905; -1 + 7.90 &#215; 10^( -1)&#119909; &#119905; -1 + &#120576; &#119905; PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57837:1:1:NEW 8 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>, the original COVID-19 confirmed patient's data and forecast of ANN exhibited for the next fifteen days from (10 June 2021 to 25 June 2021). The ANN model forecast captured the pattern of the original COVID-19 confirms patient data. In addition, the next fifteen days PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57837:1:1:NEW 8 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>+ &#119936; &#119957; )/&#120784; | * &#120783;&#120782;&#120782; 1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57837:1:1:NEW 8 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,87.30,72.00,381.65,195.89' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,76.80,295.96,402.60,200.05' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,74.70,533.47,406.85,184.98' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>&#119909; &#119905; + &#120574; 2 &#119909; &#119905; -1 + &#8230;,&#120574; &#119894; &#119909; &#119905; -&#119895; &#120575; 1 &#119908; &#119905; + &#120575; 2 &#119908; &#119905; -1 + &#8230;, &#120575; &#119894; represent the lags order of and respectively. The parameters denoted</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>dependent variable. The two independent variables 'recover cases' and 'confirm cases' are</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>denoted by and &#119909; &#119905; &#120574; 1 &#119908; &#119905; -&#119895; respectively. 
Whereas &#119908; &#119905; &#119909; &#119905; &#119908; &#119905;</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>&#120573;, &#120574; &#119886;&#119899;&#119889; &#120575;</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119910; &#119905;</ns0:cell><ns0:cell /><ns0:cell>&#119905; &#120572; 0</ns0:cell></ns0:row><ns0:row><ns0:cell>way,</ns0:cell><ns0:cell>denotes the</ns0:cell><ns0:cell cols='2'>autoregressive lag order of the model of the</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>We evaluate models including RF, SVM, KNN, and ANN to compare their performance using various accuracy metrics including ME, RMSE, MAE, MPE and MAPE. These metrics provide different perspectives to assess predicting models. The first three are the absolute performance measures while the fourth and fifth are relative performance measures. The training sample is used to estimate the parameters for specific model architecture. The testing set is then used to select the best model among all models considered. Tab. 5 summarizes the RF, SVM, KNN, and ANN forecasting accuracy measures for the training set of COVID-19 daily deaths data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>0.401&#119908; &#119905; -3 + &#120576; &#119905;</ns0:cell><ns0:cell>&#119905; -2 -</ns0:cell><ns0:cell>(7)</ns0:cell></ns0:row></ns0:table><ns0:note>In Tab. 5, the values of ME for RF, SVM, KNN, and ANN models reveal that RF shows the lowest value (the best) among the other methods. Similarly, the RMSE values of RF, SVM, KNN, and ANN, respectively show that the ANN achieved better performance compared to the other methods. Moreover, the MAE values indicate that ANN is better than the other methods. While the values of MPE, the RF achieved better performance compared to the other methods. Similarly, the values of MAPE and SMAPE revealed that ANN is less than 1 which indicates that the selected model falls in the range of perfect model</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>Significant at 1%, '**' Significant at 5% , '*' Significant at 10% Residual standard error: 56600 Multiple R-squared: 0.8549, Adjusted R-squared: 0.8523 F-statistic: 338.8, P-value: 2.16 &#215;10^(-16)*** 1</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Coefficients</ns0:cell><ns0:cell>Estimate</ns0:cell><ns0:cell>Std. Error</ns0:cell><ns0:cell cols='2'>t value P-value</ns0:cell></ns0:row><ns0:row><ns0:cell>(Intercept)</ns0:cell><ns0:cell>146.24</ns0:cell><ns0:cell>4836.57</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>0.976</ns0:cell></ns0:row><ns0:row><ns0:cell>Rt.t</ns0:cell><ns0:cell>0.145</ns0:cell><ns0:cell>0.040</ns0:cell><ns0:cell>3.55</ns0:cell><ns0:cell>0.0004***</ns0:cell></ns0:row><ns0:row><ns0:cell>Rt.1</ns0:cell><ns0:cell>-0.226</ns0:cell><ns0:cell>0.049</ns0:cell><ns0:cell>-4.53</ns0:cell><ns0:cell>7.78&#215;10^(-6)***</ns0:cell></ns0:row><ns0:row><ns0:cell>Rt.2</ns0:cell><ns0:cell>0.710</ns0:cell><ns0:cell>0.051</ns0:cell><ns0:cell>13.78</ns0:cell><ns0:cell>2. 
&#215;10^(-16)***</ns0:cell></ns0:row><ns0:row><ns0:cell>Rt.3</ns0:cell><ns0:cell>-0.401</ns0:cell><ns0:cell>0.047</ns0:cell><ns0:cell>-8.47</ns0:cell><ns0:cell>4.7&#215;10^(-16)***</ns0:cell></ns0:row><ns0:row><ns0:cell>Ct.1</ns0:cell><ns0:cell>0.368</ns0:cell><ns0:cell>0.046</ns0:cell><ns0:cell>7.95</ns0:cell><ns0:cell>1.99&#215;10^(-14)***</ns0:cell></ns0:row><ns0:row><ns0:cell>Ct.2</ns0:cell><ns0:cell>-0.084</ns0:cell><ns0:cell>0.041</ns0:cell><ns0:cell>2.05</ns0:cell><ns0:cell>0.040*</ns0:cell></ns0:row><ns0:row><ns0:cell>Ct.3</ns0:cell><ns0:cell>0.144</ns0:cell><ns0:cell>0.040</ns0:cell><ns0:cell>3.58</ns0:cell><ns0:cell>0.0003***</ns0:cell></ns0:row><ns0:row><ns0:cell>'***'</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57837:1:1:NEW 8 Jul 2021)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
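To make the modelling pipeline described in the paper above concrete, the following base-R sketch builds ARDL-style lagged regressors, fits a linear model on an 80/20 chronological split, and scores the hold-out part with the error measures used in the paper (ME, RMSE, MAE, MPE, MAPE, SMAPE). It is a simplified stand-in for the dLagM/caret workflow, not the authors' code; the simulated series and the lag orders here are illustrative assumptions only.

```r
set.seed(42)

# Simulated stand-ins for the daily deaths (y), confirmed (x) and recovered (w) series.
n <- 505
x <- cumsum(rpois(n, 5)); w <- cumsum(rpois(n, 4))
y <- 0.01 * x - 0.005 * w + rnorm(n, 0, 5)

# ARDL-style design matrix: current values plus lags of the response and regressors.
make_lags <- function(v, k, name) {
  out <- sapply(0:k, function(i) c(rep(NA, i), head(v, length(v) - i)))
  colnames(out) <- paste0(name, ".l", 0:k)
  out
}
dat <- na.omit(data.frame(y = y,
                          make_lags(y, 2, "y")[, -1, drop = FALSE],  # y lags 1..2
                          make_lags(x, 3, "x"),                      # x lags 0..3
                          make_lags(w, 3, "w")))                     # w lags 0..3

# 80/20 chronological split, as described in the paper.
idx_train <- seq_len(floor(0.8 * nrow(dat)))
fit  <- lm(y ~ ., data = dat[idx_train, ])
pred <- predict(fit, newdata = dat[-idx_train, ])
obs  <- dat$y[-idx_train]

# Error measures used for model comparison.
metrics <- function(obs, pred) {
  e <- obs - pred
  c(ME    = mean(e),
    RMSE  = sqrt(mean(e^2)),
    MAE   = mean(abs(e)),
    MPE   = mean(e / obs) * 100,
    MAPE  = mean(abs(e / obs)) * 100,
    SMAPE = mean(abs(e) / ((abs(obs) + abs(pred)) / 2)) * 100)
}
round(metrics(obs, pred), 3)
```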
"Original Manuscript ID: CS-2021:02:57837:1:0: NEW Original Article Title: “Comparative Analysis of Machine Learning Approaches to Analyze and Predict the Covid-19 Outbreak” To: Editor-in-Chief PeerJ Computer Science Re: Response to reviewers Dear Editor, Thank you for letting a resubmission of our manuscript, with a prospect to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting showing changes, and (c) a clean restructured manuscript deprived of highlights (PDF main document). The concerns elevated by the reviewers has significantly improved the article. Best regards, Aamir et.al Editor’s, Concern # 1: The results should be measured with the coefficient of determination R-squared and SMAPE, in addition to ME, RMSE, MAE, MPE, and MAPE. The authors should give more importance to the R-squared results in the article. Author response: Thank you for addressing the important concerns. Author Action All the results updated and measured using coefficient of determination R-squared and SMAPE. The coefficients that are not significant to the model are removed using the (Benjamin et al. 2018) criteria and parsimony model. Please get in the reviewed form of the paper. Editor’s, Concern # 2: All the information about the software packages employed by the authors should be moved into a separate ad-hoc section. Author response: Thank you again for the valuable comments. Author Action The software and packages that are used in this manuscript, we made separate section in the paper. Please check the highlighted script of the revised version of the paper. Editor’s, Concern # 3: The statistical significance in Table 2 and in the text should be reported for p-values lower than the 0.005 threshold, as pointed out by Benjamin and colleagues ( https://doi.org/10.1038/s41562-017-0189-z ). Author response: Thanks for the comments. Author Action: We’ve compared the new results using and cited the authors (Benjamin et al. 2018) and adjusted R-squared. Please you may find in the method section of the revised form of the highlighted manuscript. Editor’s, Concern # 4: The values with exponential numbers should be written in a scientific notation. For example, '8.97E-05' should be rewritten as '8.97 x 10^(-5)'. Author response: Thank you very much for the valuable remarks. Author Action: All the updated results are now replaced with the scientific notation. Please you may check the highlighted script of the reviewed form of the paper in each tables. Editor’s, Concern # 5: Please write the date months as complete words ('Jan' --> 'January', 'Feb' --> 'February', etc). Author response: Thanks for the appreciated comments. Author Action: The date months has been corrected in all the figures and in the paragraphs. Please check the highlighted version of the manuscript. Editor’s, Concern # 6: All the occurrences of 'i.e.' should be replaced with 'that is' or 'that means'. Author response: Thank you again for the comments. Author Action: It has been corrected in throughout the manuscript. Please you may find in the highlighted version of the paper. Editor’s, Concern # 7: The style of the text must be improved. Terms like 'In (Fig. 10)' should be replaced with 'In Fig. 10'. Author response: Thank you for the valuable comments. Author Action: All these terms are replaced with “In Fig. 10”, please you may find in the revised version of the manuscript. 
Editor’s, Concern # 8: The values of Tables 6, 7, 8, and 9 should be aligned on the left. Author response: Thanks for the comments. Author Action: These has been corrected in throughout the manuscript. Please check the highlighted version of the paper. Editor’s, Concern # 9:  In the figures, the axes values should be written in horizontal position, not in diagonal position. Author response: Thanks you again for the comments. Author Action: These has been corrected and replaced it with plain text in throughout the manuscript. Please check in the revised manuscript. Editor’s, Concern # 10: In the figures, the y axis label should say 'number of' instead of 'No of'. Author response: Thanks you again for pointing out the mistakes. Author Action: This has been corrected now and replaced with “number of” in the figures. Please you may find the updated version of the paper. Reviewer’s 1, Concern # 1: The current study does not explain why these models were selected for comparison. Specifically, it lacks a review or summary of the existing studies that have applied forecasting models for the COVID-19 pandemic. Please see the existing papers in PubMed/LitCovid (https://www.ncbi.nlm.nih.gov/research/coronavirus/docsum?text=forecasting%20model). The methods are from traditional mathematical models and recurrent neural networks other than ANN. The study should discuss them in detail and select representative models with reasonings. Author response: Thank you for such interesting comment. Author Action: The revised manuscript is updated and incorporated the above concern. New studies is also added to check the performance of the selected models. Reviewer’s 1, Concern # 2: First, the study should explain the data split and cross-validation in more detail. How was the cross-validation performed? Since this is time-series data, the split by time is critical. How does the 10-fold cross-validation reflect the timeline? Author response: Thank you for the appreciated comments. Author Action: This is one the most important concerns. Since in cross validation usually we have randomness and assign it to either train or test set. In time series, the cross validation usually used in forward chaining. Like for example if we’ve 10 fold cross validation. Then the first fold is one training and one testing, in second we’ve the first two training and the third testing and so on. So in the forward chaining we split the data. Reviewer’s 1, Concern # 3: Second, a more religious design should be made for the forecasting model. Currently, the spanning between the trains and testing is too close. The training set contains the data up to 7th Nov 2020, whereas the testing data is directly from 8th Nov 2020. Arguably, if the data of 7th Nov 2020 is available, it is not too challenging to predict the 8th even using a simple model. And it is not very useful because there is not much time for interventions or prevention can be done in advance. The study could quantify how the model forecasts in a week or earlier in advance. Author response: Thank you for the valuable remarks. Author Action: Actually dividing the data into training and testing parts is used to evaluate the model performance. The model that are trained and the data are tested is completely unseen to the model. For this, the cross validation is used that make randomness while trains the model. In addition, the model that captured the original line on the basis of training, is then capable to forecast for the futures events using the previous pattern. 
Furthermore, many time series models used the same approach to forecast the future events. Reviewer’s 1, Concern # 4: The study lacks a thorough discussion on the limitations. For example, there are many factors (prevention, vaccinations etc) that could impact the case growth. Such factors cannot be captured in the models. Author response: Thank you again for the valuable remarks. Author Action: We added some limitations and future recommendations of the study. In the limitations, we mentioned these factors that are not considered at this stage. In the future study, we include these factors as well. This will gives us some more interesting details in the future study and we will keep insight in the future study. Reviewer’s 2, Concern # 1: The manuscript size should be reduced, by improving the writing style. For example, information included in Tables, should not be also mentioned in the main body of the manuscript in such detail. E.g., lines 269-281, lines 283-296, lines 301-313, etc. Replace these details with a more qualitative and descriptive discussion of the overall findings. The literature review on the topic is very poor at the moment, and should be enhanced. The connection of this work with previously published work in the Journal should be established. The manuscript needs very thorough editing and proofreading. It would be best to remove the gray background of the figures. The green in Figure 2 is too bright; replace it with a smoother one (use the same blue and red as in Figure 1). Author response: Thank you very much for the valuable remarks. Author Action: We removed the table’s values from the main body of the manuscript. These values are replaced with qualitative discussion. We included some recent research work in the literature section. We made changes and relate the current studies with the previous studies. Overall, the manuscript has been checked and edited where it is necessary. The gray background of all the figures has been also removed and replaced with white background. In the same way, the green color has been removed and replaced it with “dark red” color. These all corrections have been made, please you may find it in the highlighted version of the manuscript. Reviewer’s 2, Concern # 2: The research question is well defined, and it contributes to the international literature on the topic. The methods are adequately described. Author response: Thank you for the appreciated comments. Reviewer’s 2, Concern # 3: This work has merit to be an important contribution to the topic of using machine learning methods for COVID-19 forecasting, by comparing several such methods. Author response: Thank you again for the valuable remarks. Reviewer’s 2, Concern # 4: This is an interesting manuscript that has a contribution to the international literature of COVID-19 forecasting, by comparing machine learning approaches towards this direction. There are some issues that need to be addressed before the manuscript is ready for publication. Author response: Thank you again for the appreciated comments. "
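The forward-chaining cross-validation described in the response to Reviewer 1, Concern #2, can be illustrated with a short Python sketch. The paper itself works in R, and the random series, three-lag window and RandomForest choice below are assumptions for illustration only, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical daily series: y[t] is the target, X[t] holds its lagged predictors.
rng = np.random.default_rng(0)
y = rng.poisson(lam=1000, size=574).astype(float)            # e.g. daily deaths
X = np.column_stack([np.roll(y, k) for k in (1, 2, 3)])[3:]  # lags 1-3
y = y[3:]

# Forward chaining: each fold trains on an initial block and tests on the block
# that follows it in time, so the model never sees the future during training.
maes = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=10).split(X):
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print("mean MAE over forward-chained folds:", np.mean(maes))
```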
Here is a paper. Please give your review comments after reading it.
253
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Forecasting the time of forthcoming pandemic reduces the impact of diseases by taking precautionary steps such as public health messaging and raising the consciousness of doctors. With the continuous and rapid increase in the cumulative incidence of COVID-19, statistical and outbreak prediction models including various machine learning (ML) models are being used by the research community to track and predict the trend of the epidemic, and also in developing appropriate strategies to combat and manage its spread. Methods. In this paper, we present a comparative analysis of various ML approaches including Support Vector Machine, Random Forest, K-Nearest Neighbor and Artificial Neural Network in predicting the COVID-19 outbreak in the epidemiological domain. We first apply autoregressive distributed lag (ARDL) method to identify and model the short and long-run relationships of the time-series COVID-19 datasets. That is, we determine the lags between a response variable and its respective explanatory time series variables as independent variables. Then, the resulting significant variables concerning their lags are used in the regression model selected by the ARDL for predicting and forecasting the trend of the epidemic. Results. Statistical measures that are, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) are used for model accuracy. The values of MAPE for the best-selected models for confirmed, recovered and deaths cases are 0.003, 0.006 and 0.115 respectively, which falls under the category of highly accurate forecasts. In addition, we computed fifteen days ahead forecast for the daily deaths, recovered, and confirm patients and the cases fluctuated across time in all aspects. Besides, the results reveal the advantages of ML algorithms for supporting the decision-making of evolving short-term policies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The outbreak of the novel coronavirus disease in 2019 (COVID-19) has emerged as one of the most devastating respiratory diseases since the 1918 HIN1 influenzas pandemic, infecting millions of people globally <ns0:ref type='bibr' target='#b25'>(Tuli et al. 2020)</ns0:ref>. The cumulative incidence of the virus is continually and rapidly increasing globally. At the early stage of the outbreak, it is important to have a clear understanding of the disease transmission and its dynamic progression, so that relevant agencies and organizations can make informed decisions and enforce appropriate control measures. Generally, capturing the transmission dynamics of a disease over time can provide insights into its progression, and show whether the outbreak control measures are effective and able to reduce the impact of the disease on a community <ns0:ref type='bibr'>(Kucharski et al. 2020</ns0:ref>). Access to real-time data and effective application of outbreak prediction or forecasting models are central to obtaining insightful information regarding the transmission dynamics of the disease and its consequences. Moreover, every outbreak has its unique transmission characteristics that are different from the other outbreaks, which raises the question of how standards prediction models would perform in delivering accurate results. 
In addition, various factors, including the number of known and unknown variables, differences in population and behavioural complexity across geopolitical areas, and variations in containment strategies, increase the uncertainty of prediction models (Ardabili et al. 2020). As a result, it is challenging for standard epidemiological models such as Susceptible-Infected-Recovered (SIR) to provide reliable results for long-term predictions. Therefore, it is important not only to study the relationship between the components of the outbreak datasets but also to evaluate the effectiveness of the common disease prediction models. In recent months, there have been a handful of works that try to understand the spread of COVID-19, especially using statistical approaches. For instance, Kucharski et al. explored a combination of a stochastic transmission model and four datasets that captured the daily number of new cases, the daily number of new internationally exported cases, the proportion of infected passengers on evacuation flights and the number of new confirmed cases, to estimate the transmission dynamics of the disease over time (Kucharski et al. 2020). In another study, a machine learning-based model was applied to analyse and predict the growth of COVID-19 (Tuli et al. 2020); the authors demonstrated the effectiveness of using iterative weighting for fitting a Generalized Inverse Weibull distribution when developing a prediction solution. Lin et al. presented a conceptual model designed for the COVID-19 epidemic that considers individual behavioural responses and engagement with the government, including extended holidays, travel restrictions, quarantine and hospitalization (Lin et al. 2020). This work combined zoonotic transmission with the emigration pattern and then estimated future trends and the reporting proportion. The model gives promising insight into the trend of the COVID-19 outbreak, especially the impact of individual and government reactions to the epidemic. The authors in (Anastassopoulou et al. 2020) estimated the average values of key epidemiological parameters, including the per-day case mortality and recovery ratios. In this study, we apply the autoregressive distributed lag (ARDL) method to identify and model the short- and long-run relationships of the time-series COVID-19 datasets (confirmed, recovered and death cases). That is, we determine the lags between a response variable and its respective explanatory time series variables, treated as independent variables. The significant variables and their lags selected by the ARDL model are then used in the regression models for predicting and forecasting the trend and dynamics of COVID-19. We evaluated the models using relevant accuracy and error metrics, including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE).

Materials & Methods

Data Source
We conducted our study on the publicly accessible data of daily deaths, recovered and confirmed cases (671,127; 10,585 and 309,869, respectively) reported for the whole world from 22nd January 2020 to 18th August 2021, shown in Fig. 1.
The data is available in the online repository GitHub (https://github.com/CSSEGISandData/COVID-19). We performed data processing, including conversion of the data from a cumulative to a daily basis. This repository holds the data behind the COVID-19 visual dashboard operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), which aggregates data from sources such as the WHO, WorldoMeters, BNO News, the Washington State Department of Health and many more. The data contain the numbers of confirmed, recovered and death cases for the globe. On these data, we attempt to forecast the key epidemiological parameters, that is, the numbers of upcoming daily new confirmed cases, deaths and recoveries. Because the numbers of deaths, recoveries and confirmed cases are expected to grow considerably over time, we also derive the correlation between these variables and their past records (lags) using the ARDL model.

Autoregressive Distributive Lag Models
ARDL models relate a regressed series to k regressor series in a regression analysis. If there is only one independent series, the lagged dependent series makes the model autoregressive. The lag order of the j-th independent series is denoted by p_j, j = 1, ..., n, where the independent series are the daily recovered and confirmed cases, and the lag order of the dependent series is denoted by q_i, i = 0, 1, ..., m. The ARDL model can be expressed as:

y_t = \alpha_0 + \beta_1 y_{t-1} + \beta_2 y_{t-2} + \dots + \beta_m y_{t-m} + \gamma_0 x_t + \gamma_1 x_{t-1} + \dots + \gamma_n x_{t-n} + \delta_0 w_t + \delta_1 w_{t-1} + \dots + \delta_n w_{t-n} + \varepsilon_t    (1)

where y_t denotes the number of daily deaths at time t and \alpha_0 represents the intercept term. The two independent variables, the recovered and the confirmed cases, are denoted by x_t and w_t, respectively; p and q represent the lag orders of the independent and the dependent series; \beta, \gamma and \delta are the coefficients of the death, recovered and confirmed cases, respectively; and \varepsilon_t denotes the error term. Eq. (1) can be written more compactly as Eq. (2):

y_t = \alpha_0 + \sum_{i=1}^{m} \beta_i y_{t-i} + \sum_{j=0}^{n} \gamma_j x_{t-j} + \sum_{j=0}^{n} \delta_j w_{t-j} + \varepsilon_t    (2)

The numbers of deaths, confirmed and recovered cases are likely to be much higher with time. Therefore, the ARDL model for the recovered cases x_t with the confirmed cases w_t is shown in Eq. (3):

x_t = \theta_0 + \gamma_1 x_{t-1} + \dots + \gamma_p x_{t-p} + \delta_0 w_t + \delta_1 w_{t-1} + \dots + \delta_q w_{t-q} + \varepsilon_t    (3)

Similarly, the ARDL model for the confirmed cases with the recovered cases is shown in Eq. (4):

w_t = \vartheta_0 + \delta_1 w_{t-1} + \dots + \delta_p w_{t-p} + \gamma_1 x_{t-1} + \dots + \gamma_q x_{t-q} + \varepsilon_t    (4)

Different criteria can be used to select an optimal lag length. The authors in (Chandio et al. 2020) use the Akaike Information Criterion (AIC), and the authors in (Gayawan & Ipinyomi 2009) compare AIC, SIC and adjusted R-squared to select the optimal lag length. In this study, we use adjusted R-squared together with the parsimony principle to select the optimal number of lags. The call to the fitting function is simplest when the lag orders are the same; when the lag orders differ between the dependent series and each independent series, we use the remove argument, which drops the lags that do not contribute to the model.
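As a concrete illustration of Eq. (2), the following Python sketch builds a lagged design matrix from daily series and fits it by ordinary least squares. The paper itself fits the ARDL models with the dLagM package in R; the simulated data, column names and lag orders below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical cumulative series (the real data come from the JHU CSSE repository);
# .diff() converts cumulative counts to daily counts, as described above.
rng = np.random.default_rng(0)
cum = pd.DataFrame({
    "deaths":    np.cumsum(rng.poisson(900, 574)),
    "recovered": np.cumsum(rng.poisson(800, 574)),
    "confirmed": np.cumsum(rng.poisson(1000, 574)),
})
daily = cum.diff().dropna()

def add_lags(df, column, max_lag, include_current=False):
    """Append lagged copies of one column, mirroring the distributed-lag terms of Eq. (2)."""
    start = 0 if include_current else 1
    for k in range(start, max_lag + 1):
        df[f"{column}_lag{k}"] = df[column].shift(k)
    return df

X = daily.copy()
X = add_lags(X, "deaths", max_lag=2)                          # autoregressive terms
X = add_lags(X, "recovered", max_lag=2, include_current=True)
X = add_lags(X, "confirmed", max_lag=3, include_current=True)
X = X.dropna()

y = X.pop("deaths")
X = X.drop(columns=["recovered", "confirmed"])                # keep only the chosen lag terms
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())                                        # insignificant lags can then be dropped
```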
Once the ARDL model has specified the significant coefficients of the dependent and independent variables, the RF, SVM, KNN and ANN models are used and their accuracy and error rates are assessed. We applied the RF (Biau & Scornet 2016), SVM (Liang et al. 2018), KNN (Martínez et al. 2019) and ANN (Hu et al. 2018) time series models to predict COVID-19. To reduce overfitting, we use an 80% training and 20% testing split. Random forest is one of the strongest learning algorithms and requires only a little parameter tuning. In time series analysis, Support Vector Regression (SVR) is generally used. In SVM, various kernel functions, such as the Gaussian radial basis function (GRBF), sigmoid and polynomial kernels, map the input space into a higher-dimensional feature space. For SVM, we use the radial basis function (RBF) kernel

k_\gamma(y_i, y_j) = \exp(-\gamma \lVert y_i - y_j \rVert^2).

When the SVM model uses RBF kernels, it is necessary to tune the model parameters to find their optimal values and to reduce overfitting. We therefore use a grid search with ten-fold cross-validation on the training and testing parts and average the results. k-nearest neighbor (k-NN) regression predicts the response variable from the nearest training points; it uses the training dataset directly instead of learning a discriminative function from the training data. k-NN is used for both classification and regression problems. Various techniques can be used to select the number of neighbours and improve model accuracy, such as the maximum-percentage-accuracy graph, the elbow method, or a loop over candidate values of k. A common rule of thumb is the square root of n, and we use k = √n.
ANN is a mathematical tool that has been widely used for classification and forecasting problems; it contains a predictor (input) layer, a response (output) layer and hidden layers. A combination of different hidden layers is explored to choose a better MLP network architecture, and it is the hidden layers that play an important role in many successful applications of neural networks. The ANN model is widely used in economic and financial studies (Huang et al. 2007; Qi 1996). The number of hidden layers depends on the nature of the problem. The authors in (Zhang et al. 1998) used two hidden layers and found better prediction accuracy, and the authors in (Xu et al. 2020) used (2 × k + 1) hidden nodes, where k is the number of predictors (inputs). For an optimal ANN, a trial-and-error procedure is usually used to determine the number of hidden nodes, that is, searching for the architecture with the smallest MAPE among the candidate models (Güler & Übeyli 2005). Using this trial-and-error procedure with 10,000 iterations, we use 4 hidden layers with 8 neurons per hidden layer for the daily death cases, and 2 hidden layers with 4 neurons per hidden layer for the daily recovered cases.

Forecast Evaluation Criteria
Since the response variable is continuous, the forecasting capacity of the different machine learning approaches is evaluated using several criteria, including the mean error (ME), RMSE, MAE, MPE, MAPE, SMAPE and the coefficient of determination R-squared, where R² = 1 − Σ_{t=1}^{n}(Y_t − Ŷ_t)² / Σ_{t=1}^{n}(Y_t − Ȳ)². The formulas are listed in Table 1.
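The error measures listed above can be written compactly in code. The sketch below is in Python for illustration (the study computes these measures in R) and follows the standard definitions; SMAPE variants in the literature differ only by a constant factor.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def smape(actual, predicted):
    """Symmetric mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(2.0 * np.abs(predicted - actual) /
                           (np.abs(actual) + np.abs(predicted)))

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Small worked example with made-up values:
y_true = np.array([100.0, 120.0, 130.0, 150.0])
y_pred = np.array([ 98.0, 125.0, 128.0, 149.0])
print(mape(y_true, y_pred), smape(y_true, y_pred), r_squared(y_true, y_pred))
```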
Software and packages
In the current study, we use R version 4.0.4 and the dLagM package, which implements the ARDL testing method (Pesaran et al. 2001). dLagM takes the lag orders, the dataset and the overall method, and constructs the required lags and transformations for the specified models. One benefit of this approach is that users are not required to specify the variation for the applied models, which brings efficiency and value to researchers in various areas. We used the tseries, timeSeries, zoo and window packages for handling the data, and the dLagM package for the ARDL model; an ARDL model with orders p and q is denoted ARDL(p, q), with p independent lag series and q dependent lag series. For RF, we use the randomForest, forecast, caret, tidyverse, tsibble and purrr packages; ntree is 500, mtry is p/3 (where p is the number of features), sampsize is 70% and the type is 'regression', with the other parameters kept at their defaults. For SVM, the e1071 library is used with cost = 10^2, gamma (γ) = 0.1 and insensitivity (ε) = 0.3. In the same way, the caret package is used for k-nearest neighbor regression. For ANN, the neuralnet package is used; the algorithm, threshold and linear.output parameters are set to 'backprop', 0.01 and TRUE, respectively, with the other parameters kept at their defaults.
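For readers working in Python rather than R, the following sketch sets up rough scikit-learn counterparts of the four models configured above. The mapping of hyperparameters (500 trees, an RBF-kernel SVR with C = 100, gamma = 0.1, epsilon = 0.3, k = √n neighbours, and a small multilayer perceptron) is an approximation of the R settings, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

def build_models(n_train, n_features):
    """Approximate Python counterparts of the R model configurations described above."""
    return {
        "RF": RandomForestRegressor(
            n_estimators=500,                        # ntree = 500
            max_features=max(1, n_features // 3),    # mtry = p/3
            random_state=0),
        "SVM": SVR(kernel="rbf", C=100.0, gamma=0.1, epsilon=0.3),
        "KNN": KNeighborsRegressor(n_neighbors=max(1, int(np.sqrt(n_train)))),
        "ANN": MLPRegressor(hidden_layer_sizes=(8, 8, 8, 8),  # 4 hidden layers, 8 neurons each
                            max_iter=10_000, random_state=0),
    }

# Example usage with a hypothetical lagged design matrix X_train and target y_train:
# models = build_models(n_train=len(X_train), n_features=X_train.shape[1])
# for name, model in models.items():
#     model.fit(X_train, y_train)
```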
Results
A total of three COVID-19 datasets (confirmed, recovered and death cases) are used to evaluate the performance of the different ML approaches and to suggest the best model for forecasting the COVID-19 outbreak. All datasets consist of the world's daily confirmed, recovered and death cases. Every time series is divided into training and testing sets: the original data are split into 80% training and 20% testing parts, with the first 80% of the observations in every series used as the training set and the remaining 20% as the testing set. To reduce overfitting, we use 10-fold cross-validation for each of the models and then average the results. In addition, we also report the prediction accuracy on the training parts. Each time series contains a total of 574 observations spanning 22 January 2020 to 18 August 2021; the first 459 observations (22 January 2020 to 24 March 2021) belong to the training series and the remaining 115 observations (25 March 2021 to 18 August 2021) form the testing series.
We use the death, recovered and confirmed cases from the COVID-19 dataset. The dataset is loaded into the R environment, and we fit the ARDL model to the daily deaths series y_t with the recovered cases R_t and confirmed cases C_t. We choose p_1 = 3, p_2 = 3 and q = 2 using R-squared and the parsimony of the model. The insignificant variables are removed and the ARDL model is refitted. The results obtained from the ARDL model are presented in Tab. 2.
The coefficients of the confirmed cases C_t and its first lag are highly significant at the 0.5% level. Similarly, the lags of the response variable y_t (daily deaths of COVID-19) are significant at the 0.5% level, and the coefficient of the current recovered cases is also significant at the 0.5% level. Overall, the model is highly significant at the 0.5% level, with a p-value smaller than 2.2 × 10^(−15), an R-squared of 90.29%, and the alpha value of Benjamin et al. (2018). The fitted model can be written as:

y_t(Daily Deaths) = 1905.14 + 0.340 y_{t−1} + 0.251 y_{t−2} + 8.93 x_t − 10.25 x_{t−2} − 0.176 w_{t−1} + 0.597 w_{t−2} − 0.262 w_{t−3} + ε_t    (5)

In the second scenario, we examine the relationship between the number of recovered cases and confirmed cases. We fit the ARDL model for the recovered cases x_t of the COVID-19 series with the confirmed cases w_t, taking p_1 = 4 and q = 3 using R-squared and the parsimony of the model. The results obtained from the ARDL model are presented in Tab. 3, which summarizes the ARDL model with the confirmed cases recorded on the current day. The daily recovered cases of the first day have a significant impact on the number of daily confirmed COVID-19 cases on that particular day. The model is significant at the 0.5% level (P < 2.1 × 10^(−15)) and the R-squared value is 82.94%. The fitted model can be written as:

x_t(Confirmed) = 7.96 × 10^2 + 8.41 × 10^(−1) x_{t−1} − 2.21 × 10^(−1) x_{t−2} + 1.42 × 10^(−1) x_{t−3} + 4.25 × 10^(−3) w_{t−1} + ε_t    (6)

Tab. 4 shows the summary of the ARDL model with the confirmed cases recorded on the first, second and third days. The daily recovered cases of the current day and of one, two and three days before have a significant impact on the number of daily recovered COVID-19 cases on that particular day. The model is significant at the 0.5% level (P < 2.16 × 10^(−16)) and the R-squared value is 86.8%. We select the model using the adjusted R-squared and the alpha value of Benjamin et al. (2018). The fitted model can be written as:

w_t(Recovered) = −8.1 × 10^3 + 5.79 x_t + 3 × 10^(−1) w_{t−1} + 3.1 × 10^(−1) w_{t−2} + 2.4 × 10^(−1) w_{t−3} + ε_t    (7)

We evaluate the RF, SVM, KNN and ANN models and compare their performance using several accuracy metrics, including ME, RMSE, MAE, MPE and MAPE. These metrics provide different perspectives for assessing predicting models: the first three are absolute performance measures, while the fourth and fifth are relative performance measures. The training sample is used to estimate the parameters of each specific model architecture, and the testing set is then used to select the best model among all the models considered. Tab. 5 summarizes the RF, SVM, KNN and ANN forecasting accuracy measures for the training set of the COVID-19 daily-deaths data.
In Tab. 5, the ME values for the RF, SVM, KNN and ANN models reveal that RF shows the lowest (best) value among the methods. The RMSE values show that ANN achieved better performance than the other methods, and the MAE values also indicate that ANN is better. For the MPE values, RF achieved better performance than the other methods, while the MAPE and SMAPE values reveal that ANN is below 1, which indicates that the selected model falls in the range of a perfect model (Ahmadini et al. 2021; Gao et al. 2019). Moreover, the R-squared value of ANN is greater than those of the other methods. We highlight the results for the ANN model, indicating the smallest values among all models; in most cases, the ANN method performs significantly better than the rest of the methods on the training parts.
In Tab. 6, the ME value for the ANN model is lower than those of the other models; the results indicate that ANN shows the lowest value among the methods and that its predictions are close to the actual values. The ME value for KNN is negative, which reveals that its predicted values are less than the actual values. Similarly, the RMSE and MAE values of ANN are smaller than those of the remaining methods, showing that ANN achieved better performance, and the MPE values are also smaller than those of the other methods. The MAPE and SMAPE values of the ANN model are better than those of the three other methods; these values are less than 1 for ANN, which indicates that the selected model falls in the range of a perfect model (Gao et al. 2019). In addition, the R-squared values of all the other methods are smaller than that of ANN, showing that ANN is better. We highlight the results for the ANN model, indicating the smallest values among all models; the ANN method performs significantly better than the rest of the methods on the 20% testing parts in most cases. Fig. 2 shows the plot of the forecasting accuracy measures for the models; it is clear from the plot that, on average, ANN is the best model for forecasting the daily deaths of the COVID-19 outbreak.
Tab. 7 summarizes the RF, SVM, KNN and ANN forecasting accuracy measures for the COVID-19 confirmed cases on the training dataset. In Tab. 7, the ME value of the ANN model is smaller than those of the rest of the methods, indicating that the ANN predictions are near the actual values, while KNN has the lowest (best) value among the other methods with the highest accuracy. Similarly, ANN shows the lowest RMSE value compared with the rest of the methods, and the MAE and MPE values of the ANN model are the smallest among the methods. The MAPE and SMAPE values of the ANN and KNN models are smaller than those of the other methods.
Thus, the MAPE value for KNN is in the range of 1 to 10, which reveals that the selected model falls into the category of a very good model. In the same way, the R-squared value of ANN is better than those of the other methods. Overall, the ANN method performs significantly better than the other methods on the training parts, indicating that the ANN results are more consistent than those of RF, SVM and KNN.
In Tab. 8, the ME value for the ANN model is the lowest (best) among the methods, with the highest accuracy. Likewise, the RMSE and MAE values of the ANN model indicate that its predictions are close to the actual values, and the MPE value of ANN is the smallest among the methods. The MAPE and SMAPE values also show that ANN has the smallest values and lies in the range of 1 to 10, so the selected model falls into the category of a very good model. Furthermore, the R-squared value indicates that ANN is better than the other methods. On average, the ANN method performs significantly better than the other methods on the 20% testing parts, indicating that the ANN results are more consistent than those of RF, SVM and KNN. Fig. 3 shows the plot of the forecasting accuracy measures for the different models.
In Tab. 9, the ME and RMSE values of the ANN model are the lowest (best) among the methods, with the highest accuracy, revealing that its predictions closely capture the original data. Similarly, the MAE and MPE values of the ANN model are the smallest among the methods, showing that ANN captures the real data better than the other methods. The MAPE and SMAPE values of the ANN and RF models, respectively, are better than those of the other methods, and the MAPE value for ANN lies in the range of 1 to 10, so the selected model falls into the category of a very good model. In addition, the R-squared values reveal that the other methods are less efficient than ANN. On average, the ANN method performs significantly better than the other methods on the 20% testing parts, indicating that the ANN results are more consistent than those of RF, SVM and KNN. Fig. 3 shows the plot of the forecasting accuracy measures for the different models.
In Tab. 10, the ME and RMSE values for the ANN model are the lowest among the methods, with the highest accuracy, and the MAE and MPE values indicate that ANN has the smallest values among the methods. Moreover, ANN follows the real data pattern with the smallest error compared with the other methods. Similarly, the MAPE and SMAPE values for ANN are in the range of 1 to 10, which reveals that the selected model falls into the category of a very good model, while the R-squared value of ANN is higher than those of the other methods. On average, the ANN method performs significantly better than the other methods on the 20% testing parts, indicating that the ANN results are more consistent than those of RF, SVM and KNN. Fig. 4 shows the plot of the forecasting accuracy measures for the different models.

Discussion
The performance of the neural network model can be assessed, once the network is trained, by employing the performance function on its predictions.
All the methods are capable of capturing the pattern of the data effectively. Moreover, ANN performed well and captured almost the whole pattern of the testing part of the data when compared with the RF, SVM and KNN methods. Fig. 3 shows the prediction accuracy for the number of daily COVID-19 recovered cases for the RF, SVM, KNN and ANN methods. The original worldwide daily-deaths testing data of COVID-19 and the forecasts of the RF, SVM, KNN and ANN models are plotted in Fig. 5, which displays the prediction accuracy of the four models. All the models competently capture the pattern of the daily death cases of COVID-19, and Fig. 5 clearly shows that ANN captures the pattern of the test set better than the RF, SVM and KNN methods. Fig. 5 also displays the prediction accuracy of the RF, SVM, KNN and ANN models for the daily recovered COVID-19 cases; similar to the death-case results, all the models effectively captured the pattern of the daily recovered cases. In the same way, in Fig. 6 and Fig. 7, ANN captures the pattern of the test part of the data, while the rest of the methods first follow the pattern to some extent and then become insensitive to the original data. Fig. 6 and Fig. 7 are shown below.
In Fig. 8, the original COVID-19 daily-deaths data points and the resulting ANN forecast are plotted for the next fifteen days (19 August 2021 to 02 September 2021). As shown in the figure, the ANN forecast captures and follows the pattern of the original death cases of COVID-19. The forecasted line for the subsequent fifteen days fluctuates near 10,000, and the forecasted number of deaths tends to drift gradually upward over time, indicating that the number of daily deaths increases over time.
In Fig. 9, the original COVID-19 confirmed-cases data and the ANN forecast are shown for the next fifteen days (19 August 2021 to 02 September 2021). The ANN forecast captures the pattern of the original confirmed-case data, and the forecast for the next fifteen days drifts upward, revealing that the number of daily confirmed cases is increasing over time.
In Fig. 10, the original COVID-19 recovered-cases data and the ANN forecast are shown for the next fifteen days (19 August 2021 to 02 September 2021). The ANN forecast captures the pattern of the original recovered-case data, and the forecast for the next fifteen days drifts downward, revealing that the number of daily recoveries is decreasing over time.
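The fifteen-day-ahead forecasts discussed above are obtained by iterating a one-step model: each new prediction is appended to the lag window and fed back in. The following Python sketch shows this recursive scheme for a generic fitted regressor; the three-lag window and the fitted model object are illustrative assumptions rather than the exact configuration used in the paper.

```python
import numpy as np

def recursive_forecast(model, history, n_lags=3, horizon=15):
    """Iterate a one-step-ahead model to produce a multi-step (e.g. 15-day) forecast.

    model   : any fitted regressor with a .predict() method (RF, SVR, KNN, MLP, ...)
    history : 1-D array of the most recent observed daily values
    """
    window = list(history[-n_lags:])
    forecasts = []
    for _ in range(horizon):
        x = np.array(window[-n_lags:]).reshape(1, -1)   # most recent lags as features
        y_hat = float(model.predict(x)[0])
        forecasts.append(y_hat)
        window.append(y_hat)                            # feed the prediction back in
    return np.array(forecasts)

# Example (hypothetical): fifteen_day = recursive_forecast(ann_model, daily_deaths_series)
```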
The key findings of this work are as follows:
• The machine learning approaches are compared in this study to predict the COVID-19 cases.
• The ANN results are, on average, better than those of the other methods according to the performance metrics, and ANN is used to forecast the values for the next fifteen days.
• The forecast obtained with ANN shows that the total number of death cases will increase over the next fifteen days.
• The confirmed-cases forecast for the next fifteen days reveals that the number of confirmed cases will increase.
• The recovered-cases forecast for the next fifteen days reveals that the number of recovered cases will decrease.
• This study shows that ANN provides the best forecast for the short term; therefore, policymakers can use this technique to make up-to-date decisions for short-term planning.

Limitations:
• In this study, we did not consider or measure other parameters such as the number of lockdowns, social distancing, and self-isolation measures.
• The current study did not measure the association between the number of vaccinated people and the number of new daily cases.

Future Work: In this study, the RF, SVM, KNN and ANN algorithms were used, and all the algorithms captured the original track in almost all cases, that is, for the daily confirmed cases, deaths, and the number of recovered cases of the four countries; however, the performance metrics favour the ANN model. Moreover, it would be possible to consider other parameters, such as the number of lockdowns in a country, the number of vaccinations administered, and treatment procedures, which can help governments make and adjust their policies according to the various forecasted cases.

Conclusions
This paper proposed four predicting models for the COVID-19 outbreak and evaluated them using ME, RMSE, MAE, MPE, MAPE and SMAPE. The results for the daily death cases are based on 80% training and 20% testing parts; among the four methods, ANN achieved better results in every aspect according to these performance metrics. In the same way, the results for the daily recovered cases were obtained using 80% training and 20% testing parts, and ANN attained better results than the other methods. Moreover, the daily confirmed-cases results were obtained using the same training and testing parts, and in most cases ANN performed better than the other methods. Therefore, the major finding of this study is that ANN outperforms the rest of the methods for both models. In addition, ANN provides consistent prediction performance compared with the RF, SVM and KNN models and is hence preferable as a robust forecast model. The accuracy of the AI-based methods for predicting the trajectory of COVID-19 is high, and for this specific application the authors consider the results reliable. In this study, ANN showed the fastest convergence and good forecasting ability in most cases. The AI results can help in short-term planning for disease occurrences, and the estimated models will help public authorities and medical staff to prepare for the coming situation and to plan further ahead in medical care structures. The forecasted figures were calculated for the next fifteen days (19 August 2021 to 02 September 2021) for the COVID-19 data. Predicting such events is difficult, and customized models may not generalize to the cultural and financial conditions of different countries. In this study, the proposed models do not consider factors such as area and other government strategies; therefore, this should be kept in mind when interpreting these predictions.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.

Table 2 (on next page): Summary of ARDL model 1 for the daily deaths of COVID-19. Significance codes: significant at 1%, '**' significant at 0.5%, '*' significant at 10%. "
"Original Manuscript ID: CS-2021:02:57837:1:0: NEW Original Article Title: “Comparative Analysis of Machine Learning Approaches to Analyze and Predict the Covid-19 Outbreak” To: Editor-in-Chief PeerJ Computer Science Re: Response to reviewers Dear Editor, Thank you for letting a resubmission of our manuscript, with a prospect to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting showing changes, and (c) a clean restructured manuscript deprived of highlights (PDF main document). The concerns elevated by the reviewers has significantly improved the articles. Best regards, Muhammad Aamir et al. Editor’s, Concern # 1: The authors addressed most of the previous comments and requests, but they forgot to include the results measured with R-squared. Please add the formula of R-squared in Table 1, and its measured results in Tables 5, 6, 7, 8, 9, and 10. Author response: Thank you very much for addressing the important concerns. Author Action Now we include the R-square in the performance criteria table. Moreover, we also include all of the results of R-square for RF, SVM, KNN and ANN. Please find the updated results in the revised manuscript that are measured based on R-squared value. Reviewer’s 1, Concern # 1: My last concern has primarily been addressed. Thanks, authors, for their dedicated efforts Author response: Thank you very much for the comments. Author Action: We acknowledged your concerns, and indeed improved our manuscript. "
Here is a paper. Please give your review comments after reading it.
254
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Accurate disease classification in plants is important for a profound understanding of their growth and health. Recognizing diseases in plants from images is one of the critical and challenging problem in agriculture. In this research, a deep learning architecture; CapPlant model is proposed, that utilizes plant images to predict whether it's healthy or contain some disease. The prediction process does not require handcrafted features; rather, the representations are automatically extracted from input data sequence by architecture.</ns0:p><ns0:p>Several convolutional layers are applied to extract and classify features accordingly. Last convolutional layer in CapPlant is replaced by state-of-the-art capsule layer to incorporate orientational and relative spatial relationship between different entities of a plant in an image to predict diseases more precisely. Proposed architecture is tested on PlantVillage dataset which contains more than 50,000images of infected and healthy plants. Significant improvements in terms of prediction accuracy has been observed using CapPlant model when compared with other plant disease classification models. The experimental results on the developed model has achieved an overall test accuracy of 93.01%, with F1-score of 93.07%</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The existence, survival and development of human race revolve around agriculture as major portion of food is derived from agriculture. The modern technological agriculture sector strives to enhance the quality and production of farming products and coping with cultivation diseases. These diseases are major threat to agricultural development as they adversely affect plants growth and quality resulting in reduced crop yield <ns0:ref type='bibr' target='#b43'>(Yin and Qiu, 2019)</ns0:ref>. To minimize such threats, the complex and unpredictable agricultural ecosystem requires continuous monitoring to analyze diverse physical and environmental aspects. Deep leaning (DL) can be utilized as it constitutes a modern and state-of-the-art technique for image processing and data analysis with great potential and promising results <ns0:ref type='bibr' target='#b16'>(Kamilaris and Prenafeta-Bold&#250;, 2018)</ns0:ref>. DL has been successfully applied in various domains like healthcare <ns0:ref type='bibr' target='#b22'>(Miotto et al., 2018)</ns0:ref>, automatic machine translation <ns0:ref type='bibr' target='#b38'>(Singh et al., 2017)</ns0:ref>, automatic text generation <ns0:ref type='bibr' target='#b25'>(Pawade et al., 2018)</ns0:ref>, image recognition <ns0:ref type='bibr' target='#b35'>(Satapathy et al., 2019)</ns0:ref> and agriculture <ns0:ref type='bibr' target='#b9'>(Gim&#233;nez-Gallego et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b47'>Zheng et al., 2019b)</ns0:ref> etc.</ns0:p><ns0:p>In past few years, researchers have targeted many crops such as blueberry, wheat, tomato and cherry for classification. Moreover, they have also targeted many plant diseases like leaf mold, late bright, tomato mosaic virus disease, two-spotted spider mite attack, target spot, tomato yellow leaf curl virus, rust, tan spot, septoria and others for detection. 
Figure 1 shows images of different diseases found in various plants.
Previously, researchers utilized handcrafted features along with classifiers for solving plant disease classification problems (Salazar-Reque et al., 2019; Shruthi et al., 2019). Presently, due to the success of DL techniques, many researchers are using them for solving various classification problems (Picon et al., 2019a; Ferentinos, 2018; Picon et al., 2019b; Too et al., 2019; Kamal et al., 2019). Pre-trained models have also been widely applied to this task; although they have shown some promising results, their time complexity and computational cost still need improvement (Picon et al., 2019b; Too et al., 2019; Kamal et al., 2019). Researchers have also created their own custom Convolutional Neural Network (CNN) based models for classification and tested them on various datasets (Picon et al., 2019a; Ferentinos, 2018). These models have some major drawbacks: one major issue is that some of them target only a small number of crops and diseases, and another is that they present results using limited or non-standard evaluation metrics rather than the accuracy (testing, training and validation), precision, recall and F1-score that are generally used for evaluating a classification model. In this research, a deep learning architecture, CapPlant, is developed using a CNN together with a capsule network to classify and detect diseases found in plants accurately.
Capsules in a capsule network (Sabour et al., 2017; Hinton et al., 2011, 2018) are groups of neurons that encode spatial information as well as the probability of an object being present. In contrast to a CNN, a capsule encodes information in vector form so that spatial information is stored as well. In recent years, capsule networks have been used for detection (Afshar et al., 2018), text classification (Zhao et al., 2018; Kim et al., 2020; Zhao et al., 2019; Ren and Lu, 2018), tumor classification (Afshar et al., 2019a,b), bioinformatics (de Jesus et al., 2018) and simple classification problems (Lukic et al., 2019; Hilton et al., 2019; Zhao et al., 2018).
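Because a capsule outputs a vector whose length is read as the probability that an entity is present, capsule networks apply a "squash" non-linearity that maps the vector length into the [0, 1) range while preserving its direction (Sabour et al., 2017). The following NumPy sketch is illustrative only and is not taken from the CapPlant implementation.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Squash a capsule's raw output vector s.

    The result keeps the direction of s (its pose) but maps its length to
    ||s||^2 / (1 + ||s||^2), so the length behaves like a probability.
    """
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = norm_sq / (1.0 + norm_sq)
    return scale * s / np.sqrt(norm_sq + eps)

raw = np.array([3.0, 4.0])                       # a hypothetical 2-D capsule output
v = squash(raw)
print(np.linalg.norm(raw), np.linalg.norm(v))    # 5.0 -> ~0.96, direction unchanged
```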
Keeping in view the success of capsule networks, (Bass et al., 2019; Jaiswal et al., 2018; Upadhyay and Schrater, 2018) have also explored capsules together with Generative Adversarial Networks (GANs) and have presented promising results.
The conceptual novelty of this work is the utilization of a capsule network along with a CNN. Because a CNN stores information in scalar form, it is considered translation and rotation invariant, whereas in a capsule network the information is grouped into a vector: the length of a capsule vector represents the probability that a feature exists in the image, and the direction of the vector represents its pose information. Therefore, exploiting a capsule layer enables the model to capture the relative spatial and orientational relationships between different entities of an object in an image.
The rest of the paper is organized as follows: section II briefly explores different studies related to plant disease detection; implementation details, along with the models on which CapPlant is built, are discussed in detail in section III, followed by results in section IV. The paper is concluded in section V along with some future recommendations.

1 RELATED WORK
A considerable amount of literature has been published on plant disease classification using conventional machine learning (Salazar-Reque et al., 2019; Shruthi et al., 2019) and deep learning (Picon et al., 2019b; Too et al., 2019; Kamal et al., 2019) techniques. A few of the existing studies have also explored capsule networks for plant disease classification (Kurup et al., 2019; Li et al., 2019). This section explains a few notable previous works on plant disease classification.
Ashqar and Abu-Naser (2018) implemented tomato leaf disease detection using DL. Among the many diseases that can affect the tomato plant, only five were considered. The CNN used for classification was divided into two parts: the first part, used for feature extraction, consisted of four convolution layers with the ReLU activation function followed by max pooling, while the second part comprised two dense layers followed by a flattening layer, with softmax used as the activation for the second part. Experiments were conducted on two types of images, one with three color channels and the other with one color channel. Nine thousand healthy and infected tomato images were considered for training. The dataset was created for six classes, which included early blight, bacterial spot, septoria leaf spot, leaf mold, yellow leaf curl virus and healthy. The original images were resized to a smaller size of 150x150 so that computation could be faster.
The quality of the images was maintained so that disease detection could work well. The proposed model gave an accuracy of 99.84% on three color channels, whereas the achieved accuracy was 95.54% on one color channel. The data collection process was manual and tedious, which resulted in a limited number of diseases being considered.
Salazar-Reque et al. (2019) implemented an algorithm for detecting visual symptoms of plant disease. The images used were grouped into nine different categories by disease and plant. These nine groups covered seven plants, that is, apple, grape, mango, potato, quinoa, peach and avocado. The target diseases for apple were scab, cedar apple rust and black rot; for avocado, necrosis and infection; for grape, black rot; for mango, necrosis; for potato, alternaria; for peach, bacterial spot; and for quinoa, mildew. A total of 279 images were considered, of which 90 belonged to apple diseases, 30 to grape disease, 30 to avocado, 30 to mango disease, 30 to potato disease, 30 to peach disease and 30 to quinoa. No images of healthy plants were considered. The developed system used a clustering algorithm to group pixels of similar color into regions known as superpixels. The proposed system used no images of healthy plants and only a small dataset of infected plant images. The image groups gave different True Positive Rates (TPR) and False Positive Rates (FPR), indicating that proper groups were not formed.
Reddy et al. (2019) presented the idea of combining bioinformatics with image processing for detecting diseases in crops and plants. The proposed methodology used the HSI (Hue, Saturation, Intensity) algorithm for image segmentation. A digital camera was used for capturing images, and unwanted areas of the image were removed using different techniques. Pixels in which the green intensity exceeded a desired threshold were considered to indicate unhealthy crops/plants. The authors did not mention any details about the dataset used for the experiment, nor did they report any results to support the stated conclusions.
In another multi-crop study, the images were taken from a cell phone. The dataset was created for seventeen diseases of five crops: winter wheat, corn, rapeseed, winter barley and common rice. The diseases considered were Septoria tritici, Puccinia striiformis, Puccinia recondita, Septoria nodorum, Drechslera tritici-repentis, Oculimacula yallundae, Gibberella zeae, Blumeria graminis, Helminthosporium turcicum, Phoma lingam, Pyrenophora teres, Ramularia collo-cygni, Rhynchosporium secalis, Puccinia hordei, various diseases, Thanatephorus cucumeris and Pyricularia oryzae.
Total one lac twenty one thousand nine hundred and fifty five images were considered, out of which eleven thousand two hundred and ninety five belonged to common rice, thirty two thousand two hundred and twenty nine belonged to winter barley, thirteen thousand seven hundred and seventy four belonged to rapeseed, sixty four thousand and twenty six belonged to winter wheat and six thirty one belonged to corn. Independent particular crop models reported a balance accuracy of 0.92, whereas multi crop reported a balance accuracy of 0.93. The proposed system lacked in sharing complete training and testing results. The system lacked in considering number of result variants i.e. F1 score, recall and precision. with many improvements in tile cropping and augmentation scheme. A mobile application was developed for providing input to the system; pictures were taken manually from app and then loaded on a server for further processing. More than eight thousand images were considered for training. The system was able to process the image and find the disease in quick time. The implemented technique considered only three diseases for wheat crop, which made the scope minor. Only tomato images were considered. The diseases considered were leaf mold, spider mites, septoria leaf spot, early blight, bacterial spot, mosaic virus, yellow curl virus and target spot. Accuracy reported for tomato disease classification using AlexNet was 95.65% and using SqueezeNet was 94.3%. The authors considered only tomato diseases and instead of creating their own neural network use pre built models.</ns0:p><ns0:p>The authors failed to report other evaluation metrics like F1 score, precision or recall. <ns0:ref type='bibr' target='#b29'>Ramcharan et al. (2017)</ns0:ref> implemented cassava disease detection using deep learning. The dataset was custom build using Sony Cybershot 20.2mp camera. The dataset was captured over a period of four weeks and consisted of about eleven thousand six hundred and seventy images. The dataset was named 'leafleft cassava dataset'. Five diseases were considered that are Cassava brown streak disease, Red mite damage, Cassava mosaic disease, Green mite damage and Brown leaf spot. A deep convolutional neural network Inception v3 was used for cassava disease detection. The last layer of CNN was replaced with three different variations to test the model on three different architectures. Three different architectures were support vector machines, softmax layer and knn. Confusion matrix was reported for different cassava diseases. The proposed technique is used only for cassava plant. The proposed technique considered only five out of many diseases. <ns0:ref type='bibr' target='#b46'>Zheng et al. (2019a)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Kurup et al. (2019)</ns0:ref> has explored capsule network and CNN for plant disease classification. Network architecture in <ns0:ref type='bibr' target='#b46'>(Zheng et al., 2019a)</ns0:ref> utilized 2 convolutional layers and 1 primary capsule layer for training and testing. However, the proposed model only presented test and train precision of 88% and 90% respectively. For reducing the drawbacks and also to get better performance new architecture of CNN; capsulenet is implemented in <ns0:ref type='bibr' target='#b19'>(Kurup et al., 2019)</ns0:ref>. Capsulenet was analyzed for two datasets: 1) first model was built for diagnosis of plant disease using plant leaves images. 
The dataset used for training contained 54,306 images of 14 different plant species. The proposed architecture reported an accuracy of around 94%. The problem with CNN is that it only considers presence of entities in an object,it does not take into account not relative spatial relationship between them. </ns0:p></ns0:div> <ns0:div><ns0:head n='2'>METHODOLOGY</ns0:head><ns0:p>To predict plant diseases from given images, simple yet effective model CapPlant is proposed in which last convolutional layer is replaced by state-of-the-art capsule layer to incorporate relative spatial and orientational relationship between different entities of plant in an image. The overall pipeline of proposed model is illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>. Following sub section explains capsule network and the deep learning architecture of CapPlant that performs end-to-end learning for plant disease classification. End-to-end learning replaces a pipeline of components with a single deep learning neural network and goes directly from the input to the desired output. It eradicates the need of preprocessing or complex feature extraction process by learning directly from the labeled data and simplifies the decision making process. End-to-end models are of key importance for building artificial intelligence systems because of their simplicity, performance and data-driven nature <ns0:ref type='bibr' target='#b28'>(Rafique et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Capsule Network</ns0:head><ns0:p>In the field of DL, although CNN has been a huge success, however, they have some major drawbacks in their basic architecture which causes them to fail in performing some major tasks. CNN automatically extract features from images and from these features it learns to detect and recognize different objects.</ns0:p><ns0:p>Early layers extract simple features like edges and as layers proceed features become more and more complex. At the end, CNN uses all extracted features to make a final prediction. Here lies the major drawback in basic architecture of CNN that only presence of feature is captured and nowhere in this approach spatial information is stored.</ns0:p><ns0:p>Capsule Network <ns0:ref type='bibr' target='#b33'>(Sabour et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Hinton et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b12'>Hinton et al., , 2018) )</ns0:ref> have recently been proposed to address this limitation of CNN. Capsules are the groups of neurons that encode spatial information as well as the probability of an object being present. In capsule network, corresponding to each entity in an image, there is a capsule which gives:</ns0:p><ns0:p>1. Probability that the entity exists.</ns0:p><ns0:p>2. Instantiation parameters of that entity.</ns0:p><ns0:p>The main operations within capsules are performed as follow:</ns0:p><ns0:p>To encode the imperative spatial association between low and high level features within the image, the multiplication of the matrix of the input vectors with the weight matrix is calculated.</ns0:p></ns0:div> <ns0:div><ns0:head>6/15</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_0'>&#251; j|i = W i j u i + B j (1)</ns0:formula><ns0:p>The sum of the weighted input vectors is calculated to determine that current capsule will forward its output to which higher level capsule.</ns0:p><ns0:formula xml:id='formula_1'>su j = &#8721; i c i j &#251; j|i (2)</ns0:formula><ns0:p>Finally, non-linearity is applied using the squash function. While maintaining a direction of a vector, squashing function maps it to maximum length of 1 and minimum length of 0. </ns0:p><ns0:formula xml:id='formula_2'>v j = squash(su j )<ns0:label>(3</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='2.2.2'>Feature Representation Layers</ns0:head><ns0:p>When the input is fed into CapPlant, features or representations is extracted from the inputs by passing it through 4 layers of convolution, followed by ReLU activation and max polling after each layer. The output of 4th convolutional layer is reshaped and passed through the capsule layer to capture relative spatial and orientational relationship between different entities of an object in image. Tensor obtained from capsule layer is flattened and then passed through densely-connected Neural Network (NN) layer. In the end, softmax is applied to squash a vector in the range (0, 1) such that all the resulting elements add up to 1. For a particular class c i , the softmax function can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>f (c) i = e c i &#8721; C j e c j (4)</ns0:formula><ns0:p>Where c j are the scores inferred by the model for each class in C. Softmax activation for a class c i depends on all the scores in c. </ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.3'>Normalization and Output Layers</ns0:head><ns0:p>The CapPlant model is compiled based on the features extracted from the previous layers. To calculate the error, the Categorical Cross Entropy loss (CE) is used as follows:</ns0:p><ns0:formula xml:id='formula_4'>CE = &#8722;log( e c p &#8721; C j e c j )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where c p is the CNN score for the positive class.</ns0:p></ns0:div> <ns0:div><ns0:head>7/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>RESULT</ns0:head><ns0:p>This section presents the results of the CapPlant Model. Extensive experimentation is carried out to evaluate the performance of our model.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Experiments</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1.1'>Experimental Setup</ns0:head><ns0:p>GPU Tesla K40c workstation is used as baseline system for training, testing and validation of Cap-Plant model. Keras, OpenCV, Capsule Layers, Matplotlib and CuDNN libraries are used for software implementation of CapPlant. For training of our model CapPlant, Adam is applied as an optimizer, categorical cross entropy is utilized for calculating loss and accuracy is used as evaluation metric. Overall training and validation losses and accuracies are exploited to determine the performance of our model. Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> lists parameters along with their values adjusted for training and testing of CapPlant. 
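To make this configuration concrete, the sketch below builds a Keras model and training setup consistent with the layer shapes listed in Table 2 and the hyperparameters in Table 1. It is an illustrative reconstruction rather than the released CapPlant code: in particular, the capsule layer is approximated by a plain convolutional placeholder, because the ConvCapsuleLayer implementation used by CapPlant is not reproduced in the text.

```python
from tensorflow.keras import layers, models, optimizers

def build_capplant_like(num_classes=38):
    """Sketch of a CapPlant-like network; the capsule layer is only a placeholder."""
    inputs = layers.Input(shape=(224, 224, 3))
    x = inputs
    for filters in (16, 32, 64, 128):                       # four conv blocks, as in Table 2
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)                        # 224 -> 112 -> 56 -> 28 -> 14
    # Placeholder standing in for the ConvCapsuleLayer that yields a (14, 14, 256) tensor.
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)                                 # 14 * 14 * 256 = 50176 features
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_capplant_like()
model.compile(optimizer=optimizers.Adam(learning_rate=2e-4, beta_1=0.5),  # Table 1 settings
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The output shapes produced by this skeleton are intended to mirror those tabulated for CapPlant in Table 2; fitting would then use the batch size of 32 and the epoch budget listed in Table 1.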
Whereas, detail of each layer designed for testing, training and validation of CapPlant model is listed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Furthermore, Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> is a visual illustration of data flow between each layer of proposed CapPlant model. Moreover, Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> shows the learning curves obtained while calculating training and validation accuracy and losses. CapPlant model is trained for total 200 epochs, however, trained models at 50, 100, and 150 epochs were also obtained for the sake of comparison. Also, early stopping was employed at epoch 100 to avoid over fitting.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Results of Experiments</ns0:head><ns0:p>Dataset. In this research, PlantVillage; An open access repository of images on plant health to enable the development of mobile disease diagnostics <ns0:ref type='bibr' target='#b13'>Hughes and Salath'e (2015)</ns0:ref> obtained from source <ns0:ref type='bibr' target='#b23'>(Mohanty, 2018)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Input:</ns0:p><ns0:p>( <ns0:ref type='bibr'>32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: 
<ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Output:</ns0:p><ns0:p>(32, 50176) <ns0:ref type='table' target='#tab_6'>3</ns0:ref>. <ns0:ref type='table' target='#tab_7'>4</ns0:ref> shows the values of above evaluation metrices calculated for CapPlant model. Figure <ns0:ref type='figure'>5</ns0:ref> and 6 demonstrates bar chart representing recall, precission and F1 score, calculated for each disease and healthy category of plants respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Comparison with previous Models</ns0:head><ns0:p>To further demonstrate the effectiveness of proposed CapPlant model, it is compared with previous state-of-the-art models for plant disease classification and detection. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In future, a recommender system with our proposed technique can be integrated to suggest various actions that need to be taken against given disease. Moreover, the idea of using CNN with capsule networks can bring significant improvement in the performance of many already existing DL models. It can point us towards a direction to explore various applications using capsules network.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Images of Diseases Found in Various Plants</ns0:figDesc><ns0:graphic coords='3,193.43,63.78,310.20,310.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b39'>Sullca et al. (2019)</ns0:ref> implemented diseases detection in blueberry leaves using computer vision and machine learning techniques. Noise removal in images was handled with the help of gaussian blur and median blur filters. Details in each image were enhanced with the help of weighted filters. Blueberry leave pictures were then group into three categories: plagued, healthy and diseased. Local binary patterns and histogram of oriented gradients were used for characteristics extraction. 
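As a brief illustration of these two classical descriptors, the sketch below extracts histogram-of-oriented-gradients and local-binary-pattern features with scikit-image; the parameter values and the file name are assumptions made for the example and are not those reported by Sullca et al.

```python
import numpy as np
from skimage import color, io
from skimage.feature import hog, local_binary_pattern

# Hypothetical blueberry-leaf image; any RGB photograph works for the sketch.
gray = color.rgb2gray(io.imread("blueberry_leaf.jpg"))

# Histogram of oriented gradients descriptor.
hog_features = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Uniform local binary patterns, summarised as a normalised histogram.
lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

# Concatenated feature vector that a conventional classifier could consume.
features = np.concatenate([hog_features, lbp_hist])
print(features.shape)
```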
Due to unavailability of blueberry leaves dataset, custom dataset was created. Deep learning model gave an accuracy of 84%, predicting the disease of blueberry leaves. The proposed system only considered blueberry pest infections.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b27'>Picon et al. (2019b)</ns0:ref> implemented CNN model for classification of plant diseases. Three CNN models were proposed for combining different aspects together like crop identification data, geographical locations and weather conditions etc. Around one hundred thousand pictures of the actual field conditions were3/15PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Picon et al. (2019a) implemented deep convolution networks for disease detection in crop. The images used were divided into four groups of Rust, Tan Spot, Septoria and Healthy. The images were taken from Wheat 2014, Wheat 2015 and Wheat 2016 databases. Total eight thousand one hundred and seventy eight images were considered, out of which three thousand three hundred and thirty eight belonged to Rust, two thousand seven hundred and forty four belonged to Septoria, one thousand five hundred and sixty eight belonged to Tan spot, one thousand one hundred and sixteen belonged to Healthy class. One thousand three hundred and eight five images were taken from Wheat 2014 database, two thousand one hundred and eighty nine images were taken from Wheat 2015 and three thousand nine hundred and sixty nine images were taken from Wheat 2016 database. The proposed technique used residual neural networks</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b15'>Kamal et al. (2019)</ns0:ref> implemented depth wise separable architectures (convolution) for classification of plant diseases. The system developed used leaves images for detection of plant diseases. Several models were trained using the proposed method and reduced MobileNet stood out. More than eighty thousand images dataset was considered for training and testing, which covered fifty five classes of healthy and diseased plants. The images were taken from PlantVillage for training and for testing PlantLeafs dataset was used. Eighty two thousand one hundred and sixty one images of PlantVillage were considered, eighteen thousand five hundred and seventeen images of PlantLeaf1 were considered, tweenty three thousand one hundred and ten images of PlantLeaf2 were considered and thirty two thousand two hundred and forty one images of PlantLeaf3 were considered. Number of classes included in PlantVillage are fifty-five, where as in PlantLeaf1 these are eighteen, in PlantLeaf2 these are eleven and in PlantLeaf3 these are sixteen. It gave 36.03% accuracy when tested on pictures taken under different parameters than those of training. Even though the number of image dataset of healthy / diseased plants were more, nonetheless the developed system only considered accuracy as an evaluation metric and reported no precision, recall and F1 score.<ns0:ref type='bibr' target='#b36'>Sengar et al. (2018)</ns0:ref> implemented identification and quantification of powdery mildew disease in cherry using computer vision based technique. Adaptive intensity focused thresholding method was proposed for powdery mildew disease automatic segmentation. 
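To give a flavour of this kind of intensity-based segmentation, the sketch below applies OpenCV's generic adaptive thresholding to a grayscale leaf image; it is only a rough stand-in with assumed parameters and an illustrative file name, not the adaptive intensity focused thresholding method of Sengar et al.

```python
import cv2

# Hypothetical cherry-leaf image; path and parameters are illustrative only.
gray = cv2.imread("cherry_leaf.jpg", cv2.IMREAD_GRAYSCALE)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Pixels noticeably brighter than their local neighbourhood (e.g. powdery deposits)
# are kept as foreground; block size 31 and offset -5 are assumed values.
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, -5)

print(f"flagged pixel fraction: {(mask > 0).mean():.2%}")
```

A binary mask of this kind is the sort of intermediate output from which the disease-spread measures described next can be derived.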
Two parameters were used in assessment of the level of disease spread in plants: 1) the portion in plant that was effected by the disease and 2) the length of effected portion in plant. Proposed model achieved 99% accuracy. The proposed technique may be used for predicting only one disease in cherry plant.Rangarajan et al. (2018) implemented tomato disease classification with the help of pre-trained deep learning algorithm. Two pre-trained models i.e. VGG16net and AlexNet were used by the authors . Thirteen thousand two hundred and sixty two tomato images from PlantVillage [40] dataset containing six disease classes and one heathy class were used by the proposed system. Accuracy reported for disease classification using VGG16net was 97.29% and using AlexNet was 97.49%. Comparing AlexNext and VGG16net, minimum execution time and better accuracy were reported with AlexNet. The authors considered six diseases for tomato plant for which they used pre-trained networks. 4/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021) Manuscript to be reviewed Computer Science Mohanty et al. (2016) implemented plant disease detection using DL techniques on plant images All images used were resized 256 x 256, both model prediction and optimizations were performed on these images. Twenty-six diseases in fourteen crops were detected using this model. Pre-trained models, AlexNet and GoogleNet were considered for this experiment. Models were trained on three different variations of PlantVillage datasets, first they were trained on color images, then on gray scale images and finally on segmented leaves images. Dataset containing fifty four thousand three hundred and six images was used containing healthy and diseased plant leaves. Dataset targeted different thirty eight classes. Five different training -test distributions were used i.e first Train 80% -Test 20%, second Train 60% -Test 40%, third Train 50% -Test 50%, fourth Train 40% -Test 60%, last one Train 20% -Test 80%. Color, grey scale and leaf-segmented images were considered. Two different training mechanisms were considered, first transfer learning and second training from scratch. Model achieved 99.35% accuracy but on a held out test set. The system dropped accuracy to 31% when tested on different images other than training images. The developed technique used pre-trained networks instead of developing their own neural network for classification.Barbedo<ns0:ref type='bibr' target='#b4'>Barbedo (2019)</ns0:ref> implemented plant disease identification using deep learning. The diagnosis in the given algorithm considers image classification on two things, one spots and second lesions. Forty six thousand four hundred and nine images were considered for disease identification. The images were taking using many sensors. The resolution of captured images were upto 24 MPixels. The plants considered were common bean, cassava, citrus, cocunut tree, corn, Kale, Cashew Tree, Coffee, Cotton, Grapevines, Passion fruit, Soybean, Sugarcane and wheat. Overall fourteen plants and seventy nine diseases were considered but many had very few images associated with them. The model used was pretrained GoogLNet CNN. Accuracy was reported for different plants. The developed technique used pre-trained networks instead of developing their own neural network for classification. The developed system focused more on creating a custom dataset for disease detection. The developed system used less images for many classes. 
Many conditions had a few images associated with them in the dataset captured.<ns0:ref type='bibr' target='#b7'>Durmu&#351; et al. (2017)</ns0:ref> implemented tomato disease detection using deep learning. Diseases that occurred in tomato fields or greenhouses both were considered. AlexNet and SqueezeNet algorithms were used for training and testing of tomato disease detectio. Images were taken from PlantVillage dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>2) second model was trained for classification of plant leaves. Dataset used for training had 2,997 images of 11 plants. The prediction model with capsulenet gave an accuracy of around 85%. All the above crop disease detection techniques work well for detecting the diseases in crops, however they have few limitations such as: 1. Limited Scope: Less number of crops/diseases are targeted. 2. Limited Evaluation Metrices: Conclusion has been achieved based on few result parameters. 3. Limitation of CNN: Most of the techniques used pre-trained networks or created their own CNN.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Visual illustration of data flow between each layer of proposed CapPlant model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>319</ns0:head><ns0:label /><ns0:figDesc>For calculating predicting performance of CapPlant model, several evaluation metrices are calculated 320 such as F1 score, accuracy, recall and precision.321 accuracy = T p + T n T p + T n + F p + F n 9/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .where</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Learning Curves for CapPlant. To prevent overfitting of data, early stopping was employed at epoch 100.</ns0:figDesc><ns0:graphic coords='11,356.28,66.79,182.23,318.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Bar chart representing precision, recall and F1 score, calculated for each 26 plant diseases.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>128 Reshaping Caspsule Layer 14 x 14 x 256 Flattening 50176 Dense 38</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>Conv 1</ns0:cell><ns0:cell cols='2'>Max Pooling</ns0:cell><ns0:cell cols='2'>Conv 2</ns0:cell><ns0:cell>Max Pooling</ns0:cell><ns0:cell>Conv 3</ns0:cell><ns0:cell>Max Pooling</ns0:cell><ns0:cell>Conv 4</ns0:cell><ns0:cell>Max Pooling</ns0:cell></ns0:row><ns0:row><ns0:cell>224 x 224 x 3</ns0:cell><ns0:cell cols='2'>224 x 224 x 16</ns0:cell><ns0:cell cols='2'>112 x 112 x 16</ns0:cell><ns0:cell cols='2'>112 x 112 x 32</ns0:cell><ns0:cell>56 x 56 x 32</ns0:cell><ns0:cell>56 x 56 x 64</ns0:cell><ns0:cell>28 x 28 x 64</ns0:cell><ns0:cell>28 x 28 x 128</ns0:cell><ns0:cell>14 x 14 x</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 2. Network architecture for Caplant; Deep Learning Architecture for Plant Disease Prediction through pictures. CapPlant is a real deep learning architecture because it uses end-to-end learning.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Inputs to the network are plant images of size 224 x 224 x 3. 
Size of inputs to CapPlant network is represented as (y; 224 x 224 x 3), where y is the batch size. At the expense of reduced accuracy, small batch sizes lead to faster training. Relatively large batch sizes are used to increase accuracy at the expense of slower training. For training of CapPlant, batch size is set to 32.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>)</ns0:cell></ns0:row><ns0:row><ns0:cell>2.2 Network Architecture</ns0:cell></ns0:row><ns0:row><ns0:cell>2.2.1 Model Inputs</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Hyper Parameters Set for Training of CapGAN</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Image Size</ns0:cell><ns0:cell>224 x 224 x 3</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning Rate</ns0:cell><ns0:cell>0.0002</ns0:cell></ns0:row><ns0:row><ns0:cell>Momentum for Adam Update</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss</ns0:cell><ns0:cell>Categorical Cross Entropy</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Detail of each CapPlant layer along with output shape and number of obtained parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Total training images: 5000</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total validation images: 5423</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total validation images: 5470</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Classes: 38</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>43429 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5417 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5459 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Output Shape</ns0:cell><ns0:cell>Param #</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Layer</ns0:cell><ns0:cell>(32, 224, 224, 3)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 224, 224, 16)</ns0:cell><ns0:cell>448</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 112, 112, 16)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 112, 112, 32)</ns0:cell><ns0:cell>4640</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 56, 56, 32)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 56, 56, 64)</ns0:cell><ns0:cell>18496</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 28, 28, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 28, 28, 128)</ns0:cell><ns0:cell>73856</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 14, 14, 128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Reshape</ns0:cell><ns0:cell>(32, 14, 14, 1,128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>ConvCapsuleLayer</ns0:cell><ns0:cell>(32, 14, '4, 1 , 
256)</ns0:cell><ns0:cell>295168</ns0:cell></ns0:row><ns0:row><ns0:cell>Reshape</ns0:cell><ns0:cell>(32, 14, 14, 256)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatten</ns0:cell><ns0:cell>(32, 50176)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dense</ns0:cell><ns0:cell>(32,38)</ns0:cell><ns0:cell>1906726</ns0:cell></ns0:row><ns0:row><ns0:cell>Total params: 2, 299, 334</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trainable Params: 2, 299, 334</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Non-trainable params: 0</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3.1.2 Training</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>has been used for training and testing. PlantVillage dataset have 54,306 images belonging to 14</ns0:figDesc><ns0:table /><ns0:note>8/15PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Details of PlantVillage Dataset used for Testing and Training of CapPlant</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Plant</ns0:cell><ns0:cell>Class Label</ns0:cell><ns0:cell>Name</ns0:cell><ns0:cell># of Training Samples</ns0:cell><ns0:cell># of Validation Samples</ns0:cell><ns0:cell># of Testing Samples</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>0</ns0:cell><ns0:cell>Apple scab</ns0:cell><ns0:cell>504</ns0:cell><ns0:cell>63</ns0:cell><ns0:cell>63</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Apple</ns0:cell><ns0:cell>1 2</ns0:cell><ns0:cell>Black rot Cedar Apple rust</ns0:cell><ns0:cell>496 220</ns0:cell><ns0:cell>62 27</ns0:cell><ns0:cell>63 28</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>3</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1316</ns0:cell><ns0:cell>164</ns0:cell><ns0:cell>165</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Blueberry</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1201</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>151</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Cherry</ns0:cell><ns0:cell>5 6</ns0:cell><ns0:cell>Healthy powdery mildew</ns0:cell><ns0:cell>683 841</ns0:cell><ns0:cell>85 105</ns0:cell><ns0:cell>86 106</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell>Gray leaf spot</ns0:cell><ns0:cell>410</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>52</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Corn</ns0:cell><ns0:cell>8 9</ns0:cell><ns0:cell>Common rust Healthy</ns0:cell><ns0:cell>953 929</ns0:cell><ns0:cell>119 116</ns0:cell><ns0:cell>120 117</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>Northern leaf blight</ns0:cell><ns0:cell>788</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell>99</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>11</ns0:cell><ns0:cell>black rot</ns0:cell><ns0:cell>944</ns0:cell><ns0:cell>118</ns0:cell><ns0:cell>118</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Grape</ns0:cell><ns0:cell>12 13</ns0:cell><ns0:cell>Esca black measles Healthy</ns0:cell><ns0:cell>1106 338</ns0:cell><ns0:cell>138 42</ns0:cell><ns0:cell>139 43</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>14</ns0:cell><ns0:cell>Leaf blight</ns0:cell><ns0:cell>860</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>109</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>Orange</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>Haunglonbing</ns0:cell><ns0:cell>4405</ns0:cell><ns0:cell>550</ns0:cell><ns0:cell>552</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Peach</ns0:cell><ns0:cell>16 17</ns0:cell><ns0:cell>Bacterial spot Healthy</ns0:cell><ns0:cell>1837 288</ns0:cell><ns0:cell>229 36</ns0:cell><ns0:cell>231 36</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Pepper Bell</ns0:cell><ns0:cell>18 19</ns0:cell><ns0:cell>Bacterial spot Healthy</ns0:cell><ns0:cell>797 1182</ns0:cell><ns0:cell>99 147</ns0:cell><ns0:cell>101 149</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>20</ns0:cell><ns0:cell>Early blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Potato</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>22</ns0:cell><ns0:cell>Late blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Raspberry</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>296</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>38</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Soybean</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>4072</ns0:cell><ns0:cell>509</ns0:cell><ns0:cell>509</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Squash</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>Powdery Mildew</ns0:cell><ns0:cell>1468</ns0:cell><ns0:cell>183</ns0:cell><ns0:cell>184</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Starberry</ns0:cell><ns0:cell>26 27</ns0:cell><ns0:cell>Healthy Leaf scorch</ns0:cell><ns0:cell>364 887</ns0:cell><ns0:cell>45 110</ns0:cell><ns0:cell>47 112</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>28</ns0:cell><ns0:cell>Bacterial spot</ns0:cell><ns0:cell>1701</ns0:cell><ns0:cell>212</ns0:cell><ns0:cell>214</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>29</ns0:cell><ns0:cell>Early blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>30</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1272</ns0:cell><ns0:cell>159</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>31</ns0:cell><ns0:cell>Late blight</ns0:cell><ns0:cell>1527</ns0:cell><ns0:cell>190</ns0:cell><ns0:cell>192</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>32</ns0:cell><ns0:cell>Leaf Mold</ns0:cell><ns0:cell>761</ns0:cell><ns0:cell>95</ns0:cell><ns0:cell>96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Tomato</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>Septoria leaf spot</ns0:cell><ns0:cell>1416</ns0:cell><ns0:cell>177</ns0:cell><ns0:cell>178</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>34</ns0:cell><ns0:cell>Spider Mites</ns0:cell><ns0:cell>1340</ns0:cell><ns0:cell>167</ns0:cell><ns0:cell>169</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>35</ns0:cell><ns0:cell>Target spot</ns0:cell><ns0:cell>1123</ns0:cell><ns0:cell>140</ns0:cell><ns0:cell>141</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>36</ns0:cell><ns0:cell>Mosaic virus</ns0:cell><ns0:cell>298</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>38</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>37</ns0:cell><ns0:cell>Yellow leaf curl 
virus</ns0:cell><ns0:cell>4285</ns0:cell><ns0:cell>535</ns0:cell><ns0:cell>537</ns0:cell></ns0:row><ns0:row><ns0:cell>334</ns0:cell><ns0:cell cols='2'>4 CONCLUSION</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>335</ns0:cell><ns0:cell cols='6'>Advancement in DL and image processing provides a prospect to extend the research and applications</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>of detection and classification of various diseases in plants using images. In this research, simple</ns0:cell></ns0:row></ns0:table><ns0:note>336 and effective model, CapPlant is developed for classifying various categorizes of healthy and effected 337 plants. In CapPlant, convolutional layer along with capsule layer is used to capture more features. As 338 capsules incorporate orientation and relative spatial relationships between different entities in an object, 339 they outperform conventional CNN. Results obtained from experimentation clearly demonstrate the 340 effectiveness of proposed CapPlant model. For now, model has been tested and validated for already 341 publicly available PlantVillage dataset. The threats to the validity of results obtained from CapPlant model 342 11/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of different evaluation metrices measured for CapPlant with various State-of-the-art Models 07% 93.07% 93.07% 93.77% may depend upon properties such as size, unsharpness, bit depth, and noise in the underlying test images.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year Model</ns0:cell><ns0:cell>Training Accuracy</ns0:cell><ns0:cell>Validation Accuracy</ns0:cell><ns0:cell cols='5'>Test Accuracy Precision Recall F1-Score Average</ns0:cell></ns0:row><ns0:row><ns0:cell>2018 VGG net</ns0:cell><ns0:cell>83.86%</ns0:cell><ns0:cell>81.92%</ns0:cell><ns0:cell>81.83%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>82.53%</ns0:cell></ns0:row><ns0:row><ns0:cell>2019 Capsule Network</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row><ns0:row><ns0:cell>2020 CapPlant</ns0:cell><ns0:cell>98.06%</ns0:cell><ns0:cell>92.31%</ns0:cell><ns0:cell>93.07%</ns0:cell><ns0:cell>93.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:1:1:NEW 12 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
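For completeness alongside Table 4 and Figures 5 and 6, the following is a small sketch of how the reported evaluation metrics (accuracy, precision, recall and F1 score) can be computed from model predictions, assuming scikit-learn is available; the label arrays are random placeholders standing in for the model's outputs on the 38-class test split.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels; in practice these come from the test set and the model's predictions.
y_true = np.random.randint(0, 38, size=1000)
y_pred = np.random.randint(0, 38, size=1000)

accuracy = accuracy_score(y_true, y_pred)

# Per-class precision, recall and F1, as plotted in Figures 5 and 6.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=np.arange(38), zero_division=0)

# Aggregate values of the kind reported in Table 4 (weighted averaging is one common choice).
w_prec, w_rec, w_f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)

print(accuracy, w_prec, w_rec, w_f1)
```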
"Original Article Title: CapPlant: A Capsule Network based Framework for Plant Disease Classification To: Academic Editor, PeerJ Computer Science Re: Response to Reviewers Dear Editor, We want to thank the Reviewers for their valuable suggestions on the manuscript and have edited the manuscript to address their concerns. We believe that the paper presentation is now more sound and convincing. We are uploading (a) our point-by-point response to the comments (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). We believe that the manuscript is now suitable for publication in PeerJ! Best Regards, Omar Samin, Maryam Omar and Musadaq Mansoor Reviewer#1, Concern # 1: In the methodology section, authors have mentioned that CapPlant is a real deep learning architecture because it uses end-to-end learning. However, in research paper end-to-end learning is not explained and discussed. It would be better if the authors can explain the concept of end-to-end learning in the paper. Author Action: Thank you for your valuable suggestion. As recommended, we have added a paragraph at the end of section 2: Methodology that explains the concept of end-to-end learning. Reviewer#1, Concern # 2: Figure 3 shows the learning curves obtained while calculating training and validation accuracy and it gives an idea that model was trained for 200 epochs. In contrast, Table 1 lists 100 epochs. Author should explain that why model was trained for 200 epochs but model till 100 epochs was used for testing and validation. Author Action: We have trained our model for total 200 epochs, however we have also obtained trained models at 50, 100, and 150 epochs for sake of comparison. Figure 3 shows the learning curves obtained while calculating training and validation accuracy and it gives an idea that model was trained for 200 epochs, as early stopping was employed at epoch 100 to avoid overfitting, therefore Table 1 lists 100 epochs. We have also added this explanation at the end of section 3.1.2 training. Reviewer#1, Concern # 3: Please cite Plant Village dataset in Table 2. Author Action: We have updated the manuscript by citing Plant Village dataset in caption of Table 3 (Table 2 in previous manuscript is now Table 3 in updated manuscript). Reviewer#1, Concern # 4: It would be helpful if the authors can include the values of standard deviation values along with the accuracies reported in Table 4. Author Action: Upon suggestion, we have calculated the Standard Deviation (SD) of training, test and validation accuracies of VGGnet and CapPlant model. SD of VGGnet is found to be 1.14 whereas for CapPlant it is 3.14. Higher SD is observed for CapPlant as accuracies are 98.06%, 92.31% and 93.07% (above 90% but distributed) whereas for VGGnet model it is 83.86%, 81.92% and 81.83%(low accuracy but less deviation). Therefore, upon reconsideration we have calculated and added MEAN instead of SD to compare our models further in Table 4. We would request our reviewer to reconsider this point and accept MEAN values for comparison. Reviewer#2, Concern #1: In introduction author has mentioned that “exploiting capsule layer enables the model to capture relative spatial and orientation relationship between different entities of an object in image”. How? 
Author Action: We have added the following paragraph in the Introduction to clear ambiguities and to support our claim: As CNN stores information in scalar form, they are considered as translational and rotational invariant, whereas in a capsule network, information is grouped together in the form of a vector, where the length of a capsule vector represents the probability of the existence of a feature in an image and the direction of the vector represents its pose information. Therefore, exploiting the capsule layer enables the model to capture the relative spatial and orientation relationship between different entities of an object in an image. Reviewer#2, Concern # 2: In Table 2, the author has already assigned class labels; these labels should be used in Table 4 & Table 5 instead of class names while discussing the F1 score, Precision and Recall of each class. Author Action: Thank you for your suggestion. We tried to use the assigned labels in Figure 5 & Figure 6, as shown in the figure below; however, the chart becomes more amalgamated and complex to read when the names are not available, therefore we have reverted the changes to the original. In our opinion, it will be more convenient for the reader to understand the bar graphs if the names are available. We would request our reviewer to consider bar charts with names instead of class numbers. Reviewer#2, Concern # 3: The paper lacks a detailed model summary of the proposed CapPlant model. There should be a visual illustration that shows the type of each layer (input, CNN, Capsule, Dense etc.) and the input tensor and output tensor of the complete model. Author Action: Thank you for your suggestion. We have added Figure 3, which is a visual illustration that shows the type of each layer (input, CNN, Capsule, Dense etc.) and the input tensor and output tensor of the complete model. Reviewer#3, Concern # 1: In Lines 49-58, the authors mention that "These models have some major drawbacks, for instance one major issue with some of these models is in targeting less number of crops and diseases, secondly they are presenting results using limited or none evaluation metrices. In this research, a deep learning architecture; CapPlant is developed using CNN along with capsule network to classify and detect any disease found in plants accurately." It would be nice to elaborate a bit on the evaluation metrics that the existing techniques are not focusing on and the ones that CapPlant is dealing with. Author Action: Thank you for your valuable suggestion; we have updated the paragraph as follows: These models have some major drawbacks, for instance, one major issue with some of these models is in targeting less number of crops and diseases, secondly they are presenting results using limited or non-standard evaluation metrics like Accuracy (Testing, Training and Validation), Precision, Recall and F1-Score that are generally used for evaluating a classification model. In this research, a deep learning architecture; CapPlant is developed using CNN along with capsule network to classify and detect any disease found in plants accurately. Reviewer#3, Concern # 2: For the purpose of reproducibility, it would be nice to make the code available in the form of a Git repository. Author Action: Upon suggestion, we have created a public repository for CapPlant. [Github Link] Reviewer#3, Concern # 3: Threats to Validity for the CapPlant approach should be provided.
Author Action: Thank you for your suggestion. We have added the following paragraph to our conclusion that highlights the threats to validity for CapPlant: For now, the model has been tested and validated only on the publicly available PlantVillage dataset. The threats to the validity of results obtained from the CapPlant model may depend upon properties such as size, unsharpness, bit depth, and noise in the underlying test images. "
Here is a paper. Please give your review comments after reading it.
255
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Accurate disease classification in plants is important for a profound understanding of their growth and health. Recognizing diseases in plants from images is one of the critical and challenging problem in agriculture. In this research, a deep learning architecture; CapPlant model is proposed, that utilizes plant images to predict whether it is healthy or contain some disease. The prediction process does not require handcrafted features; rather, the representations are automatically extracted from input data sequence by architecture.</ns0:p><ns0:p>Several convolutional layers are applied to extract and classify features accordingly. The last convolutional layer in CapPlant is replaced by state-of-the-art capsule layer to incorporate orientational and relative spatial relationship between different entities of a plant in an image to predict diseases more precisely. The proposed architecture is tested on PlantVillage dataset, which contains more than 50,000 images of infected and healthy plants. Significant improvements in terms of prediction accuracy has been observed using CapPlant model when compared with other plant disease classification models. The experimental results on the developed model have achieved an overall test accuracy of 93.01%, with F1 score of 93.07%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The existence, survival and development of human race revolve around agriculture, as the major portion of food is derived from agriculture. The modern technological agriculture sector strives to enhance the quality and production of farming products and coping with cultivation diseases. These diseases are a major threat to agricultural development as they adversely affect plants growth and quality, resulting in reduced crop yield <ns0:ref type='bibr' target='#b44'>(Yin and Qiu, 2019)</ns0:ref>. To minimize such threats, the complex and unpredictable agricultural ecosystem requires continuous monitoring to analyze diverse physical and environmental aspects. Deep Leaning (DL) can be utilized as it constitutes a modern and state-of-the-art technique for image processing and data analysis with great potential and promising results <ns0:ref type='bibr' target='#b17'>(Kamilaris and Prenafeta-Bold&#250;, 2018)</ns0:ref>. DL has been successfully applied in various domains like healthcare <ns0:ref type='bibr' target='#b22'>(Miotto et al., 2018)</ns0:ref>, automatic machine translation <ns0:ref type='bibr' target='#b40'>(Singh et al., 2017)</ns0:ref>, automatic text generation <ns0:ref type='bibr' target='#b26'>(Pawade et al., 2018)</ns0:ref>, image recognition <ns0:ref type='bibr' target='#b36'>(Satapathy et al., 2019)</ns0:ref> and agriculture <ns0:ref type='bibr' target='#b9'>(Gim&#233;nez-Gallego et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zheng et al., 2019b)</ns0:ref> etc.</ns0:p><ns0:p>In the past few years, researchers have targeted many crops such as blueberry, wheat, tomato and cherry for classification. Moreover, they have also targeted many plant diseases like leaf mold, late bright, tomato mosaic virus disease, two-spotted spider mite attack, target spot, tomato yellow leaf curl virus, rust, tan spot, septoria and others for detection. 
Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows images of different diseases found in various plants.</ns0:p><ns0:p>Previously, researchers utilized handcrafted features along with classifiers for solving plant disease classification problems <ns0:ref type='bibr' target='#b35'>(Salazar-Reque et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Shruthi et al., 2019)</ns0:ref>. Presently, due to success of DL techniques, many researchers are using them for solving various classification problems <ns0:ref type='bibr'>(Picon et al.</ns0:ref>, <ns0:ref type='bibr'>;</ns0:ref><ns0:ref type='bibr' target='#b8'>Ferentinos, 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Picon et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b42'>Too et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kamal et al., 2019)</ns0:ref>. Pre-trained models like AlexNet, GoogleNet, DenseNet, MobileNet and VGG16net etc. are widely used for plant disease classification. Although they have shown some promising results, however their time complexity and computational cost still needs improvement <ns0:ref type='bibr' target='#b28'>(Picon et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b42'>Too et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kamal et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Researchers have also created their own custom Convolution Neural Network (CNN) based models for classification and tested on various datasets <ns0:ref type='bibr' target='#b27'>(Picon et al., 2019a;</ns0:ref><ns0:ref type='bibr' target='#b8'>Ferentinos, 2018)</ns0:ref>. These models have some major drawbacks, for instance, one major issue with some of these models is in targeting less number of crops and diseases, secondly they are presenting results using limited or non-standard evaluation metrics like Accuracy <ns0:ref type='bibr'>(Testing, Training and Validation)</ns0:ref>, Precision, Recall and F1-Score that are generally used for evaluating a classification model. In this research, a deep learning architecture; CapPlant is developed using CNN along with capsule network to classify and detect any disease found in plants accurately.</ns0:p><ns0:p>Capsules in a capsule network <ns0:ref type='bibr' target='#b34'>(Sabour et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Hinton et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b12'>Hinton et al., , 2018) )</ns0:ref> are the groups of neurons that encode spatial information as well as the probability of an object being present. In contrast to CNN, capsule encodes information in a vector form to store spatial information as well. In recent years, capsule networks have been used for detection <ns0:ref type='bibr' target='#b0'>(Afshar et al., 2018)</ns0:ref>, text classification <ns0:ref type='bibr' target='#b46'>(Zhao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Kim et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b45'>Zhao et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Ren and Lu, 2018)</ns0:ref>, tumor classification <ns0:ref type='bibr' target='#b1'>(Afshar et al., 2019a</ns0:ref><ns0:ref type='bibr'>,b), bioinformatics (de Jesus et al., 2018)</ns0:ref> and simple classification problems <ns0:ref type='bibr' target='#b21'>(Lukic et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Hilton et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b46'>Zhao et al., 2018)</ns0:ref>. 
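To make this vector encoding concrete, the following is a minimal NumPy sketch of the squash nonlinearity used by capsule networks (given as Eq. (3) in the Methodology): the squashed vector's length lies in [0, 1) and can be read as the probability that the corresponding feature is present, while its direction, which carries the pose information, is preserved.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Rescale a capsule vector so its length lies in [0, 1) without changing its direction."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

capsule = np.array([3.0, -4.0])      # illustrative 2-D capsule vector of length 5
v = squash(capsule)
print(np.linalg.norm(v))             # ~0.96: high probability that the feature is present
print(v / np.linalg.norm(v))         # direction (pose) unchanged: [0.6, -0.8]
```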
Keeping in view success of capsule network, <ns0:ref type='bibr' target='#b5'>(Bass et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Jaiswal et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Upadhyay and Schrater, 2018)</ns0:ref> have also explored capsule with Generative Adversarial Networks(GANs) and have presented some promising results.</ns0:p><ns0:p>Conceptual novelty of this work is the utilization of capsule network along with CNN. As CNN stores information in scalar form, they are considered as translational and rotational invariant, whereas in capsule network, information is grouped together in form of vector where the length of a capsule vector represents the probability of the existence of a feature in an image and the direction of the vector would represent its pose information.Therefore, exploiting capsule layer enables the model to capture relative spatial and The rest of the paper is organized as follows: section II briefly explores different studies related to plant disease detection; implementation details along with models on which CapPlant is built upon are discussed in detail in section III followed by results in section IV. The paper is concluded in section V, along with some future recommendations.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>A considerable amount of literature has been published on plant disease classification using conventional machine learning <ns0:ref type='bibr' target='#b35'>(Salazar-Reque et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Shruthi et al., 2019)</ns0:ref> and deep learning <ns0:ref type='bibr' target='#b28'>(Picon et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b42'>Too et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kamal et al., 2019)</ns0:ref> techniques. Few of the existing research have also explored capsule network for plant disease classification <ns0:ref type='bibr' target='#b19'>(Kurup et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Li et al., 2019)</ns0:ref>. This section explains a few notable previous research work on plant disease classification. <ns0:ref type='bibr' target='#b41'>Sullca et al. (2019)</ns0:ref> implemented diseases detection in blueberry leaves using computer vision and machine learning techniques. Noise removal in images was handled with the help of gaussian blur and median blur filters. Details in each image were enhanced with the help of weighted filters. Blueberry leave pictures were then group into three categories: plagued, healthy and diseased. Local binary patterns and histogram of oriented gradients were used for characteristics extraction. Due to unavailability of blueberry leaves dataset, a custom dataset was created. Deep learning model gave an accuracy of 84%, predicting the disease of blueberry leaves. The proposed system only considered blueberry pest infections. <ns0:ref type='bibr' target='#b3'>Ashqar and Abu-Naser (2018)</ns0:ref> implemented tomato leaves diseases detection using DL. Among many diseases that can exist in tomato plant, only five were considered. CNN used for classification was divided into two parts. The first part used for feature extraction consisted of four convolution layers along with activation function ReLU followed by max pooling, while the second part of the model comprised of two dense layers followed by flattening layer. 
Softmax was used as an activation for the second part.</ns0:p><ns0:p>Experiments were conducted on two type of images; one with three color channels and other with one color channel. Nine thousand healthy and infected tomato images were considered for training.The dataset was created for six classes which included early blight, bacterial spot, septorial leaf spot, leaf mold, bacterial spot, yellow leaf curl virus and healthy. Original images were resized to a smaller size of 150*150 so that computation could be faster. Quality of images were maintained so that disease detection could work well. The proposed model gave an accuracy of 99.84% on three color channels, whereas achieved accuracy was 95.54% on one color channel. The data collection process was manual and tedious, which resulted into considering limited number of diseases. <ns0:ref type='bibr' target='#b35'>Salazar-Reque et al. (2019)</ns0:ref> implemented an algorithm for detecting visual symptom in plants disease.</ns0:p><ns0:p>The images used were grouped into nine different categories by diseases and plants.These nine groups consisted of seven plants i.e. apple, grape, mango, potato, quinoa, peach and avocado. The target diseases for apple were scab, cedar apple rust and black rot. Target disease for avocado was necrosis and infection, target disease for grape was black rot, target disease for mango was necrosis, target disease of potato was alternaria, target disease for peach was bacterial spot and target disease for quinoa was mildew. Total two hundred and seventy-nine images were considered, out of which ninety belonged to apple diseases, thirty belonged to grape disease, thirty belonged to avocado, thirty belonged to mango disease, thirty belonged to potato disease, thirty belonged to peach disease and thirty belonged to quinoa. No image of healthy plants were considered. The system developed used a clustering algorithm for putting together same color pixels in regions known as super pixels. Two hundred and seventy-nine pictures of leaves were used. The proposed system used no images of healthy plants and a small dataset of infected plants images. The images groups were giving different True Positive Rate (TPR) and False Positive Rate (FPR), indicating proper groups were not formed. <ns0:ref type='bibr' target='#b32'>Reddy et al. (2019)</ns0:ref> presented an idea of combining bioinformatics with image processing for detecting diseases in crops and plants. In the proposed methodology, HSI (Hue, Saturation, Intensity) algorithm for image segmentation was used. A digital camera was used for capturing image and unwanted areas of image were removed using different techniques. The pixels in which the value of green intensity were more than the desired threshold were, considered as unhealthy crops/plants. The authors did not mention any detail about the dataset they used for the experiment. The authors also failed to mention any results based on which they achieved the stated conclusions. <ns0:ref type='table' target='#tab_7'>2021:07:63954:2:0:NEW 24 Sep 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and weather conditions etc. Around one hundred thousand pictures of the actual field conditions were taken from the cell phone. Dataset was created for seventeen diseases of five crops. The crops included winter wheat, corn, rapeseed, winter barley and common rice. 
The diseases considered were Septoria tritici, Puccinia striiformis, Puccinia recondite, Septoria nodorum, Drechslera triticirepentis, Oculimacula yallundae, Gibberella zeae, Blumeria graminis, Helminthosporium turcicum, Phoma lingam, Pyrenophora teres, Ramularia collo-cygni, Rhynchosporium secalis, Puccinia hordei, various diseases, Thanatephorus cucumeris, Pyricularia oryzae. Total one lac twenty-one thousand nine hundred and fifty-five images were considered, out of which eleven thousand two hundred and ninety-five belonged to common rice, thirty-two thousand two hundred and twenty-nine belonged to winter barley, thirteen thousand seven hundred and seventy-four belonged to rapeseed, sixty-four thousand and twenty-six belonged to winter wheat and six thirty-one belonged to corn. Independent particular crop models reported a balance accuracy of 0.92, whereas multi crop reported a balance accuracy of 0.93. The proposed system lacked in sharing complete training and testing results. The system lacked in considering number of result variants i.e. F1 score, recall and precision. with many improvements in tile cropping and augmentation scheme. A mobile application was developed for providing input to the system; pictures were taken manually from the app and then loaded on a server for further processing. More than eight thousand images were considered for training. The system was able to process the image and find the disease in quick time. The implemented technique considered only three diseases for wheat crop, which made the scope minor. Only tomato images were considered. The diseases considered were leaf mold, spider mites, septoria leaf spot, early blight, bacterial spot, mosaic virus, yellow curl virus and target spot. Accuracy reported for tomato disease classification using AlexNet was 95.65% and using SqueezeNet was 94.3%. The authors considered only tomato diseases and instead of creating their own neural network use pre-built models.</ns0:p><ns0:p>The authors failed to report other evaluation metrics like F1 score, precision or recall. <ns0:ref type='bibr' target='#b30'>Ramcharan et al. (2017)</ns0:ref> implemented cassava disease detection using deep learning. The dataset was custom build using Sony Cybershot 20.2mp camera. The dataset was captured over a period of four weeks and consisted of about eleven thousand six hundred and seventy images. The dataset was named 'leafleft cassava dataset'. Five diseases were considered that are Cassava brown streak disease, Red mite damage, Cassava mosaic disease, Green mite damage and Brown leaf spot. A deep convolutional neural network Inception v3 was used for cassava disease detection. The last layer of CNN was replaced with three different variations to test the model on three different architectures. Three different architectures were support vector machines, softmax layer and knn. Confusion matrix was reported for different cassava diseases. The proposed technique is used only for cassava plant. The proposed technique considered only five out of many diseases. <ns0:ref type='bibr' target='#b47'>Zheng et al. (2019a)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Kurup et al. (2019)</ns0:ref> has explored capsule network and CNN for plant disease classification. Network architecture in <ns0:ref type='bibr' target='#b47'>(Zheng et al., 2019a)</ns0:ref> utilized 2 convolutional layers and 1 primary capsule layer for training and testing. However, the proposed model only presented test and train precision of 88% and 90% respectively. 
For reducing the drawbacks and also to get better performance, new architecture of CNN; capsulenet is implemented in <ns0:ref type='bibr' target='#b19'>(Kurup et al., 2019)</ns0:ref>. Capsulenet was analyzed for two datasets: 1) first model was built for diagnosis of plant disease using plant leaves images. The dataset used for training contained 54,306 images of 14 different plant species. The proposed architecture reported an accuracy of around 94%. The problem with CNN is that it only considers presence of entities in an object, it does not take into account not relative spatial relationship between them. </ns0:p></ns0:div> <ns0:div><ns0:head n='2'>METHODOLOGY</ns0:head><ns0:p>To predict plant diseases from the given images, simple yet effective model CapPlant is proposed in which the last convolutional layer is replaced by state-of-the-art capsule layer to incorporate relative spatial and orientational relationship between different entities of a plant in an image. The overall pipeline of the proposed model is illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>. The following subsection explains capsule network and the deep learning architecture of CapPlant that performs end-to-end learning for plant disease classification.</ns0:p><ns0:p>End-to-end learning replaces a pipeline of components with a single deep learning neural network and goes directly from the input to the desired output. It eradicates the need of preprocessing or complex feature extraction process by learning directly from the labeled data and simplifies the decision making process. End-to-end models are of key importance for building artificial intelligence systems because of their simplicity, performance and data-driven nature <ns0:ref type='bibr' target='#b29'>(Rafique et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Capsule Network</ns0:head><ns0:p>In the field of DL, although CNN has been a huge success, however, they have some major drawbacks in their basic architecture which causes them to fail in performing some major tasks. CNN automatically extracts features from images, and from these features it learns to detect and recognize different objects.</ns0:p><ns0:p>Early layers extract simple features like edges, and as layers proceed features become more and more complex. At the end, CNN uses all extracted features to make a final prediction. Here lies the major drawback in basic architecture of CNN that only presence of feature is captured and nowhere in this approach spatial information is stored.</ns0:p><ns0:p>Capsule Network <ns0:ref type='bibr' target='#b34'>(Sabour et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Hinton et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b12'>Hinton et al., , 2018) )</ns0:ref> have recently been proposed to address this limitation of CNN. Capsules are the groups of neurons that encode spatial information as well as the probability of an object being present. In capsule network, corresponding to each entity in an image, there is a capsule which gives:</ns0:p><ns0:p>1. Probability that the entity exists.</ns0:p><ns0:p>2. Instantiation parameters of that entity.</ns0:p><ns0:p>The main operations within capsules are performed as follows:</ns0:p><ns0:p>To encode the imperative spatial association between low and high level features within the image, the multiplication of the matrix of the input vectors with the weight matrix is calculated.</ns0:p></ns0:div> <ns0:div><ns0:head>6/15</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_0'>&#251; j|i = W i j u i + B j (1)</ns0:formula><ns0:p>The sum of the weighted input vectors is calculated to determine that the current capsule will forward its output to which higher level capsule.</ns0:p><ns0:formula xml:id='formula_1'>su j = &#8721; i c i j &#251; j|i (2)</ns0:formula><ns0:p>Finally, non-linearity is applied using the squash function. While maintaining a direction of a vector, the squashing function maps it to maximum length of 1 and minimum length of 0. </ns0:p><ns0:formula xml:id='formula_2'>v j = squash(su j )<ns0:label>(3</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='2.2.2'>Feature Representation Layers</ns0:head><ns0:p>When the input is fed into CapPlant, features or representations is extracted from the inputs by passing it through 4 layers of convolution, followed by ReLU activation and max polling after each layer. The output of the 4th convolutional layer is reshaped and passed through the capsule layer to capture relative spatial and orientational relationship between different entities of an object in the image. The tensor obtained from the capsule layer is flattened and then passed through a densely-connected Neural Network (NN) layer. In the end, softmax is applied to squash a vector in the range (0, 1) such that all the resulting elements add up to 1. For a particular class c i , the softmax function can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>f (c) i = e c i &#8721; C j e c j (4)</ns0:formula><ns0:p>Where c j are the scores inferred by the model for each class in C. Softmax activation for a class c i depends on all the scores in c. </ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.3'>Normalization and Output Layers</ns0:head><ns0:p>The CapPlant model is compiled based on the features extracted from the previous layers. To calculate the error, the Categorical Cross Entropy loss (CE) is used as follows:</ns0:p><ns0:formula xml:id='formula_4'>CE = &#8722;log( e c p &#8721; C j e c j )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where c p is the CNN score for the positive class.</ns0:p></ns0:div> <ns0:div><ns0:head>7/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>RESULT</ns0:head><ns0:p>This section presents the results of the CapPlant Model. Extensive experimentation is carried out to evaluate the performance of our model.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Experiments</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1.1'>Experimental Setup</ns0:head><ns0:p>GPU Tesla K40c workstation is used as baseline system for training, testing and validation of Cap-Plant model. Keras, OpenCV, Capsule Layers, Matplotlib and CuDNN libraries are used for software implementation of CapPlant. For training of our model CapPlant, Adam is applied as an optimizer, categorical cross entropy is utilized for calculating loss and accuracy is used as evaluation metric. Overall training and validation losses and accuracies are exploited to determine the performance of our model. Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> lists parameters along with their values adjusted for training and testing of CapPlant. 
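Before moving on to the layer-by-layer details, the following self-contained NumPy sketch makes equations (1)-(4) concrete. All dimensions, variable names, and the use of uniform coupling coefficients c_ij (instead of an iterative routing procedure) are assumptions made for this illustration only; they are not taken from the actual ConvCapsuleLayer used by CapPlant.

```python
# Illustrative NumPy sketch of the capsule operations in equations (1)-(4).
# Shapes, names and values are assumptions for this example only.
import numpy as np

rng = np.random.default_rng(0)

num_in, dim_in = 6, 8      # 6 lower-level capsules of dimension 8 (assumed)
num_out, dim_out = 3, 16   # 3 higher-level capsules of dimension 16 (assumed)

u = rng.normal(size=(num_in, dim_in))                    # lower-level capsule outputs u_i
W = rng.normal(size=(num_in, num_out, dim_out, dim_in))  # transformation matrices W_ij
B = np.zeros((num_out, dim_out))                         # bias term B_j

# Eq. (1): prediction vectors u_hat_{j|i} = W_ij u_i + B_j
u_hat = np.einsum('ijkl,il->ijk', W, u) + B

# Eq. (2): weighted sum s_j = sum_i c_ij * u_hat_{j|i}
# (uniform coupling coefficients here instead of an iterative routing procedure)
c = np.full((num_in, num_out), 1.0 / num_out)
s = np.einsum('ij,ijk->jk', c, u_hat)

# Eq. (3): squash non-linearity, keeping direction, mapping length into (0, 1)
def squash(x, axis=-1, eps=1e-8):
    norm_sq = np.sum(x ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * x / np.sqrt(norm_sq + eps)

v = squash(s)

# Eq. (4): softmax over class scores (capsule lengths used as stand-in scores here)
def softmax(z):
    z = z - np.max(z)          # numerical stability
    e = np.exp(z)
    return e / np.sum(e)

print(softmax(np.linalg.norm(v, axis=-1)))   # sums to 1 over the output capsules
```

In the real model these operations are applied convolutionally inside the ConvCapsuleLayer listed in Table 2, rather than on flat vectors as in this toy example.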
Whereas, detail of each layer designed for testing, training and validation of CapPlant model is listed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Furthermore, Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> is a visual illustration of data flow between each layer of proposed CapPlant model. Moreover, Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> shows the learning curves obtained while calculating training and validation accuracy and losses. CapPlant model is trained for total 200 epochs, however, trained models at 50, 100, and 150 epochs were also obtained for the sake of comparison. Also, early stopping was employed at epoch 100 to avoid over fitting.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Results of Experiments</ns0:head><ns0:p>Dataset. In this research, PlantVillage; An open access repository of images on plant health to enable the development of mobile disease diagnostics <ns0:ref type='bibr' target='#b13'>Hughes and Salath'e (2015)</ns0:ref> obtained from source <ns0:ref type='bibr' target='#b23'>(Mohanty, 2018)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Input:</ns0:p><ns0:p>( <ns0:ref type='bibr'>32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>3)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>224,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>112,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>32)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>56,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>64)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: 
<ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>28,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>128)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Output: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Input: <ns0:ref type='bibr'>(32,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>14,</ns0:ref><ns0:ref type='bibr'>256)</ns0:ref> Output:</ns0:p><ns0:p>(32, 50176) <ns0:ref type='table' target='#tab_6'>3</ns0:ref>. <ns0:ref type='table' target='#tab_7'>4</ns0:ref> shows the values of above evaluation metrics calculated for CapPlant model. Figure <ns0:ref type='figure'>5</ns0:ref> and 6 demonstrates bar chart representing recall, precission and F1 score, calculated for each disease and healthy category of plants respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Comparison with previous Models</ns0:head><ns0:p>To further demonstrate the effectiveness of the proposed CapPlant model, it is compared with previous state-of-the-art models for plant disease classification and detection. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science CapPlant model may depend upon properties such as size, unsharpness, bit depth, and noise in the underlying test images.</ns0:p><ns0:p>In the future, a recommender system with our proposed technique can be integrated to suggest various actions that need to be taken against a given disease. Moreover, the idea of using CNN with capsule networks can bring significant improvement in the performance of many already existing DL models. It can point us towards a direction to explore various applications using capsules network.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Images of Diseases Found in Various Plants</ns0:figDesc><ns0:graphic coords='3,193.43,63.77,310.18,310.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021) Manuscript to be reviewed Computer Science orientation relationship between different entities of an object in an image.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b28'>Picon et al. (2019b)</ns0:ref> implemented CNN model for classification of plant diseases. 
Three CNN models were proposed for combining different aspects together like crop identification data, geographical locations 3/15 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Picon et al. (2019a) implemented deep convolution networks for disease detection in crop. The images used were divided into four groups of Rust, Tan Spot, Septoria and Healthy. The images were taken from Wheat 2014, Wheat 2015 and Wheat 2016 databases. Total eight thousand one hundred and seventy-eight images were considered, out of which three thousand three hundred and thirty-eight belonged to Rust, two thousand seven hundred and forty-four belonged to Septoria, one thousand five hundred and sixty-eight belonged to Tan spot, one thousand one hundred and sixteen belonged to Healthy class. One thousand three hundred and eight five images were taken from Wheat 2014 database, two thousand one hundred and eighty-nine images were taken from Wheat 2015 and three thousand nine hundred and sixty-nine images were taken from Wheat 2016 database. The proposed technique used residual neural networks</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b16'>Kamal et al. (2019)</ns0:ref> implemented depth wise separable architectures (convolution) for classification of plant diseases. The system developed used leaves images for detection of plant diseases. Several models were trained using the proposed method and reduced MobileNet stood out. More than eighty thousand images dataset was considered for training and testing, which covered fifty-five classes of healthy and diseased plants. The images were taken from PlantVillage for training and for testing PlantLeafs dataset was used. Eighty-two thousand one hundred and sixty-one images of PlantVillage were considered, eighteen thousand five hundred and seventeen images of PlantLeaf1 were considered, tweenty three thousand one hundred and ten images of PlantLeaf2 were considered and thirty-two thousand two hundred and forty-one images of PlantLeaf3 were considered. The number of classes included in PlantVillage are fifty-five, whereas in PlantLeaf1 these are eighteen, in PlantLeaf2 these are eleven and in PlantLeaf3 these are sixteen. It gave 36.03% accuracy when tested on pictures taken under different parameters than those of training. Even though the number of image dataset of healthy / diseased plants were more, nonetheless the developed system only considered accuracy as an evaluation metric and reported no precision, recall and F1 score.<ns0:ref type='bibr' target='#b37'>Sengar et al. (2018)</ns0:ref> implemented identification and quantification of powdery mildew disease in cherry using computer vision based technique. Adaptive intensity focused thresholding method was proposed for powdery mildew disease automatic segmentation. Two parameters were used in assessment of the level of disease spread in plants: 1) the portion in plant that was effected by the disease and 2) the length of the effected portion in plant. The proposed model achieved 99% accuracy. The proposed technique may be used for predicting only one disease in cherry plant.Rangarajan et al. (2018) implemented tomato disease classification with the help of pre-trained deep learning algorithm. Two pre-trained models i.e. VGG16net and AlexNet were used by the authors. 
Thirteen thousand two hundred and sixty-two tomato images from PlantVillage [40] dataset containing six disease classes and one heathy class were used by the proposed system. Accuracy reported for disease classification using VGG16net was 97.29% and using AlexNet was 97.49%. Comparing AlexNext and VGG16net, minimum execution time and better accuracy were reported with AlexNet. The authors considered six diseases for tomato plant, for which they used pre-trained networks. 4/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021) Manuscript to be reviewed Computer Science Mohanty et al. (2016) implemented plant disease detection using DL techniques on plant images All images used were resized 256 x 256, both model prediction and optimizations were performed on these images. Twenty-six diseases in fourteen crops were detected using this model. Pre-trained models, AlexNet and GoogleNet were considered for this experiment. Models were trained on three different variations of PlantVillage datasets, first they were trained on color images, then on gray scale images and finally on segmented leaves images. A dataset containing fifty-four thousand three hundred and six images was used, containing healthy and diseased plant leaves. The dataset targeted different thirty-eight classes. Five different training -test distributions were used i.e. first train 80% -test 20%, second train 60% -Test 40%, third train 50% -test 50%, fourth train 40% -test 60%, last one train 20% -test 80%. Color, grey scale and leaf-segmented images were considered. Two different training mechanisms were considered, first transfer learning and second training from scratch. The model achieved 99.35% accuracy, but on a held out test set. The system dropped accuracy to 31% when tested on different images other than training images. The developed technique used pre-trained networks instead of developing their own neural network for classification.Barbedo<ns0:ref type='bibr' target='#b4'>Barbedo (2019)</ns0:ref> implemented plant disease identification using deep learning. The diagnosis in the given algorithm considers image classification on two things, one spots and second lesions. Forty-six thousand four hundred and nine images were considered for disease identification. The images were taking using many sensors. The resolution of captured images were upto 24 MPixels. The plants considered were common bean, cassava, citrus, cocunut tree, corn, Kale, Cashew Tree, Coffee, Cotton, Grapevines, Passion fruit, Soybean, Sugarcane and wheat. Overall, fourteen plants and seventy-nine diseases were considered, but many had very few images associated with them. The model used was pretrained GoogLNet CNN.Accuracy was reported for different plants. The developed technique used pre-trained networks instead of developing their own neural network for classification. The developed system focused more on creating a custom dataset for disease detection. The developed system used fewer images for many classes. Many conditions had a few images associated with them in the dataset captured.<ns0:ref type='bibr' target='#b7'>Durmu&#351; et al. (2017)</ns0:ref> implemented tomato disease detection using deep learning. Diseases that occurred in tomato fields or greenhouses both were considered. AlexNet and SqueezeNet algorithms were used for training and testing of tomato disease detectio. 
Images were taken from PlantVillage dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>2) second model was trained for classification of plant leaves. The dataset used for training had 2,997 images of 11 plants. The prediction model with capsulenet gave an accuracy of around 85%. 5/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021) Manuscript to be reviewed Computer Science LIMITATIONS OF EXISTING SYSTEMS All the above crop disease detection techniques work well for detecting the diseases in crops, however they have few limitations such as: 1. Limited Scope: Less number of crops/diseases are targeted. 2. Limited Evaluation Metrics: Conclusion has been achieved based on few result parameters. 3. Limitation of CNN: Most of the techniques used pre-trained networks or created their own CNN.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Visual illustration of data flow between each layer of proposed CapPlant model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>320</ns0:head><ns0:label /><ns0:figDesc>For calculating predicting performance of CapPlant model, several evaluation metrics are calculated such 321 as F1 score, accuracy, recall and precision.322 accuracy = T p + T n T p + T n + F p + F n 9/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .where</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Learning Curves for CapPlant. To prevent overfitting of data, early stopping was employed at epoch 100.</ns0:figDesc><ns0:graphic coords='11,356.28,66.79,182.23,318.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>337Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Bar chart representing precision, recall and F1 score, calculated for each 26 plant diseases.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>128 Reshaping Caspsule Layer 14 x 14 x 256 Flattening 50176 Dense 38</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>Conv 1</ns0:cell><ns0:cell cols='2'>Max Pooling</ns0:cell><ns0:cell cols='2'>Conv 2</ns0:cell><ns0:cell>Max Pooling</ns0:cell><ns0:cell>Conv 3</ns0:cell><ns0:cell>Max Pooling</ns0:cell><ns0:cell>Conv 4</ns0:cell><ns0:cell>Max Pooling</ns0:cell></ns0:row><ns0:row><ns0:cell>224 x 224 x 3</ns0:cell><ns0:cell cols='2'>224 x 224 x 16</ns0:cell><ns0:cell cols='2'>112 x 112 x 16</ns0:cell><ns0:cell cols='2'>112 x 112 x 32</ns0:cell><ns0:cell>56 x 56 x 32</ns0:cell><ns0:cell>56 x 56 x 64</ns0:cell><ns0:cell>28 x 28 x 64</ns0:cell><ns0:cell>28 x 28 x 128</ns0:cell><ns0:cell>14 x 14 x</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 2. Network architecture for Caplant; Deep Learning Architecture for Plant Disease Prediction through pictures. CapPlant is a real deep learning architecture because it uses end-to-end learning.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Inputs to the network are plant images of size 224 x 224 x 3. The size of inputs to CapPlant network is represented as (y; 224 x 224 x 3), where y is the batch size. At the expense of reduced accuracy, small batch sizes lead to faster training. Relatively large batch sizes are used to increase accuracy at the expense of slower training. 
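To make the data flow of Figure 2 and Table 2 concrete, the rough Keras sketch below rebuilds the same stack of output shapes. It is not the authors' implementation: the custom ConvCapsuleLayer is not reproduced, so a stand-in (a 3x3 convolution to 256 channels followed by the squash non-linearity of Eq. (3)) marks where it would sit, and the 3x3 kernel size itself is only inferred from the parameter counts reported in Table 2.

```python
# Rough Keras sketch of the layer stack in Figure 2 / Table 2 (NOT the authors'
# code). A Conv2D + squash stand-in replaces the custom ConvCapsuleLayer.
import tensorflow as tf
from tensorflow.keras import layers, models

def squash(x, eps=1e-8):
    # capsule-style non-linearity from Eq. (3), applied along the channel axis
    norm_sq = tf.reduce_sum(tf.square(x), axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * x / tf.sqrt(norm_sq + eps)

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)   # 448 params
x = layers.MaxPooling2D()(x)                                          # 112 x 112 x 16
x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)        # 4640 params
x = layers.MaxPooling2D()(x)                                          # 56 x 56 x 32
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)        # 18496 params
x = layers.MaxPooling2D()(x)                                          # 28 x 28 x 64
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)       # 73856 params
x = layers.MaxPooling2D()(x)                                          # 14 x 14 x 128
# Stand-in for the ConvCapsuleLayer (output 14 x 14 x 256 as in Table 2):
x = layers.Conv2D(256, 3, padding='same')(x)
x = layers.Lambda(squash)(x)
x = layers.Flatten()(x)                                               # 50176 features
outputs = layers.Dense(38, activation='softmax')(x)                   # 38 classes

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5),
              loss='categorical_crossentropy', metrics=['accuracy'])
```

The optimizer settings mirror Table 1 (Adam, learning rate 0.0002, momentum 0.5 mapped to beta_1); the batch size discussed in the surrounding text would then simply be supplied at training time, e.g. model.fit(..., batch_size=32).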
For training of CapPlant, batch size is set to 32.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>)</ns0:cell></ns0:row><ns0:row><ns0:cell>2.2 Network Architecture</ns0:cell></ns0:row><ns0:row><ns0:cell>2.2.1 Model Inputs</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Hyper Parameters Set for Training of CapGAN</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Image Size</ns0:cell><ns0:cell>224 x 224 x 3</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning Rate</ns0:cell><ns0:cell>0.0002</ns0:cell></ns0:row><ns0:row><ns0:cell>Momentum for Adam Update</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss</ns0:cell><ns0:cell>Categorical Cross Entropy</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Detail of each CapPlant layer along with output shape and number of obtained parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Total training images: 5000</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total validation images: 5423</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total validation images: 5470</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Classes: 38</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>43429 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5417 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5459 images belonging to 38 classes.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Output Shape</ns0:cell><ns0:cell>Param #</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Layer</ns0:cell><ns0:cell>(32, 224, 224, 3)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 224, 224, 16)</ns0:cell><ns0:cell>448</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 112, 112, 16)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 112, 112, 32)</ns0:cell><ns0:cell>4640</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 56, 56, 32)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 56, 56, 64)</ns0:cell><ns0:cell>18496</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 28, 28, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2D</ns0:cell><ns0:cell>(32, 28, 28, 128)</ns0:cell><ns0:cell>73856</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPooling2</ns0:cell><ns0:cell>(32, 14, 14, 128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Reshape</ns0:cell><ns0:cell>(32, 14, 14, 1, 128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>ConvCapsuleLayer</ns0:cell><ns0:cell>(32, 14, '4, 1, 256)</ns0:cell><ns0:cell>295168</ns0:cell></ns0:row><ns0:row><ns0:cell>Reshape</ns0:cell><ns0:cell>(32, 14, 14, 256)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatten</ns0:cell><ns0:cell>(32, 
50176)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Dense</ns0:cell><ns0:cell>(32,38)</ns0:cell><ns0:cell>1906726</ns0:cell></ns0:row><ns0:row><ns0:cell>Total params: 2, 299, 334</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trainable Params: 2, 299, 334</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Non-trainable params: 0</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3.1.2 Training</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>has been used for training and testing. PlantVillage dataset have 54,306 images belonging to 14</ns0:figDesc><ns0:table /><ns0:note>8/15PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Details of PlantVillage Dataset used for Testing and Training of CapPlant</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Plant</ns0:cell><ns0:cell>Class Label</ns0:cell><ns0:cell>Name</ns0:cell><ns0:cell># of Training Samples</ns0:cell><ns0:cell># of Validation Samples</ns0:cell><ns0:cell># of Testing Samples</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>0</ns0:cell><ns0:cell>Apple scab</ns0:cell><ns0:cell>504</ns0:cell><ns0:cell>63</ns0:cell><ns0:cell>63</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Apple</ns0:cell><ns0:cell>1 2</ns0:cell><ns0:cell>Black rot Cedar Apple rust</ns0:cell><ns0:cell>496 220</ns0:cell><ns0:cell>62 27</ns0:cell><ns0:cell>63 28</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>3</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1316</ns0:cell><ns0:cell>164</ns0:cell><ns0:cell>165</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Blueberry</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1201</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>151</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Cherry</ns0:cell><ns0:cell>5 6</ns0:cell><ns0:cell>Healthy powdery mildew</ns0:cell><ns0:cell>683 841</ns0:cell><ns0:cell>85 105</ns0:cell><ns0:cell>86 106</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell>Gray leaf spot</ns0:cell><ns0:cell>410</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>52</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Corn</ns0:cell><ns0:cell>8 9</ns0:cell><ns0:cell>Common rust Healthy</ns0:cell><ns0:cell>953 929</ns0:cell><ns0:cell>119 116</ns0:cell><ns0:cell>120 117</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>Northern leaf blight</ns0:cell><ns0:cell>788</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell>99</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>11</ns0:cell><ns0:cell>black rot</ns0:cell><ns0:cell>944</ns0:cell><ns0:cell>118</ns0:cell><ns0:cell>118</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Grape</ns0:cell><ns0:cell>12 13</ns0:cell><ns0:cell>Esca black measles Healthy</ns0:cell><ns0:cell>1106 338</ns0:cell><ns0:cell>138 42</ns0:cell><ns0:cell>139 43</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>14</ns0:cell><ns0:cell>Leaf blight</ns0:cell><ns0:cell>860</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>109</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Orange</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>Haunglonbing</ns0:cell><ns0:cell>4405</ns0:cell><ns0:cell>550</ns0:cell><ns0:cell>552</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>Peach</ns0:cell><ns0:cell>16 17</ns0:cell><ns0:cell>Bacterial spot Healthy</ns0:cell><ns0:cell>1837 288</ns0:cell><ns0:cell>229 36</ns0:cell><ns0:cell>231 36</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Pepper Bell</ns0:cell><ns0:cell>18 19</ns0:cell><ns0:cell>Bacterial spot Healthy</ns0:cell><ns0:cell>797 1182</ns0:cell><ns0:cell>99 147</ns0:cell><ns0:cell>101 149</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>20</ns0:cell><ns0:cell>Early blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Potato</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>22</ns0:cell><ns0:cell>Late blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Raspberry</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>296</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>38</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Soybean</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>4072</ns0:cell><ns0:cell>509</ns0:cell><ns0:cell>509</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Squash</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>Powdery Mildew</ns0:cell><ns0:cell>1468</ns0:cell><ns0:cell>183</ns0:cell><ns0:cell>184</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Starberry</ns0:cell><ns0:cell>26 27</ns0:cell><ns0:cell>Healthy Leaf scorch</ns0:cell><ns0:cell>364 887</ns0:cell><ns0:cell>45 110</ns0:cell><ns0:cell>47 112</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>28</ns0:cell><ns0:cell>Bacterial spot</ns0:cell><ns0:cell>1701</ns0:cell><ns0:cell>212</ns0:cell><ns0:cell>214</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>29</ns0:cell><ns0:cell>Early blight</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>30</ns0:cell><ns0:cell>Healthy</ns0:cell><ns0:cell>1272</ns0:cell><ns0:cell>159</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>31</ns0:cell><ns0:cell>Late blight</ns0:cell><ns0:cell>1527</ns0:cell><ns0:cell>190</ns0:cell><ns0:cell>192</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>32</ns0:cell><ns0:cell>Leaf Mold</ns0:cell><ns0:cell>761</ns0:cell><ns0:cell>95</ns0:cell><ns0:cell>96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Tomato</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>Septoria leaf spot</ns0:cell><ns0:cell>1416</ns0:cell><ns0:cell>177</ns0:cell><ns0:cell>178</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>34</ns0:cell><ns0:cell>Spider Mites</ns0:cell><ns0:cell>1340</ns0:cell><ns0:cell>167</ns0:cell><ns0:cell>169</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>35</ns0:cell><ns0:cell>Target spot</ns0:cell><ns0:cell>1123</ns0:cell><ns0:cell>140</ns0:cell><ns0:cell>141</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>36</ns0:cell><ns0:cell>Mosaic virus</ns0:cell><ns0:cell>298</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>38</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>37</ns0:cell><ns0:cell>Yellow leaf curl virus</ns0:cell><ns0:cell>4285</ns0:cell><ns0:cell>535</ns0:cell><ns0:cell>537</ns0:cell></ns0:row><ns0:row><ns0:cell>335</ns0:cell><ns0:cell cols='2'>4 CONCLUSION</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>336</ns0:cell><ns0:cell cols='6'>Advancement in DL and image processing provides a prospect to extend the research and applications</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>of detection and classification of various diseases in plants using images. In this research, simple</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of different evaluation metrics measured for CapPlant with various State-of-the-art Models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year Model</ns0:cell><ns0:cell>Training Accuracy</ns0:cell><ns0:cell>Validation Accuracy</ns0:cell><ns0:cell cols='5'>Test Accuracy Precision Recall F1-Score Average</ns0:cell></ns0:row><ns0:row><ns0:cell>2018 VGG net</ns0:cell><ns0:cell>83.86%</ns0:cell><ns0:cell>81.92%</ns0:cell><ns0:cell>81.83%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>82.53%</ns0:cell></ns0:row><ns0:row><ns0:cell>2019 Capsule Network</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row><ns0:row><ns0:cell>2020 CapPlant</ns0:cell><ns0:cell>98.06%</ns0:cell><ns0:cell>92.31%</ns0:cell><ns0:cell>93.07%</ns0:cell><ns0:cell cols='4'>93.07% 93.07% 93.07% 93.77%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63954:2:0:NEW 24 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
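As a closing illustration of the figures reported in Table 4 and in Figures 5 and 6, the sketch below shows how the per-class and aggregate precision, recall and F1 scores could be computed. It is not the authors' evaluation script: scikit-learn is used purely for brevity (the paper does not state which tool produced these numbers), and the label and prediction arrays are random placeholders so that the snippet runs on its own.

```python
# Hedged sketch (not the authors' evaluation script) of how the per-class and
# aggregate precision, recall and F1 scores could be computed, assuming integer
# class labels 0..37 as in Table 3. Placeholder data stands in for real outputs.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
y_true = rng.integers(0, 38, size=5459)        # 5,459 test images, 38 classes (as listed)
y_pred = y_true.copy()
flip = rng.random(y_true.shape) < 0.07         # pretend roughly 7% of predictions are wrong
y_pred[flip] = rng.integers(0, 38, size=flip.sum())

print("accuracy:", accuracy_score(y_true, y_pred))
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='micro')
print("micro-averaged precision / recall / F1:", prec, rec, f1)

# per-class values, e.g. for the bar charts in Figures 5 and 6
per_class = precision_recall_fscore_support(y_true, y_pred, average=None,
                                            labels=np.arange(38))
```

Note that for single-label multi-class data, micro-averaged precision, recall and F1 all coincide with the overall accuracy, which is consistent with the identical 93.07% entries reported for CapPlant in Table 4.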
"Original Article Title: CapPlant: A Capsule Network based Framework for Plant Disease Classification To: Academic Editor, PeerJ Computer Science Re: Response to Editor Comments Dear Editor, We want to thank the Reviewers for their valuable suggestions on the manuscript and have edited the manuscript to address their concerns. We believe that the paper presentation is now more sound and convincing. We are uploading (a) our point-by-point response to the comments (response to Editor & Reviewers ), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). We believe that the manuscript is now suitable for publication in PeerJ! Best Regards, Omar Samin, Maryam Omar and Musadaq Mansoor Editor, Concern # 1 As mentioned in the reviewer response letter, provide the github link of the source code within the manuscript. Author Action: Thankyou for your suggestion. We have added a new section Availability of Data and Source Code (line 350-353) at the end of manuscript. In this section, we have added a link to dataset as well as github link for CapPlant source code. Editor, Concern # 2 Correct the English grammar of the manuscript, wherever required. Author Action: Thankyou for your suggestion. We have corrected several grammatical mistakes in our manuscript. All changes are reflected in tracked and clean manuscript. Reviewer#1, Concern # 1: In the methodology section, authors have mentioned that CapPlant is a real deep learning architecture because it uses end-to-end learning. However, in research paper end-to-end learning is not explained and discussed. It would be better if the authors can explain the concept of end-to-end learning in the paper. Author Action: Thank you for your valuable suggestion. As recommended, we have added a paragraph at the end of section 2: Methodology that explains the concept of end-to-end learning. Reviewer#1, Concern # 2: Figure 3 shows the learning curves obtained while calculating training and validation accuracy and it gives an idea that model was trained for 200 epochs. In contrast, Table 1 lists 100 epochs. Author should explain that why model was trained for 200 epochs but model till 100 epochs was used for testing and validation. Author Action: We have trained our model for total 200 epochs, however we have also obtained trained models at 50, 100, and 150 epochs for sake of comparison. Figure 3 shows the learning curves obtained while calculating training and validation accuracy and it gives an idea that model was trained for 200 epochs, as early stopping was employed at epoch 100 to avoid overfitting, therefore Table 1 lists 100 epochs. We have also added this explanation at the end of section 3.1.2 training. Reviewer#1, Concern # 3: Please cite Plant Village dataset in Table 2. Author Action: We have updated the manuscript by citing Plant Village dataset in caption of Table 3 (Table 2 in previous manuscript is now Table 3 in updated manuscript). Reviewer#1, Concern # 4: It would be helpful if the authors can include the values of standard deviation values along with the accuracies reported in Table 4. Author Action: Upon suggestion, we have calculated the Standard Deviation (SD) of training, test and validation accuracies of VGGnet and CapPlant model. SD of VGGnet is found to be 1.14 whereas for CapPlant it is 3.14. 
Higher SD is observed for CapPlant as accuracies are 98.06%, 92.31% and 93.07% (above 90% but distributed) whereas for VGGnet model it is 83.86%, 81.92% and 81.83%(low accuracy but less deviation). Therefore, upon reconsideration we have calculated and added MEAN instead of SD to compare our models further in Table 4. We would request our reviewer to reconsider this point and accept MEAN values for comparison. Reviewer#2, Concern #1: In introduction author has mentioned that “exploiting capsule layer enables the model to capture relative spatial and orientation relationship between different entities of an object in image”. How? Author Action: We have added the following paragraph in Introduction to clear ambiguities and to support our claim: As CNN stores information in scalar form, they are considered as translational and rotational invariant, whereas in capsule network, information is grouped together in form of vector where length of a capsule vector represents the probability of the existence of feature in an image and the direction of the vector would represent its pose information. Therefore, exploiting capsule layer enables the model to capture relative spatial and orientation relationship between different entities of an object in image. Reviewer#2, Concern # 2: 2. In Table 2, author has already assigned class labels, these labels should be used in Table 4 & Table 5 instead of class names while discussing F1 score, Precision and Recall of each class. Author Action: Thankyou for your suggestion, we tried to use assigned labels in Figure 5 & Figure 6 as shown in figure below, however the chart gets more amalgamated and complex to read as names are not available, therefore we have reverted the changes to original. In our opinion, it will be convenient for the reader to understand bar graph if names are available. We would request our reviewer to consider Bar Charts with names instead of class numbers. Reviewer#2, Concern # 3: Paper lacks detailed model summary of proposed CapPlant model. There should be a visual illustration that shows type of layer(input, CNN, Capsule, Dense etc ) input tensor and output tensor of a complete model. Author Action: Thankyou for your suggestion, we have added Figure 3, which is a visual illustration that shows type of layer (input, CNN, Capsule, Dense etc ) input tensor and output tensor of a complete model. Reviewer#3, Concern # 1: n Line 49-58, the authors mention that “These models have some major drawbacks, for instance one major issue with some of these models is in targeting less number of crops and diseases, secondly they are presenting results using limited or none evaluation metrices. In this research, a deep learning architecture; CapPlant is developed using CNN along with capsule network to classify and detect any disease found in plants accurately.” It would be nice to elaborate a bit the evaluation metrics that the existing techniques are not focussing on and the ones that CapPlant is dealing with. Author Action: Thankyou for your valuable suggestion we have updated the paragraph as follows: These models have some major drawbacks, for instance one major issue with some of these models is in targeting less number of crops and diseases, secondly they are presenting results using limited or none standard evaluation metrices like Accuracy (Testing, Training and Validation), Precision, Recall and F1-Score that are generally used for evaluating a classification model . 
In this research, a deep learning architecture, CapPlant, is developed using a CNN along with a capsule network to classify and detect any disease found in plants accurately. Reviewer#3, Concern # 2: For the purpose of reproducibility, it would be nice to make the code available in the form of a Git repository. Author Action: Upon suggestion, we have created a public repository for CapPlant. [Github Link] Reviewer#3, Concern # 2: Threats to Validity for the CapPlant approach should be provided. Author Action: Thank you for your suggestion; we have added the following paragraph to our conclusion, which highlights the threats to validity for CapPlant: For now, the model has been tested and validated only on the already publicly available PlantVillage dataset. The threats to the validity of the results obtained from the CapPlant model may depend on properties such as size, unsharpness, bit depth, and noise in the underlying test images. "
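As a small aside to the standard-deviation discussion in the response above, the following NumPy check (not part of the authors' material) recomputes the mean and sample standard deviation of the accuracies quoted for VGGnet and CapPlant.

```python
# Quick NumPy check (not from the authors' material) of the mean and sample
# standard deviation discussed in the response, using the quoted accuracies.
import numpy as np

vggnet   = np.array([83.86, 81.92, 81.83])   # training / validation / test accuracy (%)
capplant = np.array([98.06, 92.31, 93.07])

for name, acc in [("VGGnet", vggnet), ("CapPlant", capplant)]:
    print(name, "mean =", round(acc.mean(), 2),
          "sample SD =", round(acc.std(ddof=1), 2))
# Gives means of about 82.54 and 94.48 and sample SDs of about 1.15 and 3.12,
# close to the 1.14 and 3.14 quoted above (small differences are due to rounding).
```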
Here is a paper. Please give your review comments after reading it.
256
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper continues the work initiated by the authors on the feasibility of using ParaView as visualization software for the analysis of parallel Computational Fluid Dynamics (CFD) codes' performance. Current performance tools have limited capacity of displaying their data on top of three-dimensional, framed (i.e. time-stepped) representations of the cluster's topology. In our first paper, a plugin for the open-source performance tool Score-P was introduced, which intercepts an arbitrary number of manually selected code regions (mostly functions) and send their respective measurements -amount of executions and cumulative time spent --to ParaView (through its in situ library, Catalyst), as if they were any other flow-related variable. Our second paper added to such plugin the capacity to (also) map communication data (messages exchanged between MPI ranks) to the simulation's geometry. So far the tool was limited to codes which already have the in situ adapter; but in this paper, we will take the performance data and display it --also in codes without in situ --on a three-dimensional representation of the hardware resources being used by the simulation. Testing is done with the Multi-Grid and Block Tri-diagonal NPBs, as well as Rolls-Royce's CFD code, Hydra. The benefits and overhead of the plugin's new functionalities are discussed.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Computers have become crucial in solving engineering problems. However, standard computers do not have enough power to run more complex simulations (such as those involved in modern engineering problems, like designing an aircraft) on their own. They require parallelized simulation (for instance of the air flowing through the airplane's engine) to be run in High Performance Computing (HPC) hardware.</ns0:p><ns0:p>Such infrastructures are expensive, as well as time and energy consuming. It is thus imperative that the application has its parallel performance tuned for maximum productivity.</ns0:p><ns0:p>There are several tools for analyzing the performance of parallel applications. An example is Score-P 1 <ns0:ref type='bibr' target='#b12'>(Kn&#252;pfer et al., 2012)</ns0:ref>, which is developed in partnership with the Centre for Information Services and HPC (ZIH) of the Technische Universit&#228;t Dresden. It allows the user to instrument the simulation's code and monitor its execution, and can easily be turned on or off at compile time. When applied to a source code, the simulation will not only produce its native outputs at the end, but also the performance data.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> below illustrates the idea.</ns0:p><ns0:p>However, the tools currently available to visualize the performance data (generated by software like Score-P) lag in important features, like three-dimensionality, time-step association (i.e. frame playing), color encoding, manipulability of the generated views etc. As a different category of add-ons, tools for enabling in situ visualization of applications' output data -like temperature or pressure in a Computational Fluid Dynamics (CFD) simulation -already exist too; one example is Catalyst 2 <ns0:ref type='bibr' target='#b2'>(Ayachit et al., 2015)</ns0:ref>. 
They also work as an optional layer to the original code and can be activated upon request, by means of preprocessor directives at compilation stage. The simulation will then produce its native outputs, if any, plus the coprocessor's (a piece of code responsible for permitting the original application to interact with the in situ methods) ones, in separate files. This is illustrated in the bottom part of Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. These tools have been developed by visualization specialists for a long time and feature sophisticated visual resources.</ns0:p><ns0:p>In this sense, why not apply such in situ tools (which enable data extraction from the simulation by separate side channels, in the same way as performance instrumenters) to the performance analysis of parallel applications, thus filling the blank left by the lack of visual resources of the performance tools?</ns0:p><ns0:p>This work is the third in a series of our investigations on the feasibility of merging the aforementioned approaches. First, by unifying the coinciding characteristics of both types of tools, insofar as they augment a parallel application with additional features (which are not required for the application to work). Second, by using the advanced functionalities of specialized visualization software for the goal of performance analysis. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> illustrates the idea.</ns0:p><ns0:p>In our first paper <ns0:ref type='bibr' target='#b0'>(Alves and Kn&#252;pfer, 2019)</ns0:ref>, we mapped performance measurements of code regions amount of executions and cumulative time spent -to the simulation's geometry, just like it is done for flow-related properties. In our second paper <ns0:ref type='bibr' target='#b1'>(Alves and Kn&#252;pfer, 2020)</ns0:ref>, we added to such mapping communication data (messages exchanged between MPI ranks). Henceforth this feature shall be called geometry mode.</ns0:p><ns0:p>Following feedback we have received since, we thought about how our approach could be used to assist with the performance optimization of codes without an in situ adapter. What happens if you move such adapter inside our tool? This corresponds to flipping the positions of the performance and the in situ add-ons on Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>; i.e. so far we were doing performance analysis inside in situ, now we will do in situ inside performance. In this paper, we present the result of such investigation: a new feature in our tool, called topology mode -the capacity of matching the performance data to a three-dimensional representation of the cluster's architecture.</ns0:p><ns0:p>There are two approaches to HPC performance analysis. One uses performance profiles which contain congregated data about the parallel execution behavior. Score-P produces them in the Cube4 format, to be visualized with Cube 3 . The other uses event traces collecting individual run-time events with precise timings and properties. Score-P produces them in the OTF2 format, to be visualized with Vampir 4 . The outputs of our tool are somehow a mixture of both: aggregated data, but by time step.</ns0:p><ns0:p>The presented solution is not intended for permanent integration into the source code of the target application. Instead it should be applied on demand only with little extra effort. 
This is solved in accordance with the typical approaches of parallel performance analysis tools on the one hand and in situ processing toolkits on the other hand. As evaluation cases, the Multi-Grid and Block Tri-diagonal NAS Parallel Benchmarks (NPB) <ns0:ref type='bibr' target='#b5'>(Frumkin et al., 1998)</ns0:ref> will be used, together with Rolls-Royce's in-house CFD code, Hydra <ns0:ref type='bibr' target='#b13'>(Lapworth, 2004</ns0:ref>). This paper is organized as follows: in section 1 we discuss the efforts made so far at the literature to map performance data to the computing architecture's topology and the limitations of their results.</ns0:p><ns0:p>In section 2 we present the methodology of our approach, which is then evaluated in the test-cases in section 3. Finally, section 4 discusses the overhead associated with using our tool. We then conclude the article with a summary.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>In order to support the developer of parallel codes in his optimization tasks, many software tools have been developed. For an extensive list of them, including information about their:</ns0:p><ns0:p>&#8226; scope, whether single or multiple nodes (i.e. shared or distributed memory);</ns0:p><ns0:p>&#8226; focus, be it performance, debugging, correctness or workflow (productivity);</ns0:p><ns0:p>&#8226; programming models, including MPI, OpenMP, Pthreads, OmpSs, CUDA, OpenCL, OpenACC, UPC, SHMEM and their combinations;</ns0:p><ns0:p>&#8226; languages: C, C++, Fortran or Python;</ns0:p><ns0:p>&#8226; processor architectures: x86, Power, ARM, GPU;</ns0:p><ns0:p>&#8226; license types, platforms supported, contact details, output examples etc.</ns0:p><ns0:p>the reader is referred to the Tools Guide 5 of the Virtual Institute -High Productivity Supercomputing (VI-HPS). Only one of them matches the performance data to the cluster's topology: ParaProf <ns0:ref type='bibr' target='#b4'>(Bell et al., 2003)</ns0:ref>, whose results can be seen in the tool's website 6 . The outputs are indeed three-dimensional, but their graphical quality is low, as one could expect from a tool which tries to recreate the visualization environment from scratch. The same hurdle can be found on the works of <ns0:ref type='bibr' target='#b10'>Isaacs et al. (2012) and</ns0:ref><ns0:ref type='bibr' target='#b16'>Schnorr et al. (2010)</ns0:ref>, which also attempt to create a whole new three-dimensional viewing tool (just for the sake of performance analysis). Finally, <ns0:ref type='bibr' target='#b19'>Theisen et al. (2014)</ns0:ref> combined multiple axes onto two-dimensional views:</ns0:p><ns0:p>the generated visualizations are undeniably rich, but without true three-dimensionality, the multiplicity of two-dimensional planes overlapping each other can quickly become cumbersome and preclude the understanding of the results.</ns0:p><ns0:p>On the other hand, when it comes to display messages exchanged between MPI ranks during the simulation, Vampir is the current state-of-the-art tool on the field, but it is still unable to generate threedimensional views. This impacts e.g. on the capacity to distinguish between messages coming from ranks running within the same compute node from those coming from ranks running in other compute nodes. Also, Vampir is not able to apply a color scale to the communication lines. 
Finally, it has no knowledge of the simulation's time-step, whereas this is the code execution delimiter the developers of CFD codes are naturally used to deal with. <ns0:ref type='bibr' target='#b8'>Isaacs et al. (2014)</ns0:ref> got close to it, by clustering event traces according to the self-developed idea of logical time, 'inferred directly from happened-before relationships'. This represents indeed an improvement when compared with not using any sorting, but it</ns0:p><ns0:p>is not yet the time-step loop as known by the programmer of a CFD code. Alternatively, it is possible to Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>isolate the events pertaining to the time step by manually instrumenting the application code and inserting a region called e.g. 'Iteration' (see section 2.1.1 below). <ns0:ref type='bibr' target='#b18'>Sol&#243;rzano et al. (2021) and</ns0:ref><ns0:ref type='bibr' target='#b14'>Miletto et al. (2021)</ns0:ref> have applied such method. We would like then to simplify this process and make it part of the tool's functioning itself. 7</ns0:p><ns0:p>Finally, with regards to in situ methods, for a comprehensive study of the ones currently available, the reader is referred to the work of <ns0:ref type='bibr' target='#b3'>Bauer et al. (2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>METHODOLOGY</ns0:head><ns0:p>This section presents what is necessary to implement our work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Prerequisites</ns0:head><ns0:p>The objective aimed by this research depends on the combination of two scientifically established methods:</ns0:p><ns0:p>performance measurement and in situ processing.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.1'>Performance Measurement</ns0:head><ns0:p>When applied to a source file's compilation, Score-P automatically inserts probes between each code 'region' 8 , which will at run-time measure a) the number of times that region was executed and b) the Finally, the tool is also equipped with an API, which permits the user to increase its capabilities through plugins <ns0:ref type='bibr' target='#b17'>(Sch&#246;ne et al., 2017)</ns0:ref>. The combined solution proposed by this paper takes actually the form of such a plugin.</ns0:p><ns0:formula xml:id='formula_0'>total</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.1.2'>In Situ Processing</ns0:head><ns0:p>In order for Catalyst to interface with a simulation code, an adapter needs to be created, which is responsible for exposing the native data structures (grid and flow properties) to the coprocessor component.</ns0:p><ns0:p>Its interaction with the simulation code happens through three function calls (initialize, run and finalize),</ns0:p><ns0:p>illustrated in blue at Figure <ns0:ref type='figure'>3</ns0:ref>. Once implemented, the adapter allows the generation of post-mortem files (by means of the VTK 9 library) and/or the live visualization of the simulation, both through ParaView 10 .</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Combining both Tools</ns0:head><ns0:p>In our previous works <ns0:ref type='bibr' target='#b0'>(Alves and Kn&#252;pfer, 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alves and Kn&#252;pfer, 2020)</ns0:ref>, a Score-P plugin has been developed, which allows performance measurements for an arbitrary number of manually selected code regions and communication data (i.e. 
messages exchanged between MPI ranks) to be mapped to the simulation's original geometry, by means of its Catalyst adapter (a feature now called geometry mode).</ns0:p><ns0:p>In this paper, we are extending our software to map those measurements to a three-dimensional representation of the cluster's topology, by means of the plugin's own Catalyst adapter (a new feature named topology mode). The plugin must be turned on at run-time through an environment variable (export SCOREP SUBSTRATE PLUGINS=Catalyst), but works independently of Score-P's profiling or tracing modes being actually on or off. Like Catalyst, it needs three function calls (initialize, run and finalize)</ns0:p><ns0:p>to be introduced in the source code, illustrated in violet at Figure <ns0:ref type='figure'>3</ns0:ref>. However, if the tool is intended to be used exclusively in topology mode, the blue calls shown at Figure <ns0:ref type='figure'>3</ns0:ref> are not needed, given in this mode the plugin depends only on its own Catalyst adapter (i.e. the simulation code does not need to have any reference to VTK whatsoever). 7 The correspondent drawback is that the tool will not be suitable for detecting variations inside the course of one time step. For such analyses, the user is referred to the currently available tools, like Vampir.</ns0:p><ns0:p>8 Every 'function' is naturally a 'region', but the latter is a broader concept and includes any user-defined aggregation of code lines, which is then given a name. It could be used e.g. to gather all instructions pertaining to the main solver (time-step) loop. 9 https://www.vtk.org/ 10 https://www.paraview.org/</ns0:p></ns0:div> <ns0:div><ns0:head>4/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:03:58866:1:1:NEW 2 Aug 2021)</ns0:ref> Manuscript to be reviewed Finally, a call must be inserted before each function to be pipelined, as illustrated in Figure <ns0:ref type='figure'>4</ns0:ref> below. This layout ensures that the desired region will be captured when executed at that specific moment and not in others (if the same routine is called multiple times -with distinct inputs -throughout the code, as it is common for CFD simulations). The selected functions may even be nested. This is not needed when tracking communications between ranks, as the instrumentation of MPI regions is made independently at run-time (see section 2.1.1 above).</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>EVALUATION</ns0:head><ns0:p>This section presents how our work is going to be evaluated.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Settings</ns0:head><ns0:p>Three test-cases will be used to demonstrate the new functionality of the plugin: two well-known benchmarks and an industry-grade CFD Code. All simulations were done in Dresden University's HPC cluster (Taurus), whose nodes are interconnected through Infiniband. Everything was built / tested with release 2018a of Intel &#174; compilers in association with versions 6.0 of Score-P and 5.7.0 of ParaView.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1'>Benchmarks</ns0:head><ns0:p>The NAS Parallel Benchmarks (NPB) <ns0:ref type='bibr' target='#b5'>(Frumkin et al., 1998</ns0:ref>) 'are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks are derived from computational fluid dynamics (CFD) applications and consist of five kernels and three pseudo-applications'. 
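Before detailing the individual runs, the following sketch shows, in simplified C, how such a code is prepared for the plugin: the three calls the plugin requires (Section 2.2, Figure 3), the call placed before each region to be pipelined (Figure 4), and a manually instrumented Score-P region guarded by the time-step, in the spirit of Figure 13. It is an illustrative sketch only: the plugin_* names are hypothetical placeholders rather than the plugin's actual interface, whereas the SCOREP_USER_* macros belong to Score-P's standard user instrumentation API (enabled by compiling with the --user flag).

    /* Illustrative sketch: placement of the (hypothetical) plugin calls and of a
       guarded Score-P user region inside a generic solver loop. */
    #include <scorep/SCOREP_User.h>

    void adi(void);                            /* central solver routine (adi in BT, mg3P in MG)   */
    void plugin_initialize(void);              /* hypothetical name: once, before the loop         */
    void plugin_pipeline(const char *region);  /* hypothetical name: announce the next region      */
    void plugin_run(int step);                 /* hypothetical name: assumed once per time-step    */
    void plugin_finalize(void);                /* hypothetical name: once, after the loop          */

    void solver_loop(int n_steps, int output_freq)
    {
        plugin_initialize();
        for (int step = 1; step <= n_steps; ++step) {
            /* measure only at time-step 1 and at output steps (cf. Figure 13) */
            if (step == 1 || step % output_freq == 0) {
                SCOREP_USER_REGION_DEFINE(solver_region);
                SCOREP_USER_REGION_BEGIN(solver_region, "Iteration",
                                         SCOREP_USER_REGION_TYPE_COMMON);
                plugin_pipeline("adi");        /* capture the next region only at this call site */
                adi();
                SCOREP_USER_REGION_END(solver_region);
            } else {
                adi();
            }
            plugin_run(step);
        }
        plugin_finalize();
    }

At run time the plugin is then switched on with export SCOREP_SUBSTRATE_PLUGINS=Catalyst; the interception of MPI routines can be restricted or disabled through the SCOREP_MPI_ENABLE_GROUPS variable (see Section 2.1.1).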
Here one of each is used: the Multi-Grid (MG) and the Block Tri-diagonal (BT) respectively (version 3.4). Both were run in a Class D layout by four entire Sandy Bridge nodes, each with 16 ranks (i.e. pure MPI, no OpenMP), one per core and with the full core memory (1875 MB) available. Their grids consist of a parallelepiped with the same number of points in each cartesian direction. Finally, both are sort of 'steady-state' cases (i.e. the time-step is equivalent to an iteration-step).</ns0:p><ns0:p>In order for the simulations to last at least 30 minutes, 11 MG was run for 3000 iterations (each comprised of 9 multigrid levels), whereas BT for 1000. The plugin generated VTK output files every 100 iterations for MG (i.e. 30 'stage pictures' by the end of the simulation, 50 MB of data in total), every 50 iterations for BT (20 frames in the end, same amount of data), measuring the solver loop's central routine (mg3P and adi respectively) in each case.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2'>Industrial CFD Code</ns0:head><ns0:p>Hydra is Rolls-Royce's in-house CFD code <ns0:ref type='bibr' target='#b13'>(Lapworth, 2004)</ns0:ref>, based on a preconditioned time marching of the Reynolds-averaged Navier-Stokes (RANS) equations. They are discretized in space using an edgebased, second-order finite volume scheme with an explicit, multistage Runge-Kutta scheme as a steady time marching approach. Steady-state convergence is improved by multigrid and local time-stepping acceleration techniques <ns0:ref type='bibr' target='#b11'>(Khanal et al., 2013)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> shows the test case selected for this paper: it represents a simplified (single cell thickness), 360&#176;testing mesh of two turbine stages in an aircraft engine, discretized through approximately 1 million points. Unsteady RANS calculations have been made with time-accurate, second-order dual time-stepping. Turbulence modelling was based on standard 2-equation 11 Less than that would make the relative (percentage) statistical oscillation of the run time too big for valid comparisons (see section 4 below).</ns0:p></ns0:div> <ns0:div><ns0:head>6/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:1:1:NEW 2 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed Here the simulations were done using two entire Haswell nodes, each with 24 ranks (again pure MPI), one per core and with the entire core memory (2583 MB) available. Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> shows the domain's partitioning among the processes. The shape of the grid, together with the rotating nature of two of its four blade rings (the rotors), anticipates that the communication patterns here are expected to be extremely more complex than in the benchmarks.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>One full engine's shaft rotation was simulated, comprised of 200 time-steps (i.e. one per 1,8&#176;), each internally converged through 40 iteration steps. The plugin was generating post-mortem files every 20 th time-step (i.e. every 36&#176;), what led to 10 stage pictures (12 MB of data) by the end of the simulation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Results</ns0:head><ns0:p>The second part of this section presents the results of applying our work on the selected test-cases. 
The benchmarks will be used more to illustrate how the tool works, whereas a true performance optimization task will be executed with the industrial CFD code.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Benchmarks</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref> shows the plugin outputs for an arbitrary time-step in the MG benchmark. The hardware information (i.e. in which core, socket etc. each rank is running) is plotted on constant z planes; the network information (i.e. switches that need to be traversed in order for inter-node communications to be performed), on its turn, is shown on the x = 0 plane. Score-P's measurements, as well as the rank id number, are shown just below the processing unit (PU) where that rank is running, ordered from left to right (in the x direction) within one node, then from back to front (in the z direction) between nodes. Finally, the MPI communication made in the displayed time-step is represented through the lines connecting different rank ids' cells.</ns0:p><ns0:p>Here, notice how each compute node allocated to the job becomes a plane in ParaView. They are ordered by their id numbers (see the right-hand side of Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref>) and separated by a fixed length (adjustable at run-time through the plugin's input file). Apart from the node id, it is also possible to color the planes by the topology type, i.e. if the cell refers to a socket, a L3 cache, a processing unit etc., as done on the left side of the figure. This means that, between any pair of planes in Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref> there might be other compute nodes (by order of id number) in the cluster infrastructure; but, if that is the case, none of its cores are participating in the current simulation. The inter-node distance in ParaView will be bigger, however, if the user activated the drawing of network topology information and the compute nodes involved in the simulation happen to be located in different network islands, as shown on Figure <ns0:ref type='figure'>7</ns0:ref>. This is indeed intuitive, as messages exchanged between nodes under different switches will need to travel longer in order to be delivered (when compared to those exchanged between nodes under the same switch). Taurus uses Slurm 13 , which carefully allocates the MPI ranks by order of compute node id (i.e. the node with lower id will receive the first processes, whereas the node with higher id will receive the last processes). It also attempts to place those ranks as close as possible to one another (both from an intra and inter node perspectives), as to minimize their communications' latency. But just to illustrate the plugin's potential, Figure <ns0:ref type='figure'>8</ns0:ref> shows the results when forcing the scheduler to use at least a certain amount of nodes for the job. Notice how only the sockets (the cyan rectangles in the figure) where there are allocated cores are drawn in the visualization; the same applies to the L3 cache (the blue rectangles). Also, notice how the switches are positioned in a way that looks like a linkage between the machines (the yellow rectangles in the figure) they connect. 
This is intentional (it makes the visualization intuitive).

With regards to messages sent between ranks, in order to facilitate the understanding of the communication behavior, the source / destination data is also encoded in the position of the lines themselves: they start from the bottom of the sending rank and go downwards toward the receiving one. This way, it is possible to distinguish, and simultaneously visualize, messages sent from A to B and from B to A. In Figure 9, notice how the manipulation of the camera angle (an inherent feature of visualization software like ParaView) allows the user to immediately get useful insights into their code's behaviour (e.g. the even nature of the communication channels in MG versus the cross-diagonal shape in BT).

In Figure 9, notice also how all ranks on both benchmarks talk either to receivers within the same node or to those in the nodes immediately before / after. The big lines connecting the first and last nodes suggest some sort of periodic boundary condition inside the grid. This can be misleading: lines between cores in the first and last nodes need to cross the entire visualization space, making it harder to understand.

12 Companies like Rolls-Royce usually purchase computational resources: they are not willing to buy the compute time of e.g. 16 nodes when they only need 4 for a specific simulation. In this sense, performance degradation due to nearby jobs (sharing the same network switch) is seen as 'part of life'.

13 https://slurm.schedmd.com/

Finally, Slurm comes with a set of tools which will go through the cluster network (Infiniband, in our case) and automatically generate its connectivity information, saving it into a file. This file has been used as the network topology configuration file and is read by the plugin at run time. If it is not found, the drawing of the planes in ParaView will not take the switches into account.

We submitted such results to Rolls-Royce, whose developers then changed their code and sent it back to us. The new communication behavior can be seen in Figure 12. Notice how the minimum number of messages sent between any pair of processes dropped from 1500 to 170 (see the lower limit of the scale at the upper-right corner of the left picture); analogously, the minimum amount of data sent rose from 0 to 68 kB (see the lower limit of the scale at the upper-right corner of the right picture). That is, no more empty messages are being sent, and this is visible in the visualization of the communication lines.

The plugin has been successfully used in a real-life performance optimization problem, whose detection would be difficult if using the currently available tools 14.

Figure 13. Example of a manual (user-defined) code instrumentation with Score-P; the optional if clauses ensure measurements are collected only at the desired time-steps.

4 OVERHEAD

Since we are dealing with performance analysis, it is necessary to investigate the impact of our tool itself on the performance of the instrumented code execution.

4.1 Settings

In the following tables, the baseline results refer to the pure simulation code, running as per the settings presented in Sec. 3; the numbers given are the average of 5 runs ± 1 relative standard deviation. The + Score-P results refer to when Score-P is added onto it, running with both profiling and tracing modes deactivated (as neither of them is needed for the plugin to work) 15. Finally, ++ plugin refers to when the plugin is also used: running only in topology mode, in only one feature (regions or communication) at a time 16, and only on the iterations when output files would be generated 17. The percentages shown in these two columns are not the variation of the measurement itself, but its deviation from the average baseline result.

14 Vampir, for instance, is not able to show an aggregated view of the communication pattern inside the time step, as it has no knowledge about it (when it starts and when it finishes). The data scales shown on Figures 11 and 12 are not available then, which makes it difficult to spot channels (pairs of sender/receiver) through which no proper data is sent (messages with 0 size).

15 If activated, there would be at the end of the simulation, apart from the simulation's output files, those generated by Score-P for visualization in Cube (profiling mode) or Vampir (tracing mode). Their generation can co-exist with the plugin usage, but it is not recommended: the overheads sum up.

16 The plugin can perfectly run in all its modes and features at the same time (geometry mode requires the simulation to have a Catalyst adapter; see our previous papers). However, this is not recommended: the overheads sum up.

17 Given the simulation was not being visualized live in ParaView, there was no need to let the plugin work in time-steps when no data would be saved to disk.

Score-P was always applied with the --nocompiler flag. This option is enough when the plugin is used to show communication between ranks, as no instrumentation (manual or automatic) is needed when solely MPI calls are being tracked. On the other hand, the instrumentation overhead is considerably higher when the target is to measure code regions, as every single function inside the simulation code is a potential candidate for analysis (as opposed to when tracking communications, when only MPI-related calls are intercepted). In this case, it was necessary to add the --user Score-P compile flag and manually instrument the simulation code (i.e. only the desired regions were visible to Score-P). An intervention as illustrated in Figure 13 achieves this; the optional if MODULO... clause
additionally guarantees that measurements are collected only when there would be generation of output files and at time-step 1. The reason for this is that Catalyst runs even when there are no post-mortem files being saved to disk (as the user may be visualizing the simulation live), and the first time-step is of unique importance, as all data arrays must be defined then (i.e. the (dis)appearance of variables in later time-steps is not allowed) 18. Finally, when measuring code regions, interception of MPI-related routines was turned off at run-time 19.

18 Hence, there were two narrowing factors for Score-P in the end: the spatial one (i.e. accompany only the desired functions) and the temporal one (accompany only at the desired time-steps).

19 By means of the SCOREP_MPI_ENABLE_GROUPS environment variable (see Sec. 2.1.1 above).

4.2 Results

Tables 1 and 2 show the impact of the proposed plugin on the test-cases' performance. The memory section refers to the peak memory consumption per parallel process, reached at some point during the simulation; it neither means that all ranks needed that amount of memory (at the same time or not), nor that the memory consumption stayed at that level during the entire simulation. Score-P itself introduced no perceptible overhead; the plugin, in turn, did, and that is because it is equipped with a Catalyst adapter (whose footprint lies mostly in memory consumption (Ayachit et al., 2015)). Catalyst needs this memory to store the coordinates and cell definitions of the artificial geometry (the topological representation of the hardware resources being used), plus all the data arrays associated with them (amount of times a function was executed, amount of messages sent between two ranks etc.), for each time-step during the simulation. Hence the added memory footprint is higher.

The run time overhead, in turn, is only critical when measuring the two code regions selected in Hydra: they are called millions of times per time-step, hence their instrumentation is heavy. Otherwise the plugin's or Score-P's footprints lie within the statistical oscillation of the baseline results.

CONCLUSIONS

In this paper, we have extended our software to allow mapping performance data to a three-dimensional representation of the cluster's architecture, by combining the code instrumenter Score-P and the graphics manipulation program ParaView. The tool, which takes the form of a Score-P plugin, introduces the following novel capabilities to the spectrum of code analysis resources:

• detailed view up to topology component level (i.e. in which core of which socket of which node a specific MPI rank is running);

• limit visualization to resources being used by the simulation;

• native association with the simulation's time-step;

• individual components of the visualization (like the network switches) are optional to produce and to display (i.e.
see only what you want to see);

• easily distinguish messages coming from ranks within the same compute node from those coming from ranks running in other compute nodes, something not possible in a tool like Vampir;

• a color scale individually applicable to each element of the visualization, allowing, for example, the communication lines to be colored by amount of bytes sent, receiver id, sender id etc. (something also not possible in Vampir).

All of that comes with the graphic quality of today's state-of-the-art visualization program, ParaView: render views are fully manipulable and tens of filters are available to further dig into the data. ParaView is the best option as visualization software because of all the resources already available in it, and the experience accumulated by it, after decades of continuous development. Visualization techniques are usually not the specialty of programmers working on code performance: it is more reasonable to take advantage of the currently available graphics programs than to attempt to equip the performance tools with their own GUIs from scratch.

Our tool is based exclusively on open-source dependencies; its source code is freely available 20, as is the raw data of the benchmark results presented in this paper 21. It works with either automatic or manual code instrumentation and independently of Score-P's profiling or tracing modes. Lastly, its output frequency (when doing post-mortem analyses) is adjustable at run-time (through the plugin input file), like in Catalyst itself.

FUTURE WORK

We plan to continue this work in multiple directions:

Scale the tool: To keep testing our tool in bigger and bigger test cases, in order to investigate its scalability limits (if any).

Develop new visualization schemes for performance data: To take advantage of the multiple filters available in ParaView for the benefit of the performance optimization branch, e.g. by recreating in it the statistical analysis (display of average and standard deviation between the threads'/ranks' measurements) already available in other tools.

Remove the necessity of the topology configuration file: When running the plugin in topology mode, get the network details directly from system libraries (as done with the hardware details). Both Slurm and the hwloc team, through its sister project, netloc 22 (Goglin et al., 2014), are striving in that direction, but it is currently not yet possible (partially because the retrieval of the switches' configuration requires root access and therefore needs to be executed by the cluster's admins).

Extend list of supported communication calls: To make the tool capable of detecting calls of other communication protocols, like GPI-2 23 (Grünewald and Simmendinger, 2013). This will require a respective extension of Score-P's substrate plugin API.

Extend list of detectable performance phenomena: To extend the list of performance-relevant phenomena which can be detected by the plugin, for example: cache misses, memory accesses, I/O flows etc.
This will also require a respective extension of Score-P's substrate plugin API.</ns0:p><ns0:p>Use plugin for teaching: Finally, explore the possibility of using the tool for teaching of parallel computing, especially in topics like data locality, job allocation, computer architecture, sharing of computational resources etc. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic of software components for parallel applications</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>2Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic of the software components for a combined add-on</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Illustrative example of changes needed in a simulation code due to Catalyst (blue) and then due to the plugin (violet)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Geometry used in the industrial CFD code simulations (left) and its partitioning among processes for parallel execution (right)</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.57,221.74' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Plugin outputs for an arbitrary time-step at the MG benchmark, visualized from the same camera angle, but with different parameters on each side</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Plugin outputs for the MG benchmark. The leaf switch information is encoded both on the color (light brown, orange and dark brown) and on the position of the node planes (notice the extra gap when they do not belong to the same switch)</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.57,220.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 9 .Figure 11 .Figure 12 .</ns0:head><ns0:label>91112</ns0:label><ns0:figDesc>Figure 9. Side-by-side comparison of the communication pattern between the MG (left) and BT (right) benchmarks, at an arbitrary time-step, colored by source rank of messages</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.57,224.12' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,141.73,63.78,413.57,224.12' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,141.73,63.78,413.56,224.12' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,141.73,349.14,413.57,214.48' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>3/16 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:03:58866:1:1:NEW 2 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>time spent in those executions, by each process (MPI rank) within the simulation. It is applied by simply prepending the word scorep into the compilation command, e.g.: scorep [Score-P's options] mpicc foo.c. It is possible to suppress regions from the instrumentation (e.g. to keep the associated overhead low), by adding the flag --nocompiler to the command above. 
In this scenario,</ns0:figDesc><ns0:table /><ns0:note>Score-P sees only user-defined regions (if any) and MPI-related functions, whose detection can be easily (de)activated at run-time, by means of an environment variable: export SCOREP MPI ENABLE GROUPS=[comma-separated list]. Its default value is set to catch all of them. If left blank, instrumentation of MPI routines will be turned off.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Plugin's overhead when measuring code functions on topology mode.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>running time</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>memory (MB)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>++ plugin</ns0:cell><ns0:cell>+ Score-P</ns0:cell><ns0:cell>baseline</ns0:cell><ns0:cell cols='2'>++ plugin + Score-P</ns0:cell><ns0:cell>baseline</ns0:cell></ns0:row><ns0:row><ns0:cell>MG</ns0:cell><ns0:cell cols='4'>31m42s (0%) 31m09s (-1%) 31m37s &#177; 2% 648 (42%)</ns0:cell><ns0:cell>479 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell>BT</ns0:cell><ns0:cell>34m28s (0%)</ns0:cell><ns0:cell cols='3'>34m26s (0%) 34m28s &#177; 1% 648 (42%)</ns0:cell><ns0:cell>478 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Hydra 47m04s (12%)</ns0:cell><ns0:cell cols='3'>43m52s (4%) 42m00s &#177; 0% 382 (22%)</ns0:cell><ns0:cell>323 (3%) 314 &#177; 0%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Plugin's overhead when showing communication on topology mode.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>running time</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>memory (MB)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>++ plugin</ns0:cell><ns0:cell>+ Score-P</ns0:cell><ns0:cell>baseline</ns0:cell><ns0:cell cols='2'>++ plugin + Score-P</ns0:cell><ns0:cell>baseline</ns0:cell></ns0:row><ns0:row><ns0:cell>MG</ns0:cell><ns0:cell cols='4'>31m34s (0%) 31m09s (-1%) 31m37s &#177; 2% 648 (42%)</ns0:cell><ns0:cell>479 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell>BT</ns0:cell><ns0:cell cols='4'>34m24s (0%) 34m08s (-1%) 34m28s &#177; 1% 648 (42%)</ns0:cell><ns0:cell>477 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Hydra 42m53s (2%)</ns0:cell><ns0:cell cols='3'>43m50s (4%) 42m00s &#177; 0% 397 (26%)</ns0:cell><ns0:cell>316 (1%) 314 &#177; 0%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>We are unable to provide the raw data related to Rolls-Royce's code due to copyright issues.</ns0:figDesc><ns0:table /><ns0:note>20 https://gitlab.hrz.tu-chemnitz.de/alves--tu-dresden.de/catalyst-score-p-plugin 21 https://dx.doi.org/10.25532/OPARA-119. 22 https://www.open-mpi.org/projects/netloc/ 23 The open-source implementation of the GASPI standard, see https://www.gaspi.de/. 14/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:1:1:NEW 2 Aug 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='16'>/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:1:1:NEW 2 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Reviewer 1 (Julian Kunkel) Basic reporting The text is generally easily readable fulfilling the needs of the journal. There are some presentation issues that should be improved though. 1) Abstract It reads a bit awkward and should be rewritten particularly related to the relationship to the previous publications. I wouldn't start with 'This paper continues the work initiated' but rather with the motivation (2nd sentence). Then I do not care which paper there was. I would say: In previous work, we developed XX and YY. In the introduction, you can clarify what exactly you did compared to previous work (and you actually did so). 'also in codes without in situ' => In this article, the tool is extended allowing users to visualize performance data on a 3D representation of the hardware resources ... Visualization can be performed post-mortem or in situ using Catalyst... 2) There are sections where various commas could be added after introductory clauses; they should be added for readability. 3) The presentation of listings in Figures 3, 4, 12 should be improved. 4) Evaluation There surely is plenty of information in Figures 6+. The description feels initially a bit rough and the organization should be improved. The first three images are presented and then explained. I felt a bit left alone trying to understand various information from the first paragraph but then couldn't figure it out. I would reorganize this text a bit, first describe the concept, Line 216ff. Help readers to understand, guide them better through even the first image, then go to the next. I would absolutely not start with Fig 6 as there is way too much info here. 5) There are often too many sections in (), you can try to shorten sentences and rather break them into two. Experimental design The experimental design validates the new presentation mode using standard benchmarks. The design itself is reasonable given the nature of this article. However, the presentation of it shall be improved (see notes). Validity of the findings The approach is sufficiently novel that it deserves publication. I can see that there are cases where this is useful. However, to me, the article reads slightly a bit like a manual instead of a research paper. I would have been intrigued if a study would have been made that actually shows that this presentation is beneficial for users. I know that would have been not easy. But as there can be X-papers that claim arbitrary new ways of presentation, the question is, is the given presentation useful. Personally, I find it potentially helpful but also quite cluttered with messages. I would be interested in qualitative analysis. The conclusions are mostly well stated, the key issue is sometimes to assess 'effectivity' of a presented result. The analysis of the figures is rather subjective and hence a bit speculative and not as clear as the article states. I would welcome more discussion about the quality of the approach. For instance, saying something like 'It is effective to identify X', the question is why couldn't this be analyzed in existing tools? Maybe another (sub)section can be added that discusses the findings critically and puts them into the perspective of existing tools. Comments for the Author 33. 'high-tuned' => tuned 40. remove () around generated... 48. Figure 1 should include 'coprocessor' (it isn't actually shown!) 75. 'None of the existing performance analysis tools incorporates sophisticated output visualization.' => You sure to leave the statement? 
I believe Vampir does have somewhat sophisticated vis... 81. (Hydra) => Hydra 99. (in the hyperlink) => But this is not linked; not good to have it hidden, remove this reference. I would prefer also a proper reference to ParaProf. Much better would be to use footnotes, a practice you use on page 4... Be consistent! 162. The selected functions.... => this sentence isn't clear, add one more. Clarify why 'independently'. Some readers are not aware of how this works... The presentation of listings in Figure 3 and 12 should be improved. Figuer 4. Remove extra whitespace. 181. 'statistical oscillation of the run time too' => clarify the footnote. 211. 'x = 0 plane' => I struggle to find x=0 in the figure. Fig 6 caption => 'with different [visualization] parameters' How do they actually differ? Fig 10 'and not (bottom)' => 'or not (bottom)' I find the order of Table 1 + 2 not intuitive, I would have started with baseline, then +Score-P, then +plugin. Would get rid of an awkard naming '++plugin' in Line 277. It isn't clear from the table that 'running time' includes the relative overhead as well. Again: if you start with baseline, you could just say overhead '12%', cause nobody needs the actual runtime for these anymore. 330. Use a footnote for the GitHub link. ======================================================================= REPLY: Dear Reviewer, Thank you very much for your comments. We have incremented section 3.2.2, which now shows the before and after applying our tool to Rools-Royce's code. Hopefully the effectivity of our plugin can now be more easily identified -- a real performance optimization task is done in the industrial CFD code. We also explain better how a tool like Vampir would have problems to identify the same performance bottleneck. We have also specified that the benchmarks are intended more to illustrate how the visualization is structured. The wording 'high-tuned' has been fixed. Regarding Figure 1, its goal is to illustrate how similar the performance and the in situ approaches are to one another when it comes to apply them to a simulation code. That's why we use the same wording (add-on) in the green and red boxes. The point here is not to explain how in situ works, what a coprocessor is etc. We will briefly do it later in the text, in section 2.1.2. The sentence 'None of the existing performance analysis tools incorporates sophisticated output visualization.' has been removed. The wording '(Hydra)' has been fixed. The links to the Tools Guide and to ParaProf's results have been added as footnotes. Also, a reference to ParaProf has been included. In the sentence 'The selected functions may even be nested.', 'functions' is used as a synonym of 'regions' (see the explanation at footnote 7). Footnote 10 has been clarified with a reference to the overhead section, where we present the relative (percentage) statistical oscillation of the measurements. The explanation of the word 'independently' at the end of section 2.2 is done at the first paragraph of section 2.1.1 (as pointed out in the text). We don't want to explain it again, as not to sound repetitive. Regarding the 'x = 0 plane', the reader doesn't need to worry too much about finding the exact position in the visualization. The important piece of information here is that the network switches are plotted in a plane of constant x coordinate (which just happens to be zero). There is only one plane with that configuration in the views, so finding it should not be a problem. 
Regarding the caption of Figure 6, there are many parameters which are different between the left and right pictures. It is not possible to describe them all in the caption, but only in the main text. Regarding the caption of Figure 10, 'and not (bottom)' has been fixed. The GitHub link to the repository has been placed inside a footnote. Finally, some of the issues pointed out are a matter of style. We have a different view on them, as the reviewers among themselves. We ask for understanding. ======================================================================= Reviewer 2 (Lucas Schnorr) Basic reporting English is okay, I have found no significant typos in the text, except perhaps in Line 235 where the phrase could be clarified. The authors cite literature correctly although further discussion about the differences is missing (see my complete review below, in the 'General comments for the author' box). Raw data about overhead measurements and collected traces are unavailable so authors could make them available thought some perenial archive such as Zenodo (from CERN). Content is not fully self-contained because in the Related work section authors refer to the VI-HPS (lines 97 and 98) using a URL link and not scientific references. The reviewer went to that website and fail to see 'an extensive list of them [many software tools]'. Perhaps citing the works individually would be better. Experimental design My concern here is about the variability of the results (see my complete review below, in the 'General comments for the author' box). The method about the overhead analysis is not fully described. How the experiments have been conducted: were they randomly sorted prior to execution? What is the experimental design (using the Jain's book from 1991 terminology). ISBN: 978-0-471-50336-1. Validity of the findings Although the software has been provided in a lab's gitlab installation, no raw data (traces, measurements, scripts) is available. See my other comments about the validity of the findings in my complete review below, in the 'General comments for the author' box. Comments for the Author The article 'Further Enhancing the in Situ Visualization of Performance Data in Parallel CFD Applications' presents results on using scientific visualization tools such as ParaView to depict performance metrics collected from CFD codes. The work is possible through a connection between a Score-P plugin, capable to collect performance data, and Catalyst, from ParaView, for visualization. The work described here is incremental because previous author's work have not dealt with the mapping of communication performance metrics and the simulation's geometry. The base idea relies in combining the 3D visualization of the computational resources with the performance metrics of the application executing on that platform. Validation is carried out with MG and BT from NAS, as well with Hydra, which is okay because it has both known benchmarks and a real HPC application. _Related Work_ The same authors themselves have already published articles with very close contributions to the topic of this submission, including the usage of same figures (that are repeated from these previous publications), here: 2019 https://link.springer.com/chapter/10.1007/978-3-030-48340-1_31 'In Situ Visualization of Performance-Related Data in Parallel CFD Applications' Rigel F. C. 
AlvesEmail authorAndreas Knüpfer and here: 2020 https://superfri.org/superfri/article/view/317 'Enhancing the in Situ Visualization of Performance Data in Parallel CFD Applications Rigel F. C. Alves, Andreas Knüpfer' http://dx.doi.org/10.14529/jsfi200402 The difference of this submission against those ones is basically stated in a paragraph in the introduction. In 2019, authors map metrics from code regions. In 2020, authors map metrics from communication operations. In these past contributions, performance metrics are always mapped to the simulation's geometry (the object being simulated by the HPC application). In this submission, authors replace such a simulation's geometry by an 3D object that depicts the computational resources. The idea in itself (performance metrics on top of a 3D representation of the computational resources) remains without novelty because several other works in the past have already proposed such features. Some of them are cited in this work (second paragraph), but authors fail to state the differences of this submission against them. Regarding the limitation of Score-P/Vampir about application's iteration tracking, paramount to many performance analysis activities, have been alleaviated by manually instrumenting the application code and inserting a region called 'Iteration'. Thanks to the fact that regions can be stacked, with a bit of data science manipulation, you can classify all events per iteration in a post-processing phase. Refer to these publications were this method has been already employed: 1. Temporal Load Imbalance on Ondes3D Seismic Simulator for Different Multicore Architectures Ana Luisa Veroneze Solórzano, Philippe Olivier Alexandre Navaux, Lucas Mello Schnorr. http://hpcs2020.cisedu.info/4-program/processedmanuscripts 2. Optimization of a Radiofrequency Ablation FEM Application Using Parallel Sparse Solvers. Marcelo Cogo Miletto, Claudio Schepke, Lucas Mello Schnorr. http://hpcs2020.cisedu.info/4-program/processed-manuscripts So, when you write 'We will then advance the state-of-the-art by introducing tracing per time-step itself.' note that many others have already been doing a very similar operation (from the implementation perspective) as a data science procedure. _Results_ Generally, the 3D views on Figures 6, 7, 8, 9, 10, 11 are too dark. They should have a white background or a more light background. The compute nodes id numbers in the right-hand side of Figure 6, when you write Line 216-217, is a gradient color, but understand that there are no node with id number 1.5 or 2.5 (as shown in the respective legend). Perhaps using fixed colors such as the left of Figure 6 to identify compute nodes would be better in a small scale scenario relying on gradient colors only when you don't have enough fixed colors to depict (with some sort of threshold for this). In a general sense, I had some trouble understanding this association (gradient color and id number), so perhaps adding some manual annotation in the figures with arrows and letters could help the reader to identify precisely what you want the reader to see, or stating 'gradient color legend' in the text would help. About the fixed colors, the order on which they appear in the legend should be the same order they appear in the 3D object, so, from top-to-bottom, Machine, Package, L3, L2, L1d, Core, PU. It is clear here that you disregard hyperthreading in the visualization, but probably you are representing only physical PU. 
The 'Results' section (Sec 3.2) presents the 3D views and discuss their capabilities. Most of section 3.2.1 (NPB-oriented) presents technical aspects of the visualizations themselves to make the reader understand and interpret them (the choice of depicting outwards messages downward in the view, the choice of depicting compute nodes as places and their inherent internal struture from the HW hierarchy, and so on). The only part the authors do the performance analysis activity itself appears by the very end of Section 3.2.1 (Lines 253, 254), and in Section 3.2.2 (Hydra-oriented). My general feeling about the paper, regarding the socalled 'results', is that it presents more technical aspects of the 3D views, showing more 'Hey, look how the 3D views work' instead of *using the 3D views to actually do some performance analysis work*, which you have briefly attempt on those two parts I mentionned above. What I would suggest is that the 3D views should be employed in an exploratory and them a specific performance analysis to clearly identify a performance problem. Please note that in such type of BSP applications - with a bunch of iterations with compute/communication phases performance problems are generally anomalies in a few ranks. How would you detect these anomalies in Figure 10 for instance? In all the views you have show, you select 'an arbitrary time-step'. So, your views are iterationoriented. It is very probably that the accumulated metric you are depicting hides important flutuaction (delays in only a subset of ranks, etc) in the metrics, so I wonder how would you cope with the 'temporal aspect' of such metrics. Vampir (and much of the other tools focused on MPI/OpenMP/GPU realm) has a timeline, it is in the core of the tool. Here in these 3D views, such a timeline is absent. How do you see the temporal evolution them? How do you compare two iterations? Or several? Or a trend that appears in several iterations? Such types of limitations of the views should be more clearly stated or developed. _Overhead_ Baseline refers to the 'pure simulation code' and then you present the time with the plugin and with score-p only (Tables 1 and 2). The first thing I don't grasp is how the execution can be faster (-1%) when you have score-p. That does not make sense because you are doing more things. I'd suggest to correctly incorporate variability analysis (check measurement distribution, standard error based on the verified assumption of a given type of distribution, etc) in your interpretation. For example, in Table 2 you have '-1%' of MG but your baseline has a standard deviation of +- 2%. So, that '-1%' might not be significant after all. So I agree in part when you write 'the plugin’s or Score-P’s footprints lie within the statistical oscillation of the baseline results' but I find those percentages weird because they do not reflect the variability of the plugin or scorep measurements but their relation with the baseline. I agree 'in part' because the running time of Hydra is 12% slower than the baseline. This should be more clearly marked, helping to extend the runtime overhead analysis that is. in its current state, too brief (only three lines 304-306). _Notes_ - There is an excessive usage of footnotes (see Pg11/15 for instance). Footnotes usually break the reading flow so I'd recommend either incorporating your discussion in the text (if they are paramount to understanding) or removing them. 
_Summary_ The paper is interesting in itself because it is a new effort in 3D visualization of performance metrics. This comes after several other efforts targetting the same goal in past years. I wonder why those efforts do not appear more often in papers that focus on the performance analysis of HPC applications instead of proposing methods. I pay attention to the fact that very frequently 3D views appear in related tools but they fail to actually be useful in a realistic performance analysis procedure. So, my concerns are: 1. discussion of the differences against related work (other attempts that depict very similar views of this contribution's topology view) 2. the usefulness to clearly detect performance problems and then, after the fix, use the same visualization to show that the problem is gone. ======================================================================= REPLY: Dear Reviewer, Thank you very much for your thorough comments. Line 235 (original numberings) has been simplified and is hopefully now clear. The raw data has been published in our University's repository under a CC-BY 4.0 license. The link has been added to the paper. The link to the Tools Guide has been added as a footnote and should now be immediately accessible. Our experiments consisted solely on repeatedly running the test cases with and without Score-P and the plugin and taking note of the time consumed and maximum memory usage during the simulation, as explained in the paper. We are unable to add more on the matter. The differences between the existing tools and our plugin are discussed in section 1. It is mostly our approach that differs from theirs: we want to use existing visualization software for the purpose of visualization (of the performance data), rather than equipping the performance tools themselves with visualization features (as the existing solutions in the literature do). We have added the information that it is possible to aggregate the events pertaining to the time step loop in an user-defined region and manually instrument the code. The two papers mentioned have been cited. Regarding the pictures from ParaView, it is important to highlight that this is not a tool we develop ourselves, but just use it as it is. Therefore things like the dark background, the gradient color to plot integer values, the order of the fixed colors (to plot strings) in the scale etc. are just the default behavior of ParaView, which we happen to be satisfied with. However, we do agree that when plotting integers, the scale should not have any non-integer numbers; but this is a feature request for ParaView itself, not for our tool. The usefulness of our tool is discussed in section 3.2.2, which has been incremented and now shows the before and after applying our tool to Rolls-Royce's code (compare figures 11 and 12). Hopefully the effectivity of our plugin can now be more easily identified -- a real performance optimization task is done in the industrial CFD code. We have also specified that the benchmarks are intended more to illustrate how the visualization is structured. We have added a statement (footnote 7) in which we clarify that our tool is not suitable for detecting variations inside the course of one time step. For such analyses, the user is referred to the currently available tools, like Vampir. 
Regarding the overhead section, we have added a clarification that the percentages shown in the +Score-P and ++plugin columns are not the variation of the measurement itself, but its deviation from the average baseline result. Finally, some of the issues pointed out are a matter of style. We have a different view on them, as do the reviewers among themselves. We ask for understanding. ======================================================================= Reviewer 3 (Anonymous) Basic reporting The basic reporting needs to be improved in several ways Writing ---------Overall the text is readable and mostly understandable. However, it needs significant wordsmithing to make it actually easily readable. It contains a lot of text in parentheses that makes reading less easy. Sometimes formulations are not optimal and the order of words is wrong, see e. g. - line 20: 'added to such plugin' - line 88: the developer is referred to with the pronoun 'it' - line 123: 'The objective aimed by this ...' - line 139: 'to the simulation original geometry' -> add 's - line 182: 'whereas BT for 1000' - line 182: 'The plugin would generate'. Why 'would'? Does it or does it not? - line 256: 'generate automatically' -> adverbs usually should be in front of the verb not behind it - line 256: 'onto a file' ... more commonly 'into' than 'onto' - line 264: What does 'Inclusively' mean here? - line 295: 'somewhen' is archaic. Use 'sometime'. - line 326: 'do not use to be' It appears a bit strange to me to read the judgement 'a fine example' if the authors refer to their *own* previous work. (line 34) I found a *central* part (lines 65 to 79) of the introduction not to be well understandable. What does the 'flipping' mean in detail? What does 'performance ... inside in situ' mean in contrast to 'in situ inside performance'? The statement in Line 166 is at the section level and describes what will be discussed on the subsection level, while the related statement in line 206 is on the subsection level and describes what is discussed in the subsection. This is inconsistent. What does 'in the hyperlink' in line 99 refer to? I was surprised to see that the acknowledgements section is mentioned (line 86) in the description of how the paper is organized. I have never seen that before and I would remove it. In line 143/144 the paper talks about 'the generation of ... files ...(by means of the VTK ...'. Are the files really *generated* by VTK or are they *visualized* using VTK? Finally, stating that one has 'extended our software' in the first line of the conclusion does not seem appropriate. The software should be explicitly named here. In other words: Which software? References --------------The related work section appears to list relevant overview literature. At some other places in the paper, however, I am missing references supporting specific statements and claims: - Line 50: References supporting the claim 'for decades' are needed. - Line 97: A reference where to find the 'Tools Guide' is needed. - In line 177: 'modified Class D' should be explained or a reference explaining it needs to be provided. Figures ---------In general the figures demonstrate the proposed approach well. Unfortunately, they are not always as clear as possible. My main problem with the figures is that it is often hard to know which color bar shows the scale for which of the visualized data. This has several reasons. First, the coloring used is sometimes the same for different quantities but the color bars are only shown in the corners of the images. 
Relating a color bar to the parts of the visualization where it is used is thus hardly possible without very carefully reading the text. Additionally, most of the needed descriptions are only provided in the running text and not in the caption. Second, the meaning of the values at some of the color bars is unclear. The switch_id, for example, has values between 0 and 1.2e-38. I assume that both ends of the scale are simply referring to the same zero value. But this is obscured by the values next to the color bar. Experimental design The very basic experiments appear to be well designed. Two common benchmark applications as well as one industry application are used to illustrate the software's performance and memory overhead as well as the resulting visualizations. An evaluation of or even expert comments on the usefulness of the presented approach is missing entirely. Thus the paper demonstrates that the approach can be used but not that it is actually useful. A discussion of how the new approach/tool helped or even could help an expert user is needed. Validity of the findings no additional comments Comments for the Author The paper presents an interesting approach/tool for enhanced performance analysis of highly parallel computing applications. Demonstration of the usefulness and the writing need to be improved to make the paper ready for publication. To allow for these improvements to be made, I recommend a major revision. ======================================================================= REPLY: Dear Reviewer, Thank you very much for your comments. The issues in lines 88, 139, 182, 256, 264 (original numbering) have been fixed. The expression 'a fine example' has been removed. The inconsistency between lines 166 and 206 has been fixed. The expression 'in the hyperlink' has been replaced by the address itself in a footnote. The reference to the acknowledgments has been removed from the last paragraph of the introduction. Regarding section 2.1.2, VTK is the set of libraries on top of which ParaView is built. The post-mortem files are both generated and visualized thanks to VTK. Regarding the missing references, 'for decades' has been replaced, the link to the Tools Guide has been added as a footnote and the 'modified' has been removed (it is just standard Class D, but with a different number of iterations, as explained a bit further in the text). Regarding the figures, the coloring applied is the default used by ParaView (a tool we do not develop, but just use as it is) to plot numbers. There is too much information available on the same visualization, so it is not possible to include all the details in the captions. On the other hand, we do agree that ParaView should not output a scale from 0 to 1.2e-38 when the values plotted are just all zero, as it confuses the user. But this is a feature request for ParaView itself. The usefulness of our tool is discussed in section 3.2.2, which has been expanded and now shows the before and after of applying our tool to Rolls-Royce's code. Hopefully the effectiveness of our plugin can now be more easily identified -- a real performance optimization task is carried out in the industrial CFD code. We also explain better how a tool like Vampir would have problems identifying the same performance bottleneck. We have also specified that the benchmarks are intended more to illustrate how the visualization is structured. Finally, some of the issues pointed out are a matter of style. We have a different view on them, as do the reviewers among themselves. 
We ask for understanding. "
Here is a paper. Please give your review comments after reading it.
257
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper continues the work initiated by the authors on the feasibility of using ParaView as visualization software for the analysis of parallel Computational Fluid Dynamics (CFD) codes' performance. Current performance tools have limited capacity of displaying their data on top of three-dimensional, framed (i.e. time-stepped) representations of the cluster's topology. In our first paper, a plugin for the open-source performance tool Score-P was introduced, which intercepts an arbitrary number of manually selected code regions (mostly functions) and send their respective measurements -amount of executions and cumulative time spent --to ParaView (through its in situ library, Catalyst), as if they were any other flow-related variable. Our second paper added to such plugin the capacity to (also) map communication data (messages exchanged between MPI ranks) to the simulation's geometry. So far the tool was limited to codes which already have the in situ adapter; but in this paper, we will take the performance data and display it --also in codes without in situ --on a three-dimensional representation of the hardware resources being used by the simulation. Testing is done with the Multi-Grid and Block Tri-diagonal NPBs, as well as Rolls-Royce's CFD code, Hydra. The benefits and overhead of the plugin's new functionalities are discussed.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Computers have become crucial in solving engineering problems. However, standard computers do not have enough power to run more complex simulations (such as those involved in modern engineering problems, like designing an aircraft) on their own. They require parallelized simulation (for instance of the air flowing through the airplane's engine) to be run in High Performance Computing (HPC) hardware.</ns0:p><ns0:p>Such infrastructures are expensive, as well as time and energy consuming. It is thus imperative that the application has its parallel performance tuned for maximum productivity.</ns0:p><ns0:p>There are several tools for analyzing the performance of parallel applications. An example is Score-P 1 <ns0:ref type='bibr' target='#b14'>(Kn&#252;pfer et al., 2012)</ns0:ref>, which is developed in partnership with the Centre for Information Services and HPC (ZIH) of the Technische Universit&#228;t Dresden. It allows the user to instrument the simulation's code and monitor its execution, and can easily be turned on or off at compile time. When applied to a source code, the simulation will not only produce its native outputs at the end, but also the performance data.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> below illustrates the idea.</ns0:p><ns0:p>However, the tools currently available to visualize the performance data (generated by software like Score-P) lag in important features, like three-dimensionality, time-step association (i.e. frame playing), color encoding, manipulability of the generated views etc. As a different category of add-ons, tools for enabling in situ visualization of applications' output data -like temperature or pressure in a Computational Fluid Dynamics (CFD) simulation -already exist too; one example is Catalyst 2 <ns0:ref type='bibr' target='#b2'>(Ayachit et al., 2015)</ns0:ref>. 
They also work as an optional layer to the original code and can be activated upon request, by means of preprocessor directives at compilation stage. The simulation will then produce its native outputs, if any, plus the coprocessor's (a piece of code responsible for permitting the original application to interact with the in situ methods) ones, in separate files. This is illustrated in the bottom part of Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. These tools have been developed by visualization specialists for a long time and feature sophisticated visual resources <ns0:ref type='bibr' target='#b3'>(Bauer et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In this sense, why not apply such in situ tools (which enable data extraction from the simulation by separate side channels, in the same way as performance instrumenters) to the performance analysis of parallel applications, thus filling the blank left by the lack of visual resources of the performance tools?</ns0:p><ns0:p>This work is the third in a series of our investigations on the feasibility of merging the aforementioned approaches. First, by unifying the coinciding characteristics of both types of tools, insofar as they augment a parallel application with additional features (which are not required for the application to work). Second, by using the advanced functionalities of specialized visualization software for the goal of performance analysis. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> In our first paper <ns0:ref type='bibr' target='#b0'>(Alves and Kn&#252;pfer, 2019)</ns0:ref>, we mapped performance measurements of code regions amount of executions and cumulative time spent -to the simulation's geometry, just like it is done for flow-related properties. In our second paper <ns0:ref type='bibr' target='#b1'>(Alves and Kn&#252;pfer, 2020)</ns0:ref>, we added to such mapping communication data (messages exchanged between MPI ranks). Henceforth this feature shall be called geometry mode.</ns0:p><ns0:p>Following feedback we have received since, we thought about how our approach could be used to assist with the performance optimization of codes without an in situ adapter. What happens if you move such adapter inside our tool (i.e. if you equip it with an in situ adapter of its own)? In this paper, we present the result of such investigation: a new feature in our tool, called topology mode -the capacity of matching the performance data to a three-dimensional, time stepped (framed) representation of the cluster's architecture.</ns0:p><ns0:p>There are two approaches to HPC performance analysis. One uses performance profiles which contain congregated data about the parallel execution behavior. Score-P produces them in the Cube4 format, to be visualized with Cube 3 . The other uses event traces collecting individual run-time events with precise timings and properties. Score-P produces them in the OTF2 format, to be visualized with Vampir 4 . The outputs of our tool are somehow a mixture of both: aggregated data, but by time step.</ns0:p><ns0:p>The presented solution is not intended for permanent integration into the source code of the target application. Instead it should be applied on demand only with little extra effort. This is solved in accordance with the typical approaches of parallel performance analysis tools on the one hand and in situ processing toolkits on the other hand. 
As evaluation cases, the Multi-Grid and Block Tri-diagonal NAS Parallel Benchmarks (NPB) <ns0:ref type='bibr' target='#b7'>(Frumkin et al., 1998)</ns0:ref> will be used, together with Rolls-Royce's in-house CFD code, Hydra <ns0:ref type='bibr' target='#b15'>(Lapworth, 2004)</ns0:ref>.</ns0:p><ns0:p>This paper is organized as follows: in section 1 we discuss the efforts made so far at the literature to map performance data to the computing architecture's topology and the limitations of their results.</ns0:p><ns0:p>In section 2 we present the methodology of our approach, which is then evaluated in the test-cases in section 3. Finally, section 4 discusses the overhead associated with using our tool. We then conclude the article with a summary.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>In order to support the developer of parallel codes in his optimization tasks, many software tools have been developed. For an extensive list of them, including information about their:</ns0:p><ns0:p>&#8226; scope, whether single or multiple nodes (i.e. shared or distributed memory);</ns0:p><ns0:p>&#8226; focus, be it performance, debugging, correctness or workflow (productivity);</ns0:p><ns0:p>&#8226; programming models, including MPI, OpenMP, Pthreads, OmpSs, CUDA, OpenCL, OpenACC, UPC, SHMEM and their combinations;</ns0:p><ns0:p>&#8226; languages: C, C++, Fortran or Python;</ns0:p><ns0:p>&#8226; processor architectures: x86, Power, ARM, GPU;</ns0:p><ns0:p>&#8226; license types, platforms supported, contact details, output examples etc.</ns0:p><ns0:p>the reader is referred to the Tools Guide 5 of the Virtual Institute -High Productivity Supercomputing (VI-HPS). Only one of them matches the performance data to the cluster's topology: ParaProf <ns0:ref type='bibr' target='#b6'>(Bell et al., 2003)</ns0:ref>, whose results can be seen in the tool's website 6 . The outputs are indeed three-dimensional, but their graphical quality is low, as one could expect from a tool which tries to recreate the visualization environment from scratch. The same hurdle can be found on the works of <ns0:ref type='bibr' target='#b12'>Isaacs et al. (2012) and</ns0:ref><ns0:ref type='bibr' target='#b17'>Schnorr et al. (2010)</ns0:ref>, which also attempt to create a whole new three-dimensional viewing tool (just for the sake of performance analysis). Finally, <ns0:ref type='bibr' target='#b20'>Theisen et al. (2014)</ns0:ref> combined multiple axes onto two-dimensional views:</ns0:p><ns0:p>the generated visualizations are undeniably rich, but without true three-dimensionality, the multiplicity of two-dimensional planes overlapping each other can quickly become cumbersome and preclude the understanding of the results.</ns0:p><ns0:p>On the other hand, when it comes to display messages exchanged between MPI ranks during the simulation, Vampir is the current state-of-the-art tool on the field, but it is still unable to generate threedimensional views. This impacts e.g. on the capacity to distinguish between messages coming from ranks running within the same compute node from those coming from ranks running in other compute nodes. Also, Vampir is not able to apply a color scale to the communication lines. Finally, it has no knowledge of the simulation's time-step, whereas this is the code execution delimiter the developers of CFD codes are naturally used to deal with. <ns0:ref type='bibr' target='#b10'>Isaacs et al. 
(2014)</ns0:ref> got close to it, by clustering event traces according to the self-developed idea of logical time, 'inferred directly from happened-before relationships'. This indeed represents an improvement when compared with not using any sorting, but it is not yet the time-step loop as known by the programmer of a CFD code. Alternatively, it is possible to isolate the events pertaining to the time step by manually instrumenting the application code and inserting a region called e.g. 'Iteration' (see section 2.1.1 below). <ns0:ref type='bibr' target='#b19'>Sol&#243;rzano et al. (2021) and</ns0:ref><ns0:ref type='bibr' target='#b16'>Miletto et al. (2021)</ns0:ref> have applied such a method. We would then like to simplify this process and make it part of the tool's functioning itself. 7</ns0:p><ns0:p>Still regarding Vampir, it is indeed possible to manually instrument the code and tell Score-P to trace the communication inside the entire time step loop, which is then plotted (after the simulation is finished) into a two-dimensional communication matrix in Vampir. 8 But in this case, the results apply to all time steps considered together (i.e. differences between individual time steps become invisible) and the overhead associated with the measurements increases considerably (as you need to run Score-P in tracing mode in order to produce data to be visualized in Vampir) -i.e. it would be hard to generate such 2D matrices for an industry-grade CFD code, like Hydra.</ns0:p><ns0:p>Finally, with regard to in situ methods, for a comprehensive study of the ones currently available, the reader is referred to the work of <ns0:ref type='bibr' target='#b3'>Bauer et al. (2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>METHODOLOGY</ns0:head><ns0:p>This section presents what is necessary to implement our work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Prerequisites</ns0:head><ns0:p>The objective of this research depends on the combination of two scientifically established methods: performance measurement and in situ processing.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.1'>Performance Measurement</ns0:head><ns0:p>When applied to a source file's compilation, Score-P automatically inserts probes between each code 'region' 9 , which will at run-time measure a) the number of times that region was executed and b) the total time spent in those executions, by each process (MPI rank) within the simulation. Finally, the tool is also equipped with an API, which permits the user to increase its capabilities through plugins <ns0:ref type='bibr' target='#b18'>(Sch&#246;ne et al., 2017)</ns0:ref>. The combined solution proposed by this paper actually takes the form of such a plugin.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.2'>In Situ Processing</ns0:head><ns0:p>In order for Catalyst to interface with a simulation code, an adapter needs to be created, which is responsible for exposing the native data structures (grid and flow properties) to the coprocessor component. Its interaction with the simulation code happens through three function calls (initialize, run and finalize), illustrated in blue in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>.
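To make this structure more concrete, the sketch below shows how such an adapter is typically wired into a time-stepping solver: one call before the time loop, one inside it (once per step), and one after it. This is only an illustration of the pattern just described; the subroutine names (adapter_initialize, adapter_run, adapter_finalize, solve_one_time_step) and the step count are placeholders and do not correspond to the actual Catalyst or plugin API - the real calls are the ones shown in Figure 3.

    program adapter_sketch
      implicit none
      integer :: step
      integer, parameter :: n_steps = 200   ! illustrative value

      call adapter_initialize()             ! hypothetical: set up the coprocessor once
      do step = 1, n_steps
         call solve_one_time_step()         ! stand-in for the simulation's own work
         call adapter_run(step)             ! hypothetical: hand this step's grid/fields to the coprocessor
      end do
      call adapter_finalize()               ! hypothetical: flush pending outputs, release the coprocessor

    contains
      subroutine adapter_initialize()
      end subroutine
      subroutine adapter_run(istep)
        integer, intent(in) :: istep
      end subroutine
      subroutine adapter_finalize()
      end subroutine
      subroutine solve_one_time_step()
      end subroutine
    end program adapter_sketch

The point of the pattern is that the in situ layer is driven by the same time-step loop the CFD programmer already works with, which is also what allows the plugin's outputs to be organized per time-step.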
Once implemented, the adapter allows the generation of post-mortem files (by means of the VTK 10 library) and/or the live visualization of the simulation, both through ParaView 11 .</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Combining both Tools</ns0:head><ns0:p>In our previous works <ns0:ref type='bibr' target='#b0'>(Alves and Kn&#252;pfer, 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alves and Kn&#252;pfer, 2020)</ns0:ref>, a Score-P plugin has been developed, which allows performance measurements for an arbitrary number of manually selected code regions and communication data (i.e. messages exchanged between MPI ranks) to be mapped to the simulation's original geometry, by means of its Catalyst adapter (a feature now called geometry mode).</ns0:p><ns0:p>In this paper, we are extending our software to map those measurements to a three-dimensional representation of the cluster's topology, by means of the plugin's own Catalyst adapter (a new feature named topology mode). The plugin must be turned on at run-time through an environment variable (export SCOREP_SUBSTRATE_PLUGINS=Catalyst), but works independently of Score-P's profiling or tracing modes being actually on or off. Like Catalyst, it needs three function calls (initialize, run and finalize) to be introduced in the source code, illustrated in violet in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. However, if the tool is intended to be used exclusively in topology mode, the blue calls shown in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> are not needed. Finally, a call must be inserted before each function to be pipelined, as illustrated in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> below. This layout ensures that the desired region will be captured when executed at that specific moment and not in others (if the same routine is called multiple times -with distinct inputs -throughout the code, as is common in CFD simulations). The selected functions may even be nested. This is not needed when tracking communications between ranks, as the instrumentation of MPI regions is made independently at run-time (see section 2.1.1 above).</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>EVALUATION</ns0:head><ns0:p>This section presents how our work is going to be evaluated.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Settings</ns0:head><ns0:p>Three test-cases will be used to demonstrate the new functionality of the plugin: two well-known benchmarks and an industry-grade CFD code. All simulations were done on Dresden University's HPC cluster (Taurus), whose nodes are interconnected through Infiniband. Everything was built / tested with release 2018a of the Intel &#174; compilers in association with versions 6.0 of Score-P and 5.7.0 of ParaView.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1'>Benchmarks</ns0:head><ns0:p>The NAS Parallel Benchmarks (NPB) <ns0:ref type='bibr' target='#b7'>(Frumkin et al., 1998</ns0:ref>) 'are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks are derived from computational fluid dynamics (CFD) applications and consist of five kernels and three pseudo-applications'. Here one of each is used: the Multi-Grid (MG) and the Block Tri-diagonal (BT) respectively (version 3.4). 
Both were run in a Class D layout on four entire Sandy Bridge nodes, each with 16 ranks (i.e. pure MPI, no OpenMP), one per core and with the full core memory (1875 MB) available. Their grids consist of a parallelepiped with the same number of points in each Cartesian direction. Finally, both are sort of 'steady-state' cases (i.e. the time-step is equivalent to an iteration-step).</ns0:p><ns0:p>In order for the simulations to last at least 30 minutes, 12 MG was run for 3000 iterations (each comprised of 9 multigrid levels), whereas BT was run for 1000. The plugin generated VTK output files every 100 iterations for MG (i.e. 30 'stage pictures' by the end of the simulation, 50 MB of data in total) and every 50 iterations for BT (20 frames in the end, same amount of data), measuring the solver loop's central routine (mg3P and adi respectively) in each case.</ns0:p><ns0:p>12 Less than that would make the relative (percentage) statistical oscillation of the run time too big for valid comparisons (see section 4 below).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2'>Industrial CFD Code</ns0:head><ns0:p>Hydra is Rolls-Royce's in-house CFD code <ns0:ref type='bibr' target='#b15'>(Lapworth, 2004)</ns0:ref>, based on a preconditioned time marching of the Reynolds-averaged Navier-Stokes (RANS) equations. They are discretized in space using an edge-based, second-order finite volume scheme with an explicit, multistage Runge-Kutta scheme as a steady time marching approach. Steady-state convergence is improved by multigrid and local time-stepping acceleration techniques <ns0:ref type='bibr' target='#b13'>(Khanal et al., 2013)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the test case selected for this paper: it represents a simplified (single cell thickness), 360&#176; testing mesh of two turbine stages in an aircraft engine, discretized through approximately 1 million points. Unsteady RANS calculations have been made with time-accurate, second-order dual time-stepping. Turbulence modelling was based on standard 2-equation closures. Preliminary analyses with Score-P and Cube revealed two code functions to be especially time-consuming: iflux_edge and vflux_edge; they were selected for pipelining.</ns0:p><ns0:p>Here the simulations were done using two entire Haswell nodes, each with 24 ranks (again pure MPI), one per core and with the entire core memory (2583 MB) available. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the domain's partitioning among the processes. The shape of the grid, together with the rotating nature of two of its four blade rings (the rotors), suggests that the communication patterns here will be far more complex than in the benchmarks.</ns0:p><ns0:p>One full engine's shaft rotation was simulated, comprised of 200 time-steps (i.e. one per 1.8&#176;), each internally converged through 40 iteration steps. The plugin was generating post-mortem files every 20th time-step (i.e. every 36&#176;), which led to 10 stage pictures (12 MB of data) by the end of the simulation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Results</ns0:head><ns0:p>The second part of this section presents the results of applying our work to the selected test-cases. 
The benchmarks will be used more to illustrate how the tool works, whereas a true performance optimization task will be executed with the industrial CFD code.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Benchmarks</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows the plugin outputs for an arbitrary time-step in the MG benchmark. The hardware information (i.e. in which core, socket etc. each rank is running) is plotted on constant z planes; the network information (i.e. switches that need to be traversed in order for inter-node communications to be performed), in turn, is shown on the constant x plane.</ns0:p><ns0:p>Score-P's measurements, as well as the rank id number, are shown just below the processing unit (PU) where that rank is running, ordered from left to right (in the x direction) within one node, then from</ns0:p><ns0:p>Here, notice how each compute node allocated to the job becomes a plane in ParaView. They are ordered by their id numbers (see the right-hand side of Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>) and separated by a fixed length (adjustable at run-time through the plugin's input file). Apart from the node id, it is also possible to color the planes by the topology type, i.e. whether the cell refers to a socket, an L3 cache, a processing unit etc., as done on the left side of the figure. Only the resources being used by the job are shown in ParaView, so as to minimize the plugin's overhead and because drawing the entire cluster would not help the user to understand the code's behaviour 13 . This means that, between any pair of planes in Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, there might be other compute nodes (by order of id number) in the cluster infrastructure; but, if that is the case, none of their cores are participating in the current simulation. The inter-node distance in ParaView will be bigger, however, if the user has activated the drawing of network topology information and the compute nodes involved in the simulation happen to be located in different network islands, as shown in Figure <ns0:ref type='figure'>7</ns0:ref>. This is indeed intuitive, as messages exchanged between nodes under different switches will need to travel farther in order to be delivered (when compared to those exchanged between nodes under the same switch).</ns0:p><ns0:p>Taurus uses Slurm 14 , which carefully allocates the MPI ranks by order of compute node id (i.e. the node with the lowest id will receive the first processes, whereas the node with the highest id will receive the last processes). It also attempts to place those ranks as close as possible to one another (from both intra- and inter-node perspectives), so as to minimize their communication latency. But just to illustrate the plugin's potential, Figure <ns0:ref type='figure'>8</ns0:ref> shows the results when forcing the scheduler to use at least a certain number of nodes for the job. Notice how only the sockets (the cyan rectangles in the figure) where there are allocated cores are drawn in the visualization; the same applies to the L3 cache (the blue rectangles). Also, notice how the switches are positioned in a way that looks like a linkage between the machines (the yellow rectangles in the figure) they connect. 
This is intentional (it makes the visualization intuitive).</ns0:p><ns0:p>With regard to messages sent between ranks, in order to facilitate the understanding of the communication behavior, the source / destination data is also encoded in the position of the lines themselves: they start from the bottom of the sending rank and go downwards toward the receiving one. This way, it is easy to tell which end of a line is the sender and which is the receiver.</ns0:p><ns0:note place='foot' n='13'>Companies like Rolls-Royce usually purchase computational resources: they are not willing to buy the compute time of e.g. 16 nodes when they only need 4 for a specific simulation. In this sense, performance degradation due to nearby jobs (sharing the same network switch) is seen as 'part of life'.</ns0:note><ns0:p>In Figure <ns0:ref type='figure'>9</ns0:ref>, notice also how all ranks on both benchmarks talk either to receivers within the same node or to the nodes immediately before / after. The big lines connecting the first and last nodes suggest some sort of periodic boundary condition inside the grid. This can be misleading: lines between cores in the first and last nodes will need to cross the entire visualization space, making it harder to understand. For that reason the plugin's runtime input file has an option to activate a periodic boundary condition tweak, whose outputs are visible in Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref>. It shows the topology when using Haswell nodes, whose sockets</ns0:p><ns0:p>We submitted such results to Rolls-Royce, whose developers fixed this issue. The new communication behavior can be seen in Figure <ns0:ref type='figure' target='#fig_1'>12</ns0:ref>. Notice how the minimum number of messages sent between any pair of processes dropped from 1500 to 170 (see the lower limit of the scale at the upper-right corner of the left picture); analogously, how the minimum amount of data sent rose from 0 to 68 kB (see the lower limit of the scale at the upper-right corner of the right picture). I.e., there are now no more empty messages being sent, and this is visible in the visualization of the communication lines. The plugin has been successfully used in a real-life performance optimization problem whose detection would be difficult with the currently available tools.</ns0:p><ns0:p>We also attempted to reproduce such an analysis via the existing 2D communication matrix display in Vampir. This was done with the original, i.e. unoptimized, version of the code. We traced a single time step due to overhead considerations (higher overhead for full event tracing instead of profiling) and to explicitly isolate a single time step so that it directly corresponds to Figures <ns0:ref type='figure' target='#fig_1'>11 and 12</ns0:ref>. 15 The results can be seen in Figure <ns0:ref type='figure' target='#fig_3'>13</ns0:ref> below: dark blue entries correspond to no messages or, respectively, zero bytes sent between the sender and receiver ranks. With this, it is indeed possible to identify areas with a moderately high number of messages (the green spots on the left picture) but no bytes sent (no corresponding patterns in the right picture).</ns0:p><ns0:p>However, with the existing Vampir visualization and similar visualization schemes it is impossible to see the hardware topology next to the communication behavior. 
In our new visualization scheme (Figures 11 and 12), both can be seen at the same time.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>OVERHEAD</ns0:head><ns0:p>Since we are dealing with performance analysis, it is necessary to investigate the impact of our tool itself on the performance of the instrumented code execution.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Settings</ns0:head><ns0:p>In the following tables, the baseline results refer to the pure simulation code, running as per the settings presented in Sec. 3; the numbers given are the average of 5 runs &#177; 1 relative standard deviation. The + Score-P results refer to when Score-P is added onto it, running with both profiling and tracing modes deactivated (as neither of them is needed for the plugin to work) 16 . Finally, ++ plugin refers to when the plugin is also used: running only in topology mode, with only one feature (regions or communication) at a time 17 , and only on the iterations in which output files would be generated 18 . The percentages shown in these two columns are not the variation of the measurement itself, but its deviation from the average baseline result.</ns0:p><ns0:p>Score-P was always applied with the --nocompiler flag. This option is enough when the plugin is used to show communication between ranks, as no instrumentation (manual or automatic) is needed when solely MPI calls are being tracked. On the other hand, the instrumentation overhead is considerably higher when the target is to measure code regions, as every single function inside the simulation code is a potential candidate for analysis (as opposed to when tracking communications, when only MPI-related calls are intercepted). In this case, it was necessary to add the --user Score-P compile flag and manually instrument the simulation code (i.e. only the desired regions were visible to Score-P). An intervention as illustrated in Figure <ns0:ref type='figure' target='#fig_5'>15</ns0:ref> (caption: Example of a manual, user-defined code instrumentation with Score-P; the optional if clauses ensure measurements are collected only at the desired time-steps) achieves this; an illustrative sketch is also reproduced further below. The if MODULO... clause additionally guarantees measurements are collected only when there would be generation of output files and at time-step 1 -the reason for it is that Catalyst runs even when there are no post-mortem files being saved to disk (as the user may be visualizing the simulation live) and the first time-step is of unique importance, as all data arrays must be defined then (i.e. the (dis)appearance of variables in later time-steps is not allowed) 19 . Finally, when measuring code regions, interception of MPI-related routines was turned off at run-time 20 .</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Results</ns0:head><ns0:p>Tables <ns0:ref type='table' target='#tab_6'>1 and 2</ns0:ref> show the impact of the proposed plugin on the test-cases' performance. The memory section refers to the peak memory consumption per parallel process, reached at some point during the simulation; it means neither that all ranks needed that amount of memory (whether at the same time or not), nor that the memory consumption stayed at that level during the entire simulation. Score-P itself introduced no perceptible overhead; the plugin, in turn, did, and that is because it is equipped with a Catalyst adapter (whose footprint lies mostly in memory consumption <ns0:ref type='bibr' target='#b2'>(Ayachit et al., 2015)</ns0:ref>). 
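The illustrative sketch referred to above follows. It is not the authors' actual Figure 15 code, only a hedged reconstruction of the kind of intervention described in Sec. 4.1, written with Score-P's public user-instrumentation API for Fortran (SCOREP_USER_REGION_BEGIN / SCOREP_USER_REGION_END); the routine do_the_actual_work, the variables istep and output_freq, and the guard values are placeholders.

    #include "scorep/SCOREP_User.inc"

    subroutine measured_iteration(istep, output_freq)
      implicit none
      integer, intent(in) :: istep, output_freq
      SCOREP_USER_REGION_DEFINE( iter_handle )

      ! Collect measurements only at time-step 1 and at the steps where output
      ! files are written, mirroring the "if MODULO ..." guard described above.
      if (istep == 1 .or. modulo(istep, output_freq) == 0) then
         SCOREP_USER_REGION_BEGIN( iter_handle, "Iteration", SCOREP_USER_REGION_TYPE_COMMON )
      end if

      call do_the_actual_work()   ! placeholder for the routine being measured

      if (istep == 1 .or. modulo(istep, output_freq) == 0) then
         SCOREP_USER_REGION_END( iter_handle )
      end if
    end subroutine measured_iteration

    ! The file would be built through the Score-P wrapper with manual
    ! instrumentation enabled, e.g. something along the lines of:
    !   scorep --user --nocompiler <compiler command>
    ! (the exact compiler command depends on the build system).

The two guards correspond to the 'spatial' and 'temporal' narrowing factors mentioned in the footnotes: only the desired regions are instrumented, and only at the desired time-steps.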
Catalyst needs this memory to store the artificial geometry's (the topological representation of the hardware resources being used) coordinates and cell definitions, plus all the data arrays associated with them (number of times a function was executed, number of messages sent between two ranks etc.), for each time-step during the simulation. Hence the added memory footprint is higher.</ns0:p><ns0:p>The run time overhead, in turn, is only critical when measuring the two code regions selected in Hydra: they are called millions of times per time-step, hence their instrumentation is heavy. Otherwise the plugin's or Score-P's footprints lie within the statistical oscillation of the baseline results (for instance, a deviation of -1% is well within a baseline oscillation of &#177; 2% over 5 runs).</ns0:p><ns0:p>19 Hence, there were two narrowing factors for Score-P in the end: the spatial one (i.e. track only the desired functions) and the temporal one (track only at the desired time-steps).</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, we have extended our software to allow mapping performance data to a three-dimensional representation of the cluster's architecture, by means of (combining) the code instrumenter Score-P and the graphics manipulation program ParaView. The tool, which takes the form of a Score-P plugin, introduces the following novel capabilities to the spectrum of code analysis resources:</ns0:p><ns0:p>&#8226; detailed view up to topology component level (i.e. in which core of which socket of which node a specific MPI rank is running);</ns0:p><ns0:p>&#8226; visualization limited to the resources being used by the simulation;</ns0:p><ns0:p>&#8226; native association with the simulation's time-step;</ns0:p><ns0:p>&#8226; individual components of the visualization (like the network switches) are optional to produce and to display (i.e. see only what you want to see);</ns0:p><ns0:p>&#8226; easy distinction between messages coming from ranks within the same compute node and those coming from ranks running in other compute nodes, something not possible in a tool like Vampir;</ns0:p><ns0:p>&#8226; a color scale individually applicable to each element of the visualization, allowing, for example, the communication lines to be colored by amount of bytes sent, receiver id, sender id etc. (something also not possible in Vampir).</ns0:p><ns0:p>All that comes with the graphic quality of today's state-of-the-art visualization program, ParaView: render views are fully manipulable and tens of filters are available to further dig into the data. ParaView is the best option as visualization software because of all the resources already available in it - and the experience accumulated by it - after decades of continuous development. Visualization techniques are not usually the specialty of programmers working on code performance: it is more reasonable to take advantage of the currently available graphics programs than to attempt to equip the performance tools with their own GUIs (from scratch). On the other hand, by working in close contact with Rolls-Royce's engineers, we have noticed how important it is for them to obtain the information they need (in our case, the performance of their code) in a straightforward manner. 
In this sense, using pre-existing visualization tools (like ParaView, which they already use to analyse the flow solution) represents a major benefit to them, as they don't need to learn new software (like Cube or Vampir) for the task, but rather stick with programs they are already used to.</ns0:p><ns0:p>Our tool is based exclusively on open-source dependencies; its source code is freely available 21 , as the raw data of the benchmark results presented in this paper 22 . It works with either automatic or manual code instrumentation and independently of Score-P's profiling or tracing modes. Lastly, its output frequency (when doing post-mortem analyses) is adjustable at run-time (through the plugin input file), like in Catalyst itself.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic of software components for parallel applications</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic of the software components for a combined add-on</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>total</ns0:head><ns0:label /><ns0:figDesc>time spent in those executions, by each process (MPI rank) within the simulation. It is applied by simply prepending the word scorep into the compilation command, e.g.: scorep [Score-P's options] mpicc foo.c. It is possible to suppress regions from the instrumentation (e.g. to keep the associated overhead low), by adding the flag --nocompiler to the command above. In this scenario, Score-P sees only user-defined regions (if any) and MPI-related functions, whose detection can be easily (de)activated at run-time, by means of an environment variable: export SCOREP MPI ENABLE GROUPS=[comma-separated list]. Its default value is set to catch all of them. If left blank, instrumentation of MPI routines will be turned off.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Illustrative example of changes needed in a simulation code due to Catalyst (blue) and then due to the plugin (violet)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustrative example of the call to tell the plugin to show the upcoming function's measurements in ParaView</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Geometry used in the industrial CFD code simulations (left) and its partitioning among processes for parallel execution (right)</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.57,221.74' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Plugin outputs for an arbitrary time-step at the MG benchmark, visualized from the same camera angle, but with different parameters on each side</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>14 https://slurm.schedmd.com/ 8/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Plugin outputs for the MG benchmark. 
The leaf switch information is encoded both on the color (light brown, orange and dark brown) and on the position of the node planes (notice the extra gap when they do not belong to the same switch)</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.57,220.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .Figure 11 .</ns0:head><ns0:label>911</ns0:label><ns0:figDesc>Figure 9. Side-by-side comparison of the communication pattern between the MG (left) and BT (right) benchmarks, at an arbitrary time-step, colored by source rank of messages</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.57,224.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 12 .Figure 13 .Figure 14 .</ns0:head><ns0:label>121314</ns0:label><ns0:figDesc>Figure 12. Visualization of the new communication pattern in Hydra from two different camera angles, at an arbitrary time-step, colored by number of MPI Isend calls (left) and total amount of bytes sent on those calls (right) on that time-step</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.57,214.48' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,141.73,63.78,413.56,224.12' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,141.73,63.78,413.57,214.48' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>3/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Plugin's overhead when measuring code functions on topology mode.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>running time</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>memory (MB)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>++ plugin</ns0:cell><ns0:cell>+ Score-P</ns0:cell><ns0:cell>baseline</ns0:cell><ns0:cell cols='2'>++ plugin + Score-P</ns0:cell><ns0:cell>baseline</ns0:cell></ns0:row><ns0:row><ns0:cell>MG</ns0:cell><ns0:cell cols='4'>31m42s (0%) 31m09s (-1%) 31m37s &#177; 2% 648 (42%)</ns0:cell><ns0:cell>479 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell>BT</ns0:cell><ns0:cell>34m28s (0%)</ns0:cell><ns0:cell cols='3'>34m26s (0%) 34m28s &#177; 1% 648 (42%)</ns0:cell><ns0:cell>478 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Hydra 47m04s (12%)</ns0:cell><ns0:cell cols='3'>43m52s (4%) 42m00s &#177; 0% 382 (22%)</ns0:cell><ns0:cell>323 (3%) 314 &#177; 0%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Plugin's overhead when showing communication on topology mode.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>running time</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>memory (MB)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>++ plugin</ns0:cell><ns0:cell>+ Score-P</ns0:cell><ns0:cell>baseline</ns0:cell><ns0:cell cols='2'>++ plugin + Score-P</ns0:cell><ns0:cell>baseline</ns0:cell></ns0:row><ns0:row><ns0:cell>MG</ns0:cell><ns0:cell cols='4'>31m34s (0%) 31m09s (-1%) 31m37s &#177; 2% 648 (42%)</ns0:cell><ns0:cell>479 (5%) 455 &#177; 0%</ns0:cell></ns0:row><ns0:row><ns0:cell>BT</ns0:cell><ns0:cell cols='4'>34m24s (0%) 34m08s (-1%) 34m28s &#177; 1% 648 (42%)</ns0:cell><ns0:cell>477 (5%) 455 &#177; 
0%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Hydra 42m53s (2%)</ns0:cell><ns0:cell cols='3'>43m50s (4%) 42m00s &#177; 0% 397 (26%)</ns0:cell><ns0:cell>316 (1%) 314 &#177; 0%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='2'>https://www.paraview.org/in-situ/ 2/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='15'>This could also be manually achieved by selecting a correct time interval in Vampir but there is no straightforward way to isolate the events pertaining to a single simulation time step.11/17PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='16'>If activated, there would be at the end of the simulation, apart from the simulation's output files, those generated by Score-P for visualization in Cube (profiling mode) or Vampir (tracing mode). Their generation can co-exist with the plugin usage, but it is not recommended: the overheads sum up.17 The plugin can perfectly run in all its modes and features at the same time (geometry mode requires the simulation to have a Catalyst adapter; see our previous papers). However, this is not recommended: the overheads sum up.18 Given the simulation was not being visualized live in ParaView, there was no need to let the plugin work in time-steps when no data would be saved to disk.13/17PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='21'>https://gitlab.hrz.tu-chemnitz.de/alves--tu-dresden.de/catalyst-score-p-plugin 22 https://dx.doi.org/10.25532/OPARA-119. We are unable to provide the raw data related to Rolls-Royce's code due to copyright issues.15/17PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58866:2:1:NEW 21 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Reviewer 1 (Julian Kunkel) Basic reporting The article is mostly the same compared to the last iteration. Minor improvements have been made. Regarding related work: Given the new Example 3.2.2, the important related work of '2D Communication Views' should be discussed in the article. 2D communication matrices have been used in Vampir and other tools, e.g. https://www.researchgate.net/profile/Michael-Wagner-45/publication/284440851/figure/fig7/ AS:614094566600711@1523422962550/Vampir-communication-matrix-taken-fromGWT14_W640.jpg https://apps.fz-juelich.de/jsc/linktest/html/images/linktest_report_matrix.png As it stands the hypothesis that the view is effective for performance analysis is not sufficiently supported. Some replies to modifications made: I agree with most of the changes made. > Regarding the 'x = 0 plane', the reader doesn't need to worry too much about finding I believe it then is not wise to mention this in Line 216. I spent quite some time trying to find the network information in the figure. Maybe the horizontal bar at the end, what can we see there in the first place? This description should be removed as in this figure we cannot see anything useful. It is discussed in Figure 7 as well where it makes sense. Not using syntax highlighting in the code presentations in Figure 13 is not acceptable for me. In Figure 3 you could at least use bold font to increase readability a bit. The coloring makes sense but maybe the ifdefs then could be removed to spare space and well it doesn't add new information... Experimental design The experimental design still has its major weakness to answer the question: is the 3D visualization effective for analysis? The example in Section 3.2.2 Industrial CFD Code is not convincing to show that the 3D visualization is superior to a 2D visualization. I would claim that the issue pointed out here (too many comms and empty ones during the iterations) can be visualized using 2D communication views and there can even be better seen. It has been claimed by authors of the same team those visualizations are effective to identify those issues. You must include the 2D visualization as related work and show that your example is more effective -- differentiate your work from this analysis method that had been used effectively in the past. I'm convinced you know about these methods and wonder why they haven't been included and discussed here. If it isn't a good example, then you need to find a better example or do a quantitative study. Validity of the findings The paper is a minor increment from the authors' last paper version. Generally, I believe the 3D visualization has its merits, however, the paper is not yet convincing. Additional comments Given the unsatisfactory reply of the authors to the key issue, the minimal changes made in the article for resubmission, and the issues raised by reviewers regarding the contribution, now, I have more reservations about the article than before. Is there no reasonable example that shows the benefit of 3D visualization? If it is just a matter of time to create such an example, then bring it on particularly as the last article of you included quite some aspects of this paper. The novelty of this article here would be to me to demonstrate the merits of 3D visualization. At the moment, the contribution remains too weak for me to justify publication in the journal - it would be OK for an applied workshop. Please continue the interesting research and demonstrate to the readers the merit. 
======================================================================= REPLY: Dear Reviewer, Thank you very much once again for your comments. The allusion to the 'x = 0 plane' has been removed, as requested. In Figure 6, we illustrate how both the switch name and id number can be retrieved by our tool and shown on ParaView (see the scales at the upper left corner on both sides of the picture). Code syntax has been added to Figure 13; please accept our apologies for the delay. With respect to Figure 3, we need the #ifdefs because they illustrate the insertions needed in a simulation code due to Catalyst and then due to our plugin (what is the whole purpose of the figure itself). A discussion about 2D communication matrices has been added to the Related Work, as requested, in which we talk about their limitations. The novelty of our approach, however, resides not only on the three dimensionality, but also on the native association with the simulation's time step (we have made it clearer in line 70), allowing e.g. for frame playing in ParaView and the comparison of the performance (not only communication, but also function execution frequency and duration) between different time steps. Such feature introduces the capacity to analyze the performance of the application from a cyclical point of view, as the solver progresses through the time steps (there is a new analysis about this on section 3.2.2). On the other hand, there is also a implementation/technical aspect to be considered: we work in close contact with Rolls-Royce's engineers and we have noticed how important it is for them to obtain the information they need (in our case, the performance of their code) in a straightforward manner. In this sense, using preexisting visualization tools (like ParaView, which they already use to analyze the flow solution) represents a major benefit to them, as they don't need to learn new software (like Cube or Vampir) for the task, but rather stick with programs they are already used to (like ParaView). We have added a note about this in the conclusion of the paper. ======================================================================= Reviewer 2 (Lucas Schnorr) Basic reporting After reading the diff file sent by authors, I have found no significant typos in the text. The issue in the text that I have previously identified has been fixed. Authors have improved the discussion and the differences against other works. Raw data, whenever possible (copyright issues), has been made available through the DOI 10.25532/OPARA-119. The issue of lines 97 and 98 has been fixed by citing ParaProf directly. Experimental design The experimental design (factors, response variables, replications) is very synthetic in the sense that only a few factors and response variables are involved. That being said, such simplistic experimental design seems sufficient to generate the study cases to evaluate the approach, so it's okay. Validity of the findings My first major concern (summary of my previous review) was 'discussion of the differences against related work (other attempts that depict very similar views of this contribution's topology view)'. Authors very briefly compare themselves against RW in Section 1 ('Related Work') arguing that the main drawback of existing solutions are that they have developed a 3D viewing tool on its own instead of using an existing generic visualization tool as authors of the submitted manuscript do. So, the argument is purely implementation/technical and not scientific. 
I rather preferred a discussion about how the 3D views of the current work differ from the 3D views that have been implemented in those previous works, from the perspective of a performance analyst. Just to give you an idea, a very simplistic 3D topology/hardware view has already been proposed in the Triva tool. What is the difference between those views and yours? The V1 version has shown no more elaborate discussion about this specific point. So my first concern remains. Authors have improved the results section to report the back and forth of the performance analysis/optimization with the real application. So I feel the result is more consistent now. Consequently, my second concern about usefulness is satisfied. In addition, raw data has been made available (DOI). It includes PDF files with the SLURM output (containing the measurements for the evaluation tables) and VTU files for the visualization. Additional comments I understand that many comments are a matter of style; I leave it to the editor to decide whether they are important or not for this journal. ======================================================================= REPLY: Dear Reviewer, Thank you very much once again for your comments. From a scientific point of view, the novelty of our approach resides not only in the three-dimensionality, but also in the native association of the generated views with the simulation's time step (we have made it clearer in line 70), allowing e.g. for frame playing in ParaView and the comparison of the performance (not only communication, but also function execution frequency and duration) between different time steps. Such a feature introduces the capacity to analyze the performance of the application from a cyclical point of view, as the solver progresses through the time steps (there is a new analysis about this in Section 3.2.2). On the other hand, we think that the implementation/technical aspect is very important on its own: we work in close contact with Rolls-Royce's engineers and we have noticed how important it is for them to obtain the information they need (in our case, the performance of their code) in a straightforward manner. In this sense, using pre-existing visualization tools (like ParaView, which they already use to analyze the flow solution) represents a major benefit to them, as they don't need to learn new software (like Cube, Vampir or Triva) for the task, but rather stick with programs they are already used to (like ParaView). We have added a note about this in the conclusion of the paper. ======================================================================= Reviewer 3 (Anonymous) Basic reporting The paper is easier to read after the revision because most of the explicit criticism regarding the writing has been addressed. One of the issues I mentioned that has not been addressed is the following: >>I found a *central* part (lines 65 to 79) of the introduction not to be well well understandable. What does the 'flipping' mean in detail? What does 'performance ... inside in situ' mean in contrast to 'in situ inside performance'?<< I did reread this part and still have problems understanding the actual meaning. So I believe other readers will have the same problem. The authors should try to reformulate this or provide additional information/description/explanation on its meaning and impact. Further, changing 'for decades' to 'a long time' does not eliminate the need for reference(s) to support this claim. 
Regarding the color bar labels: Their format *can* be changed in ParaView: see 6.3 and 10.2.4 here: https://www.paraview.org/paraview-downloads/download.php? submit=Download&version=v5.8&type=data&os=Sources&downloadFile=ParaViewGuide5.8.1.pdf Also the placement of the color bar can be influenced. Experimental design In this revision, the experiments have been extended and do show the usefulness of the method now. Validity of the findings No additional comments. Additional comments The paper has been significantly improved in this revision. This brings it close to be ready for publication. I recommend to accept the paper after a *minor* revision addressing my new comments. ======================================================================= REPLY: Dear Reviewer, Thank you very much once again for your comments. The confusing excerpt between lines 65 to 79 ('flipping the positions of the performance and the in situ add-ons', 'performance inside in situ', 'in situ inside performance' etc.) has been removed. Please accept our apologies for the delay. A citation has been added to support the claim that in situ tools have been under continuous development for a long time. We are aware that the color bars can be fully customized in ParaView. For our paper, we have decided to just stick with the default configuration (blue/red scheme, placement at corners), as it is the one with which ParaView-generated figures are usually shown and in view that every scientist has their own personal style on the matter. "
Here is a paper. Please give your review comments after reading it.
258
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With continuously rising trends in applications of information and communication technologies in diverse sectors of life, the networks are challenged to meet the stringent performance requirements. Increasing the bandwidth is one of the most common solutions to ensure that suitable resources are available to meet performance objectives such as sustained high data rates, minimal delays, and restricted delay variations. Guaranteed throughput, minimal latency, and the lowest probability of loss of the packets can ensure the quality of services over the networks. However, the traffic volumes that networks need to handle are not fixed and it changes with time, origin, and other factors. The traffic distributions generally follow some peak intervals and most of the time traffic remains on moderate levels. The network capacity determined by peak interval demands often requires higher capacities in comparison to the capacities required during the moderate intervals. Such an approach increases the cost of the network infrastructure and results in underutilized networks in moderate intervals. Suitable methods that can increase the network utilization in peak and moderate intervals can help the operators to contain the cost of network intrastate. This article proposes a novel technique to improve the network utilization and quality of services over networks by exploiting the packet scheduling-based erlang distribution of different serving areas. The experimental results show that significant improvement can be achieved in congested networks during the peak intervals with the proposed approach both in terms of utilization and quality of service in comparison to the traditional approaches of packet scheduling in the networks. Extensive experiments have been conducted to study the effects of the erlang-based packet scheduling in terms of packet-loss, end-to-end latency, delay variance and network utilization.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Packet Scheduling refers to allocation of sharing network resources among different types of flows with an objective to maximize the utilization and fair or ensure a targeted share for each flow. It is a process of selecting packets for outgoing transmission based on certain criteria. Incoming packets are added to queues based on certain criteria depending on algorithm used and packets from these queues are selected for transmission on outgoing transmission link. Different packet scheduling schemes exhibit different delay, jitter, packet drop and throughput characteristics for users. Most of the time, the focus is to minimize the degradation of quality of service and maximize link utilization.</ns0:p><ns0:p>The existing approaches use different criteria such as priority, fairness, class of service and packet size for making the packet scheduling decisions. Although some of these parameters can be used to classify the packets efficiently, none of the existing parameters can represent the traffic characteristics in terms of capacity or volume. The capacity or volume of traffic characteristics across time and origin domains is generally used in the network dimensioning and capacity planning, but don't have any explicit control over controlling the scheduling of the packets in network nodes. The volume of traffic in a certain interval and origin is generally described in terms of Erlang units. 
Erlang represents the volume of traffic (TV) generated in a given busy period (BP). It was first described by Agner Krarup Erlang in 1908 <ns0:ref type='bibr' target='#b0'>(Angus, 2000)</ns0:ref>. It has been widely used for network dimensioning and capacity planning in cellular networks and Private Automatic Branch Exchanges (PABX).</ns0:p><ns0:p>By calculating the Erlang traffic for an arbitrary period and origin, the traffic from different origins can be scheduled such that higher Erlang values receive more processing time than lower values. Allocating queue time in proportion to the traffic intensity provides efficient congestion management and better utilization of network resources. The scheduling algorithm proposed in this paper utilizes the above concepts to determine the period-based traffic intensity for each given origin of traffic. The proposed algorithm schedules traffic more efficiently according to the Erlang capacities of traffic from different origins. The Erlang values, when classified by origin, are referred to as the Traffic Intensity (TI) of that origin in a given interval. The TI corresponding to the traffic profile of an origin is dynamically calculated by a capacity planning server. It can also be determined during the network development or planning phases using the network dimensioning processes. The distribution of Erlang profiles to the different network nodes can be achieved in several ways, such as a static configuration of the network nodes or dynamic communication using a TI server. In upstream nodes of the traffic flow, TI values are accumulated to minimize the number of queues in core networks. The TI profiles can also be categorized by type of service in order to realize the commonly known service-based scheduling, so that Erlang values impact the scheduling decisions. The proposed algorithm can be used to implement integrated-services or differentiated-services mechanisms for controlling the quality of service. The focus of this paper is to present a novel approach to address the issues of network performance and utilization in congested conditions. Besides this, the following are the major contributions.</ns0:p><ns0:p>&#8226; Provides a novel approach for scheduling decisions in networks with consideration of traffic planning and network conditions &#8226; Formulates a process to plan the traffic intensity profiles &#8226; Provides an algorithm to make and influence packet scheduling decisions &#8226; Presents comparison results of performance and utilization &#8226; Identifies potential machine learning based solutions for intelligent scheduling decisions</ns0:p></ns0:div> <ns0:div><ns0:head>Related Literature</ns0:head><ns0:p>The existing techniques of reciprocal processing of packets in queuing systems are based on priority, class, or a group of services for wired networks, with the objective of either increasing fair access or providing some controlled access to the network resources. Scheduling schemes for wireless networks additionally focus on channel conditions and energy conservation. Both wired and wireless networks employ common algorithms such as First-In-First-Out (FIFO), Round-Robin (RR), Deficit Round-Robin (DRR), Random Early Detection, General Fairness (GF), Stochastic Fairness (SF), or Service Priority (SP) based scheduling algorithms. <ns0:ref type='bibr' target='#b33'>Sungjoo et al. 
(2016)</ns0:ref> have proposed a packet scheduling scheme that can employ multiple wireless networks on the outgoing side. In their proposal they modified the TCP-friendly Rate Control (TFRC) algorithm so that packet transmissions can be distributed to a group of users over multiple outgoing interfaces, which increases the QoS by reducing packet loss during scheduling or due to network congestion. Their proposal is, in essence, an alternative way of adding network resources or bandwidth to serve the same users; the result is an improved quality of service, but network utilization decreases as more resources are added. In contrast, our proposed work focuses on increasing the quality of service with the given network resources while also optimizing network utilization. Indeed, if the quality of service can be enhanced with the given network resources, network utilization improves as well.</ns0:p><ns0:p>For wireless networks, energy efficiency is an important research area. <ns0:ref type='bibr' target='#b34'>Xu et al. (2016)</ns0:ref> proposed a packet scheduling approach to maximize energy efficiency; it specifically focuses on wireless sensor networks, where energy constraints are critically important due to the shorter battery lives of the sensor nodes. They used an efficient offline packet scheduling approach along with a rolling-window-based online algorithm for scheduling decisions. Our proposed approach can significantly enhance energy efficiency by avoiding unnecessarily long waiting periods of packets in queues and by enhancing the quality of service. It can address energy efficiency problems if the traffic profiles of the sending nodes are intelligently correlated, so that the network nodes have the necessary information to make scheduling decisions.</ns0:p><ns0:p>The binary search algorithm (BSA) is another approach to enhance packet scheduling. In this regard, <ns0:ref type='bibr' target='#b27'>Pavithira &amp; Prabakaran (2016)</ns0:ref> have proposed a packet scheduling approach for Long-Term Evolution (LTE) based wireless networks using a binary search algorithm to increase efficiency. It should be noted that any such approach can degrade the quality of service, because it incurs delays and jitter for the services under both worst-case and normal loads. In our proposed work we rely on rather simple operations of constant complexity in both time and space. This not only increases network utilization and hence efficiency, but also optimizes the quality of service by minimizing delay, jitter, packet loss, buffer overflows, etc.</ns0:p><ns0:p>For video streaming applications, network coding selection is a possible way to improve the throughput and minimize the delay. Since it is difficult to synchronize users or peers, the bandwidth utilization degrades. <ns0:ref type='bibr' target='#b13'>Huang et al. (2016)</ns0:ref> have proposed an approach that coordinates a layer selection algorithm with a distributed packet scheduling algorithm to enhance the quality of video streaming. In our work, we show that both the improvement of quality of service and bandwidth efficiency are achievable without adding such complexities to the networks. Fairness among users is an important property that scheduling algorithms need to ensure; this applies to both best-effort and guaranteed-service models. 
Users should be able to have access to a fair amount of network resources as per their subscriptions. <ns0:ref type='bibr' target='#b5'>Deb et al. (2016)</ns0:ref> have evaluated the relationship between packet delay and the fairness of network resource allocation among traffic flows or users. In our research we show that our proposed scheme can bound end-to-end delays while allocating network resources fairly. In fact, in our proposed scheme, the traffic profiles define the minimum share of network resources that is guaranteed across the network's flows, users, or services. <ns0:ref type='bibr' target='#b7'>Deshmukh &amp; Vaze (2016)</ns0:ref> have proposed another energy efficient packet scheduling algorithm. They use deadlines and assume that a fixed number of packets arrives within equal intervals. This assumption is difficult to satisfy in practice, as packet sizes and arrival intervals are not fixed and different users send packets of different sizes. In comparison, rather than knowing the total number of packets within a fixed interval, our proposed approach uses traffic intensities and intervals that can vary without hard-specified limits. Fair queuing schemes focus on ensuring a reasonable fairness of network resource allocation to users or traffic flows; in practice the traffic characteristics change, but users may send traffic up to their subscribed rates. <ns0:ref type='bibr' target='#b26'>Patel &amp; Dalal (2016)</ns0:ref> have proposed a mechanism to adjust the weights assigned to traffic classes, which improves performance compared with the fixed weights of Weighted Fair Queuing (WFQ) or Class Based Weighted Fair Queuing (CBWFQ). However, as the number of classes increases, the number of queues must also increase, which may lead to scalability problems. In our proposed work, TIPS aggregates queues on the upstream side, which minimizes the required number of queues. With fewer queues, network performance increases and complexity decreases. <ns0:ref type='bibr' target='#b21'>Miao et al. (2015)</ns0:ref> have proposed a preemption-based packet-scheduling technique to reduce packet loss and enhance global fairness for software-defined networks (SDN). <ns0:ref type='bibr' target='#b11'>(He et al., 2016)</ns0:ref> evaluated resource allocation problems for cases where preemption is required and queues are not properly processed. This increases queue lengths, and the delay of preempted flows grows. In such conditions it is necessary either to decrease the throughput of the preempted queue or to accept additional delay for its packets. Furthermore, the approach cannot handle cases where multiple packets fulfill the preemption criteria. This condition is frequent in the upper-layer nodes of networks. Furthermore, this approach also does not consider the time- and origin-based problems of user traffic.</ns0:p><ns0:p>A dynamic core allocation to enhance packet scheduling in multicore network processors is proposed by <ns0:ref type='bibr' target='#b15'>Iqbal et al. (2016)</ns0:ref>. They consider a packet scheduling scheme that incorporates various dimensions of locality to enhance throughput for network processors and minimize out-of-order packets. The specific problem addressed there is out-of-order packet arrival caused by different processors handling traffic of the same flows. 
In our proposed work, the traffic originating from one source node is always handled in the same queue, and this queue can be attached to a specific network processor; hence it can regulate the packet transmissions and minimize out-of-order packet delivery. <ns0:ref type='bibr' target='#b30'>Sharifian et al. (2016)</ns0:ref> have proposed a scheme to handle real-time and non-real-time traffic in the common Radio Bearers (RB) of LTE based wireless networks. Such an approach augments the overall capacity. It is useful in access nodes; however, the upper network layers have to rely on some other mechanism. In our proposed work we employ role-based handling of the traffic, or we can use an in-conjunction mode where the access nodes use a packet scheduling approach suited to the wireless medium or wireless technology to obtain the relevant advantages. <ns0:ref type='bibr' target='#b22'>Mishra &amp; Venkitasubramaniam (2016)</ns0:ref> have presented a quantitative trade-off analysis between anonymity and fair network resource allocation. Accordingly, in anonymous networking, encrypted packets from various sources are re-ordered randomly at routers before processing in the outgoing direction. The authors have shown that this affects the fairness of network resource allocation. They present results for First Come First Serve (FCFS) and fairness-based packet scheduling approaches using an information-theoretic metric for anonymity and a common temporal fairness index that quantifies the degree of out-of-order packet transmission. In our proposed work, packets from the same sources are allocated to the same queues only within the specified busy periods, and the allocation changes as the Traffic Intensity (TI) indexes change; this ensures that packets are not always processed in the same queues, which indirectly provides anonymity. <ns0:ref type='bibr' target='#b35'>Yu et al. (2015)</ns0:ref> have proposed another approach for energy efficient packet scheduling through awareness of the delay experienced by the packets. To the best of our knowledge, there is so far no proposal for traffic intensity-based packet scheduling that employs queue reordering rather than packet reordering. <ns0:ref type='bibr' target='#b10'>Han et al. (2015)</ns0:ref> have proposed stochastic scheduling for optimal estimation of the parameters. In this regard, it should be noted that Stochastic Fair Queuing (SFQ) based packet scheduling approaches suffer from scalability issues, whereas in our proposed work we propose a scalable packet scheduling scheme. <ns0:ref type='bibr' target='#b19'>Lee &amp; Choi (2015)</ns0:ref> have proposed group-based multi-level packet scheduling for 5G wireless networks. This approach considers the number of users and enhanced methods to reduce interference between beams. It is, in fact, an effort to address the issues of access nodes operating in the wireless domain. As with other wireless-specific enhancements, TIPS allows an in-conjunction mode of operation in which the wireless-network-related issues are addressed at the access nodes, while in the upper-layer nodes TIPS provides better quality of service and network utilization.</ns0:p><ns0:p>Another approach proposed for multi-resource environments is a fair and efficient packet scheduling method called Active Time Fairness Queueing (ATFQ) (J. <ns0:ref type='bibr' target='#b38'>Zhang et al., 2015)</ns0:ref>. 
The authors have recommended the approach for middleware devices in data center environments. In fact, it addresses the different processing time requirements on diverse resources. The issue of different processing times on middleware or servers can be addressed with our approach by using a traffic profile suitable for the related servers. <ns0:ref type='bibr' target='#b18'>Kaur &amp; Singh (2015)</ns0:ref> have proposed a Weighted Fair Queue (WFQ) based Send Best Packet Next (SBPN) algorithm to enhance the quality of service, with a specific focus on multimedia applications in mobile ad-hoc networks. With WFQ-based algorithms, whether class-based or service-based, traffic intensities vary in practice while the weights stay fixed, so the network may remain under-utilized while higher-intensity traffic suffers performance degradation. The TIPS algorithm assigns period-based weights according to the traffic intensities of the source nodes and handles these weights by queue reordering and short consecutive resource allocation. We show in the results section that such an approach dramatically increases throughput, reduces jitter and end-to-end delay, and optimizes network utilization. This approach also reduces out-of-order packet arrival, a typical problem at the receiver side, where the received packets must be reordered and the application must wait for missing packets up to a certain limit.</ns0:p><ns0:p>A class based weighted fair queue algorithm with a generic traffic shaping mechanism is proposed by <ns0:ref type='bibr' target='#b36'>Zakariyya &amp; Rahman (2015)</ns0:ref>. Class based weighted fair queuing is unable to respond to origin- and time-based problems, i.e., the time and origin of packets might differ while the packets carry the same classes, yet class based weighted fair queuing will handle them in the same way. <ns0:ref type='bibr' target='#b32'>Striegel &amp; Manimaran (2002)</ns0:ref> have proposed a scheduling technique that relies on a signaling protocol for resource reservation between end nodes; our proposed algorithm, however, does not rely on any signaling for resource reservation; rather, it schedules packets based on planned or forecasted traffic intensities. In contrast to signaling from end hosts for resource reservation, the proposed algorithm uses the concept of Traffic Intensity signaling, which scales well and is efficient.</ns0:p><ns0:p>The differentiated services approach resulted from efforts to overcome the limitations of integrated services; its basic idea was to aggregate traffic with similar characteristics based on bandwidth and delay guarantee requirements. The approach proposed in this paper follows a similar methodology, but it limits class- or group-based decisions to the access nodes, while the upper-level nodes rely on traffic intensity according to time, region, and season.</ns0:p><ns0:p>The Static Earliest Time First (SETF) and Dynamic Earliest Time First (DETF) approaches, proposed by <ns0:ref type='bibr' target='#b40'>Zhang et al. (2001)</ns0:ref>, are aggregation-based packet-scheduling techniques that work on the First-In-First-Out (FIFO) principle and use timestamps to schedule packets. 
These methods also do not consider traffic intensity or time- and origin-based parameters when scheduling packets, and the nodes manage their schedules in a FIFO manner. Erlang-based traffic intensity has been extensively used for capacity planning of voice and Private Automatic Branch Exchange (PABX) systems <ns0:ref type='bibr' target='#b0'>(Angus, 2000)</ns0:ref>, as well as Public Switched Telephone Network (PSTN) switches and gateways. The commonly used Erlang distributions are Erlang B and Erlang C. For Erlang B, it is assumed that traffic packets are discarded when no free resources are available, whereas for Erlang C the traffic packets are put in a waiting queue to be served when resources become available.</ns0:p><ns0:p>In this work, the Erlang type B and type C models are transferred to packet transmission with arbitrary busy periods. Furthermore, the traffic intensities are classified based on their origin or serving area. Generally, the serving area is represented by an access node known as a Point of Presence (POP). The algorithm proposed in this article concentrates on a novel approach for packet-scheduling decisions aiming to improve the quality of service and network utilization under Congested Network Conditions (CNC), while serving the different types of traffic generated by various sources with their traffic intensities. The paper also compares performance, simplicity, and scalability under Moderate Network Conditions (MNC). Erlang is widely used for network capacity planning and dimensioning <ns0:ref type='bibr' target='#b0'>(Angus, 2000;</ns0:ref><ns0:ref type='bibr' target='#b2'>Choi, 2008;</ns0:ref><ns0:ref type='bibr' target='#b3'>Dahmouni et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b4'>Davies et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b9'>Glabowski &amp; Stasiak, 2012;</ns0:ref><ns0:ref type='bibr'>L. Zhang et al., 2000)</ns0:ref> in the telecommunication industry. This work highlights the significance of the Erlang TI for minimizing delay, jitter, and packet loss and maximizing data rates under CNC, by using the TI dynamics in terms of time- and origin-based characteristics to optimize the scheduling. The proposed algorithm offers dynamic, interval-based queue priorities and proportionate scheduling decisions based on the Erlang distribution. It allows scheduling decisions to be made based on the role of the nodes and on the origin- and time-based features of the traffic at each node. The roles of the nodes differ and depend on the network topology and architecture, such as in a hierarchical network that uses layers such as access, core, edge, and aggregation. The role of a node determines how traffic is scheduled on its upstream and downstream interfaces.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>The existing Erlang Traffic Intensity definitions are based on assumptions from the Time Division Multiplexing (TDM) domain. In order to extend the concept to statistically multiplexed Internet Protocol (IP) networks, some terminologies need to be defined specifically. 
This section presents those definitions and describes the various terminologies used in the working of the proposed algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>The Role of a Node</ns0:head><ns0:p>It is common to implement IP networks in hierarchical layers, where each layer provides dedicated functionalities. The nodes in these layers generally use different technologies and protocols to perform certain functionalities. Hierarchical networks generally have an access network layer, an aggregation/distribution network layer, a core network layer, a transport network layer, and an edge network layer. In the telecom industry there are further layers, such as the service network layer. The service layer deals only with signaling traffic and hence has distinct characteristics and features. The service traffic also traverses the transport nodes in order to provide connectivity with the physically distributed service nodes. Scheduling decisions are required on each node; however, the objectives of scheduling vary per node. The roles considered in this work include the access, aggregation/distribution, core, transport, edge, and service nodes. The names of the layers may vary from operator to operator, and other names may be used to represent the same functionality. The proposed algorithm uses the roles defined above to decide whether the Erlang TI needs to be accumulated or distributed.</ns0:p></ns0:div> <ns0:div><ns0:head>Traffic</ns0:head><ns0:p>IP packets may belong to different types of flows, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) flows, and some packets may not belong to any flow definition; nevertheless, all of these packets play a role when scheduling them over the interfaces. So, for the purpose of this work, traffic refers to a packet stream having some common attributes such as source and destination IP/port addresses. The traffic definition considered here incorporates the effect of the packet size. In practice, different units can be used to describe it, but this article uses the conventional notation xBPS, where x stands for K (kilo), M (mega), or G (giga) bits per second.</ns0:p></ns0:div> <ns0:div><ns0:head>Traffic Volume (TV)</ns0:head><ns0:p>In contrast to the traffic definition described in BPS, Traffic Volume (TV) refers to the aggregated count of bytes transmitted in an arbitrary interval and for a specific origin.</ns0:p></ns0:div> <ns0:div><ns0:head>Busy Interval (BI)</ns0:head><ns0:p>Erlang TI is classically defined in terms of busy hours (BH). In this work, in order to address granularity issues, the Busy Interval (BI) is any arbitrary interval over which the traffic intensity has been specified. The BI could be measured in seasons, weeks, days, hours, seconds, or sub-seconds. The values of TI depend on the BI and TV, and consequently on the packet sizes in the given traffic.</ns0:p></ns0:div> <ns0:div><ns0:head>Traffic Intensity</ns0:head><ns0:p>With the extended definitions in the above sections, the traffic intensity represents the ratio between the expected volume of traffic and the actual capacity of the network or link in a given arbitrary busy interval and place in the network. Further details of the traffic intensities and the methods to calculate them are covered in the following sections. From the end user point of view, the traffic intensity is defined as the ratio of the traffic volume generated by the user to 
the user subscription data rate in an arbitrary busy interval. In this article, Erlang C is used to calculate the traffic intensities of end devices. At a given time, there may be several services running, such as IPTV, voice over IP, and high-speed internet. Considering σ to be the maximum data rate for normal operation of a service and h its active time, the traffic intensity t per service type is defined as below.</ns0:p><ns0:formula>t = \sigma h</ns0:formula><ns0:p>Furthermore, with q being the number of queues corresponding to a traffic type in a given POP, packet generation following a Poisson process, and delays following an exponential distribution, the probability d of delay of a packet for an arbitrary queue is given by the following relation.</ns0:p><ns0:formula xml:id='formula_0'>d = \frac{\frac{t^{q}}{q!}\,\frac{q}{q-t}}{\sum_{i=0}^{q-1}\frac{t^{i}}{i!} + \frac{t^{q}}{q!}\,\frac{q}{q-t}}</ns0:formula></ns0:div> <ns0:div><ns0:head>Point of Presence (POP)</ns0:head><ns0:p>The term Point of Presence (POP) represents the area of an access node of the network where subscribers are connected directly or indirectly. POPs can be classified based on origin, network function, or layer; for example, a Residential POP and a Commercial POP represent the service nodes that serve users in residential and commercial areas, respectively. Similarly, access POP, aggregation POP, and core POP represent the functionalities provided by nodes in the access layer, aggregation layer, and core layer, respectively. Other categories follow a similar scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>Traffic Profile (TP)</ns0:head><ns0:p>A traffic profile is a representation of the traffic intensity of a user or node in the form of a histogram containing the BI, packet size, and other related parameters. These profiles are used by the proposed algorithm to determine the arbitrary intervals, the intensities, and the packet sizes. The TPs are aggregated on the upstream nodes and distributed on the downstream nodes. There are different methods to determine the TP, including Forecast Based (FOB), Historical Network Usage Based (HNUB), Application Signaling Based (ASB), and Operator Policy Based (OPB). The details of these methods are covered in the next sections. For the purpose of evaluation and comparison in this work, however, the FOB method has been used to determine the traffic profiles. FOB provides an upper bound on the traffic estimates (it represents what networks are designed to support), whereas ASB or HNUB represent a more realistic state of the network.</ns0:p></ns0:div> <ns0:div><ns0:head>Time and Origin Characteristics (TAOC)</ns0:head><ns0:p>Different users in different types of serving areas have different network usage patterns. Some POPs could be seasonal; others could generate traffic in different hours of the day or week based on geographical statistics. Furthermore, the traffic generated by different POPs has different characteristics in terms of packet size, latency requirements, and data rates. 
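Before the remaining terminology is introduced, the Erlang C definitions above can be made concrete with the minimal Python sketch below. It simply transcribes t = σh and the delay-probability formula; the function names and the example numbers are ours and purely illustrative.

```python
from math import factorial

def traffic_intensity(sigma: float, h: float) -> float:
    """Per-service traffic intensity t = sigma * h (peak rate times active time)."""
    return sigma * h

def erlang_c_delay_probability(t: float, q: int) -> float:
    """Erlang C probability that a packet is delayed, for offered load t (in
    Erlang) and q queues; the system is stable only for t < q."""
    if t >= q:
        raise ValueError("offered load t must be smaller than the number of queues q")
    waiting_term = (t ** q / factorial(q)) * (q / (q - t))
    normalisation = sum(t ** i / factorial(i) for i in range(q)) + waiting_term
    return waiting_term / normalisation

# Example: a service with sigma = 2 rate units active for h = 2 time units,
# offered to q = 6 queues of a POP.
t = traffic_intensity(sigma=2.0, h=2.0)              # t = 4.0 Erlang
print(round(erlang_c_delay_probability(t, q=6), 4))  # ~0.28
```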
All of the usage characteristics described above are referred to as TAOC characteristics and are mainly used in the traffic profiling processes.</ns0:p></ns0:div> <ns0:div><ns0:head>Scheduling Constants</ns0:head><ns0:p>The scheduling constants SK and K depend on the traffic profile information of the POPs. These constants are used to control the scheduling decisions and may also represent policies. The SK constant is linearly additive, serves to administratively manipulate the scheduling behavior, and is configurable per node, whereas the constant K is defined globally for a network or layer domain.</ns0:p></ns0:div> <ns0:div><ns0:head>Congested Network Conditions (CNC) and Moderate Network Conditions (MNC)</ns0:head><ns0:p>Congested Network Conditions (CNC) represent a network condition in which the maximum number of users generate traffic over the network. It is the situation in which a POP has high activity. Generally, not all POPs operate in the CNC state at all times; only some periods exhibit this condition. This fact allows the network provider to define oversubscription policies based on the activity of a POP. On the other hand, Moderate Network Conditions (MNC) refer to a POP condition with moderate or less-than-maximum activity, in which users may generate traffic that is intermittent or sparsely distributed over the timescale. In this state, the free resources available on the network exceed the traffic load. Networks remain under-utilized because operators generally plan the network according to the CNC requirements. The proposed algorithm takes these considerations into account and helps to avoid extravagant investments in the network infrastructure, reducing the cost of network services.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison Parameters</ns0:head><ns0:p>The evaluation and comparison of the effects of the proposed algorithm are based on parameters that include QoS measures, such as sustained throughput, delay, latency, and jitter, as well as network utilization. These parameters are measured with the proposed algorithm and with other existing algorithms over a single network topology consisting of a hybrid simulated and emulated environment. The details of the topology and environment are discussed in the subsequent sections.</ns0:p></ns0:div> <ns0:div><ns0:head>The Proposed Algorithm</ns0:head><ns0:p>This section covers the details and operation of the proposed algorithm, called Traffic Intensity based Packet Scheduling (TIPS). It uses the TI profiles for scheduling decisions while considering various factors such as the node role, the scheduling factors, and TAOC. A few parameters need to be defined for the description of the algorithm: Q, sQ, BI, TI, SBI, SQ, enQ, deQ, QL, SC, and xQ denote a packet scheduling queue, a special queue for control purposes, the busy interval, the traffic intensity, the busy-interval selection procedure, the queue selection procedure, the procedure that adds a packet to a queue, the procedure that removes a packet from a queue, the queue length, the scheduling criteria, and the queue selected by the SQ procedure, respectively (one possible rendering of these elements as data structures is sketched below). The TIPS algorithm exploits a set of traffic profiles on the nodes. 
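One possible way to render this terminology as concrete data structures is sketched below. This is only our illustrative reading, assuming simple FIFO queues and a queue limit derived from the TI and SK values; none of the names come from the authors' implementation.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Dict, List

@dataclass
class TrafficProfile:
    """TI profile of one origin (POP): one TI value per busy interval (BI)."""
    origin: str
    ti_per_bi: Dict[int, float]            # BI index -> traffic intensity

@dataclass
class TIPSNodeState:
    """Per-node TIPS state: one queue Q per profile plus the special queue sQ."""
    role: str                              # access / aggregation / core / ...
    profiles: List[TrafficProfile]
    sk: float = 1.0                        # per-node scheduling constant SK
    k: float = 10.0                        # global scheduling constant K
    queues: Dict[str, Deque] = field(default_factory=dict)   # Q per origin
    special_queue: Deque = field(default_factory=deque)      # sQ (control traffic)

    def queue_limit(self, origin: str, bi: int) -> int:
        """QL for one queue in the current BI (one plausible TI+SK mapping)."""
        profile = next(p for p in self.profiles if p.origin == origin)
        return max(1, int(profile.ti_per_bi.get(bi, 0.0) + self.sk))
```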
On the downstream side, the TIPS algorithm distributes the profiles into multiple profiles based on the traffic information, and on upstream nodes it aggregates the profiles received from the other nodes. In this way, the number of queues is consciously reduced as traffic traverses the upper layers of the network hierarchy, while the number of queues increases on the downstream nodes, i.e., the nodes facing the users. The smaller number of queues in the upper-layer nodes reduces processing delays and time spent, whereas the downstream nodes keep as many queues as required by the user types. The traffic profiles on the network nodes can be configured in several ways, as listed below.</ns0:p><ns0:p>1. Add a traffic profile database to each network node statically, configured by the administrator 2. Deliver a selected set of traffic profiles from a profiling server 3. Allow upstream nodes to learn or receive profiles from downstream nodes</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows the initial setup phase corresponding to the Initialize() procedure of the TIPS algorithm pseudo code (Algorithm I). In this phase, a number of queues are created according to the traffic profiles received or configured on the network node. The queue limit is determined from the traffic intensity values in the given busy interval, and these limits change as the BI changes. Different queues are created, with a size determined by the TI values, for each profile on each node running the TIPS algorithm. On the upper-layer nodes of the network hierarchy, profiles are either received from the downstream nodes, fetched from a local node database, or obtained by any other method as discussed in the earlier section. This step is important as it defines the number of queues and their sizes. In addition to the user traffic queues, a special queue is created to handle the packets generated by control protocols, which carry information such as routing updates, link or channel setup, and resource reservation messages. In the upstream direction the profiles are aggregated, and on downstream nodes the profiles are distributed according to the number of profiles. The special queue is processed immediately if its size is greater than zero. Furthermore, any other queue that is being processed is preempted, and the enQ and deQ operations are performed on the special queue.</ns0:p><ns0:p>The special queue can be considered the highest-priority queue with preemption. All the elements contained in the TIPS setup block are used for the initialization of the TIPS algorithm. After the initialization of TIPS, a node has the sQ, the current BI, and the different queues according to the traffic profiles. In each BI, the enQ and deQ processes function according to the information in the available profiles. The size of the special queue is chosen according to the volume of control traffic existing in the network, and it is ensured that control packets do not encounter any packet loss. The receive-side processing of the TIPS algorithm is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. On the reception of a user packet, TIPS checks whether there are any control packets; if the sQ is empty, then the received packet is processed. User packets received on an interface are processed by enQ according to the QL limits (a simplified sketch of this receive-side path is given below). 
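The receive-side behaviour just described can be summarised in a self-contained Python sketch: control packets always enter sQ, which is served with preemption, while user packets enter the queue of their profile and are dropped once the QL limit is reached. This is our simplified reading of Figures 1 and 2, with invented names, not the authors' code.

```python
from collections import deque

def initialize(profiles, current_bi, sk=1.0):
    """Setup phase: one FIFO queue per traffic profile plus the special queue sQ.
    The queue limit QL is derived from each profile's TI in the current BI."""
    queues = {p["origin"]: deque() for p in profiles}
    limits = {p["origin"]: max(1, int(p["ti"][current_bi] + sk)) for p in profiles}
    special_queue = deque()                      # sQ for control protocol packets
    return queues, limits, special_queue

def enqueue(packet, queues, limits, special_queue):
    """Receive-side processing: control packets go to sQ (served with preemption);
    user packets go to the queue of their origin and are dropped above QL."""
    if packet["is_control"]:
        special_queue.append(packet)             # sQ: highest priority, never dropped
        return "queued-control"
    q = queues[packet["origin"]]
    if len(q) >= limits[packet["origin"]]:
        return "dropped"                         # QL exceeded
    q.append(packet)
    return "queued"

# Example: two POP profiles with per-BI traffic intensities.
profiles = [{"origin": "A1", "ti": {0: 3.0}}, {"origin": "A2", "ti": {0: 1.0}}]
queues, limits, sq = initialize(profiles, current_bi=0)
print(enqueue({"origin": "A1", "is_control": False}, queues, limits, sq))  # queued
```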
If the QL is exceeded, the packets are immediately dropped without any further processing; however, such a situation rarely occurs because, while packets are being enqueued during the BI, the deQ processes are also working to empty the queues according to the TI values. This process is repeated on each incoming interface of the node. The processing of the TIPS algorithm on the outgoing side is shown in Figure <ns0:ref type='figure'>3</ns0:ref>, where the sQ is dequeued with high priority. If the size of the sQ is zero, the queues with larger sizes are processed first, up to the number specified by the algorithm. The current size of any queue is determined by the TI values, so the higher the TI value for a given queue, the greater the number of packets processed repeatedly to equilibrate the queue system. During the deQ processes, the SK and K parameters play an important role: they define the limit on the number of packets that can be processed by deQ regardless of the TI values. Generally, these factors act as an upper limit on the TI values, eliminating the chance that other queues are never processed.</ns0:p><ns0:p>If a Q has been dequeued up to the limit specified by the TI values or up to the upper limit of SK, the next Q is selected by the SQ procedure for deQ processing, and this process is repeated until all queue sizes reach zero. During packet departure, SK_x (where SK_x = SK_ij, with i the Q number and j the BI) ensures that traffic with higher TIs does not interfere with the latency of queues having smaller TI values. It also makes sure that the traffic corresponding to high TI values is allocated sufficient resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods of Traffic Profiling</ns0:head><ns0:p>Different traffic profiling techniques have been mentioned in the earlier sections. This section focuses on the specific details of each method and discusses how they can be used with the proposed algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Forecast or Business Planning Based (FOB)</ns0:head><ns0:p>This is the simplest method of building traffic profiles. It is similar to the procedures used in network capacity planning, where the objective is to identify the services, the number of users, future margins, over-subscriptions, and the Service Level Agreement (SLA) for each POP in the given network. Generally, these forecasts are part of the business plans of an organization. The typical classification is based on the marketing product being offered and the relevant SLAs. The parameters required to convert a forecast into a profile are as follows.</ns0:p><ns0:p>&#8226; Apparently, suggesting the usage for each hour of each day of each week of the year looks like a very tough task. For HWU, the important thing to know is the type of POP. If the POP is residential, the community is a city, and both men and women are employed, then we simply need to know the official working hours. 
In working hours, residential usage will be low, whereas the commercial POP will have higher usage. For DWU, the major task is identifying the working days, weekend days, and holidays. In the case of WWU, we differentiate the weeks based on any specialty attached to them; if nothing important is scheduled, we simply set it to 100%. For MWU, the month-level specialty differences are required; if nothing is different, we simply fill it with 100%.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Historical Network Usage Based (HNB)</ns0:head><ns0:p>This technique is based on monitoring a given network for its utilization. Monitoring is required at the access nodes, and the upper-layer nodes use the fundamental concepts of profile aggregation. The traffic monitoring can be achieved with open-source tools like CACTI (Ian <ns0:ref type='bibr' target='#b14'>Berry et al., 2012)</ns0:ref> and MRTG <ns0:ref type='bibr' target='#b23'>(MRTG, 2012)</ns0:ref>. The things that need to be monitored include the following.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistics collection Phase (SCP)</ns0:head><ns0:p>This phase requires the collection of hour-wise interface utilization, subscriber information, services information, network topology &amp; service prioritization, and bandwidth guarantees.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistics Analysis Phase</ns0:head><ns0:p>This phase involves the categorization of access POPs, defining the usage patterns with respect to time (hour, day, week, month, season, day and night), defining the usage patterns with respect to POP type, and analyzing the packet sizes.</ns0:p></ns0:div> <ns0:div><ns0:head>Traffic Profiling phase (TPP)</ns0:head><ns0:p>This phase calculates the traffic volumes per hour, day, week, month, day/night, season, and special event, determines the average packet size, calculates the traffic intensities, defines the distinct busy periods, and defines the average packet sizes per BI.</ns0:p></ns0:div> <ns0:div><ns0:head>Profile Optimization Phase (TPOP)</ns0:head><ns0:p>In this phase, traffic policies such as priority and bandwidth guarantees per service, time-specific policies, origin-specific policies, and administrative policies are applied to the traffic profiles produced by the previous phase.</ns0:p></ns0:div> <ns0:div><ns0:head>Final phase (FP)</ns0:head><ns0:p>Using the above information, this phase generates the traffic profile files and applies them to the network nodes as part of TIPS.</ns0:p></ns0:div> <ns0:div><ns0:head>Application Signaling Based (ASB)</ns0:head><ns0:p>In this case, an application-based signaling protocol informs about the current status of the client machine by analyzing the following.</ns0:p><ns0:p>&#8226; Currently running applications &#8226; Over-the-Air (OTA) settings &#8226; Operating System (OS) update settings &#8226; System up/down behavior</ns0:p><ns0:p>We look at the enabled networks, the running applications, and their capabilities, such as whether they can initiate a voice session, file transfer, browsing, a video session, or a file download. The scheme also monitors when a session starts, a file download starts, conferencing starts, or email retrieval starts, as well as the retrieval frequency. In this scheme, an agent is installed on each host; it communicates with the access nodes, updating the host's currently expected usage. The agent monitors the user's behavior, OTA settings, and browsing behavior (a toy sketch of how such profiling inputs can be folded into a per-BI profile is given below). 
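To tie the profiling methods together, the toy sketch below shows how FOB-style usage factors (HWU, DWU, WWU, and MWU, as described in subsection A) could be turned into a per-BI traffic volume profile; in the ASB scheme, the hourly factors could instead be refreshed from the agent reports. The multiplicative combination rule and all names are our own assumption, not a prescription from the paper.

```python
def fob_traffic_profile(subscribed_rate_bps: float,
                        hwu: dict, dwu: float, wwu: float, mwu: float,
                        busy_interval_s: float = 3600.0) -> dict:
    """Build a 24-entry traffic profile (expected bits per hourly BI).

    hwu maps hour -> fraction of the subscribed rate expected in that hour;
    dwu, wwu and mwu scale the whole day/week/month (1.0 = nothing special).
    """
    return {hour: subscribed_rate_bps * hwu.get(hour, 0.0) * dwu * wwu * mwu
                  * busy_interval_s
            for hour in range(24)}

# Example: a residential POP with low usage during office hours and an evening peak.
hwu = {h: 0.1 for h in range(9, 18)} | {h: 0.6 for h in range(19, 23)}
profile = fob_traffic_profile(100e6, hwu, dwu=1.0, wwu=1.0, mwu=1.0)
print(round(profile[20] / 8 / 1e9, 1), "GB expected in the 20:00 BI")  # 27.0 GB
```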
We used a scenario where the user hosts communicate with the Customer Premises Equipment (CPE), which is generally an xDSL modem or Wi-Fi router. The router maintains a list of running applications and host capabilities, which is updated by the agent. Generally, in residential cases, the user's hosts may include mobile devices, notebooks, desktops, gaming devices, or IPTV.</ns0:p><ns0:p>The simple case may be just to enumerate the number of devices and their capabilities and apply common basic characteristics to each of them. We take this case in this study, where the CPE just reports the user devices and their capabilities. The ASB traffic profiling scheme is shown in Figure <ns0:ref type='figure'>5</ns0:ref> and its integration with the nodes running TIPS is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. The CPE may collect traffic details, such as the applications, their expected demand, and the network usage pattern, communicate with the TIPS instance running on the access node, and provide information such as the total interfaces, traffic volume, traffic intensity, service classes, and a group per interface. This information is compared with the existing traffic profiles associated with the CPE; if changes are found, the profiles are updated and also communicated to the other network nodes.</ns0:p><ns0:p>The subsequent scheduling decisions follow the updated profiles to reflect the changes in the traffic characteristics.</ns0:p></ns0:div> <ns0:div><ns0:head>Improving the Scheduling Performance</ns0:head><ns0:p>The TIPS algorithm builds on the fact that POPs generally exhibit different characteristics in terms of usage and that users have different applications generating different types of traffic. In addition, different POPs differ in terms of the time when traffic is generated by users and the type of region. Furthermore, network operators are interested in having control over the behavior of unexpected traffic originating from users. The existing, commonly used packet scheduling techniques, such as Deficit Round Robin (DRR), Round Robin (RR), Priority Queueing (PRQ), Fair Queue (FQ), and Class Based Fair Queuing (CBFQ), do not exploit these considerations in their operations. Consequently, network providers opt to use additional techniques to control the scheduling in network nodes. The same holds for scheduling schemes for wireless networks that consider channel conditions and other factors. Due to the absence of the above features in packet scheduling techniques, the following consequences are observed.</ns0:p><ns0:p>&#8226; The techniques are not aware of capacity and network dimensioning constraints &#8226; The commonly used algorithms cannot cope with the diversity in traffic characteristics &#8226; Their objectives are relative and remain static &#8226; Existing techniques cannot cope with the dynamic nature of the traffic.</ns0:p><ns0:p>The absence of these features degrades resource allocation fairness and quality-of-service parameters such as the End-to-End Delay (E2ED) and its variance, incurred by unpredicted traffic being managed in queues with static criteria; for example, priority queuing is unable to distinguish between traffic generated by different POPs having a common priority. Furthermore, it also cannot distinguish traffic with the same priority on different timescales. 
The same applies to round-robin, deficit round-robin, weighted fair queuing, and class based weighted fair queuing, as all traffic classifications, their types, and the fairness mechanisms are fixed over time and for each POP. To address these limitations, the proposed TIPS algorithm is devised to consider all these aspects and to offer improved scheduling for the different quality-of-service parameters such as packet loss, delay, jitter, and sustained throughput.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Management of Unpredicted Delays</ns0:head><ns0:p>In BI_x, the TIPS algorithm chooses the highest TI available in the traffic profiles and enqueues (enQ) the corresponding packets in Q_x. After this, it selects the next highest TI and enqueues its packets in Q_x+1. This process repeats until all traffic profiles are processed and all packets are enqueued to their corresponding queues. Arriving packets are dropped if the size of the corresponding queue exceeds the TI value plus the SK factor. At any given time, the packets in all queues determine the total size of the queues, S_q,bp, which is equal to the sum of all TIs from all POPs.</ns0:p><ns0:formula xml:id='formula_2'>S_{q,bp} = \sum_{i=1}^{n} TI_{i}<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>If S_q,bp surpasses QL, then packets belonging to the corresponding profile whose TI is exceeded are dropped. For the deQ case, the maximum number of packets allowed to remain in a queue is the product of the average packet time and the sum of all TIs. In the experiments, it is seen that the maximum time a packet spends in a queue is five to seven times its packet time. The minimum time spent in any queue is, most of the time, equal to one packet time.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Regularizing the Traffic Bursts</ns0:head><ns0:p>Spikes in network traffic generally increase the delay of packet scheduling decisions. When the decisions are priority-based, class-based, or round-robin-based, such a situation can lead to packet loss. In the proposed algorithm, traffic spikes are regulated by allocating the queue length in proportion to the TI value. Traffic with the same class, priority, or group but a different origin or time receives distributed allocations.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Handling the Large Packets in Traffic</ns0:head><ns0:p>Larger packets lead to an increase in E2ED due to their longer transmission and occupancy times. The proposed algorithm inherently addresses this issue through a careful profiling of the TI in each BI. The TI values are based on the traffic volume, which accounts for the packet sizes, users, sessions, and durations. Because of this, the TIPS algorithm can effectively control the additional delays caused by large packets and normalize the jitter.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Variations in Delay</ns0:head><ns0:p>Generally, end-to-end delay variations occur in the traffic due to differing network conditions and a nonlinear distribution of scheduling decisions. 
The proposed algorithm TIPS handles this problem by successively dequeuing packets according to TI values. Since the TI values are directly dependent on the , hence the proportionate processing time is allocated to different types of the packet leading it to decrease delay variations. Furthermore, the proper planning of traffic profiles helps to eliminate such variations.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Minimizing the Packet Loss</ns0:head><ns0:p>Packet loss in the network is due to various reasons. One of the reasons could be inefficient scheduling decisions where the queues are filled rapidly, and limits are reached. In such cases the network nodes have no option but to drop further packets. The dropping of few packets part of connection-oriented flows leads to impact on the congestion control mechanism adopted by the upper layers of the TCP stack. Furthermore, the large spikes of packets with lesser priority for PQ and lesser weight traffic for CBFQ don't get enough resources and queues quickly overflow that lead to the loss of packets. Due to the burst regularization in the TIPS algorithms, larger packets get resources according to their calculated TI values. This leads to reducing the probability queue overflow that subsequently minimizes the packet loss due to scheduling issues.</ns0:p></ns0:div> <ns0:div><ns0:head>F. Role Based Scheduling</ns0:head><ns0:p>In hierarchical topology of networks, nodes are arranged in layered fashion, such as access, distribution, or aggregation and code nodes, where each layer has specific network functionalities. The proposed algorithm provides the node role-based scheduling mechanism. The scheduling decisions in access nodes differ from other types of nodes. On access nodes, TIPS can be used to provide the class-based or group-based scheduling. These classes are groups that can be handled differently according to the needs of the class of groups. These groups or classes must be matched to the traffic profiles in order to benefit from this feature. On the nodes that implement the aggregation role, the TIPS algorithm makes decisions by classifying the traffic based on TI values received from other nodes. This reduces the queues count and simplifies the processing of queuing operations. With core roles it aggregated and redefined TI values. These aggregated values are then used in the making decisions for the scheduling of packets. This directly results in lowering the queues further and minimizing the processing delays.</ns0:p></ns0:div> <ns0:div><ns0:head>G. Shorter Queue Sizes</ns0:head><ns0:p>The proposed algorithm regularizes the traffic bursts by manipulating TI values. The short burst of the packets causes momentary rise in TI values and consequently more processing time allocated in such cases. This leads to minimal queue growth and maintains the average queue size within the capacity of interface buffers. Generally, shorter queue sizes or the lower rate of queue growth improves the throughput, minimizes the delay, and jitter.</ns0:p></ns0:div> <ns0:div><ns0:head>H. Joint Mode Support</ns0:head><ns0:p>The proposed algorithms support the joint operation with other scheduling techniques. The joint operation is based on the different nodes i.e., some nodes in the network may be using the traditional scheduling algorithms and other nodes use the TIPS Scheduling. 
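</ns0:p><ns0:p>The combination of node roles and joint-mode operation can be pictured as a small per-node configuration that selects the scheduling behaviour of each node. The sketch below is purely illustrative (the node names, field names and helper function are hypothetical); in the evaluation the roles are configured statically on the simulated nodes.</ns0:p><ns0:p>
def scheduler_for(node_id, config):
    """Pick the scheduling behaviour of a node from its configured role (illustrative)."""
    cfg = config[node_id]
    if cfg["scheduler"] != "TIPS":
        # joint mode: this node keeps a traditional scheduler
        return "legacy " + cfg["scheduler"] + " scheduler"
    behaviour = {
        "access": "TIPS with class/group queues matched to the traffic profiles",
        "aggregation": "TIPS classifying on TI values received from other nodes",
        "core": "TIPS with aggregated, redefined TI values and fewer queues",
    }
    return behaviour[cfg["role"]]

# Hypothetical topology fragment: one legacy node operating in joint mode.
nodes = {
    "A1": {"role": "access", "scheduler": "TIPS"},
    "D1": {"role": "aggregation", "scheduler": "TIPS"},
    "C1": {"role": "core", "scheduler": "TIPS"},
    "D2": {"role": "aggregation", "scheduler": "DRR"},
}
print(scheduler_for("D2", nodes))
</ns0:p><ns0:p>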
With this support the objectives like prioritized handling or class-based handling of traffic can be retained.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Setup for Evaluation</ns0:head><ns0:p>For evaluation of the proposed scheduling technique, A setup consisting of network simulation using Network Simulator (NS2, 2012) and containers (Dockers, 2012) was used. The end points were created in the Linux environment where each user and server were created standard docker containers. The traffic generated by the container hosts was connected to the access nodes in the NS2 simulated environment. The host containers generate traffic according to a selected scenario type of POP. The traffic generated by the hosts follow a random pattern following the BIs and given traffic intensities. The real-time traffic passes through core, aggregation and access nodes with topology as shown in Figure <ns0:ref type='figure'>7</ns0:ref> and the respective bandwidth parameters are given Table <ns0:ref type='table'>I</ns0:ref>. The network topology was created in a simulated environment. The simulated network nodes run the TIPS instances with statically configured roles as per topology of the network that follows the three-layer hierarchy commonly used in practical networks. In the analysis of the results of the TIPS algorithm, the color scheme as shown in the figure will be used and traffic traversing over the links will be defined with node colors. For example, the traffic generated on node will follow &#119860; 1 black color, will be blue and similarly for the remaining nodes. For IEEE 802.3 based scenarios as shown &#119860; 2</ns0:p><ns0:p>in topology, following interface capacities are defined both for CNC and MNC models.</ns0:p></ns0:div> <ns0:div><ns0:head>Application for the Traffic Generation</ns0:head><ns0:p>A custom-built model application (MAP) is used to produce traffic as per BIs, TIs and s for different nodes.</ns0:p><ns0:p>MAP is intended to operate in twenty-four BIs and s. Each BI has a distinct TI allocated for each node that is calculated using the FOB approach discussed in the earlier sections. The theoretical volumes of traffic in packets shown in Table <ns0:ref type='table'>II</ns0:ref> (a) for each of the notes for the experimental setup. The packets are set to 1k Bytes, which may vary in practical applications. In such cases the total bytes transferred are divided by the average for a common base for comparison. Above table shows traffic volumes for five periods; however, the simulation is run for several periods. the traffic volume of outgoing links on nodes.</ns0:p><ns0:formula xml:id='formula_3'>&#119879;&#119868; &#119894;&#119895; = {1 + &#119878;&#119870; (&#119879;&#119881; &#119894;&#119895; )/(&#119879;&#119881; &#119879; ) + &#119878;&#119870; 0 &gt; ( &#119879;&#119881; &#119894;&#119895; &#119879;&#119881; &#119879; ) &lt; 1 ( &#119879;&#119881; &#119894;&#119895; &#119879;&#119881; &#119879; ) &gt; 1</ns0:formula><ns0:p>( 2 )</ns0:p><ns0:p>The scheduling fact SK values chosen are 2,1,2,1,2 and 2 respectively for N1 to N6. The values of this parameter as derived using the Equation ( 3 ) where refers to the traffic intensity allocated to i th queue &#119879;&#119868; &#119894;&#119895; in j th BI, and represents the SK values for i th queue in j th BI. 
The i th queue is assigned dynamically to &#119878;&#119870; &#119894;&#119895; i th profile in the corresponding busy interval.</ns0:p><ns0:formula xml:id='formula_4'>&#119878;&#119870; &#119894;&#119895; = {0 &#119879;&#119868; &#119894;&#119895; /&#119879;&#119868; &#119894;&#119895; &#119879;&#119868; &#119894;&#119895; /&#119879;&#119868; &#119894;&#119895; &lt; 1 &#119900;&#119905;&#8462;&#119890;&#119903;&#119908;&#119894;&#119904;&#119890;<ns0:label>( 3 )</ns0:label></ns0:formula><ns0:p>In the experimental setup the K constant is set to 1 for all the nodes of the network.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>As discussed earlier, the experimental setup uses the docker container for creating the communication end points and NS2 for the network functionalities. The MAP application is used to generate traffic corresponding to the traffic profiles. The type of traffic was set to real-time RTP.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Performance in Congested Network Conditions (CNC)</ns0:head><ns0:p>This section discusses the measurements carried out for TIPS performance in congested network conditions. The network setup was created with the link allocation as given in the earlier sections. On CNC, TIPS algorithm performance is significantly better as compared to traditional scheduling techniques.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Measurement Results of Delay</ns0:head><ns0:p>End-to-end Delay is an important parameter in the measure of quality of service of the applications. The end-to-end delay in the network could be due to various reasons where queuing delay is one factor. The TIPS algorithm is intended to minimize the delay incurred due to the waiting time of packets in queues.</ns0:p><ns0:p>Waiting time in the queue for packets can be lowered by considering the rate of packet origination from source nodes and adopting the scheduling behavior accordingly. Large values of traffic intensity require a faster processing of corresponding packets in comparison to smaller values of TI. The applications running on the host machines may have similar end-to-end requirements but different TI values in different BIs may increase the delay in the scheduling process for lesser and higher TIs. The TIPS algorithm manages the queue allocation time by selecting suitable profiles in the queues and manipulating the order of enQ and deQ functions. Figure <ns0:ref type='figure'>8</ns0:ref> shows the measurement results of end-to-end delay exhibited by the TIPS algorithm in the CNC case. It depicts the latency experienced by packets in the network in milliseconds( ) including the delay due to the propagation in the medium. Some nodes like Black, Red, Green, and &#119898;&#119904; blue are at a distance of three hops and other nodes with brown and yellow color are at a distance of two hops. In the experimental setup the propagation delay is set to the value of 10ms for all links.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of Throughput Measurement</ns0:head><ns0:p>The Figure <ns0:ref type='figure'>9</ns0:ref> shows the measurement results of sustained throughput attained with the TIPS algorithm. It represents the packets-per-second(PPS) on y-axis and timescale on x-axis. The significance of these results is the smooth curve of throughput in every busy interval. 
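</ns0:p><ns0:p>The deterministic TI distribution that drives this traffic, and hence the throughput pattern in each busy interval, is derived from the forecast traffic volumes through Equation (2). The sketch below assumes one plausible reading of that piecewise definition, namely the volume share scaled by the SK factor and capped at one, and uses illustrative names; the exact values used in the experiments are those reported in Table III.</ns0:p><ns0:p>
def traffic_intensity(tv_node, tv_total, sk):
    """Sketch of Equation (2): TI of one node in one busy interval.

    tv_node  -- forecast traffic volume TV_ij of the node in the busy interval
    tv_total -- reference total volume TV_T (assumed here to be the sum over nodes)
    sk       -- scheduling factor SK of the node
    Assumes TI = 1 + SK * (TV_ij / TV_T), with the ratio capped at 1.
    """
    ratio = tv_node / tv_total
    return 1 + sk * min(ratio, 1.0)

# Forecast volumes of the six nodes in the first busy interval of Table II (packets)
volumes = [2985, 404, 1992, 746, 2522, 3511]
sk_values = [2, 1, 2, 1, 2, 2]          # SK values chosen for N1 to N6
total = sum(volumes)                    # illustrative choice of TV_T
tis = [traffic_intensity(v, total, k) for v, k in zip(volumes, sk_values)]
</ns0:p><ns0:p>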
The traffic was generated using a Model Source Application (MAP) that generates the Real-Time Protocol(RTP) traffic following the deterministic TI distribution. The comparison of the FOB traffic volumes and the actual traffic volumes successfully delivered is shown in Figure <ns0:ref type='figure'>10</ns0:ref>. FOB traffic volumes are calculated from hypothetical forecasts of services and their expected usage pattern. It represents the sum of packets on the y-axis and the respective source node on the x-axis. The overhead of the MAP application is about 3.6 percent that includes the packets sent/received for the keep-alive and acknowledgements during the lifetime of the session. The figure shows that the TV received is exactly 3.6 percent lower than the FOB traffic volumes and the TV Delivery ratio is 96.43 percent as shown in the figure.</ns0:p></ns0:div> <ns0:div><ns0:head>Delay Variance or Jitter Comparison</ns0:head><ns0:p>The Delay Variance(DV) is also a significant factor that can degrade the QoS of service for applications running on communication endpoints. It is a significant parameter for the jitter buffer and synchronization of multimedia endpoints. As discussed in ITU-T-G.8261.1/Y.1361.1, (2012); RFC-3339, (2012), the jitter is defined as the variance in end-to-end delay experienced by a given node between two successive packets. For the experimental evaluation, the Equation ( <ns0:ref type='formula' target='#formula_5'>4</ns0:ref>) is used to derive the jitter in the received traffic where &#119863; &#119894; and is delayed between two successive packets i and j.</ns0:p><ns0:formula xml:id='formula_5'>&#119863; &#119895; &#119869; &#119894;&#119895; = &#119860;&#119887;&#119904;(&#119863; &#119895; -&#119863; &#119894; )<ns0:label>( 4 )</ns0:label></ns0:formula><ns0:p>For DV or jitter, the average, minimum and maximum values are derived using the Equations( <ns0:ref type='formula' target='#formula_6'>5</ns0:ref>) to ( 8 ). For these equations the reordering of the packets has been ignored.</ns0:p><ns0:formula xml:id='formula_6'>&#119869; = &#119886;&#119887;&#119904;(&#119863; &#119895; -&#119863; &#119894; ) &#119895; -1 &#119894; &#8800; &#119895;<ns0:label>( 5 )</ns0:label></ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_7'>&#119869; &#119886;&#119907;&#119892; = &#119901;&#119888; &#8721; &#119899; = 1 &#119886;&#119887;&#119904;(&#119863; &#119895; -&#119863; &#119894; ) (&#119895; -1) * &#119901;&#119888; &#119894; &#8800; &#119895; ( 6 ) &#8970; &#119869; &#8971; = &#119886;&#119887;&#119904;(&#119863; &#119895; -&#119863; &#119894; ) (&#119895; -1) &#119894; &#8800; &#119895; ( 7 ) &#8968; &#119869; &#8969; = &#119886;&#119887;&#119904;(&#119863; &#119895; -&#119863; &#119894; ) (&#119895; -1) &#119894; &#8800; &#119895;<ns0:label>( 8 )</ns0:label></ns0:formula><ns0:p>In the above set of equations, pc refers to the count of packets, Abs refers to the absolute value delay experienced and i and j are the sequence number of packets. The N1 is located at a distance three hops from the destination node. The results show that there is steady DV in every BI of the traffic profile. Some BIs have significantly low jitter in the range of 0.1ms to 0.3ms. The other nodes show steady DV up to 0.5ms. As with N1 results it shows both the positive and negative values of jitter. 
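</ns0:p><ns0:p>These per-node jitter curves and their aggregate values can be reproduced from a recorded series of end-to-end delays with a few lines of code. The sketch below follows the usual definitions behind Equations (4) to (8), ignores packet reordering as stated above, and uses illustrative names and data.</ns0:p><ns0:p>
def jitter_series(delays):
    """Per-packet jitter J = |D_j - D_i| between successive packets (Equation (4))."""
    return [abs(b - a) for a, b in zip(delays, delays[1:])]

def jitter_stats(delays):
    """Average, minimum and maximum jitter over a delay series (Equations (5) to (8))."""
    j = jitter_series(delays)
    return sum(j) / len(j), min(j), max(j)

# Illustrative end-to-end delays in milliseconds for successive packets of one node
delays_ms = [10.2, 10.4, 10.3, 10.9, 10.5]
avg_j, min_j, max_j = jitter_stats(delays_ms)

# The per-packet curves in Figure 11 plot the signed difference rather than its absolute value
signed = [b - a for a, b in zip(delays_ms, delays_ms[1:])]
</ns0:p><ns0:p>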
The negative values show that the delay was smaller than that of the previously arrived packet. N2 is located at a distance of 4 hops from the sink node. It is observed that N2 also shows the same pattern as N1; however, each BI has dissimilar values compared to N1. For N3, both positive and negative values of variance are shown, where the negative values represent packets with a shorter delay compared to the last packet received. N4 is located at a distance of 3 hops from the sink node. It has a maximum variance of up to 2 ms.</ns0:p><ns0:p>The maximum value for N4 occurs only for a few packets, and its average jitter follows a pattern similar to the other nodes. N5 is located at a distance of 2 hops from the sink node. It can be observed that N5 shows quite steady jitter during the experimental transmission. N6 is also located at a distance of 2 hops from the sink node and follows a similar pattern to N5. In this section, the term steady jitter has been used. Steady jitter is useful in many ways; in particular, it eliminates the presence of higher-order variances such as 2nd-order jitter.</ns0:p></ns0:div> <ns0:div><ns0:head>Packet Drop Characteristics</ns0:head><ns0:p>Packet loss is likewise a significant parameter that depicts the performance of networks. The greater the packet loss, the more re-transmissions occur on the network. Various methods exist to react to retransmission requirements efficiently instead of retransmitting the entire portion of information, such as selective-repeat and TCP-Reno. This article focuses on limiting the packet loss caused by buffer or queue overflow in the network through proficient scheduling decisions. Packets are not allowed to wait in queues up to their maximum limits irrespective of class, weight, or priority, except where the network allocation limits are reached; moreover, the scheme guarantees appropriate control over which packet should be dropped if the system is operating in the CNC state. In this experimental evaluation, the MAP application produces traffic in such a manner that it creates CNC conditions in the network, under which traditional scheduling techniques begin to drop packets. They not only drop packets, but their queuing time also overshoots to high values. Using the TIPS algorithm, the CNC conditions are handled with a suitable intensity-aware method to guarantee that packets are not dropped unless traffic limits are reached or resources are exhausted on the interface. Figure <ns0:ref type='figure'>12</ns0:ref> shows the packet drop results for all nodes. It shows that there was no packet loss at any point in the experimental network in CNC. Other traditional scheduling approaches cause irregular packet loss for the given traffic profiles because the queue limits are surpassed. Nonetheless, since TIPS processes the queues according to TI values, the queue sizes remain within limits.</ns0:p></ns0:div> <ns0:div><ns0:head>B. TIPS in Moderate Operating Network Conditions (MNC) Measurement Results of E2ED</ns0:head><ns0:p>E2ED is the difference between the time of packet creation at a source node and the reception time at a sink node. This one-sided delay is called the one-way delay, in contrast to the RTT, which covers both the upstream and downstream directions. E2ED also includes the time consumed in packet transmission at source nodes, propagation, other processing at nodes, and the waiting time in queues.
The Equation( <ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) shows the calculation method for the end-to-end delay where represents the delay and represents the time</ns0:p><ns0:formula xml:id='formula_8'>&#119863; &#119894; &#119879; &#119878; &#119894; +</ns0:formula><ns0:p>when a packet is created and represents the time when it is received at destination.</ns0:p><ns0:formula xml:id='formula_9'>&#119879; &#119877; &#119894;&#119903; &#119863; &#119894; = &#119879; &#119878; &#119894; + + &#119879; &#119877; &#119894;&#119903;<ns0:label>( 9 )</ns0:label></ns0:formula><ns0:p>Equation( <ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) is ith packet sequence, represents the packet addition to the queue at the sending side &#119894; + , &#119903; and the packet reception at the receiver side respectively. In the experimental setup, the propagation delay is set to a constant value of 10 . In the comparison section, it will be shown that in MNC, E2ED generally &#119898;&#119904; remains near to propagation delay value. However, if more traffic arrives, and the network gets into the CNC state, the E2ED due to queuing processes becomes significant and values increase beyond the propagation delay of single hop. Figure <ns0:ref type='figure'>13</ns0:ref> represents the E2ED of traffic originating in access nodes in MNC conditions. The different colors in the figure represent traffic originated in the source nodes.</ns0:p></ns0:div> <ns0:div><ns0:head>Throughput Results</ns0:head><ns0:p>Figure <ns0:ref type='figure'>14</ns0:ref> illustrates the throughput of traffic originating in access nodes in MNC. It represents the PPS (Packet Per Second) on y-axis and timescale on x-axis. The colors refer to the sustained throughput of different nodes in different BIs. The Comparison of Figure <ns0:ref type='figure'>9</ns0:ref> and Figure <ns0:ref type='figure'>14</ns0:ref> shows that the proposed TIPS algorithms exhibited similar results, both in CNC and MNC. The throughput in MNC shows that required data rates are achieved. Other scheduling algorithms that are compared with TIPS algorithms also offer similar results in MNC. But with CNC ,data rates were significantly degraded with all other scheduling approaches.</ns0:p></ns0:div> <ns0:div><ns0:head>End-to-End Jitter</ns0:head><ns0:p>As with the case of CNS, the jitter in MNC also follows the finest limits and average values remain less than 1ms with spikes of . show that packet delay is less than the last sequence of the packet. It can be observed that it has a single hit of jitter surpassing 1.5 milliseconds while normal jitter remains under 1ms limit. It can be observed that N2 has two spikes of jitter more than 1.5ms and the average value remains less than 1ms limit. The N3 experiences multiple spikes of jitter with more than 1.5ms limit and the average jitter remains at 1ms. N4 and has a single spike of jitter with maximum value more around and the average jitter remained 1.5&#119898;&#119904; below 1ms. For N5 and N6 the jitter remains below . These two nodes are two hops away, whereas 0.6&#119898;&#119904; the first four nodes were at a distance of 3 hops from the destination. In a general network state, it can be observed that delay variance remains around for each hop. 0.3&#119898;&#119904;</ns0:p></ns0:div> <ns0:div><ns0:head>Packet Drop Behavior</ns0:head><ns0:p>Similar to the CNC results there was no packet loss observed for all nodes. 
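</ns0:p><ns0:p>The end-to-end delay of Equation (9) and the delivery ratio used in the following comparison can both be computed from per-packet send and receive records. In this sketch the delay is taken as the difference between the receive and send timestamps; the data and field names are illustrative.</ns0:p><ns0:p>
def end_to_end_delays(sent, received):
    """Equation (9): delay of every delivered packet (receive time minus send time).

    sent and received map a packet id to a timestamp in milliseconds.
    """
    return {pid: received[pid] - sent[pid] for pid in received if pid in sent}

def delivery_ratio(sent, received):
    """Equation (10): percentage of sent packets that reached the sink node."""
    delivered = sum(1 for pid in sent if pid in received)
    return 100.0 * delivered / len(sent)

# Illustrative records for three packets, one of which is lost
sent = {1: 0.0, 2: 1.0, 3: 2.0}
received = {1: 31.2, 3: 33.0}
delays = end_to_end_delays(sent, received)      # {1: 31.2, 3: 31.0}
ratio = delivery_ratio(sent, received)          # about 66.7 for this toy example
</ns0:p><ns0:p>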
However, the following sections show that in CNC the TIPS algorithm was able to achieve higher throughput and hence a higher traffic volume delivery.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head></ns0:div> <ns0:div><ns0:head>Comparison with Other Algorithms</ns0:head><ns0:p>In order to compare the performance of the TIPS algorithm with others, DRR, RR, FQ, RED, and SFQ were also tested on the identical network and traffic profiles used for the TIPS algorithm evaluation.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Packet Drop Behavior</ns0:head><ns0:p>Different existing scheduling algorithms such as DRR, RR, FQ, RED, and SFQ were simulated on the experimental network. The measurement results were captured for both CNC and MNC. It can be observed from the results that the TIPS algorithm provides significantly improved performance in comparison to the traditional scheduling schemes. In the CNC state, there is a clear distinction between the results of these approaches and those of TIPS.</ns0:p><ns0:p>Table <ns0:ref type='table'>IV</ns0:ref> gives a synopsis of the packet drop rate in CNC. The performance of all the other approaches degraded significantly, whereas TIPS produced the same outcomes as in the MNC case. Packet loss greater than 1% on a network significantly degrades the performance of real-time services and proves to be fatal for critical and interactive services that require responses to be received within stringent limits. For other applications, greater packet losses decrease network utilization and efficiency due to frequent retransmissions. The frequent retransmissions lead to further aggravation of congestion in the network.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>16</ns0:ref> illustrates the comparison of the successful packet delivery rate to the destination. The red line refers to the successful delivery of packets. It shows a comparison of the successful delivery of generated packets at the source and the packet drop with the different packet scheduling algorithms. SFQ has the lowest successful delivery ratio of packets due to higher packet losses. On the other hand, the proposed TIPS algorithm exhibits the maximum successful delivery of generated packets. These results show that the TIPS algorithm provides high efficiency in CNC, increases network utilization, minimizes congestion and optimizes overall performance.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Comparison of E2ED in CNC</ns0:head><ns0:p>This section compares the E2ED improvements of the TIPS algorithm with the other traditional scheduling techniques. The method used to calculate the E2ED has been discussed in previous sections. The E2ED comparison covers three aspects, namely the average, minimum and maximum delay experienced by the packets from source to destination over the different nodes in CNC conditions. The comparison of these aspects is given in Figure <ns0:ref type='figure'>17</ns0:ref>; it shows that the performance of all these schemes degrades in the CNC state in comparison to the TIPS algorithm, which handles the E2ED efficiently.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Comparison of Throughput in CNC</ns0:head><ns0:p>The comparison of the packet throughput attained with all the algorithms is presented here. The throughput is calculated as the fraction of packets sent by the source that are received by the respective sink nodes.
</ns0:p><ns0:formula xml:id='formula_10'>&#8721; &#119894; = 1 &#119877; &#119894;&#119895; &#119899; &#8721; &#119894; = 1 &#119878; &#119894;&#119895; ) * 100 ( 10 )</ns0:formula><ns0:p>The Table <ns0:ref type='table'>V</ns0:ref> shows that the TIPS algorithm is generally better to give data delivery. This fact 100% depends on packets sent by the source nodes and packets received by the sink node. Since there was no packet loss as we saw in previous sections, the TIPS algorithm gave significantly better throughput while delivering the data in CNC conditions. These outcomes have a direct relationship to packet loss in the network. The worst instance of packet loss was with the FQ approach that has the lowest throughput of effective delivery rate. Figure <ns0:ref type='figure'>18</ns0:ref> illustrates the analysis of the data delivery with different algorithms in comparison to the FOBPB and Figure <ns0:ref type='figure'>19</ns0:ref> shows throughput with different algorithms normalized to TIPS. Furthermore, the comparison of the actual delivered amount of data is shown in Figure <ns0:ref type='figure'>20</ns0:ref>. It is evident from these results that the TIPS algorithm provides the best data delivery rate in comparison to all other scheduling techniques and the max target data volume determined by the FOB process. It provided successful data delivery ratio and if adding MAP overhead it achieved approx. data 96.5% 3.6% 100% delivery ratio.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Comparison of Delay Variance</ns0:head><ns0:p>The comparison of delay variance seen by packets from different nodes during the experiment is shown in Table <ns0:ref type='table'>VI</ns0:ref>. The maximum variance faced by packets with the TIPS algorithm is with a maximum 2.09&#119898;&#119904; distance of 3 hops. The average value of variance is and minimum variance is observed around the 0.35&#119898;&#119904; value of . With the TIPS algorithm the maximum jitter is minimized by thirty to forty-five percent 0.11&#119898;&#119904; in comparison to other scheduling algorithms. It shows significant improvement to the data delivery achieved with the TIPS algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Other Algorithms</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There are some other scheduling schemes such as class-based queuing(CBQ), Class based Weighted Fair Queuing(CBWFQ) and Priority Queuing(PQ) that provide some additional approaches to assign the weights or priority to the different packets based on some user criteria. These algorithms work fine for prioritizing traffic classes or groups, but these suffer from issues if traffic characteristics are changed. Additionally, the limited number of classification methods puts limits on their performance. Furthermore, these algorithms support only static configurations and cannot cope with the changes in the traffic characteristics dynamically.</ns0:p></ns0:div> <ns0:div><ns0:head>Applications</ns0:head><ns0:p>We have seen in previous sections; the TIPS algorithm takes advantage of diversity in traffic characteristics across nodes. The typical application scenario is the outgoing interfaces on aggregation and core nodes of networks. The underlying reason is that one such application involves a number of POPs. With the traditional approaches the rule configuration may become complex. 
Another case is to utilize it in aggregation and core nodes, or in regional and national core nodes of telecommunication IP/Optical Transport Networks. Similar logic applies here as above, since there is diversity in the traffic originating from different POPs. In internet edge nodes, TIPS can make a substantial difference to delay, jitter and packet drop problems. Other potential applications include WAN links in enterprise networks, where TIPS can provide better performance and network utilization because of lower delays, latency, and packet drops. The higher these quantities are, the lower the network performance and utilization, as end clients would retransmit the lost or dropped packets, or sessions would break because of excessive delay.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Research Directions</ns0:head><ns0:p>With the advancements in artificial intelligence and service requirements, the networking domain is also required to use machine learning based techniques to enable autonomous, cheaper and faster decision-making in managing network functionalities. There are various ML based approaches that can forecast traffic based on the analysis of previous historical usage data. These algorithms can extract spatio-temporal insights from the data and make predictions for the future. Furthermore, there are ML based techniques that can provide efficient traffic classification considering various aspects of previous network usage. With the machine learning based approaches, TIPS shall be able to work more efficiently. Extreme learning, Bayesian networks and decision trees can classify internet traffic with a focus on peer-to-peer applications <ns0:ref type='bibr'>(Sena &amp; Belzarena, 2012)</ns0:ref>. The traffic classification algorithms must provide classification information to other systems as soon as possible to make the necessary decisions. Support vector machine based early traffic classification models can address this issue (L. <ns0:ref type='bibr' target='#b11'>He et al., 2016)</ns0:ref>. ML models like Multi-Layer Perceptron, radial basis function, decision tree based C4.5 and Bayesian networks can also provide online or offline traffic classification. These models can also classify the traffic using only partial captures, i.e., only a few packets of a flow, which allows a reduction of the data volume to be analyzed <ns0:ref type='bibr' target='#b17'>(Jin et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b20'>Luo et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nguyen et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b31'>Singh &amp; Agrawal, 2011;</ns0:ref><ns0:ref type='bibr'>C. Zhang et al., 2019)</ns0:ref>. AdaBoost based ML models can classify the traffic based on flows rather than per-packet classification; they detect the flows or transactions in the captured traffic and associate the related packets to a flow as per the flow definition <ns0:ref type='bibr'>(Kong et al., 2018)</ns0:ref>. Recurrent neural networks have the specific capability to detect patterns in time series data. In this regard, long short-term memory based traffic models are proposed by (Z. <ns0:ref type='bibr' target='#b12'>He et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b28'>Reddy &amp; Hota, 2013;</ns0:ref><ns0:ref type='bibr'>C. Zhang et al., 2019)</ns0:ref>.
These models can predict the traffic forecast on a daily basis to long term periods where the first kind of forecast is useful for the day-to-day optimizations and resource allocation and second can be used for long-term planning for the network. The convolutional neural network-based model has the capability to consider over-subscription of resources, SLA violations and can also detect and incorporate the spatio-temporal dependencies for the long-term forecast <ns0:ref type='bibr' target='#b1'>(Bega et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b28'>Reddy &amp; Hota, 2013)</ns0:ref>. A progressive transfer learning model can provide short term forecast and long-term traffic forecasting for individual POP locations in the operator's network (Z. <ns0:ref type='bibr' target='#b12'>He et al., 2019)</ns0:ref>. By using above ML models, traffic profiles can be dynamically learnt, and scheduling decisions are efficiently made in order to improve the QoS and network utilization. In addition to intelligent scheduling decisions, TIPS need to evaluate wireless networks with a focus on cellular networks. The traffic intensity signaling between nodes needs further research and it needs to be evaluated with other traditional algorithms also.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In a detailed discussion on simulation results, we saw the TIPS approach can significantly enhance performance of a network, its utilization, and quality of services. It provides opportunities to maximize network utilization to as compared to to with other approaches. We have also seen that 100% 95% 97% networks exhibit better performance in regard to minimizing delay caused by queuing in CNC. Maximum enhancements to delay are several hundred percent reductions as compared to FQ, DRR, RR and SFQ. This is an excellent improvement in response time especially for real-time services and interactive services that have strict bounds for response time. We also saw TIPS provide jitter reduction as compared to other 45% approaches. We also saw that TIPS can ensure required throughput targets with the best performance and optimal network utilization. In addition to this there was no packet drop in CNC which significantly reduces wastes of network resources used to retransmission of same packets. TIPS make use of the difference in traffic intensities in different time, origin, variations to make efficient packet scheduling decisions. In CNC cases, where services performance is degraded with other existing approaches, TIPS can effectively maintain services performance, it also helps in managing congestion by optimizing packet drop probabilities and end-to-end delay for packets which is also a factor of time-out of sessions or retransmissions. Thus, the need for minimal delay and jitter becomes very important as network diameter increases; it can dramatically provide opportunities to optimize network utilization. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Number of subscribers per service per POP &#8226; Bandwidth per service &#8226; Service classification Based on the demographic survey, it provides the following four values for each service for each POP.&#8226; Hour wise Usage (HWU)&#8226; Day wise Usage (DWU)&#8226; Week wise Usage (WWU)&#8226; Month wise Usage (MWU)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021) Manuscript to be reviewed Computer Science &#8226; Required information is access interface utilization, packet size accords the time scale. &#8226; The minimum, maximum and average packet size. &#8226; Identify the traffic classes or groups if required. &#8226; Obtain the policies for traffic groups The process of building the traffic profiles five pages as shown in Figure 4. It consists of five steps as discussed below.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Figure 11 shows the jitter or delay variance in the traffic received by different nodes. The values of jitter of each packet in the experiment duration were recorded and the values shown in the figure are instantaneous. This helps to clarify 1st-order and higherorder jitter for networks. The subfigures (a) to (f) represent the jitter in the traffic originating at N1 to N6 respectably. The values are shown in units and it shows both the negative and positive jitter values. The &#119898;&#119904; negative jitter values represent the packet arriving in lesser duration in comparison to the last packet e.g., the change in the variance is negative. The figure shows that smooth jitter is experienced by the packets originated at the node 1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Figure 15 (a) to (f) illustrates the jitter in units experienced by traffic 2&#119898;&#119904; &#119898;&#119904; originated at N1 to N6 respectively. It shows both negative and positive values where the negative values</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>It shows the comparison of E2ED maximum, minimum, and average values in units as experienced by the traffic originated in different nodes. The delay values &#119898;&#119904; PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64020:1:0:CHECK 10 Sep 2021)Manuscript to be reviewed Computer Science in these figures are not inclusive of the propagation delay. N1 to N4 are at a distance of three hops from destination and N5 and N6 both are at a distance of two hops. It can be observed that the traffic experiences least delay in CNC with the TIPS algorithm where the maximum value of the delay experienced is limited to and average values are in fractions of . The worst case can be observed for FQ where E2ED is 4&#119898;&#119904; &#119898;&#119904; beyond and the average delay remains around . 
This comparison shows that the performance 50&#119898;&#119904; 20&#119898;&#119904;</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Table II (b) represents the start busy interval that ends with the beginning of the next busy interval. Although the tables show the same values for five busy intervals having different traffic volumes, the experimental setup is run for several busy intervals. The equivalent TI derived</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#119879;&#119881; &#119894;&#119895;</ns0:cell><ns0:cell>&#119879;&#119881; &#119879;</ns0:cell><ns0:cell>is</ns0:cell></ns0:row></ns0:table><ns0:note>for these intervals is shown in TableIII. The process of derivation of TI values is discussed in previous sections. The main objective in this experimental setup is the delivery of the volume of traffic efficiently and reliably without loss of packets and optimized quality of service and performance parameters such as delay, jitter, and highest possible throughput. TableIIIshows Traffic intensities corresponding to each BI. 
It shows TIs for the first five BIs and for the remaining BI, the TI are calculated in a similar way. The TI table follows the relationship as per Equation (2) where is the intensity of traffic for node j in busy &#119879;&#119868; &#119894;&#119895; interval i, is the volume of traffic in node j and the busy interval I and SK values are constant.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The throughput relationship follows a relationship as shown in the Equation ( 10 ) where represents the &#119875; &#119905;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>packet throughput,</ns0:cell><ns0:cell>&#119877; &#119894;</ns0:cell><ns0:cell>represents the packets received by node sent from node and &#119895; &#119894;</ns0:cell><ns0:cell>&#119878; &#119894;&#119895;</ns0:cell><ns0:cell>represents the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>packets sent by node to node . &#119894; &#119895;</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>&#119875; &#119905; = (</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>&#119899;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>&#119873; 2 &#119873; 3 &#119873; 4 &#119873; 5 &#119873; 6</ns0:figDesc><ns0:table><ns0:row><ns0:cell>BI &#119861;&#119868; 1</ns0:cell><ns0:cell cols='2'>&#119873; 1 2,985 404 &#119873; 2</ns0:cell><ns0:cell cols='2'>&#119873; 3 1,992 746 &#119873; 4</ns0:cell><ns0:cell>&#119873; 5 2,522</ns0:cell><ns0:cell>&#119873; 6 3,511</ns0:cell><ns0:cell cols='2'>BI &#119873; 1 1 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119861;&#119868; 2</ns0:cell><ns0:cell cols='5'>1,950 1,840 1,538 1,008 3,733</ns0:cell><ns0:cell>2,124</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119861;&#119868; 3</ns0:cell><ns0:cell cols='2'>1,406 877</ns0:cell><ns0:cell cols='3'>1,504 1,971 1,885</ns0:cell><ns0:cell>3,480</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119861;&#119868; 4</ns0:cell><ns0:cell cols='3'>1,543 2,418 817</ns0:cell><ns0:cell>942</ns0:cell><ns0:cell>3,006</ns0:cell><ns0:cell>2,101</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119861;&#119868; 5</ns0:cell><ns0:cell>171</ns0:cell><ns0:cell cols='4'>1,674 1,082 2,081 756</ns0:cell><ns0:cell>3,349</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Sum 8,055 7,213 6,933 6,748 11,902 14,565.00</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>&#119873; 1 &#119873; 2 &#119873; 3 &#119873; 4 &#119873; 5 &#119873; 6</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Equivalent TI Distribution</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Department of Computer Science COMSATS University Islamabad, Lahore Campus Defense Road, Off Raiwind Rd, Phase 1 LDA Avenue, Lahore, Punjab 54000 Tel: +92 (42) 111-001-007 Fax: +92 (42) 99203100 www.cuilahore.edu.pk Email1: sp19-pcs-006@cuilahore.edu.pk Email2: arif.husen@vu.edu.pk September 09, 2021 Dear Editors, We would like to take this opportunity to thank the editors and the reviewers for their invaluable comments and suggestions that we believe improve the quality of the manuscript. We addressed all comments and suggestions as detailed below. We believe that the manuscript is now suitable for publication in PeerJ. We are uploading (a) our point-by-point response to the comments (below) (response to editor and reviewers), (b) Revised manuscript with tracked changes, and (c) a clean updated manuscript without highlights. Best regards, Arif Husen on behalf all authors. Editor comments (Muhammad Aleem) Based on reviewers’ comments, you may resubmit the revised manuscript for further consideration. Please consider the reviewers’ comments carefully and submit a list of responses to the comments along with the revised manuscript. Response: All reviewers’ comments and suggestions have been considered and the manuscript is updated accordingly. Reviewer 2 has suggested that you cite specific references. You are welcome to add it/them if you believe they are relevant. However, you are not required to include these citations, and if you do not include them, this will not influence my decision. Response: We thank the reviewer 2 for the valuable suggestions for the citations of the IoT APIs, traffic prediction from CPU load on user devices, and machine learning methods for QoS prediction. These articles are of high relevance for the ASB based traffic profiling. However, this article focuses on the FOB based traffic profiling and we feel the discussion of the above articles will not be suitable. Reviewer 1 (Anonymous) Basic reporting The authors have used two terminologies to refer to the network nodes, Serving Area Points and Point of Presence. The author needs to discuss if there are differences between the two types of nodes or they provide the same role. If the same role is provided by them, the author should use consistent terminology in different sections. Authors should note that the term POP is more common and known to the research community than SAP. Response : Thanks for highlighting the discrepancy, Serving Area Points and Point of Presence(POP) both refer to same thing. As suggested, we have edited the article to use POP consistently throughout the article. Experimental design On page 8, line 243, authors have mentioned two types of the Erlang Distributions B and C. Authors needs to clarify what Distribution was used in the experiment and the underlying reasons. Response : We have used Erlang C for traffic intensity calculations as it incorporates the blocking probability as compared to Erlang B where the later simply drops the packets or session in case of unavailability of network resources. [Line 304-306] On Page 10, Line 296 authors have used the term Traffic Intensity provided the TI values Table 3 used in experiments. Authors should clarify what mathematical models were used to calculate the traffic intensities. Response : We have used well established Erlang C relationship for calculating TI values, However Erlang C relationship was adapted to digital signals. The relationship used has been added to the article. 
[Line 280 – 289] Validity of the findings On Figure 5, authors have provided the ASP traffic profiling with showing how the changes are propagated to upstream nodes. Authors needs clarify whether the symmetric or asymmetric behavior is used for the upstream and downstream traffic. Furthermore, authors need discuss the approach if the upstream and downstream data rates are different, that is the usual approach for the internet traffic. Response: For upstream/downstream, primary assumption is that traffic intensities are asymmetric and described by the uploads/ download speeds as discussed on line 300-302 and 344-348. Additional comments It is stated on the page 10 line 315 that FOB process has been used in the experiments. Authors should clarify why this process has been used and why not the ASP or HNUB? And whether the results would be same with either of the approach. Response: FOB process was chosen in the study due to the fact that a similar process Is used for the network dimensioning and capacity planning. While ASB and HNUB represents the actual network conditions, the FOB process provides the upper bound of the traffic intensities. Line 305-307. On page 11, line 330 the terms MNC and CNC are defined, however the how these conditions were simulated in the experiments, author needs to clarify. Response: Both MNC and CNC represents a specific network condition, where MNC represents when the free resources available in the network are greater than the actual traffic load otherwise the network falls in a congested state. Line 320 – 329 and Table 1. On page 18, line 560 the node role has been discussed, author needs to clarify how these roles are communicated to nodes or algorithm instances. Response: For the experimental evaluation, these roles were statically configured on individual nodes. Line 568. On page 26, authors have discussed the potential use of machine learning algorithms with TIPS, authors need to clarify whether the traffic forecasting techniques proposed in the literature are capable to provide prediction of the TI values. Response: Traffic forecasting techniques can potentially predict the TI values both for the short term and long-term basis as discussed in the future works section. Line 828-830. Reviewer 2 (Anonymous) Basic reporting Clear and unambiguous professional English Response: Thanks Experimental design Original primary research within Aims and Scope of the journal Response: Thanks Validity of the findings All underlying data have been provided Response: Thanks Additional comments The authors have introduced a new packet scheduling techniques based on the Erlang capacity planning methods for managing the congested networks. The results show improvement in the performance in several experiments. Following are few observations that needs to be addressed by authors. (a) - Authors needs to clarify what advantages Erlang based TI have in comparison to the common approach based on arrival rate ,packet length and transmission rate. There are some existing packet scheduling techniques that uses the traffic intensity information to make the scheduling decisions, how this approach is different? Response: It has been discussed in the introduction section[line 50-56], that Erlang is commonly used for network capacity planning purpose and represents the network dimensions. Existing techniques don’t consider these aspects. (b) - Two types of the Erlang B and C are mentioned. What is the effect if Erlang B or C is used? And how authors calculated during simulation? 
[line 243,] Response: Erlang C considers the waiting time in case of the existing resources are queues are not available on contrast to the Erlang B where the packets are discarded as soon as the resources are exhausted. The calculation details are added to the article in line 280 – 289. (c) - the TI values given in Table 3, are these randomly chosen or what criteria authors have used to calculate them? Response: The TI values have been calculated through the FOB technique as discussed in the section 3 line 305 and line 577. (d) - Authors stated that, FOB process is used, how the scheduling will work if there is no forecast information available? And how the sessions with asymmetric data rates are scheduled, authors need to provide clarification. Response: Generally, for each network there is baseline network planning process through which the network specifications and capacities are determined based on market requirements. The effect of asymmetric data rates is considered by separation of the TIs for the upstream and downstream traffic as stated in line 344,365. (e) - The result section shows presented the comparison with drr, sfq, rr and red, why these specific techniques were considered and why not others. Response: We have considered the most common primary scheduling schemes, the aspects of the CBWFQ and PQ have been discussed in lines 784-790. (f) - Authors have given the simulation topology in figure 7, Line 583. Authors need to clarify what type of traffic was used? what was the packet size and their propagation delays values? What is impact of changing these values? Response: In the experimental evaluation, real time traffic was used with packet with average packet size of 1000bytes as discussed in line 578. Furthermore, the effect of packet size variation is discussed in line 298 to 322. (g) - Authors have shown 100% delivery ratios for the proposed scheme, what benchmarks have been considered to calculate the percentage of delivery ratios? Are these comparatives or some fixed limits were used in the calculations? Response: The target value for delivery ratio is determined by the FOB process and is not fixed value, rather it changes with change in TI, and we compare the results of each scheme under evaluation with it as shown in figure 18 with blue line. (h) - The figure 20, in the comparison section, How the overhead was calculated, was it based on actual packet captures or some other assumptions are made? Response: The overhead is determined by the model application used to generate the traffic, it includes the TCP handshakes, acknowledgments etc. beyond it there was not internode signaling overhead involved. (i) - Authors need to make sure that abbreviations are consistent throughout the paper, and abbreviations must be defined where they occur first time. Response: Agreed, All the abbreviations are rechecked and corrected where required. (j) – It would be better if author consider discussing following papers in section 2 Gao, Honghao, et al. 'Collaborative learning-based industrial IoT API recommendation for software-defined devices: The implicit knowledge discovery perspective.' IEEE Transactions on Emerging Topics in Computational Intelligence (2020). Huang, Yuzhe, et al. 'SSUR: An Approach to Optimizing Virtual Machine Allocation Strategy Based on User Requirements for Cloud Data Center.' IEEE Transactions on Green Communications and Networking 5.2 (2021): 670-681. Hussain, Walayat, and Osama Sohaib. 
'Analysing cloud QoS prediction approaches and its control parameters: Considering overall accuracy and freshness of a dataset.' IEEE Access 7 (2019): 82649-82671. Response: Thanks for the valuable suggestions, we highly appreciate the referred texts. please note that ASB based traffic profiling approach is mainly dependent on the communication with user devices. In this regard the IoT API framework suggested by Gao et al. may can provide a suitable source of collection of information from IoT devices. the However, since current article is focused on FOB, we feel the discussion of the suggested articles will unnecessarily increase the length of the paper. Similarly, SSUR approach proposes traffic prediction based on the CPU load where we the authors feel that CPU load will be insufficient to estimate the traffic profiles of the devices. Moreover, some of the analysis techniques proposed by Wilayat et Al. has been discussed in the context of future works with more relevant detail. The article shall be discussed in our upcoming research on the ASB and ML. Finally, the paper is well written there are some issues that need to address before the publication of the article. Response: Noted, Thanks "
Here is a paper. Please give your review comments after reading it.
259
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Undesirable vibrations resulting from the use of vibrating hand-held tools decrease the tool performance and user productivity. In addition, prolonged exposure to the vibration can cause ergonomic injuries known as the hand-arm vibration syndrome (HVAS). Therefore, it is very important to design a vibration suppression mechanism that can isolate or suppress the vibration transmission to the users' hands to protect them from HAVS. While viscoelastic materials in anti-vibration gloves are used as the passive control approach, an active vibration control has shown to be more effective but requires the use of sensors, actuators and controllers. In this paper, the design of a controller for an anti-vibration glove is presented. The aim is to keep the level of vibrations transferred from the tool to the hands within a healthy zone. The paper also describes the formulation of the handglove system's mathematical model and the design of a fuzzy parallel distributed compensation (PDC) controller that can cater for different hand masses. The performances of the proposed controller are evaluated through simulations and the results are benchmarked with two other active vibration control techniques -proportional integral derivative (PID) controller and active force controller (AFC). The simulation results show a superior performance of the proposed controller over the benchmark controllers. The designed PDC controller is able to suppress the vibration transferred to the user's hand 93% and 85% better than the PID controller and the AFC, respectively.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Vibrations of the hand-held tools can cause harmful health conditions among the operators. There have been many investigations on the effects of the long-term vibration exposure on the human body <ns0:ref type='bibr' target='#b34'>(Shen and House, 2017</ns0:ref>) and many damages have been reported including HAVS, muscle weakness, white finger, loss of grip strength, sensory nerve damage, and muscle and joint injuries in the hand and arm <ns0:ref type='bibr' target='#b7'>(Gerhardsson et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Vihlborg et al., 2017)</ns0:ref>. The effects of anti-vibration (AV) gloves on reducing the health risks of the vibrating tools have been investigated in many studies, and various types of anti-vibration gloves have been introduced to the market such as gel-filled, air-filled and leather AV gloves <ns0:ref type='bibr' target='#b9'>(Hamouda et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Various biodynamic models of the hand have been proposed in International Standard ISO 10068:2012 <ns0:ref type='bibr' target='#b3'>(Dong et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Rezali and Griffin, 2018)</ns0:ref> to investigate the transmissibility of the vibration to the hand. The strategy of building the hand model is based on introducing the simple mechanical damper and spring elements, which are arranged in series or parallel depending on the approach used such as Maxwell or Kelvin models of muscles <ns0:ref type='bibr' target='#b12'>(Jones, 2001)</ns0:ref>, to represent the viscoelastic characteristics of the hand soft tissues. The factors such as types of tools used, the points at which vibrations are transmitted (e.g. the PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science hand or the arm) and the required levels of accuracy determine the modeling approach to be used. For example, in <ns0:ref type='bibr' target='#b23'>(Mazlan and Ripin, 2015)</ns0:ref> a two degree of freedom (DOF) model was adopted while some higher DOF models can be found in <ns0:ref type='bibr' target='#b14'>(Kamalakar and Mitra, 2018)</ns0:ref>. A precise representation of different impulsive tools and their working postures and the directions of the applied hand force is illustrated in <ns0:ref type='bibr' target='#b4'>(Dong et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Incorporating a glove into the hand system requires additional mechanical damper and spring elements in the equivalent mechanical system. For example, the model proposed by Dong <ns0:ref type='bibr' target='#b3'>(Dong et al., 2009</ns0:ref>) is a 7-DOF model that represents the viscous, elastic and inertial properties of the glove by the equivalent springs and dampers.</ns0:p><ns0:p>Adding an active element to the passive anti-vibration glove can improve the efficacy of the system.</ns0:p><ns0:p>In active vibration control (AVC), an actuator is utilized to apply an external force or displacement based on the measurement of the system response through feedback control <ns0:ref type='bibr' target='#b27'>(Preumont, 2018)</ns0:ref>. The acceleration, displacement and velocity measurements from the sensors are used by the control system to provide control signals for actuators based on the chosen control strategy.</ns0:p><ns0:p>The design of the control scheme could be quite challenging in this area since many parameters are affecting the performance of the controller such as sensor and actuator's fault <ns0:ref type='bibr' target='#b6'>(Gao and Liu, 2020;</ns0:ref><ns0:ref type='bibr' target='#b38'>Tahoun, 2020)</ns0:ref> and uncertainties in the system parameters <ns0:ref type='bibr' target='#b1'>(Chen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b18'>Li et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b37'>Tahoun, 2017)</ns0:ref>. Besides, the vibration of different body parts occurs at different frequencies and therefore would add complexity to the control design process <ns0:ref type='bibr' target='#b11'>(Hassan et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Various types of control schemes have been developed and employed in AVC structures. Each new control method has been designed for a specific system with a specific dynamics, such as integral-based controllers in <ns0:ref type='bibr' target='#b45'>(Zuo and Wong, 2016)</ns0:ref>, sliding mode controls <ns0:ref type='bibr' target='#b10'>(Hamzah et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lin et al., 2019)</ns0:ref>, PID control <ns0:ref type='bibr' target='#b29'>(Rani et al., 2011)</ns0:ref>, AFC <ns0:ref type='bibr' target='#b40'>(Theik and Mazlan, 2020)</ns0:ref> and artificial intelligent-based control strategies including fuzzy and neural networks <ns0:ref type='bibr' target='#b13'>(Kalaivani et al., 2016)</ns0:ref>. In <ns0:ref type='bibr' target='#b17'>(Lekshmi and Ramachandran, 2019)</ns0:ref> a Genetic Algorithm-optimized PID controller is proposed to suppress Parkinson's Tremor. 
In <ns0:ref type='bibr' target='#b20'>(Liu et al., 2018)</ns0:ref>, an adaptive neural network controller is proposed to control the suspension systems, and some fuzzy active suspension system controllers are also proposed in <ns0:ref type='bibr' target='#b35'>(Sun et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Tabatabaei et al., 2010)</ns0:ref>.</ns0:p><ns0:p>The AFC proposed in <ns0:ref type='bibr' target='#b44'>(Zain et al., 2008)</ns0:ref> is an effective method designed to carry out the robust control of dynamic systems in the presence of disturbance and uncertainties. It is proved that in AFC, the stability and robustness of the system remain unchanged by compensation action of the control strategy when a number of disturbances are applied to the system <ns0:ref type='bibr' target='#b22'>(Mailah et al., 2009)</ns0:ref>.</ns0:p><ns0:p>The research on fuzzy systems and control has resulted in the development of many controller designs especially by using the Takagi-Sugeno (T-S) fuzzy model as an effective and simple tool in modeling and control of nonlinear systems or systems with variable parameters <ns0:ref type='bibr' target='#b28'>(Rajabpour et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Sadeghi et al., 2016)</ns0:ref>. Additionally, The T-S model can generate an exact representation of the nonlinear system by a set of linear subsystems. Since the hand mass is variable for different users in the hand-glove system, we propose to use the fuzzy T-S model for the standard range of various hand masses and employing the idea of fuzzy parallel distributed compensation (PDC) technique, proposed by <ns0:ref type='bibr' target='#b43'>Wang et al. (Wang et al., 1995)</ns0:ref>, to control the fuzzy T-S modeled systems by designing a linear feedback controller for each linear subsystem.</ns0:p><ns0:p>Originally, the idea of PDC has been applied to control nonlinear systems by linearizing them around different operation points. Here, we used this idea to control the linear hand-glove system but we used the ability of PDC control to make the model and controller more flexible for different users with different hand masses which has not been considered in the previous controller designs. Building on the anti-vibration gloves literature, we propose the design of an active anti-vibration controller for the hand-glove system to suppress the transmitted vibration to the hand while using hand-held tools. To our knowledge, it is the first anti-vibration glove with an active controller. In this paper, the mathematical model of the hand-glove system is formulated and three different controllers-a PID controller, an AFC and a fuzzy PDC controller-are designed and applied to the hand-glove system. The performances of the three controllers are then compared.</ns0:p><ns0:p>To summarize, the key contributions of this paper are twofold: the T-S fuzzy modeling of the handglove system for variable hand mass parameter and the design of an active anti-vibration glove based on the fuzzy PDC controller, which is robust to variation in hand masses, making the active anti-vibration glove suitable for different users. 
This paper is organized as follows: Section 1 presents the mathematical formulation of the hand-glove </ns0:p></ns0:div> <ns0:div><ns0:head n='1'>MODELLING OF ACTIVE HAND-GLOVE SYSTEM</ns0:head><ns0:p>To represent the dynamic behavior of the hand-glove system under the influence of vibration, the system is represented by a three-degree of freedom model as represented in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. All the masses are assumed to be rigid and the mass of each segment is considered to be a percentage of the total body mass <ns0:ref type='bibr' target='#b30'>(Rezali and Griffin, 2018)</ns0:ref>. Since our main focus is on the transmitted vibration to the hand and to reduce the complexity of the model, the palm and fingers are considered to be one body segment and the palm is assumed to be lying on the vibration surface. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> lists the symbols, units and descriptions of the parameters of the hand-glove system shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>The Equations of Motion of the Active Hand-Glove Model</ns0:head><ns0:p>In Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, since the actuator is assumed to be placed on the palm under the glove, the actuator force F a is applied between the soft tissue of the hand, mass m 2 and glove material with mass m 3 . The hand pushing the tool handle is represented by F p .</ns0:p><ns0:p>The dynamic equations of motion of the system can be written as follows:</ns0:p><ns0:formula xml:id='formula_0'>m 1 (t)z 1 + (c 1 + c 2 )&#380; 1 &#8722; c 2 &#380;2 + (k 1 + k 2 )z 1 &#8722; k 2 z 2 = 0 (1a) m 2 z2 &#8722; c 2 &#380;1 + (c 2 + c 3 )&#380; 2 &#8722; c 3 &#380;3 &#8722; k 2 z 1 + (k 2 + k3)z 2 &#8722; k 3 z 3 = F a &#8722; F p (1b) m 3 z3 &#8722; c 3 &#380;2 + c 3 &#380;3 &#8722; k 3 z 2 + k 3 z 3 = &#8722;F a + F w (1c)</ns0:formula><ns0:p>where z i , &#380;i and zi represent the displacement, velocity and acceleration of mass m i ; F a is the actuator force, F p is the push force and F w is the input vibration or disturbance to the system that needs to be controlled.</ns0:p><ns0:p>In this model, the mass of the hand is assumed to be non-constant, m 1 (t), i.e. the model can be used to represent the hand system for different users. The equations ( <ns0:ref type='formula'>1</ns0:ref>) can be arranged in the following form:</ns0:p><ns0:formula xml:id='formula_1'>z1 = &#8722; c 1 + c 2 m 1 (t) &#380;1 + c 2 m 1 (t) &#380;2 &#8722; k 1 + k 2 m 1 (t) z 1 + k 2 m 1 (t) z 2 (2a) z2 = c 2 m 2 &#380;1 &#8722; c 2 + c 3 m 2 &#380;2 + c 3 m 2 &#380;3 + k 2 m 2 z 1 &#8722; k 2 + k 3 m 2 z 2 + k 3 m 2 z 3 + F a m 2 &#8722; F p m 2 (2b) z3 = c 3 m 3 &#380;2 &#8722; c 3 m 3 &#380;3 + k 3 m 3 z 2 &#8722; k 3 m 3 z 3 &#8722; F a m 3 + F w m 3 (2c) Considering x 1 = z 1 , x 2 = z 2 , x 3 = z 3 , x 4 = &#380;1 ,</ns0:formula><ns0:p>x 5 = &#380;2 and x 6 = &#380;3 for each of the state variables in equation ( <ns0:ref type='formula'>2</ns0:ref>), it can be rewritten in the state-space equation given by Equation (3).</ns0:p><ns0:formula xml:id='formula_2'>&#7819;(t) = A(t)x(t) + B(t)u(t) + E(t)w(t) y(t) = Cx(t)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where matrices A, B, C and E are the system matrix, the input matrix, output matrix and the disturbance matrix, respectively. 
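To make the state-space form of Equation (3) concrete, the following is a minimal numpy sketch that assembles the matrices spelled out in Equation (4) below. The stiffness and damping values are the ones reported later in Table 4 of the paper, and the hand mass is fixed at the nominal 0.4 kg used in Section 3.2, so this is only an illustrative construction, not the authors' simulation code.

```python
import numpy as np

# Parameter values from Table 4 of the paper; hand mass m1 fixed at the
# nominal 0.4 kg used in Section 3.2 (it is treated as variable in the paper).
m1, m2, m3 = 0.4, 0.01, 0.078
c1, c2, c3 = 1.0, 36.0, 1.53
k1, k2, k3 = 0.01, 3667.0, 1152.0

# State x = [z1, z2, z3, z1_dot, z2_dot, z3_dot]; inputs u = [Fa, Fp], w = Fw.
A = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [-(k1 + k2) / m1, k2 / m1, 0, -(c1 + c2) / m1, c2 / m1, 0],
    [k2 / m2, -(k2 + k3) / m2, k3 / m2, c2 / m2, -(c2 + c3) / m2, c3 / m2],
    [0, k3 / m3, -k3 / m3, 0, c3 / m3, -c3 / m3],
])
B = np.zeros((6, 2))
B[4] = [1 / m2, -1 / m2]   # actuator force Fa and push force Fp act on m2
B[5] = [-1 / m3, 0]        # Fa reacts on the glove mass m3
E = np.zeros((6, 1))
E[5, 0] = 1 / m3           # tool vibration Fw enters through m3
C = np.array([[0, 0, 0, 1, 0, 0]])   # hand velocity z1_dot is the output

def xdot(x, u, w):
    """Right-hand side of Equation (3): x_dot = A x + B u + E w."""
    return A @ x + B @ u + E @ w
```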
The hand vibration, &#7819;4 or z1 is considered as the performance output reference.</ns0:p><ns0:p>Equation ( <ns0:ref type='formula' target='#formula_2'>3</ns0:ref>) can be further written in the following form:</ns0:p><ns0:formula xml:id='formula_3'>&#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; &#7819;1 &#7819;2 &#7819;3 &#7819;4 &#7819;5 &#7819;6 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 &#8722; k 1 +k 2 m 1 (t) k 2 m 1 (t) 0 &#8722; c 1 +c 2 m 1 (t) c 2 m 1 (t) 0 k 2 m 2 &#8722; k 2 +k 3 m 2 k 3 m 2 c 2 m 2 &#8722; c 2 +c 3 m 2 c 3 m 2 0 k 3 m 3 &#8722; k 3 m 3 0 c 3 m 3 &#8722; c 3 m 3 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; x 1 x 2 x 3 x 4 x 5 x 6 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; + &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 0 0 0 0 0 1 m 2 &#8722; 1 m 2 &#8722; 1 m 3 0 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; F a F p + &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 0 0 1 m 3 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; F w (4) y(t) = 0 0 0 1 0 0 x 1 x 2 x 3 x 4 x 5 x 6 T (5)</ns0:formula><ns0:p>Another approach that can be used to model the active hand-glove system is the fuzzy T-S model and is described next.</ns0:p></ns0:div> <ns0:div><ns0:head>4/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_2'>2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Fuzzy Takagi-Sugeno Model</ns0:head><ns0:p>Fuzzy logic has the ability to deal with nonlinearities and uncertainties in the system where standard analytical models are usually ineffective. It can be used for both modeling and design of the controller.</ns0:p><ns0:p>Most nonlinear systems can be approximated by a T-S fuzzy model. The T-S fuzzy model of a dynamic system with control input and disturbance input can be represented by the following rules:</ns0:p><ns0:p>Model Rule i:</ns0:p><ns0:formula xml:id='formula_4'>IF s 1 (t) is M i1 and &#8226; &#8226; &#8226; and s p (t) is M ip , THEN &#7819;(t) = A i x(t) + B i u(t) + E i w(t), y(t) = C i x(t), i = 1, 2, . . . , r,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where r shows the number of rules, M i j denotes the fuzzy set, and x(t) &#8712; R n&#215;n , u(t) &#8712; R n&#215;1 and w(t) &#8712; R n&#215;1 are state vectors, controlled input vector and disturbance input vector, respectively. 
Note that s i (t) is a hypothetical variable which can be a function of state variables, disturbance inputs, system parameters and/or time.</ns0:p><ns0:p>By considering each set of (A i , B i , E i , C i ) as a subsystem, the overall output of the fuzzy system can be represented as</ns0:p><ns0:formula xml:id='formula_5'>&#7819;(t) = &#8721; r i=1 &#945; i s(t) A i x(t) + B i u(t) + E i w(t) &#8721; r i=1 &#945; i s(t) = r &#8721; i=1 h i s(t) A i x(t) + B i u(t) + E i w(t) ,<ns0:label>(7a)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>y(t) = &#8721; r i=1 &#945; i s(t) C i x(t) &#8721; r i=1 &#945; i s(t) = r &#8721; i=1 h i s(t) C i x(t) ,<ns0:label>(7b)</ns0:label></ns0:formula><ns0:p>in which for all t</ns0:p><ns0:formula xml:id='formula_7'>s(t) = [s 1 (t) s 2 (t) . . . s p (t)],<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>&#945; i s(t) = v &#8719; j=1 M i j s j (t)<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>and</ns0:p><ns0:formula xml:id='formula_9'>h i s(t) = &#945; i s(t) &#8721; r i=1 &#945; i s(t) . (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>)</ns0:formula><ns0:p>Note that the term M i j s j (t) shows the membership grade of s j (t) in M i . Also, since &#945; i s(t) &#8805; 0 and &#8721; r i=1 &#945; i s(t) &#8805; 0 for i = 1, 2, . . . , r, then we have h i s(t) &#8805; 0 and &#8721; r i=1 h i s(t) = 1 for all t.</ns0:p><ns0:p>Next, different types of active vibration control techniques applied to the system to suppress the vibration to the desired level are described.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>ACTIVE-VIBRATION CONTROLLER (AVC) DESIGN</ns0:head><ns0:p>In AVC, the control objective is to attenuate the undesired vibration to the desired level by use of an actuator, which could be a piezoelectric device or an electric motor. The incoming vibration is sensed by using a sensing mechanism and the actuator reacts to these vibrations by producing a cancelation signal. The sensors used in AVC are mainly of piezoelectric type <ns0:ref type='bibr' target='#b24'>(Miljkovi&#263;, 2009)</ns0:ref>.</ns0:p><ns0:p>The working principle of an AVC system is based on collecting data from sensing devices and then generating a control signal for the actuator. The control signal is generated from the measurements of the displacement, velocity or acceleration of the mass M that is fed back. Based on these measurements, the control signal produced depends on the chosen control strategy or algorithm. The following subsections describe three different control algorithms applied to the hand-glove system to suppress vibrations transmitted from the tools used to the workers' hands.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Proportional-Integral-Derivative (PID) Controller</ns0:head><ns0:p>The PID controller to be used in the AVC system is a standard PID control algorithm given by Equation (11):</ns0:p><ns0:formula xml:id='formula_11'>u(t) = K p e(t) + K i &#8747; e(t)dt + K d &#279;(t)<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>where u(t) is the control signal, e(t) is the error signal and K p , K i and K d are the proportional, integral and derivative gains, respectively.
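To make Equation (11) concrete, the following is a minimal discrete-time PID sketch in Python. The gains and sampling time are placeholder values chosen only for illustration; the paper tunes K p , K i and K d by trial and error and does not report specific numbers.

```python
class PIDController:
    """Discrete-time approximation of the PID law in Equation (11):
    u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Placeholder gains and sampling time.
pid = PIDController(kp=1.0, ki=0.1, kd=0.01, dt=1e-3)
u = pid.update(error=0.0 - 0.5)  # error = reference - measured output
```

In the feedback loop of Figure 3, the error would be the deviation of the measured hand response from the zero-vibration reference, and the controller output corresponds to the actuator command.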
Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the feedback control system employing the PID controller.</ns0:p><ns0:p>Equation ( <ns0:ref type='formula' target='#formula_11'>11</ns0:ref>) can be written in the following transfer function form:</ns0:p><ns0:formula xml:id='formula_12'>U(s) = K p + K i /s + K d s (12)</ns0:formula><ns0:p>The PID gains K p , K i and K d need to be tuned, by the trial and error tuning method, until the desired response is obtained.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Active Force Controller</ns0:head><ns0:p>Active force control (AFC) proposed in <ns0:ref type='bibr' target='#b44'>(Zain et al., 2008)</ns0:ref> is an effective method designed to carry out the robust control of dynamic systems in the presence of disturbance and uncertainties. It is proved that in AFC, the stability and robustness of the system remain unchanged by the compensation action of the control strategy when a number of disturbances are applied to the system <ns0:ref type='bibr' target='#b0'>(Abdelmaksoud et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gohari and Tahmasebi, 2017)</ns0:ref>.</ns0:p><ns0:p>The schematic of the AFC controller for a suspension system, which has a similar representation to the hand-glove system, is shown in <ns0:ref type='bibr' target='#b22'>(Mailah et al., 2009)</ns0:ref>.</ns0:p><ns0:p>One practical example of applying AFC to a suspension system to control the unwanted vibration is given in <ns0:ref type='bibr' target='#b25'>(Mohamad et al., 2006)</ns0:ref> and it showed the effectiveness of the applied AFC in cancelling the vibration at different disturbance levels.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Fuzzy Parallel Distributed Compensation (PDC) Controller</ns0:head><ns0:p>Fuzzy logic is an effective way to decompose a nonlinear system control into a group of local linear controls based on a set of design-specific model rules. PDC is one of the many varieties of fuzzy logic that can be implemented in control systems. For a given fuzzy T-S framework, the state feedback based on PDC is usually applied. The key idea of the PDC technique is to divide the nonlinear system into some linear subsystems, then design a controller for each of the linear subsystems, and subsequently obtain the overall controller by the fuzzy blending of the local controllers <ns0:ref type='bibr' target='#b32'>(Sadeghi et al., 2014</ns0:ref><ns0:ref type='bibr' target='#b31'>, 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>Nguyen et al., 2016)</ns0:ref>, i.e. designing a compensator for each rule of the fuzzy model.</ns0:p><ns0:p>The block diagram of the system with the fuzzy PDC controller is illustrated in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. As can be seen, the overall PDC controller is applied to the original nonlinear system. </ns0:p><ns0:formula xml:id='formula_13'>IF s 1 (t) is M i1 and &#8226; &#8226; &#8226; and s p (t) is M ip , THEN u(t) = &#8722;F i x(t), i = 1, 2, . . . , r.
(<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>)</ns0:formula><ns0:p>The overall control output of the fuzzy PDC controller is then obtained as Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>u(t) = &#8722; &#8721; r i=1 &#945; i s(t) F i x(t) &#8721; r i=1 &#945; i s(t) = &#8722; r &#8721; i=1 h i s(t) F i x(t) ,<ns0:label>(14</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Note that although in the PDC approach the controllers are local (Equation( <ns0:ref type='formula' target='#formula_15'>14</ns0:ref>)), the overall controller scheme should be designed globally to ensure global stability and suitable control performance. By substituting Equation ( <ns0:ref type='formula' target='#formula_15'>14</ns0:ref>) in ( <ns0:ref type='formula' target='#formula_5'>7</ns0:ref>) the closed loop system can be expressed as</ns0:p><ns0:formula xml:id='formula_16'>&#7819;(t) = r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) A i &#8722; B i F j x(t) + E i w(t) .<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>Now, by considering a positive definite matrix P and the quadratic Lyapunov function V (x) =</ns0:p><ns0:p>x T (t)Px(t), the closed-loop system ( <ns0:ref type='formula' target='#formula_16'>15</ns0:ref>) is asymptotically stable if the following condition is satisfied <ns0:ref type='bibr' target='#b39'>(Tanaka and Wang, 2004</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_17'>{A i &#8722; B i F j } T {A i &#8722; B i F j } &#8722; P &lt; 0 &#8704;i, j = 1, 2, . . . , r,<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>for h i s(t) h j s(t) = 0, &#8704;t.</ns0:p><ns0:p>To design a PDC controller for the hand-glove system, Equation ( <ns0:ref type='formula'>4</ns0:ref>)-( <ns0:ref type='formula'>5</ns0:ref>), it is required to obtain the T-S fuzzy model of the dynamic system (4). Since our goal is to design an active controller that guarantees suitable performance for different users, i.e. different hand masses m 1 , it is supposed that</ns0:p><ns0:formula xml:id='formula_18'>m 1 &#8712; [m min 1 , m max 1 ]</ns0:formula><ns0:p>. By defining the premise variable &#968; h (t) as</ns0:p><ns0:formula xml:id='formula_19'>&#968; h (t) := 1 m 1 (t)<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>and using the definition in Equation ( <ns0:ref type='formula' target='#formula_19'>17</ns0:ref>) and the possible range of m 1 (t), the minimum and maximum range of &#968; h (t) can be obtained as</ns0:p><ns0:formula xml:id='formula_20'>&#968; min h = min 1 m 1 (t) = 1 m max 1 ,<ns0:label>(18a)</ns0:label></ns0:formula><ns0:formula xml:id='formula_21'>&#968; max h = max 1 m 1 (t) = 1 m min 1 . (<ns0:label>18b</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>)</ns0:formula><ns0:p>Then &#968; h (t) can be written in terms of &#968; min h and &#968; max h as follows:</ns0:p><ns0:formula xml:id='formula_23'>&#968; h (t) = M 1 &#968; h (t) 1 m min 1 + M 2 &#968; h (t) 1 m max 1 (19)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_24'>M 1 &#968; h (t) + M 2 &#968; h (t) = 1. 
(<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_25'>)</ns0:formula><ns0:p>By using Equations ( <ns0:ref type='formula'>19</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_24'>20</ns0:ref>), the membership functions can be obtained as follows:</ns0:p><ns0:formula xml:id='formula_26'>M 1 &#968; h (t) = &#968; h (t) &#8722; 1 m max 1 1 m min 1 &#8722; 1 m max 1 ,<ns0:label>(21a)</ns0:label></ns0:formula><ns0:formula xml:id='formula_27'>M 2 &#968; h (t) = 1 m min 1 &#8722; &#968; h (t) 1 m min 1 &#8722; 1 m max 1 (21b)</ns0:formula><ns0:p>which are demonstrated in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>. Note that the membership functions M 1 and M 2 are called 'Small' and 'Large', respectively.</ns0:p><ns0:p>Then, the hand-glove system model Equation ( <ns0:ref type='formula'>4</ns0:ref> Model Rule 1:</ns0:p><ns0:formula xml:id='formula_28'>IF &#968; h (t) is 'Small', THEN &#7819;(t) = A 1 x(t) + B 1 u(t) + E 1 w(t) y(t) = C 1 x(t) (22a)</ns0:formula><ns0:p>Model Rule 2:</ns0:p><ns0:formula xml:id='formula_29'>IF &#968; h (t) is 'Large', THEN &#7819;(t) = A 2 x(t) + B 2 u(t) + E 2 w(t) y(t) = C 2 x(t) (22b)</ns0:formula><ns0:p>where the fuzzy system's submatrices associated with (22a) and (22b) are obtained as Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_30'>A 1 = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 &#8722; k 1 +k 2 m min 1 k 2 m min 1 0 &#8722; c 1 +c 2 m min 1 c 2 m min 1 0 k 2 m 2 &#8722; k 2 +k 3 m 2 k 3 m 2 c 2 m 2 &#8722; c 2 +c 3 m 2 c 3 m 2 0 k 3 m 3 &#8722; k 3 m 3 0 c 3 m 3 &#8722; c 3 m 3 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; A 2 = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 &#8722; k 1 +k 2 m max 1 k 2 m max 1 0 &#8722; c 1 +c 2 m max 1 c 2 m max 1 0 k 2 m 2 &#8722; k 2 +k 3 m 2 k 3 m 2 c 2 m 2 &#8722; c 2 +c 3 m 2 c 3 m 2 0 k 3 m 3 &#8722; k 3 m 3 0 c 3 m 3 &#8722; c 3 m 3 &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; B 1 = B 2 = 0 0 0 0 1 m 2 &#8722; 1 m 3 0 0 0 0 &#8722; 1 m 2 0 T , E 1 = E 2 = 0 0 0 0 0 1 m 3 T ,<ns0:label>9</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_31'>C 1 = C 2 = 0 0 0 1 0 0</ns0:formula><ns0:p>The designed fuzzy controller shares the same input variables and fuzzy sets with the T-S fuzzy model in the IF parts. So, for the fuzzy models given in Equation ( <ns0:ref type='formula'>22</ns0:ref>), the following controller rules are designed via PDC:</ns0:p><ns0:p>Control Rule 1:</ns0:p><ns0:formula xml:id='formula_32'>IF &#968; h (t) is 'Small', THEN u(t) = &#8722;F 1 x(t) (24a)</ns0:formula><ns0:p>Control Rule 2:</ns0:p><ns0:formula xml:id='formula_33'>IF &#968; h (t) is 'Large', THEN u(t) = &#8722;F 2 x(t) (24b)</ns0:formula><ns0:p>where F 1 and F 2 are the feedback gains. Now, by considering that the disturbance input, i.e. vibration from the hand-held tool, has a limited energy it can be assumed that w &#8712; L 2 [0, &#8734;) and w 2 2 &#8804; w max . Then, the following control performances are aimed to be obtained:</ns0:p><ns0:p>1. The closed-loop system remains stable.</ns0:p><ns0:p>2. The transmitted vibration to the hand will be reduced significantly, i.e. 
the effect of disturbance input in the system output will be minimized or mathematically</ns0:p><ns0:formula xml:id='formula_34'>y 2 2 &lt; &#947; w 2 2 (<ns0:label>25</ns0:label></ns0:formula><ns0:formula xml:id='formula_35'>)</ns0:formula><ns0:p>for all w = 0 where &#947; is a predefined scalar.</ns0:p><ns0:p>3. The designed control input is enforced to a limited value, i.e.</ns0:p><ns0:formula xml:id='formula_36'>u 2 &#8804; u max . (<ns0:label>26</ns0:label></ns0:formula><ns0:formula xml:id='formula_37'>)</ns0:formula><ns0:p>4. The designed controller performs well for different users/hand masses.</ns0:p><ns0:p>Theorem: The feedback gains, F i , of the state feedback controller (24) that stabilize the fuzzy T-S system (22) while minimizing the &#947; in ( <ns0:ref type='formula' target='#formula_34'>25</ns0:ref>) and satisfying the control effort constraint ( <ns0:ref type='formula' target='#formula_36'>26</ns0:ref>) can be obtained by solving the following optimization linear matrix inequality (LMI)-based problem as follows:</ns0:p><ns0:p>min Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_38'>&#915;,M 1 ,...,Mr &#947; 2 subject to &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; &#8722; 1 2 {&#915;A T i &#8722; M T j B T i + A i &#915; &#8722; B i M j +&#915;A T j &#8722; M T i B T j + A j &#915; &#8722; B j M i } &#8722; 1 2 (E i + E j ) &#8722; 1 2 &#915;(C i +C j ) T &#8722; 1 2 (E i + E j ) T &#947; 2 I 0 &#8722; 1 2 (C i +C j )&#915; 0 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; &#8805; 0, &#8704;i, j = 1, . . . , r, i &#8804; j, (<ns0:label>27a</ns0:label></ns0:formula><ns0:formula xml:id='formula_39'>) &#915; M T i M i u 2 max I &#8805; 0, &#8704;i = 1, . . . , r, (27b) 1 x(0) T x(0) &#915; &#8805; 0 (27c) &#915; &gt; 0, (<ns0:label>27d</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>in which &#915; = P &#8722;1 , where P is a positive definite matrix and M i = F i &#915;.</ns0:p><ns0:p>Proof :</ns0:p><ns0:p>Part 1: Consider the quadratic Lyapunov function V x(t) = x T (t)Px(t), P &gt; 0, and &#947; &gt; 0 and u max &gt; 0 exist. To prove the disturbance rejection LMI (27a), suppose that for the system (7) the following inequality V x(t) + y T (t)y(t) &#8722; &#947; 2 w T (t)w(t) &#8804; 0 (28) is true. By integrating (28), one can write</ns0:p><ns0:formula xml:id='formula_40'>T 0 V x(t) + y T (t)y(t) &#8722; &#947; 2 w T (t)w(t) dt &#8804; 0. (<ns0:label>29</ns0:label></ns0:formula><ns0:formula xml:id='formula_41'>)</ns0:formula><ns0:p>Assuming that x(0) = 0, then we have</ns0:p><ns0:formula xml:id='formula_42'>V x(t) + T 0 y T (t)y(t) &#8722; &#947; 2 w T (t)w(t) dt &#8804; 0. (<ns0:label>30</ns0:label></ns0:formula><ns0:formula xml:id='formula_43'>)</ns0:formula><ns0:p>Since V x(t) &gt; 0, it can be obtained from (25) that</ns0:p><ns0:formula xml:id='formula_44'>y(t) 2 w(t) 2 &#8804; &#947;. 
(<ns0:label>31</ns0:label></ns0:formula><ns0:formula xml:id='formula_45'>)</ns0:formula><ns0:p>Thus the L 2 norm (31) constraint is true for the fuzzy system (7) if the inequality (28) holds.</ns0:p><ns0:p>Now the LMI condition (27a) can be derived from the inequality (28) as follows &#7819;T (t) P x(t) + x T (t) P &#7819;(t)</ns0:p><ns0:formula xml:id='formula_46'>+ r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)C T i C j x(t) &#8722; &#947; 2 w T (t)w(t) = r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)(A i &#8722; B i F j ) T Px(t) + r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)P(A i &#8722; B i F j )x(t) + r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)C T i C j x(t) &#8722; &#947; 2 w T (t)w(t) + r &#8721; i=1 h i s(t) x T (t)E T i Px(t) + r &#8721; i=1 h i s(t) x T (t)PE i w(t) = r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t) w T (t) &#63726; &#63728; (A i &#8722; B i F j ) T P + P(A i &#8722; B i F j ) +C T i C j PE i E T i P &#8722;&#947; 2 I &#63737; &#63739; &#63726; &#63728; x(t) w(t) &#63737; &#63739; &#8804; 0. (<ns0:label>32</ns0:label></ns0:formula><ns0:formula xml:id='formula_47'>)</ns0:formula><ns0:p>From the inequality (32), it can be obtained Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_48'>Computer Science &#63726; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63725; &#8722; &#8721; r i=1 &#8721; r j=1 h i s(t) h j s(t) (A i &#8722; B i F j ) T P +P(A i &#8722; B i F j ) +C T i C j &#63734; &#63736; &#8722;P &#8721; r i=1 h i s(t) E i &#8722; &#8721; r i=1 h i s(t) E T i P &#947; 2 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63739; &#8805; 0,<ns0:label>(33)</ns0:label></ns0:formula><ns0:p>which can be decomposed as</ns0:p><ns0:formula xml:id='formula_49'>248 &#63726; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63725; &#8722; &#8721; r i=1 &#8721; r j=1 h i s(t) h j s(t) (A i &#8722; B i F j ) T P + P(A i &#8722; B i F j ) &#63734; &#63736; &#8722;P &#8721; r i=1 h i s(t) E i &#8722; &#8721; r i=1 h i s(t) E T i P &#947; 2 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63739; &#8722; &#63726; &#63728; &#8721; r i=1 &#8721; r j=1 h i s(t) h j s(t) C T i C j 0 0 0 &#63737; &#63739; = &#63726; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63725; &#8722; &#8721; r i=1 &#8721; r j=1 h i s(t) h j s(t) (A i &#8722; B i F j ) T P + P(A i &#8722; B i F j ) &#63734; &#63736; &#8722;P &#8721; r i=1 h i s(t) E i &#8722; &#8721; r i=1 h i s(t) E T i P &#947; 2 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63739; &#8722; &#63726; &#63728; &#8722; &#8721; r i=1 h i s(t) C T i 0 &#63737; &#63739; &#8721; r i=1 h i s(t) C i 0 &#8805; 0. 
(<ns0:label>34</ns0:label></ns0:formula><ns0:formula xml:id='formula_50'>)</ns0:formula><ns0:p>The inequality condition (34) is equivalent to</ns0:p><ns0:formula xml:id='formula_51'>249 &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63724; &#63724; &#63725; &#8722; &#8721; r i=1 &#8721; r j=1 h i s(t) h j s(t) (A i &#8722; B i F j ) T P +P(A i &#8722; B i F j ) &#63734; &#63735; &#63735; &#63736; &#8722;P &#8721; r i=1 h i s(t) E i &#8721; r i=1 h i s(t) C T i &#8722; &#8721; r i=1 h i s(t) E T i P &#947; 2 I 0 &#8721; r i=1 h i s(t) C i 0 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; &#8805; 0,<ns0:label>(35)</ns0:label></ns0:formula><ns0:p>which can be rewritten as</ns0:p><ns0:formula xml:id='formula_52'>250 r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63725; &#8722; 1 2 (A i &#8722; B i F j ) T P + P(A i &#8722; B i F j ) +(A j &#8722; B j F i ) T P + P(A j &#8722; B j F i ) &#63734; &#63736; &#8722; 1 2 P(E i + E j ) 1 2 (C i +C i ) T &#8722; 1 2 (E i + E j ) T P &#947; 2 I 0 1 2 (C i +C i ) 0 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; &#8805; 0.<ns0:label>(36)</ns0:label></ns0:formula><ns0:p>And, eventually, it can be derived from (36) that Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_53'>Computer Science &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; &#63723; &#63725; &#8722; 1 2 (A i &#8722; B i F j ) T P + P(A i &#8722; B i F j ) +(A j &#8722; B j F i ) T P + P(A j &#8722; B j F i ) &#63734; &#63736; &#8722; 1 2 P(E i + E j ) 1 2 (C i +C j ) T &#8722; 1 2 (E i + E j ) T P &#947; 2 I 0 1 2 (C i +C j ) 0 I &#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; &#8805; 0 (37)</ns0:formula><ns0:p>Now by multiplying the both side of equation ( <ns0:ref type='formula'>37</ns0:ref>) by block-diagonal {&#915; I I}, LMI (27a) is obtained,</ns0:p><ns0:p>where</ns0:p><ns0:formula xml:id='formula_54'>&#915; = P &#8722;1 .</ns0:formula><ns0:p>Part 2: To prove the bounding LMI condition (27b), from u 2 &#8804; u max one can write</ns0:p><ns0:formula xml:id='formula_55'>u T (t)u(t) = r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)F T i F j x(t) &#8804; u 2 max ,<ns0:label>(38)</ns0:label></ns0:formula><ns0:p>and then</ns0:p><ns0:formula xml:id='formula_56'>1 u 2 max r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)F T i F j x(t) &#8804; 1. (39) Since x T (t)&#915; &#8722;1 x(t) &lt; x T (0)&#915; &#8722;1 x(0) &#8804; 1 for all t &gt; 0, the inequality (39) holds if 1 u 2 max r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t)F T i F j x(t) &#8804; x T (t)&#915; &#8722;1 x(t),<ns0:label>(40)</ns0:label></ns0:formula><ns0:p>and consequently (39) holds. Thus, we have</ns0:p><ns0:formula xml:id='formula_57'>r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t) 1 u 2 max F T i F j &#8722; &#915; &#8722;1 x(t) &#8804; 0. 
(<ns0:label>41</ns0:label></ns0:formula><ns0:formula xml:id='formula_58'>)</ns0:formula><ns0:p>The left-hand-side of ( <ns0:ref type='formula' target='#formula_57'>41</ns0:ref>) is equivalent to</ns0:p><ns0:formula xml:id='formula_59'>1 2 r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t) 1 u 2 max F T i F j + 1 u 2 max F T j F i &#8722; 2&#915; &#8722;1 x(t) = 1 2 r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t) 1 u 2 max F T i F j + F T j F i &#8722; 1 u 2 max F T i &#8722; F T j F i &#8722; F j &#8722; 2&#915; &#8722;1 x(t) &#8804; 1 2 r &#8721; i=1 r &#8721; j=1 h i s(t) h j s(t) x T (t) 1 u 2 max F T i F j + F T j F i &#8722; 2&#915; &#8722;1 x(t) = r &#8721; i=1 h i s(t) x T (t) 1 u 2 max F T i F i &#8722; &#915; &#8722;1 x(t).</ns0:formula><ns0:p>(42)</ns0:p><ns0:p>Then, the inequality (41) holds if</ns0:p><ns0:formula xml:id='formula_60'>1 u 2 max F T i F i &#8722; &#915; &#8722;1 &#8804; 0. (<ns0:label>43</ns0:label></ns0:formula><ns0:formula xml:id='formula_61'>)</ns0:formula><ns0:p>By defining</ns0:p><ns0:formula xml:id='formula_62'>M i = F i &#915;, we have 1 u 2 max M T i M i &#8722; &#915; &#8804; 0. (<ns0:label>44</ns0:label></ns0:formula><ns0:formula xml:id='formula_63'>)</ns0:formula><ns0:p>and using the Schur Complement, the LMI condition (27b) is obtained. The proof is completed.</ns0:p><ns0:p>The overall modeling steps and controller design stages are depicted in Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='3'>SIMULATION RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>260</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b30'>(Rezali and Griffin, 2018)</ns0:ref>, the mass of the palm and fingers for any person can be estimated as a percentage of its whole body mass as represented in Table <ns0:ref type='table'>2</ns0:ref>. Thus, the total mass of the hand can be Segment Mass (kg) m palm 0.75 &#215; 0.006 &#215; M body m f ingers 0.25 &#215; 0.006 &#215; M body Table <ns0:ref type='table'>2</ns0:ref>. Mass of the palm and fingers. expressed as m 1 = 0.006 &#215; M body . 
Here, by assuming that the body mass of a human that works with the hand-held devices may vary between [50, 90] kg, the minimum and maximum possible values of m 1 and &#968; h based on Equation ( <ns0:ref type='formula' target='#formula_20'>18</ns0:ref>) are calculated and presented in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>The Hand-Glove System Representation Based on the Fuzzy T-S Model with PDC controller</ns0:head><ns0:p>By considering the T-S fuzzy model ( <ns0:ref type='formula'>22</ns0:ref>) and the fuzzy rules (24) derived for the hand-glove system with variable hand masses and by assuming the maximum and minimum hand masses of m min 1 = 0.3Kg , m max 1 = 0.54Kg, respectively, the associated subsystems based on the Equation ( <ns0:ref type='formula'>23</ns0:ref>) are derived as follows: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_64'>A 1 = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 &#8722;12223.</ns0:formula><ns0:formula xml:id='formula_65'>&#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; A 2 = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728; 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 &#8722;6790.</ns0:formula><ns0:formula xml:id='formula_66'>&#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739; B 1 = B 2 = 0 0 0 0 100 &#8722;1.77 0 0 0 0 &#8722;100 0 T E 1 = E 2 = 0 0 0 0 0 1.77 T C 1 = C 2 = 0 0 0 1 0 0 . (<ns0:label>46</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the simulation work, the vibration source is assumed to be a 2-stroke engine that is based on <ns0:ref type='bibr' target='#b15'>(Ko et al., 2011)</ns0:ref>, producing a vibration given by the sum of a white noise signal and sine waves with frequencies of 50, 80 and 100 Hz and amplitude of 8, 5 and 3, respectively. The input disturbance to the system and its frequency spectrum is shown in Figure <ns0:ref type='figure'>8</ns0:ref> (A) and (B), respectively. As can be seen in The amount of vibration received by the user in the passive mode i.e. no controller is applied to the system is represented in Figure <ns0:ref type='figure' target='#fig_15'>9</ns0:ref>. By applying the input disturbance to the system without using any controller, the hand vibration decreased to under 10 m/s 2 but still, it is not in the healthy range for humans.</ns0:p><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_15'>9</ns0:ref> (B), the vibration spectra peak related to the hand acceleration in the passive model is reduced to 2.5 m/s 2 .</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Applying the Proposed Fuzzy Robust PDC Controller</ns0:head><ns0:p>In the hand-glove fuzzy system (46), the mass of the hand is considered to be 0.4 kg and values of m min 1 and m max 1 is chosen to be 0.3 and 0.54 kg, respectively (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). 
By considering the system matrices in (46) and solving the LMIs in Theorem 1, which were implemented using MATLAB R2018a and YALMIP <ns0:ref type='bibr' target='#b21'>(Lofberg, 2004)</ns0:ref> and solved using MOSEK 8, the values of the feedback gains of the fuzzy PDC controller (F 1 and F 2 ) and the P matrix are obtained as follows: Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_67'>P = &#63726; &#63727; &#63727; &#63727; &#63727; &#63727; &#63727; &#63728;<ns0:label>(47)</ns0:label></ns0:formula><ns0:formula xml:id='formula_69'>&#63737; &#63738; &#63738; &#63738; &#63738; &#63738; &#63738; &#63739;<ns0:label>(48)</ns0:label></ns0:formula><ns0:p>The above feedback gains F1 and F2 give positive eigenvalues of P that satisfy the positive definiteness To show the robustness of the proposed controller to the variation of the hand masses m 1 , we have changed the hand mass of hand m 1 from 0.4 kg to 0.5 kg. As shown in Figure <ns0:ref type='figure' target='#fig_3'>12</ns0:ref>, the amplitude of the hand vibration has only a small variation when compared with Figure <ns0:ref type='figure' target='#fig_16'>10</ns0:ref> (A) and is still within the healthy range for humans.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>CONCLUSION</ns0:head><ns0:p>In this paper, a biodynamic model of the hand-glove system was developed based on the available models for the hand-glove system but focusing more on the vibration experienced by the hand. Then, a robust fuzzy PDC controller was proposed to minimize the vibration transmitted to the hand. By applying the designed controller to the hand-glove system, the amount of vibration experienced by the hand reduced to around 0.4 m/s 2 well within a healthy vibration range. Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref> summarizes the performances of the different controllers for the hand-glove system experiencing vibrations from vibrating tool usage. The vibrations transferred to the hand using the proposed fuzzy PDC were 93% and 85% less compared to the PID controller and the active force controller (AFC), respectively. Also, the proposed controller was robust to the changes of hand masses. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021) Manuscript to be reviewed Computer Science model. Section 2 describes the development of the fuzzy PDC active anti-vibration controller for the hand-glove system. Section 3 demonstrates the simulation results and analysis and the Section 4 concludes the paper.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The three DOF hand-glove model</ns0:figDesc><ns0:graphic coords='4,260.47,243.90,176.10,226.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2 shows 5/22 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Active vibration control diagram</ns0:figDesc><ns0:graphic coords='7,206.79,63.77,283.47,156.22' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
PID control schematic</ns0:figDesc><ns0:graphic coords='7,178.44,587.13,340.16,104.97' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The closed-loop system with fuzzy PDC controller</ns0:figDesc><ns0:graphic coords='8,171.35,344.85,354.35,142.64' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Membership Functions M 1 &#968; h (t) and M 2 &#968; h (t) .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>/ 22 PeerJ</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The overall design procedure</ns0:figDesc><ns0:graphic coords='15,177.55,63.78,341.94,326.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Membership Functions M 1 &#968; h (t) and M 2 &#968; h (t) for &#968; min h = 1.85 and &#968; max h = 3.33.</ns0:figDesc><ns0:graphic coords='16,235.13,63.78,226.78,126.59' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 8 (Figure 8 .</ns0:head><ns0:label>88</ns0:label><ns0:figDesc>Figure 8 (B) the engine has the highest peak at 50 Hz with a magnitude of 5.5 m/s 2 .</ns0:figDesc><ns0:graphic coords='17,183.09,141.64,330.87,388.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Vibration of the passive model. (A) The hand vibration in passive model, without applying a controller, (B) Frequency spectra of the hand vibration in passive model</ns0:figDesc><ns0:graphic coords='18,183.09,63.78,330.87,375.72' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. System vibration with PDC controller. (A) The vibration at the hand after applying PDC controller (B) The frequency spectra of the hand vibration after applying PDC controller</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 11 .Figure 12 .</ns0:head><ns0:label>1112</ns0:label><ns0:figDesc>Figure 11. 
The comparison result of applying different controllers</ns0:figDesc><ns0:graphic coords='20,145.19,63.78,400.64,300.48' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,164.27,63.78,368.51,221.10' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gloved-hand model parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Symbol Description</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>. In the following, by considering the Minimum and maximum values of m 1 and &#968; h . membership functions given in Equation (21) and by substituting the minimum and maximum values of m 1 from Table3, they are obtained as in Equation (45) and represented in Figure7.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Parameter Minimum Maximum</ns0:cell></ns0:row><ns0:row><ns0:cell>m 1</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.54</ns0:cell></ns0:row><ns0:row><ns0:cell>&#968; h</ns0:cell><ns0:cell>1.85</ns0:cell><ns0:cell>3.33</ns0:cell></ns0:row><ns0:row><ns0:cell>M 1 &#968; h (t) = 0.68&#968; h (t) &#8722; 1.25,</ns0:cell><ns0:cell /><ns0:cell>(45a)</ns0:cell></ns0:row><ns0:row><ns0:cell>M 2 &#968; h (t) = &#8722;0.68&#968; h (t) + 2.25</ns0:cell><ns0:cell /><ns0:cell>(45b)</ns0:cell></ns0:row></ns0:table><ns0:note>14/22PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63355:1:1:NEW 6 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>m min 1 m max 1</ns0:cell><ns0:cell>0.3Kg 0.54Kg</ns0:cell><ns0:cell>c 2 c 3</ns0:cell><ns0:cell>36N/m 1.53N/m</ns0:cell></ns0:row><ns0:row><ns0:cell>m 2</ns0:cell><ns0:cell>0.01Kg</ns0:cell><ns0:cell>k 1</ns0:cell><ns0:cell>0.01N.s/m</ns0:cell></ns0:row><ns0:row><ns0:cell>m 3</ns0:cell><ns0:cell>0.078Kg</ns0:cell><ns0:cell>k 2</ns0:cell><ns0:cell>3667N.s/m</ns0:cell></ns0:row><ns0:row><ns0:cell>c 1</ns0:cell><ns0:cell>1N/m</ns0:cell><ns0:cell>k 3</ns0:cell><ns0:cell>1152N.s/m</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The parameter's values used in simulations.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
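The fuzzy PDC law in the paper above blends two local state-feedback gains according to the hand-mass membership functions of Equations (17) and (21). The short Python sketch below illustrates that blending for the hand-mass range of Table 3 (0.3-0.54 kg); the gain matrices F1 and F2 are placeholders standing in for the LMI solutions of Equation (48), so this is only an illustration of the mechanism, not the authors' implementation.

```python
import numpy as np

m1_min, m1_max = 0.3, 0.54                     # hand-mass range, Table 3 (kg)
psi_min, psi_max = 1.0 / m1_max, 1.0 / m1_min  # approx. 1.85 and 3.33

def memberships(m1):
    """Membership grades M1, M2 of Equation (21) for a hand mass m1 (kg)."""
    psi = 1.0 / m1                                   # premise variable, Eq. (17)
    m_small = (psi - psi_min) / (psi_max - psi_min)  # Eq. (21a), ~0.68*psi - 1.25
    return m_small, 1.0 - m_small                    # Eq. (21b)

def pdc_control(x, m1, F1, F2):
    """Blended PDC law u = -(h1*F1 + h2*F2) x of Equation (14); with a single
    premise variable and M1 + M2 = 1, the weights h_i equal M_i."""
    h1, h2 = memberships(m1)
    return -(h1 * F1 + h2 * F2) @ x

# Placeholder gains of the right shape (the true values come from Eq. (48)).
F1 = np.ones((1, 6))
F2 = np.ones((1, 6))
x = np.zeros(6)                      # [z1, z2, z3, z1_dot, z2_dot, z3_dot]
print(pdc_control(x, m1=0.4, F1=F1, F2=F2))   # nominal 0.4 kg hand, Section 3.2
```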
" Manuscript ID : 63355 Title : Design of a robust active Fuzzy parallel distributed compensation anti-vibration controller for a hand-glove system Authors : Leila Rajabpour Hazlina Selamat (Corresponding Author) Alireza Barzegar Mohamad Fadzli Haniff Dear Sir/Madam, Response to Reviewers’ Comments Thank you very much for forwarding the reviewers’ comments. The following are my responses to address the comments on my original submitted paper. Comments from Reviewer 1: 1. More recent research results should be added and commented in the introduction. I have included more recent works in the Introduction section as follows: Line 33: Shen and House, 2017. Line 36: Gerhardsson, et al., 2020; Vihlborg et al., 2017. Line 38: Hamouda et al., 2017. Line 41: Rezali and Griffin, 2018 Line 48: Kamalakar and Mitra, 2018. Line 57: Preumont, 2018. Line 61: Gao and Liu, 2020; Tahoun, 2020. Line 62: Chen et al., 2019; Li et al., 2017; Tahoun, 2017. Line 67: Hamzah et al., 2012; Lin et al., 2019. Line 68: Theik and Mazlan, 2020. Line 69: Lekshmi and Ramachandran, 2019. Line 71: Liu et al., 2018. Line 72: Sun et al., 2018. Line 79: Rajabpour et al., 2019. Line 174: Abdelmaksoud et al., 2020; Gohari and Tahmasebi, 2017. 2. What is the criterion used to avoid vibration in your work? Our approach involves incorporating an active vibration control system to the anti-vibration gloves (lines 23 – 30 of the Abstract and lines 92 – 95 in the Introduction section). In particular, we designed a fuzzy parallel distributed compensation (Fuzzy PDC) controller for this purpose. 3. The novelty of your results over earlier ones should be clearly discussed in the introduction. Thank you for the feedback. We have included the novelty of this paper in lines 86 – 95 of the Introduction section. We highlighted that the novelty of the paper are the T-S fuzzy modeling of the hand-glove system for variable hand mass parameter and the design of an active anti-vibration glove based on the fuzzy PDC controller. This controller is robust to variation in hand masses, making the active anti-vibration glove suitable for different users. 4. In your work, you assumed that the premise variables are known, what about unknown premise variables? The assumption on the system’s parameters are addressed in lines 106 – 110 of Section 1 and line 120 of Section 1.1. The assumption on disturbance to the system are mentioned in lines 228- 230 of Section 2.3 and the source of vibration considered in simulations is mentioned in lines 267 – 269 of Sec. 3.1. The parameter values that used in the simulation work are addressed in Table 4. 5. In the industry, all control systems work in noisy environment in which there exist faults, disturbances/noise and delays, what about these uncertainties in your work. Please discuss This has been addressed in comment no 1. I have included some citations. In this paper, we have assumed that the vibration source is based on Ko, et. al. (2011) (lines 267 – 269). The uncertainties included are the hand masses and sensor measurement noise in terms of a white noise signal. 6. There are many design parameters. Therefore, a complete flowchart or design procedure should be added to the text. Figure 6 is added for better clarity of the design procedure. 7. You should compare your results with the existing results. I have included the performance of the designed controller with two benchmark controllers in line 25-30 of the Abstract, which is later shown in the Simulation section. 8. 
Some remarks should be added to declare the effects of the design parameters in the simulation results. I have added the remarks for: Figures 11 and 13 that show the performance of the different controllers in rejecting the vibration from hand. Figure 12 that shows the vibration of the hand with PDC controller while changing mass of hand (). Comments from Reviewer 2: Additional comments 1. There are some minor problems that need attention. Comment and suggestions are: Equation (2c) is part of Equation (2b). Both Equations should be one. Thank you for mentioning that. It has been updated. 2. In line 133 of subsection 1.2, should the variables x, u and w be in bold letter? Yes, I have already updated this and is now in line 136 of subsection 1.2. 3. In simulation results, How were the PID gains adjusted? Were they adjusted by experimentation? Some remarks should be added to the paper. Yes, they have been adjusted by trial and error method. I have mentioned this in line 167 of subsection. 2.1. 4. In the description of the active vibration control diagram of section 2, the authors comments that the incoming vibration is sensed by using a sensing mechanism, and then a control signal is generated from the measurements of the displacement, velocity or acceleration of the mass M that are fed back. It would be worthy to mention what type of sensors can be used to perform the measurements. Piezoelectric sensors are used to measure the vibration signal which has been used in the other related works. I remarked this in line 151. 5. The raw data is shared, however, there are some problems. There is not any description of how to run the simulations files, I figured it out. Simulation file of AFC controller, PID controller and Passive glove hand system run without any problem. However, the Simulink file of Fuzzy PDC controller fails because variables K_F1 and K_F2 were not declared correctly. I fixed it by changing the variables K_F1 and K_F2 to K_f1 and K_f2 in the file fuzzy_PDC_Controlled_Gloved_Hand_System_Data.m, respectively. I think is important to add a description to run the files and upload the fuzzy_PDC_Controlled_Gloved_Hand_System_Data.m file fixed. Thank you so much for mentioning that. The files have been updated. Comments from Reviewer 3: Basic reporting 1. Some explanation can be provided in detail to highlight the contribution. I have added some explanation on the novelty of our work which has been addressed in lines 86-95 of the Introduction section. 2. No comparisons are made with the results published in other pertinent references. We have included the discussion in the Abstract (lines 27-30) and also in the Simulation section (Fig. 11 and 13) that compare the performance of the designed controller over the two other benchmark controllers. 3. More details about the selection of the design parameters should be provided for better understand. Thank you for mentioning that. The design parameters are obtained based on solving the LMI’s given in Theorem 1 and Equations 27(a) to 27(d). 4. What is the new of this research? Thank you for the feedback. We have included the novelty of this paper in lines 86 – 95 of the Introduction section. We highlighted that the novelty of the paper are the T-S fuzzy modeling of the hand-glove system for variable hand mass parameter and the design of an active anti-vibration glove based on the fuzzy PDC controller. This controller is robust to variation in hand masses, making the active anti-vibration glove suitable for different users. 5. 
In simulation part, the procedure is too brief to reproduce for interested readers. Some parameters should be given. Figure 6 was added to represent the procedure of the modelling of the system and controller design steps. The parameter values that used in the simulation work are addressed in Table 4. 6. A general revision of the text is necessary, because there are small typing problems. Thank you for mentioning this, we have gone through the paper in details and make the necessary corrections. 7. What are the differences between this work and previous work for controller algorithm? The designed fuzzy PDC controller for reducing the vibration of the gloved-hand system with variable hand masses have been proposed in this work. This is the key difference between the work presented in this paper and other published works. 8. The presentation quality for simulation results should be further improved. Moreover, the physical meaning for the discussion for results should be given, which is very important. I have improved the quality of the figures and made them more organised. I have added a bar chart for better presentation of results. Figure 6 is added for better understanding of the design procedure. 9. The reference list must be updated. I have updated the reference list and added the following references in the paper: Line 33: Shen and House, 2017. Line 36: Gerhardsson, et al., 2020; Vihlborg et al., 2017. Line 38: Hamouda et al., 2017. Line 41: Rezali and Griffin, 2018 Line 48: Kamalakar and Mitra, 2018. Line 57: Preumont, 2018. Line 61: Gao and Liu, 2020; Tahoun, 2020. Line 62: Chen et al., 2019; Li et al., 2017; Tahoun, 2017. Line 67: Hamzah et al., 2012; Lin et al., 2019. Line 68: Theik and Mazlan, 2020. Line 69: Lekshmi and Ramachandran, 2019. Line 71: Liu et al., 2018. Line 72: Sun et al., 2018. Line 79: Rajabpour et al., 2019. Line 174: Abdelmaksoud et al., 2020; Gohari and Tahmasebi, 2017 "
Here is a paper. Please give your review comments after reading it.
260
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Anomaly detection is a challenging task that frequently arises in practically all areas of industry and science, from fraud detection and data quality monitoring to finding rare cases of diseases and searching for a new physics. Most of the conventional approaches to anomaly detection, such as one-class SVM and Robust Auto-Encoder, are one-class classification methods, i.e. focus on separating normal data from the rest of the space. Such methods are based on the assumption of separability of normal and anomalous classes, and subsequently do not take into account any available samples of anomalies. Nonetheless, in practical settings, some anomalous samples are often available; however, usually in amounts far lower than required for a balanced classification task, and the separability assumption might not always hold. This leads to an important task -incorporating known anomalous samples into training procedures of anomaly detection models.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In this work, we propose a novel model-agnostic training procedure to address this task. We reformulate one-class classification as a binary classification problem with normal data being distinguished from pseudo-anomalous samples. The pseudo-anomalous samples are drawn from low-density regions of a normalizing flow model by feeding tails of the latent distribution into the model. Such an approach allows to easily include known anomalies into the training process of arbitrary classifier. We demonstrate that our approach shows comparable performance on one-class problems, and, most importantly, achieves comparable or superior results on tasks with variable amounts of known anomalies.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The anomaly detection (AD) problem is one of the important tasks in the analysis of real-world data.</ns0:p><ns0:p>Possible applications range from the data-quality certification (for example, <ns0:ref type='bibr' target='#b4'>Borisyak et al., 2017)</ns0:ref> to finding the rare specific cases of the diseases in medicine <ns0:ref type='bibr' target='#b40'>(Spence et al., 2001)</ns0:ref>. The technique can be also used in the credit card fraud detection <ns0:ref type='bibr' target='#b1'>(Aleskerov et al., 1997)</ns0:ref>, complex systems failure predictions <ns0:ref type='bibr' target='#b45'>(Xu and Li, 2013)</ns0:ref>, and novelty detection in time series data <ns0:ref type='bibr' target='#b39'>(Schmidt and Simic, 2019)</ns0:ref>.</ns0:p><ns0:p>Formally, AD is a classification problem with a representative set of normal samples and small, non-representative or empty set of anomalous examples. Such a setting makes conventional binary classification methods to be overfitted and not to be robust w.r.t. novel anomalies <ns0:ref type='bibr' target='#b12'>G&#246;rnitz et al. (2012)</ns0:ref>.</ns0:p><ns0:p>In contrast, conventional one-class classification (OC-) methods <ns0:ref type='bibr' target='#b7'>Breunig et al. (2000)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>Liu et al. (2012)</ns0:ref> are typically robust against all types of outliers. However, OC-methods do not take into account known anomalies which often results to suboptimal performance in cases when normal and anomalous classes are not perfectly separable <ns0:ref type='bibr' target='#b8'>Campos et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Pang et al. 
(2019b)</ns0:ref>). The research in the area address several challenges <ns0:ref type='bibr' target='#b23'>Pang et al. (2021)</ns0:ref> that lie in the field of increasing precision, generalising to unknown anomaly class, and tackling multi-dimensional data. Several review of classical <ns0:ref type='bibr' target='#b47'>Zimek et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b0'>Aggarwal (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Boukerche et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Belhadi et al. (2020)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We propose 1 addressing the class-imbalanced classification task by modifying the learning procedure that effectively makes anomaly detection methods suitable for a two-class classification. Our approach relies on imbalanced dataset augmentation by surrogate anomalies sampled from normalizing flow-based generative models.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>PROBLEM STATEMENT</ns0:head><ns0:p>Classical AD methods consider anomalies a priori significantly different from the normal samples <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2016)</ns0:ref>. In practice, while such samples are, indeed, most likely to be anomalous, often some anomalies might not be distinguishable from normal samples <ns0:ref type='bibr' target='#b13'>(Hunziker et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b34'>Pol et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Borisyak et al., 2017)</ns0:ref>. This provides a strong motivation to include known anomalous samples into the training procedure to improve the performance of the model on these ambiguous samples. Technically, this leads to a binary classification problem which is typically solved by minimizing cross-entropy loss function L BCE :</ns0:p><ns0:formula xml:id='formula_0'>f * (x) = arg min f L BCE ( f );</ns0:formula><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_1'>L BCE ( f ) = P(C + ) E x&#8764;C + log f (x) + P(C &#8722; ) E x&#8764;C &#8722; log (1 &#8722; f (x)) ;<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where: f is a arbitrary model (e.g., a neural network), C + and C &#8722; denote normal and anomalous classes.</ns0:p><ns0:p>In this case, the solution f * approaches the optimal Bayesian classifier:</ns0:p><ns0:formula xml:id='formula_2'>f * (x) = P(C + | x) = p(x | C + )p(C + ) p(x | C + )p(C + ) + p(x | C &#8722; )p(C &#8722; ) .<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Notice, that f * implicitly relies on the estimation of the probability densities P(x | C + ) and P(x | C &#8722; ).</ns0:p><ns0:p>A good estimation of these densities is possible only when a sufficiently large and representative sample is available for each class. In practical settings, this assumption certainly holds for the normal class.</ns0:p><ns0:p>However, the anomalous dataset is rarely large or representative, often consisting of only few samples or covering only a portion of all possible anomaly types 2 . With only a small number of examples (or a non-representative sample) to estimate the second term of Equation ( <ns0:ref type='formula' target='#formula_1'>2</ns0:ref>), L BCE effectively does not depend on f (x) in x &#8712; suppC &#8722; \ suppC + , which leads to solutions with arbitrary predictions in the area, i.e., to classifiers that are not robust to novel anomalies.</ns0:p><ns0:p>One-class classifiers avoid this problem by aiming to explicitly separate normal class from the rest of the space. 
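As a side illustration of Equation (2), the following minimal Python sketch (not the authors' code) computes the class-prior-weighted binary cross-entropy for a batch of normal and known anomalous samples. Equation (2) as printed is the expected log-likelihood; the sketch takes its negative, which is the form that is actually minimized in practice. The inputs are hypothetical: model outputs f(x) in (0, 1) for each class, and class priors estimated from the training set.

```python
import torch

def weighted_bce(f_pos, f_neg, p_pos, p_neg, eps=1e-7):
    # Monte-Carlo estimate of Equation (2), negated so that it is minimized.
    # f_pos / f_neg: model outputs f(x) for normal / known anomalous samples.
    # p_pos / p_neg: class priors P(C+) and P(C-).
    term_pos = torch.log(f_pos.clamp(min=eps)).mean()
    term_neg = torch.log((1.0 - f_neg).clamp(min=eps)).mean()
    return -(p_pos * term_pos + p_neg * term_neg)

# With only a handful of known anomalies, f_neg contains very few entries and
# the second term is a very noisy estimate: the imbalance problem discussed
# in the text.
```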
As discussed above, this approach, however, ignores available anomalous samples, potentially leading to incorrect predictions on ambiguous samples.</ns0:p><ns0:p>Recently, semi-supervised AD algorithms like 1 + &#949;-classification method <ns0:ref type='bibr' target='#b5'>(Borisyak et al., 2020)</ns0:ref>, Deep Semi-supervised AD method <ns0:ref type='bibr' target='#b36'>(Ruff et al., 2019)</ns0:ref>, Feature Encoding with AutoEncoders for Weaklysupervised Anomaly Detection <ns0:ref type='bibr' target='#b46'>(Zhou et al., 2021)</ns0:ref> and Deep Weakly-supervised Anomaly Detection <ns0:ref type='bibr' target='#b24'>(Pang et al., 2019a)</ns0:ref> were put forward. They aim to combine the main properties of both unsupervised (one-class)</ns0:p><ns0:p>and supervised (binary classification) approaches: proper posterior probability estimations of binary classification and robustness against novel anomalies of one-class classification.</ns0:p><ns0:p>In this work, we propose a method that extends the 1 + &#949;-classification method <ns0:ref type='bibr' target='#b5'>(Borisyak et al., 2020)</ns0:ref> scheme by exploiting normalizing flows. The method is based on sampling the surrogate anomalies to augment the existing anomalies dataset using advanced techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>NORMALIZING FLOWS</ns0:head><ns0:p>The normalizing flows <ns0:ref type='bibr' target='#b35'>(Rezende and Mohamed, 2015)</ns0:ref> generative model aims to fit the exact probability distribution of data. It represents a set of invertible transformations { f i (&#8226;; &#952; i )} with parameters &#952; i , to obtain a bijection between given distribution of training samples and some domain distribution with known probability density function (PDF). However, in the case of non-trivial bijection z 0 &#8596; z k , the distribution density at the final point z k (training sample) differs from the density at point z 0 (domain).</ns0:p><ns0:p>1 Source code is available at https://gitlab.com/lambda-hse/nfad 2 Moreover, anomalies typically cover a much larger 'phase space' than normal samples, thus, generic models (e.g. a deep neural network with fully connected layers) might require significantly more anomalous examples than normal ones.</ns0:p></ns0:div> <ns0:div><ns0:head>2/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This is due to the fact that each non-trivial transformation f i (&#8226;; &#952; i ) changes the infinitesimal volume at some points. Thus, the task is not only to find a flow of invertible transformations { f i (&#8226;; &#952; i )}, but also to know how the distribution density is changed at each point after each transformation f i (&#8226;; &#952; i ).</ns0:p></ns0:div> <ns0:div><ns0:head>Consider the multivariate transformation of variable z z z</ns0:head><ns0:formula xml:id='formula_3'>i = f f f i (z z z i&#8722;1 ; &#952; i ) with parameters &#952; i for i &gt; 0.</ns0:formula><ns0:p>Then, Jacobian for a given transformation f f f i (z z z i&#8722;1 ; &#952; i ) at given point z z z i&#8722;1 has the following form:</ns0:p><ns0:formula xml:id='formula_4'>J( f f f i |z z z i&#8722;1 ) = &#8706; f f f i &#8706; z 1 i&#8722;1 . . . &#8706; f f f i &#8706; z n i&#8722;1 = &#63726; &#63727; &#63727; &#63727; &#63728; &#8706; f 1 i&#8722;1 &#8706; z 1 i&#8722;1 . . . &#8706; f 1 i&#8722;1 &#8706; z n i&#8722;1 . . . . . . . . . 
&#8706; f m i&#8722;1 &#8706; z 1 i&#8722;1 . . . &#8706; f m i&#8722;1 &#8706; z n i&#8722;1 &#63737; &#63738; &#63738; &#63738; &#63739; (4)</ns0:formula><ns0:p>Then, the distribution density at point z z z i after the transformation f f f i of point z z z i&#8722;1 can be written in a following common way:</ns0:p><ns0:formula xml:id='formula_5'>p(z z z i ) = p(z z z i&#8722;1 ) | det J( f f f i |z z z i&#8722;1 )| ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where det <ns0:ref type='bibr' target='#b35'>and Mohamed, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_6'>J( f f f i |z z z i&#8722;1 ) is a determinant of the Jacobian matrix J( f f f i |z z z i&#8722;1 ) (Rezende</ns0:formula><ns0:p>Thus, given a flow of invertible transformations</ns0:p><ns0:formula xml:id='formula_7'>f f f = { f f f i (&#8226;; &#952; i )} N i=1 with known {det J( f f f i |&#8226;)} N i=1 and domain distribution of z 0 z 0 z 0 with known p.d.f. p(z 0 z 0 z 0 ), we obtain likelihood p(x x x) for each object x x x = z N z N z N . This way, the parameters {&#952; i } N i=1 of NF model f f f can be fitted by explicit maximizing the likelihood p(x x x) for training objects x x x &#8712; X. In practice, Monte-Carlo estimate of log p(X) = log &#928; x&#8712;X p(x) = &#931; x&#8712;X log p(x) is</ns0:formula><ns0:p>optimized, which is an equivalent optimization procedure. Also, the likelihood p(X) can be used as a metric of how well the NF model f f f fits given data X.</ns0:p><ns0:p>The main bottleneck of that scheme is located in that det</ns0:p><ns0:formula xml:id='formula_8'>J(&#8226;|&#8226;) computation, which is O(n 3 ) in a</ns0:formula><ns0:p>common case (n is the dimension of variable z z z). In order to deal with that problem, specific normalizing flows with specific families of transformations f f f are used, for which Jacobian computation is way faster <ns0:ref type='bibr' target='#b35'>(Rezende and Mohamed, 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Papamakarios et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kingma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b10'>Chen et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>ALGORITHM</ns0:head><ns0:p>The suggested NF-based AD method (NFAD) is a two-step procedure. In the first step, we train normalizing flow on normal samples to sample new surrogate anomalies. Here, we assume that anomalies differ from normal samples and its likelihood p NF (x &#8722; |C + ) is less than likelihood of normal samples p NF (x + |C + ). In the second step, we sample new surrogate anomalies from tails of normal samples distribution using NF and train an arbitrary binary classifier on normal samples and a mixture of real and sampled surrogate anomalies.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Step 1. Training normalizing flow</ns0:head><ns0:p>We train normalizing flow on normal samples. It can be trained by standard for normalizing flows scheme of maximization the log-likelihood (see Section 3): </ns0:p><ns0:formula xml:id='formula_9'>max &#952; L NF (6) L NF = E x&#8764;C + log p f (x) (7) = E z&#8764; f &#8722;1 (C + ;&#952; ) log p(z) &#8722; log | det J( f |z)| ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>(x) = p(z) J( f |z) z= f &#8722;1 (x;&#952; ) .</ns0:formula><ns0:p>After NF for sampling is trained, it can be used to sample new anomalies. 
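Before turning to the sampling itself, the flow-fitting step can be made concrete with a minimal, self-contained PyTorch sketch of maximum-likelihood training on normal samples, in the spirit of Equations (6)-(8). This is an illustrative toy, not the authors' implementation: it uses a RealNVP-style affine coupling block as a stand-in for the IAF, NSF and ResFlow transformations used in the paper, and the tensor `x_normal` of normal training samples is a hypothetical placeholder.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # RealNVP-style affine coupling block (assumes an even input dimension);
    # a toy stand-in for the IAF / NSF / ResFlow blocks used in the paper.
    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0
        self.half, self.flip = dim // 2, flip
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def _split(self, v):
        a, b = v[:, :self.half], v[:, self.half:]
        return (b, a) if self.flip else (a, b)

    def _merge(self, a, b):
        return torch.cat((b, a), 1) if self.flip else torch.cat((a, b), 1)

    def forward(self, z):                      # z -> x, used for sampling
        za, zb = self._split(z)
        s, t = self.net(zb).chunk(2, dim=1)
        s = torch.tanh(s)                      # bounded log-scales for stability
        return self._merge(za * torch.exp(s) + t, zb)

    def inverse(self, x):                      # x -> z, used for likelihood training
        xa, xb = self._split(x)
        s, t = self.net(xb).chunk(2, dim=1)
        s = torch.tanh(s)
        return self._merge((xa - t) * torch.exp(-s), xb), -s.sum(dim=1)

class Flow(nn.Module):
    def __init__(self, dim, n_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_blocks)])

    def log_prob(self, x):                     # Equation (8): log p(z) + log|det dz/dx|
        z, log_det = x, torch.zeros(x.shape[0], device=x.device)
        for block in reversed(self.blocks):
            z, ld = block.inverse(z)
            log_det = log_det + ld
        log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * x.shape[1] * math.log(2 * math.pi)
        return log_pz + log_det

    def sample(self, z):                       # push latent points into data space
        for block in self.blocks:
            z = block(z)
        return z

# Step 1 (Equations 6-8): fit the flow to normal samples only.
# `x_normal` is a hypothetical float tensor of shape (n_samples, dim).
flow = Flow(dim=x_normal.shape[1])
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(2000):
    batch = x_normal[torch.randint(0, x_normal.shape[0], (256,))]
    loss = -flow.log_prob(batch).mean()        # negative log-likelihood
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```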
To produce new anomalies, we sample z from tails of normal domain distribution, where p-value of tails is a hyperparameter (see Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_11'>L J = E z&#8764;N (0,1) log(| det J( f |z)|) 2 (9) max &#952; L NF &#8722; &#955; * L J , &#955; &#8805; 0, (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>)</ns0:formula><ns0:p>where &#955; denotes the regularization hyperparameter. We estimate the regularization term L J in (9) by direct sampling of z from the domain distribution N (0, I) to cover the whole sampling space. The theorem below proofs that any level of expected distortion can be obtained with such a regularization: Then</ns0:p><ns0:formula xml:id='formula_13'>&#8704;&#949; &gt; 0 &#8707;&#955; &#8805; 0 such that E z&#8764;D log(| det J( f |z)|) 2 &#952; * &lt; &#949; &#8704;z &#8764; &#8486; &#8712; R d , where &#952; * &#8712; arg min &#952; &#8722; E x&#8764;C + log p f (x) + &#955; E z&#8764;D log(| det J( f |z)|) 2 , p f (x) = p(z) J( f |z) z= f &#8722;1 (x;&#952; ) .</ns0:formula><ns0:p>Proof. Suppose the opposite. Let &#8707;&#949; &gt; 0 s.t. &#8704;&#955; &#8805; 0 :</ns0:p><ns0:formula xml:id='formula_14'>E z&#8764;D log(|J( f |z)|) 2 &#952; * &#8805; &#949; for all &#952; * &#8712; arg min &#952; &#8722; E x&#8764;C + log p f (x) + &#955; E z&#8764;D log(| det J( f |z)|) 2 . Since &#8707;&#952; 0 : f (&#8226;; &#952; 0 ) = I, p f ( f (z; &#952; 0 )) = p(z) &#8704;z &#8764; &#8486;, the term E z&#8764;D log(| det J( f |z)|) 2 &#952; 0 = 0 since p(z) p( f (z; &#952; 0 )) = | det J( f |z)| &#952; 0 = 1 &#8704;z &#8712; &#8486; 4/12</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Let <ns0:ref type='bibr' target='#b33'>(Pedregosa et al., 2011)</ns0:ref>. Without extra regularization distribution density of domain distribution (2a) significantly differs from the target distribution (2b) because of non-unit Jacobian. To preserve the distribution density after NF transformations Jacobian regularization (9) can be used (2c and 2d respectively)</ns0:p><ns0:formula xml:id='formula_15'>&#8722; E x&#8764;C + log p f (x) &#952; 0 = &#8722; E z&#8764;C + log p(z) = c 0 , min &#952; &#8722; E x&#8764;C + log p f (x) = c</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2'>Step 2. Training classifier</ns0:head><ns0:p>Once normalizing flow for anomaly sampling is trained, a classifier can be trained on normal samples and a mixture of real and surrogate anomalies sampled from NF (Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). During the research, we used binary cross-entropy objective (2) to train the classifier. 
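Before moving on to the classifier configuration, the tail-sampling part of Step 1 can be sketched as follows. The text does not spell out how the tails of the multivariate standard normal are defined, so the snippet below assumes one plausible reading: a latent point is "in the tail" when its squared norm exceeds the (1 - p)-quantile of the chi-square distribution with `dim` degrees of freedom, and such points are obtained by rejection sampling before being pushed through the trained flow. The `flow` object and its `sample` method refer to the training sketch above; the Jacobian regularization of Equations (9)-(10) would simply be added to the flow's training loss in Step 1 and is omitted here.

```python
import torch
from scipy.stats import chi2

def sample_tail_latents(n, dim, p_tail):
    # Assumption: "tail" = squared norm above the (1 - p_tail) chi-square
    # quantile with `dim` degrees of freedom; draw by simple rejection.
    threshold = float(chi2.ppf(1.0 - p_tail, df=dim))
    accepted = []
    while sum(t.shape[0] for t in accepted) < n:
        z = torch.randn(4 * n, dim)
        accepted.append(z[(z ** 2).sum(dim=1) > threshold])
    return torch.cat(accepted)[:n]

# Surrogate anomalies: push tail latents through the trained flow
# (`flow.sample` maps latent points to data space, as in the sketch above).
z_tail = sample_tail_latents(n=5000, dim=x_normal.shape[1], p_tail=0.05)
with torch.no_grad():
    x_surrogate = flow.sample(z_tail)
```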
We do not focus on classifier configuration since any classification model can be used at this step.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Final algorithm</ns0:head><ns0:p>The final scheme of algorithm is shown at Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='5'>RESULTS</ns0:head><ns0:p>We evaluate the proposed method on the following tabular and image datasets: KDD-99 <ns0:ref type='bibr' target='#b41'>(Stolfo et al., 1999)</ns0:ref>, SUSY <ns0:ref type='bibr' target='#b42'>(Whiteson, 2014)</ns0:ref>, HIGGS <ns0:ref type='bibr' target='#b2'>(Baldi et al., 2014)</ns0:ref>, <ns0:ref type='bibr'>MNIST (LeCun et al., 1998)</ns0:ref>, Omniglot <ns0:ref type='bibr' target='#b19'>(Lake et al., 2015)</ns0:ref> and CIFAR <ns0:ref type='bibr' target='#b18'>(Krizhevsky and Hinton, 2009)</ns0:ref>. In order to reflect typical AD cases behind approach, we derive multiple tasks from each dataset by varying size of anomalous datasets.</ns0:p><ns0:p>As the proposed method targets problems that are intermediate between one-class and two-class problems, we compare the proposed approach with the following algorithms:</ns0:p><ns0:p>&#8226; one-class methods: Robust AutoEncoder (RAE-OC, <ns0:ref type='bibr' target='#b9'>Chalapathy et al. (2017)</ns0:ref>) and Deep SVDD <ns0:ref type='bibr' target='#b37'>(Ruff et al., 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>6/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Since not all of the evaluated algorithms allow for a probabilistic interpretation, ROC AUC metric is reported. Tables <ns0:ref type='table' target='#tab_2'>1, 2 and 3</ns0:ref> show the experimental results on tabular data. 
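To tie Step 2 and the evaluation protocol together, here is a hypothetical end-to-end sketch: the classifier (any model; gradient boosting is only an illustrative choice, underlining that the scheme is model-agnostic) is trained on normal samples against the union of known and surrogate anomalies, and ROC AUC is computed on a held-out set. The arrays `x_normal`, `x_anom_known`, `x_surrogate` (converted to numpy if they come from the flow sketch above), `X_test` and `y_test` are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Step 2: train any off-the-shelf classifier on normal samples vs. the
# mixture of real and surrogate anomalies.
X_train = np.vstack([x_normal, x_anom_known, x_surrogate])
y_train = np.concatenate([
    np.ones(len(x_normal)),                          # normal class C+
    np.zeros(len(x_anom_known) + len(x_surrogate)),  # real + surrogate anomalies
])

clf = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluation on a held-out set (y_test = 1 for normal samples).
scores = clf.predict_proba(X_test)[:, 1]             # estimated P(normal | x)
print("ROC AUC:", roc_auc_score(y_test, scores))
```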
Tables <ns0:ref type='table' target='#tab_5'>4, 5 and 6</ns0:ref> one class 100 1000 10000 1000000 rae-oc 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 deep-svdd-oc 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 two-class -0.504 &#177; 0.017 0.529 &#177; 0.007 0.566 &#177; 0.006 0.858 &#177; 0.002 dae -0.502 &#177; 0.003 0.522 &#177; 0.003 0.603 &#177; 0.002 0.745 &#177; 0.005 brute-force-ope 0.508 &#177; 0.000 0.500 &#177; 0.009 0.520 &#177; 0.003 0.572 &#177; 0.005 0.859 &#177; 0.001 hmc-eope 0.509 &#177; 0.000 0.523 &#177; 0.005 0.567 &#177; 0.008 0.648 &#177; 0.005 0.848 &#177; 0.001 rmsprop-eope 0.503 &#177; 0.000 0.506 &#177; 0.008 0.531 &#177; 0.008 0.593 &#177; 0.011 0.861 &#177; 0.000 deep-eope 0.531 &#177; 0.000 0.537 &#177; 0.011 0.560 &#177; 0.008 0.628 &#177; 0.005 0.860 &#177; 0.001 devnet -0.565 &#177; 0.011 0.697 &#177; 0.006 0.748 &#177; 0.004 0.748 &#177; 0.003 feawad -0.551 &#177; 0.009 0.555 &#177; 0.014 0.554 &#177; 0.020 0.549 &#177; 0.018 deep-sad 0.502 &#177; 0.010 0.511 &#177; 0.006 0.561 &#177; 0.016 0.740 &#177; 0.011 0.833 &#177; 0.002 pro -0.533 &#177; 0.022 0.569 &#177; 0.011 0.570 &#177; 0.012 0.582 &#177; 0.015 nfad (iaf) 0.572 &#177; 0.009 0.574 &#177; 0.008 0.586 &#177; 0.009 0.623 &#177; 0.007 0.750 &#177; 0.008 nfad (nsf) 0.531 &#177; 0.010 0.519 &#177; 0.008 0.554 &#177; 0.009 0.659 &#177; 0.007 0.807 &#177; 0.007 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science one class 100 1000 10000 1000000 rae-oc 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 deep-svdd-oc 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 two-class -0.652 &#177; 0.031 0.742 &#177; 0.011 0.792 &#177; 0.004 0.878 &#177; 0.000 dae -0.715 &#177; 0.020 0.766 &#177; 0.009 0.847 &#177; 0.002 0.876 &#177; 0.000 brute-force-ope 0.597 &#177; 0.000 0.672 &#177; 0.020 0.748 &#177; 0.012 0.792 &#177; 0.003 0.878 &#177; 0.000 hmc-eope 0.528 &#177; 0.000 0.738 &#177; 0.019 0.770 &#177; 0.012 0.816 &#177; 0.006 0.877 &#177; 0.000 rmsprop-eope 0.528 &#177; 0.000 0.714 &#177; 0.019 0.760 &#177; 0.016 0.807 &#177; 0.004 0.877 &#177; 0.000 deep-eope 0.652 &#177; 0.000 0.670 &#177; 0.054 0.746 &#177; 0.024 0.813 &#177; 0.003 0.878 &#177; 0.000 devnet -0.747 &#177; 0.023 0.849 &#177; 0.002 0.853 &#177; 0.002 0.854 &#177; 0.004 feawad -0.758 &#177; 0.019 0.760 &#177; 0.028 0.760 &#177; 0.022 0.762 &#177; 0.025 deep-sad 0.534 &#177; 0.022 0.581 &#177; 0.027 0.785 &#177; 0.014 0.860 &#177; 0.009 0.872 &#177; 0.008 pro -0.833 &#177; 0.008 0.861 &#177; 0.002 0.863 &#177; 0.001 0.863 &#177; 0.002 nfad (iaf) 0.701 &#177; 0.007 0.801 &#177; 0.007 0.829 &#177; 0.007 0.868 &#177; 0.006 0.880 &#177; 0.000 nfad (nsf) 0.785 &#177; 0.001 0.811 &#177; 0.013 0.855 &#177; 0.012 0.865 &#177; 0.001 0.876 &#177; 0.003 </ns0:p></ns0:div> <ns0:div><ns0:head n='6'>DISCUSSION</ns0:head><ns0:p>On tabular data (Tables <ns0:ref type='table' target='#tab_2'>1, 2 and 3</ns0:ref>), the proposed NFAD method shows statistically significant improvement over other AD algorithms in many experiments, where the amount of anomalous samples is extremely low. 
Our tests on toy datasets suggest that the best results are achieved when the normal class distribution has a single mode and convex borders.</ns0:p><ns0:p>On image data (Tables <ns0:ref type='table' target='#tab_4'>4, 5</ns0:ref>, 6), the proposed method shows competitive quality along with other state-of-the-art AD methods, significantly outperforming the existing algorithms on the CIFAR dataset.</ns0:p><ns0:p>Our experiments suggest that the main reason for the proposed method to have lower performance with respect to others on image data is the tendency of normalizing flows to estimate the likelihood of images from their local features instead of common semantics, as described by <ns0:ref type='bibr' target='#b17'>Kirichenko et al. (2020)</ns0:ref>. We also find that the overfitting of the classifier must be carefully monitored and addressed, as this might lead to deterioration of the algorithm.</ns0:p><ns0:p>However, the results obtained on the HIGGS, KDD, SUSY and CIFAR-10 datasets demonstrate the big potential of the proposed method over previous AD algorithms. With the advancement of new ways of applying NF to images, the results are expected to improve for this class of datasets as well. In particular, we believe our method to be widely applicable in the industrial environment, where the task of AD can take advantage of both tabular and image-like datasets.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this work, we present a new model-agnostic anomaly detection training scheme that efficiently deals with problems that are hard to address with either one-class or two-class methods. The solution combines the best features of one-class and two-class approaches. In contrast to one-class approaches, the proposed method makes the classifier effectively utilize any number of known anomalous examples, but, unlike conventional two-class classification, it does not require an extensive number of anomalous samples. The proposed algorithm significantly outperforms the existing anomaly detection algorithms in most realistic anomaly detection cases. This approach is especially beneficial for anomaly detection problems in which anomalous data is non-representative or might drift over time.</ns0:p><ns0:p>The proposed method is fast, stable and flexible both in terms of training and inference stages; unlike previous methods, any classifier can be used in the scheme (Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). Such a universal augmentation scheme opens wide prospects for further anomaly detection study and makes it possible to use any classifier on any kind of data. Also, the results on image datasets should improve as new normalizing flow techniques become available.</ns0:p></ns0:div> <ns0:div><ns0:head>FUNDING STATEMENT</ns0:head><ns0:p>The research leading to these results has received funding from the Russian Science Foundation under grant agreement no. 19-71-30020. The research was also supported in part through computational resources of HPC facilities at NRU HSE.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>and deep-learning methods Pang et al. (2021) were published that describe the literature in detail. With the advancement of neural generative modeling, methods based on generative adversarial networks Schlegl et al. (2017), variational autoencoders Xu et al. (2018), and normalizing flows Pathak (2019) are introduced for the AD task.
PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>where f (&#8226;; &#952; ) is NF transformation with parameters &#952; , J( f |z) is Jacobian of transformation f (z; &#952; ) at point z, z are samples from multivariate standard normal domain distribution p(z) = N (z|0, I), x are normal samples from the training dataset, p f</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. NF bijection between tails of standard normal domain distribution and the Moon dataset (Pedregosa et al., 2011) for different tail p-values</ns0:figDesc><ns0:graphic coords='5,245.13,63.78,206.78,158.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Theorem 4.1. Let &#8486; &#8834; R d a sample space with probability (domain) distribution D, C + &#8834; &#8486; a class of normal samples, f (&#8226;; &#952; ) : R d &#8594; R d is a set of invertible transformations parametrized by &#952; and &#8707;&#952; 0 : f (&#8226;; &#952; 0 ) = I (identical transformation exists).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>min &lt; c 0 (minimum exists since negative log likelihood is lower bounded by 0). Then &#8704;&#955; :c 0 &gt; c min + &#955; E z&#8764;D log(| det J( f |z)|) 2 &#952; * &#8805; c min + &#955; &#949; But &#955; &gt; c 0 &#8722;c min&#949; leads to contradiction. In this work, we use Neural Spline Flows (NSF, Durkan et al., 2019) and Inverse (IAF, Kingma et al., 2016) Autoregressive Flows for tabular anomalies sampling. We also use Residual Flow (ResFlow, Chen et al., 2019) for anomalies sampling on image datasets. All the flows satisfy the conditions of Theorem 4.1. The proposed algorithms are called 'nfad-nsf', 'nfad-iaf' and 'nfad-resflow' respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>( a )Figure 2 .</ns0:head><ns0:label>a2</ns0:label><ns0:figDesc>Figure 2. Density distortion of normalizing flows on the Moon dataset<ns0:ref type='bibr' target='#b33'>(Pedregosa et al., 2011)</ns0:ref>. Without extra regularization distribution density of domain distribution (2a) significantly differs from the target distribution (2b) because of non-unit Jacobian. To preserve the distribution density after NF transformations Jacobian regularization (9) can be used (2c and 2d respectively)</ns0:figDesc><ns0:graphic coords='6,147.71,391.48,99.25,56.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>accompanied with pseudocode Algorithm 1. All training details are given in Appendix A. 5/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Normalizing flows for anomaly detection (NFAD). Surrogate anomalies are sampled from the tails of gaussian distribution and transformed by NF to be mixed into real samples. 
Then, any classifier can be trained on that mixture.</ns0:figDesc><ns0:graphic coords='7,193.68,63.78,309.67,233.81' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>&#8226;</ns0:head><ns0:label /><ns0:figDesc>conventional two-class classification; &#8226; semi-supervised methods: dimensionality reduction by an Deep AutoEncoder followed by twoclass classification (DAE), Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection (FEAWAD, Zhou et al. (2021)), DevNet (Pang et al., 2019b), 1+&#949; method (Borisyak et al., 2020) ('*ope'), Deep SAD (Ruff et al., 2019) and Deep Weakly-supervised Anomaly Detection (PRO, Pang et al. (2019a))</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>show the experimental results on image data. Also, some of the aforementioned algorithms like DevNet are applicable only to tabular data and not reported on image data. In these tables, columns represent tasks with varying number of negative samples presented in the training set: numbers in the header indicate either number of classes that form negative class (in case of KDD, CIFAR, OMNIGLOT and MNIST datasets) or number of negative samples used (HIGGS and SUSY); 'one-class' denotes the absence of known anomalous samples. As one-class algorithms do not take into account negative samples, their results are identical for the tasks with any number of known anomalies. The best performance cases are ROC AUC on KDD-99 dataset. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>underlined with bold.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='5'>0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='5'>0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.571 &#177; 0.213 0.700 &#177; 0.182 0.687 &#177; 0.268 0.619 &#177; 0.257</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.685 &#177; 0.258 0.531 &#177; 0.286 0.758 &#177; 0.171 0.865 &#177; 0.087</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>brute-force-ope 0.564 &#177; 0.122 0.667 &#177; 0.175 0.606 &#177; 0.261 0.737 &#177; 0.187 0.541 &#177; 0.257</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='5'>0.739 &#177; 0.245 0.885 &#177; 0.152 0.919 &#177; 0.055 0.863 &#177; 0.094 0.958 &#177; 0.023</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='5'>0.765 &#177; 0.216 0.960 &#177; 0.017 0.854 &#177; 0.187 0.964 &#177; 0.016 0.976 &#177; 0.011</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='5'>0.602 &#177; 0.279 0.701 &#177; 0.230 0.528 &#177; 0.300 0.749 &#177; 0.209 0.785 &#177; 0.259</ns0:cell></ns0:row><ns0:row><ns0:cell>devnet</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.557 &#177; 0.104 0.594 &#177; 0.111 0.698 &#177; 0.163 0.812 &#177; 
0.164</ns0:cell></ns0:row><ns0:row><ns0:cell>feawad</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.862 &#177; 0.088 0.913 &#177; 0.069 0.892 &#177; 0.101 0.937 &#177; 0.083</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='5'>0.803 &#177; 0.236 0.868 &#177; 0.182 0.942 &#177; 0.022 0.943 &#177; 0.069 0.968 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.726 &#177; 0.179 0.728 &#177; 0.163 0.870 &#177; 0.128 0.905 &#177; 0.106</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (iaf)</ns0:cell><ns0:cell cols='5'>0.981 &#177; 0.001 0.984 &#177; 0.002 0.993 &#177; 0.002 0.997 &#177; 0.002 0.997 &#177; 0.002</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (nsf)</ns0:cell><ns0:cell cols='5'>0.704 &#177; 0.007 0.875 &#177; 0.121 0.901 &#177; 0.082 0.926 &#177; 0.041 0.945 &#177; 0.022</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>ROC AUC on HIGGS dataset. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table /><ns0:note>7/12PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:1:1:NEW 27 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>ROC AUC on SUSY dataset. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>nn-oc</ns0:cell><ns0:cell cols='4'>0.787 &#177; 0.139 0.787 &#177; 0.139 0.787 &#177; 0.139 0.787 &#177; 0.139</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='4'>0.978 &#177; 0.017 0.978 &#177; 0.017 0.978 &#177; 0.017 0.978 &#177; 0.017</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='4'>0.641 &#177; 0.086 0.641 &#177; 0.086 0.641 &#177; 0.086 0.641 &#177; 0.086</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.879 &#177; 0.108 0.957 &#177; 0.050 0.987 &#177; 0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.934 &#177; 0.035 0.964 &#177; 0.032 0.984 &#177; 0.012</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>brute-force-ope 0.783 &#177; 0.120 0.915 &#177; 0.096 0.968 &#177; 0.041 0.986 &#177; 0.015</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='4'>0.694 &#177; 0.167 0.933 &#177; 0.060 0.974 &#177; 0.023 0.989 &#177; 0.011</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='4'>0.720 &#177; 0.186 0.933 &#177; 0.062 0.977 &#177; 0.023 0.990 &#177; 0.009</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='4'>0.793 &#177; 0.129 0.942 &#177; 0.048 0.979 &#177; 0.016 0.991 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='4'>0.636 &#177; 0.114 0.859 &#177; 0.094 0.908 &#177; 0.071 0.947 &#177; 0.059</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.911 &#177; 0.096 0.944 &#177; 0.065 0.952 &#177; 0.079</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (resflow)</ns0:cell><ns0:cell cols='4'>0.682 &#177; 0.115 0.909 &#177; 0.959 0.935 &#177; 0.111 0.972 &#177; 0.019</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>ROC AUC on MNIST dataset. 
'nfad*' is our algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>nn-oc</ns0:cell><ns0:cell cols='4'>0.532 &#177; 0.101 0.532 &#177; 0.101 0.532 &#177; 0.101 0.532 &#177; 0.101</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='4'>0.585 &#177; 0.126 0.585 &#177; 0.126 0.585 &#177; 0.126 0.585 &#177; 0.126</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='4'>0.546 &#177; 0.058 0.546 &#177; 0.058 0.546 &#177; 0.058 0.546 &#177; 0.058</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.659 &#177; 0.093 0.708 &#177; 0.086 0.748 &#177; 0.082</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.587 &#177; 0.109 0.634 &#177; 0.109 0.671 &#177; 0.093</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>brute-force-ope 0.540 &#177; 0.101 0.688 &#177; 0.087 0.719 &#177; 0.079 0.757 &#177; 0.073</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='4'>0.547 &#177; 0.116 0.678 &#177; 0.091 0.709 &#177; 0.084 0.739 &#177; 0.074</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='4'>0.565 &#177; 0.111 0.678 &#177; 0.081 0.715 &#177; 0.083 0.746 &#177; 0.069</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='4'>0.564 &#177; 0.094 0.674 &#177; 0.100 0.690 &#177; 0.092 0.719 &#177; 0.099</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='4'>0.532 &#177; 0.061 0.653 &#177; 0.072 0.680 &#177; 0.069 0.689 &#177; 0.065</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.635 &#177; 0.081 0.653 &#177; 0.075 0.670 &#177; 0.069</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (resflow)</ns0:cell><ns0:cell cols='4'>0.597 &#177; 0.083 0.800 &#177; 0.095 0.863 &#177; 0.042 0.877 &#177; 0.045</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>ROC AUC on CIFAR-10 dataset. 'nfad*' is our algorithm. &#177; 0.166 0.521 &#177; 0.166 0.521 &#177; 0.166 0.521 &#177; 0.166 rae-oc 0.771 &#177; 0.221 0.771 &#177; 0.221 0.771 &#177; 0.221 0.771 &#177; 0.221 deep-svdd-oc 0.640 &#177; 0.153 0.640 &#177; 0.153 0.640 &#177; 0.153 0.640 &#177; 0.153 two-class -0.799 &#177; 0.162 0.862 &#177; 0.115 0.855 &#177; 0.125 dae -0.737 &#177; 0.134 0.821 &#177; 0.104 0.805 &#177; 0.121 brute-force-ope 0.503 &#177; 0.213 0.724 &#177; 0.222 0.765 &#177; 0.208 0.825 &#177; 0.126 hmc-eope 0.710 &#177; 0.178 0.801 &#177; 0.139 0.842 &#177; 0.112 0.842 &#177; 0.115 rmsprop-eope 0.678 &#177; 0.274 0.821 &#177; 0.143 0.855 &#177; 0.112 0.863 &#177; 0.111 deep-eope 0.696 &#177; 0.172 0.808 &#177; 0.140 0.851 &#177; 0.110 0.842 &#177; 0.122 deep-sad 0.832 &#177; 0.123 0.856 &#177; 0.123 0.885 &#177; 0.095 0.884 &#177; 0.091 pro -0.750 &#177; 0.160 0.765 &#177; 0.163 0.787 &#177; 0.153 nfad (resflow) 0.567 &#177; 0.108 0.727 &#177; 0.188 0.868 &#177; 0.111 0.870 &#177; 0.102</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>nn-oc</ns0:cell><ns0:cell>0.521</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>8/12PeerJ Comput. Sci. 
</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>ROC AUC on Omniglot dataset. Note that for this task only Greek, Futurama and Braille alphabets were considered as normal classes. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Editor PeerJ Computer Science August 22, 2021 National Research University Higher School of Economics 20 Myasnitskaya Ulitsa, Moscow 101000 Russia, aryzhikov@hse.ru Dear Editor, first of all, we would like to thank you and the reviewers for the useful feedback and considering our paper. Below, we would like to respond to the reviewers’ commentaries and indicate changes made in the paper. The most prominent change we have made is an addition of 2 new algorithms in the comparison, as requested by the reviewer. We also want to indicate that we have updated the instructions to run the code, which, we believe now to be more clear. Rewiever #1 (Yingjie Zhou) Basic reporting The introduction could be more detailed, such as adding the reason why the proposed method in the manuscript works. We reformulated the introduction based on the comment. Also, we expanded the introduction with more references to literature. Our method is based on a merger of previously published works (1 + ε classification and normalizing flows), we thus consider that this method should have the strengths and weaknesses inherited from the models. More justification of the method was also added to Algorithm section including the theorem which ensures the correctness of the method. I suggest that the authors add more recent and closely related references on the same problem, such as: (1) Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection, IEEE Transactions on Neural Networks and Learning Systems, 2021. (2) Deep Weakly-supervised Anomaly Detection. arXiv preprint: 1910.13601 (2019). I also encourage the authors to consider comparing their performance with those of the two citations I recommended. The authors had use an early version of the second recommended citation as a comparing method(devnet); the first citation has a superior performance than that of devnet on the same problem. The first paper is my own paper. And the second one has no conflict of interest with me. Many thanks for your suggestions! We have included the models into evaluation. Note, that the first paper is evaluated using authors’ code. While the code for the second paper is not public, we thus use our own implementation which presents in the code repository. The manuscript is not well formated in the current version. Also, there are some typos and errors in the content, e.g., the caption of some tables are wrongly represented as ”Figure X.”. Excludingly, this manuscript basically meets the requirements. Thanks! We put efforts to correct the style and captions. Experimental design In the experiments, the analysis of influencing factors is missing, and the author can add some experiments to this regard. We have added a sentence to discussion, which is the outcome of our tests. The text now reads: ”Our tests on toy datasets suggest that the best results are achieved in case normal class distribution has single mode and convex borders.” If you feel there are new tests to be added to the paper, we will be happy to add them. Validity of the findings More experiments are suggested to conduct to show the effect of the proposed method as well as the major factors that may have impact on its performance. The findings are based on Borisyak et al ”(1 + ε)-class Classification: an 2 Anomaly Detection Method for Highly Imbalanced or Incomplete Data Sets”, together with a newly introduced theorem 1 in paper, we believe they form a solid foundation to our conclusions. However, if you suggest any particular experiment, we are open to perform it. 
Finally, we would like to thank the reviewer for the valuable feedback. Additional comments I attempted to run the code, but an error occured. Maybe there’s something wrong with my environment or my network. But I hope the authors make sure the code could run automatically without any additional change. Also for this code, the authors are suggested to provide more detailed instructions. Many thanks for pointing this out. Local reproduction instruction is added to README (in code repository1 ), code successfully runs locally. Also, we show implementation details in the appendix. Rewiever #2 Dear Reviewer, many thanks for your insightful review. We believe that it helped us improve the article quite a lot. Kindly find our results inline. Basic Reporting The language is correct and the article is written in clear and professional English Thank you The literature on related papers could be more explored. Just a few numbers of papers were cited, and the article mentions just a few methods of AD, specially based on classification, while there are others as ones based on Clustering, distance and density (as in F. Meng, G. Yuan, S. Lv, Z. Wang, and S. Xia, “An overview on trajectory outlier detection,” Artificial Intelligence Review, vol. 52, no. 4, pp. 2437–2456, 2019) Many thanks for this comment, we tried to improve the wording and added several more references including the one you suggested. We also included 1 Source code is available at https://gitlab.com/lambda-hse/nfad 3 2 more models to evaluation, as suggested by the other reviewer. The structure doesn’t follow the format suggest by the journal. It combines “Results” and “Discussion” in just one item, what makes the paper less clear, because there are discussions presented before results. Result are present in 3 paragraphs after Figure 4 and before figures 7 to 9. In Figure 1, specially the last diagram is too hard to see the tail p-values. Figures 4, 5, 6, 7, 8 and 9 are not figures but table. They should shave been mentioned as tables not figures. Figures 8 and 9 are misplaced in the text. They are in “Conclusion”, but they should be included in “Results and Discussions” as figures 4 to 7 are. In figure 7 and 8, the best results are bolded as in other figures. They should have been bolded despite the best results are not provided by the proposed algorithm. Thank you for your comment, we reviewed this part, it now has a proper formatting. Figures are compiled in the correct place, captions and structure are also updated, results table format is fixed. All appropriate raw data have been made available in accordance with our Data Sharing policy. The article proposes a new technique to address DA problem. The problems are well stated, and the hypothesis is tested, showing relevant results, even the ones that the proposed algorithm doesn’t performance well. The submission is ‘self-contained,’ and represents an appropriate ‘unit of publication’. It also includes all results relevant to the hypothesis. Thank you In topic 3, when authors state that normalizing flows generative mode aims to fit the exact probability distribution of data, they don’t mention that normalizing flows provides exact inference and log-likelihood evaluation as its merits (D. P. Kingma and P. Dhariwal, “Glow: Generative flow with invertible 1x1 convolutions,” in Advances in Neural Information Processing Systems, 2018, pp. 10 215–10 224). 
It is important because when the model is trained in topic 4, it is trained by standard for normal4 izing scheme of maximization the log-likelihood, which is not described previously when normalizing flows are described (Topic 3). Thank you for suggestion, the text now contains that important detail. Experimental design The submission is within Aims and Scope of the journal, because proposes a new algorithm to AD task, a field related to Computer Sciences. The paper is a Research Article, in which a well stated hypothesis is tested and the results of these tests are presented. Thank you The research question is well defined and also the article scope. However, in the second line of “Problem Statement” item, the statement “In practice, while such samples are, indeed, most likely to be anomalous, often some anomalies might not be distinguishable from normal samples” doesn’t have any reference. We added citations to the papers, where this happen in real-world datasets. Also, the effect of the number of anomaly samples in the training data should be addressed more deeply, as in Exploring normalizing flows for Anomaly Detection (Pathak, C. 2019), as this effect is one of the most important issues in the paper. Thanks for pointing to this reference, we now cite it in the introduction. We have tests that show the dependendence on the number of anomalies available (Tables 2,3) and number of anomalous classes (Tables 1, 4-6). If you have another test in mind, we are happy to implement it. The investigation must have been conducted rigorously and to a high technical standard. The research must have been conducted in conformity with the prevailing ethical standards in the field. We follow these requirements in the manuscript, but if you have any suggestions related with this comment we would be happy to add it. Methods should be described with sufficient information to be 5 reproducible by another investigator. The text provides a reference to the implementation of the algorithm. In addition, we put implementation details in the appendix. Validity of the Findings Conclusions are based in a metric presented in tables, so we can see clearly where the proposed algorithm surpass others and where it doesn’t. However, the metric employed is not clearly described. We infer that authors use accuracy, but this option is not justified. We use ROC AUC, and now give the rationale in the text: ”Since not all of the evaluated algorithms allow for a probabilistic interpretation, ROC AUC metric is reported.”. Also, we updated captions of the tables and added metric there as well. The data is provided and it was possible to rerun the experiment. Thank you The conclusions are appropriately stated, and connected to the original question investigated, and are supported by the results, as based in obtained results described in figures 4 to 9. The proposed algorithm outperformances many tested algorithms in most of situations, except when dialing with images dataset. Thank you General comments As general conclusion, the paper can be published after mandatory corrections, specially these described in item 1. Thank you for your assessment. 6 Summary We hope, that after addressing these valuable commentaries, the paper now provides a clear picture of the proposed methods, and we wish to thank the reviewers once again. Sincerely, Artem Ryzhikov on behalf of authors. 7 "
Here is a paper. Please give your review comments after reading it.
261
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Anomaly detection is a challenging task that frequently arises in practically all areas of industry and science, from fraud detection and data quality monitoring to finding rare cases of diseases and searching for new physics. Most of the conventional approaches to anomaly detection, such as one-class SVM and Robust Auto-Encoder, are one-class classification methods, i.e. focus on separating normal data from the rest of the space. Such methods are based on the assumption of separability of normal and anomalous classes, and subsequently do not take into account any available samples of anomalies. Nonetheless, in practical settings, some anomalous samples are often available; however, usually in amounts far lower than required for a balanced classification task, and the separability assumption might not always hold. This leads to an important task -incorporating known anomalous samples into training procedures of anomaly detection models.</ns0:p><ns0:p>In this work, we propose a novel model-agnostic training procedure to address this task. We reformulate one-class classification as a binary classification problem with normal data being distinguished from pseudo-anomalous samples. The pseudo-anomalous samples are drawn from low-density regions of a normalizing flow model by feeding tails of the latent distribution into the model. Such an approach allows to easily include known anomalies into the training process of an arbitrary classifier. We demonstrate that our approach shows comparable performance on one-class problems, and, most importantly, achieves comparable or superior results on tasks with variable amounts of known anomalies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The anomaly detection (AD) problem is one of the important tasks in the analysis of real-world data.</ns0:p><ns0:p>Possible applications range from the data-quality certification (for example, <ns0:ref type='bibr' target='#b4'>Borisyak et al., 2017)</ns0:ref> to finding the rare specific cases of the diseases in medicine <ns0:ref type='bibr' target='#b43'>(Spence et al., 2001)</ns0:ref>. The technique can be also used in credit card fraud detection <ns0:ref type='bibr' target='#b1'>(Aleskerov et al., 1997)</ns0:ref>, complex systems failure predictions <ns0:ref type='bibr' target='#b48'>(Xu and Li, 2013)</ns0:ref>, and novelty detection in time series data <ns0:ref type='bibr' target='#b41'>(Schmidt and Simic, 2019)</ns0:ref>.</ns0:p><ns0:p>Formally, AD is a classification problem with a representative set of normal samples and a small, non-representative or empty set of anomalous examples. Such a setting makes conventional binary classification methods to be overfitted and not to be robust w.r.t. novel anomalies <ns0:ref type='bibr' target='#b13'>(G&#246;rnitz et al., 2012)</ns0:ref>. In contrast, conventional one-class classification (OC-) methods <ns0:ref type='bibr' target='#b7'>(Breunig et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b24'>Liu et al., 2012)</ns0:ref> are typically robust against all types of outliers. However, OC-methods do not take into account known anomalies which often result to suboptimal performance in cases when normal and anomalous classes are not perfectly separable <ns0:ref type='bibr' target='#b8'>(Campos et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Pang et al., 2019b)</ns0:ref>). 
The research in the area addresses several challenges <ns0:ref type='bibr' target='#b26'>(Pang et al., 2021)</ns0:ref> that lie in the field of increasing precision, generalizing to unknown anomaly classes, and tackling multi-dimensional data. Several reviews of classical <ns0:ref type='bibr' target='#b50'>(Zimek et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b0'>Aggarwal, 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Boukerche et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>Belhadi et al., 2020)</ns0:ref> and deep-learning methods <ns0:ref type='bibr' target='#b26'>(Pang et al., 2021)</ns0:ref> were published that describe the literature in detail. With the advancement of the neural generative modeling, methods based on generative adversarial networks <ns0:ref type='bibr' target='#b40'>(Schlegl et al., 2017)</ns0:ref>, variational autoencoders <ns0:ref type='bibr' target='#b46'>(Xu et al., 2018)</ns0:ref>, and normalizing flows <ns0:ref type='bibr' target='#b33'>(Pathak, 2019)</ns0:ref> are introduced for the AD task. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We propose 1 addressing the class-imbalanced classification task by modifying the learning procedure that effectively makes anomaly detection methods suitable for a two-class classification. Our approach relies on imbalanced dataset augmentation by surrogate anomalies sampled from normalizing flow-based generative models.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>PROBLEM STATEMENT</ns0:head><ns0:p>Classical AD methods consider anomalies a priori significantly different from the normal samples <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2016)</ns0:ref>. In practice, while such samples are, indeed, most likely to be anomalous, often some anomalies might not be distinguishable from normal samples <ns0:ref type='bibr' target='#b15'>(Hunziker et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Pol et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Borisyak et al., 2017)</ns0:ref>. This provides a strong motivation to include known anomalous samples into the training procedure to improve the performance of the model on these ambiguous samples. Technically, this leads to a binary classification problem which is typically solved by minimizing cross-entropy loss function L BCE :</ns0:p><ns0:formula xml:id='formula_0'>f * (x) = arg min f L BCE ( f );</ns0:formula><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_1'>L BCE ( f ) = P(C + ) E x&#8764;C + log f (x) + P(C &#8722; ) E x&#8764;C &#8722; log (1 &#8722; f (x)) ;<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where: f is a arbitrary model (e.g., a neural network), C + and C &#8722; denote normal and anomalous classes.</ns0:p><ns0:p>In this case, the solution f * approaches the optimal Bayesian classifier:</ns0:p><ns0:formula xml:id='formula_2'>f * (x) = P(C + | x) = p(x | C + )p(C + ) p(x | C + )p(C + ) + p(x | C &#8722; )p(C &#8722; ) .<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Notice, that f * implicitly relies on the estimation of the probability densities P(x | C + ) and P(x | C &#8722; ).</ns0:p><ns0:p>A good estimation of these densities is possible only when a sufficiently large and representative sample is available for each class. In practical settings, this assumption certainly holds for the normal class.</ns0:p><ns0:p>However, the anomalous dataset is rarely large or representative, often consisting of only a few samples or covering only a portion of all possible anomaly types 2 . 
With only a small number of examples (or a non-representative sample) to estimate the second term of Equation ( <ns0:ref type='formula' target='#formula_1'>2</ns0:ref>), L BCE effectively does not depend on f (x) in x &#8712; suppC &#8722; \ suppC + , which leads to solutions with arbitrary predictions in the area, i.e., to classifiers that are not robust to novel anomalies.</ns0:p><ns0:p>One-class classifiers avoid this problem by aiming to explicitly separate the normal class from the rest of the space <ns0:ref type='bibr' target='#b23'>(Liu et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b42'>Scholkopf and Smola, 2018)</ns0:ref>. As discussed above, this approach, however, ignores available anomalous samples, potentially leading to incorrect predictions on ambiguous samples.</ns0:p><ns0:p>Recently, semi-supervised AD algorithms like 1 + &#949;-classification method <ns0:ref type='bibr' target='#b5'>(Borisyak et al., 2020)</ns0:ref>, Deep Semi-supervised AD method <ns0:ref type='bibr' target='#b38'>(Ruff et al., 2019)</ns0:ref>, Feature Encoding with AutoEncoders for Weaklysupervised Anomaly Detection <ns0:ref type='bibr' target='#b49'>(Zhou et al., 2021)</ns0:ref> and Deep Weakly-supervised Anomaly Detection <ns0:ref type='bibr' target='#b27'>(Pang et al., 2019a)</ns0:ref> were put forward. They aim to combine the main properties of both unsupervised (one-class)</ns0:p><ns0:p>and supervised (binary classification) approaches: proper posterior probability estimations of binary classification and robustness against novel anomalies of one-class classification.</ns0:p><ns0:p>In this work, we propose a method that extends the 1 + &#949;-classification method <ns0:ref type='bibr' target='#b5'>(Borisyak et al., 2020)</ns0:ref> scheme by exploiting normalizing flows. The method is based on sampling the surrogate anomalies to augment the existing anomalies dataset using advanced techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>NORMALIZING FLOWS</ns0:head><ns0:p>The normalizing flows <ns0:ref type='bibr' target='#b37'>(Rezende and Mohamed, 2015)</ns0:ref> generative model aims to fit the exact probability distribution of data. It represents a set of invertible transformations { f i (&#8226;; &#952; i )} with parameters &#952; i , to obtain a bijection between the given distribution of training samples and some domain distribution with known probability density function (PDF). However, in the case of non-trivial bijection z 0 &#8596; z k , the distribution density at the final point z k (training sample) differs from the density at point z 0 (domain).</ns0:p><ns0:p>1 Source code is available at https://gitlab.com/lambda-hse/nfad 2 Moreover, anomalies typically cover a much larger 'phase space' than normal samples, thus, generic models (e.g. a deep neural network with fully connected layers) might require significantly more anomalous examples than normal ones.</ns0:p></ns0:div> <ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:2:0:NEW 28 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This is due to the fact that each non-trivial transformation f i (&#8226;; &#952; i ) changes the infinitesimal volume at some points. 
Thus, the task is not only to find a flow of invertible transformations { f i (&#8226;; &#952; i )}, but also to know how the distribution density is changed at each point after each transformation f i (&#8226;; &#952; i ).</ns0:p><ns0:p>Consider the multivariate transformation of variable z z z i = f f f i (z z z i&#8722;1 ; &#952; i ) with parameters &#952; i for i &gt; 0. Then, Jacobian for a given transformation f f f i (z z z i&#8722;1 ; &#952; i ) at given point z z z i&#8722;1 has the following form:</ns0:p><ns0:formula xml:id='formula_3'>J( f f f i |z z z i&#8722;1 ) = &#8706; f f f i &#8706; z 1 i&#8722;1 . . . &#8706; f f f i &#8706; z n i&#8722;1 = &#63726; &#63727; &#63727; &#63727; &#63728; &#8706; f 1 i&#8722;1 &#8706; z 1 i&#8722;1 . . . &#8706; f 1 i&#8722;1 &#8706; z n i&#8722;1 . . . . . . . . . &#8706; f m i&#8722;1 &#8706; z 1 i&#8722;1 . . . &#8706; f m i&#8722;1 &#8706; z n i&#8722;1 &#63737; &#63738; &#63738; &#63738; &#63739; (4)</ns0:formula><ns0:p>Then, the distribution density at point z z z i after the transformation f f f i of point z z z i&#8722;1 can be written in a following common way:</ns0:p><ns0:formula xml:id='formula_4'>p(z z z i ) = p(z z z i&#8722;1 ) | det J( f f f i |z z z i&#8722;1 )| ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where det <ns0:ref type='bibr' target='#b37'>and Mohamed, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_5'>J( f f f i |z z z i&#8722;1 ) is a determinant of the Jacobian matrix J( f f f i |z z z i&#8722;1 ) (Rezende</ns0:formula><ns0:p>Thus, given a flow of invertible transformations</ns0:p><ns0:formula xml:id='formula_6'>f f f = { f f f i (&#8226;; &#952; i )} N i=1 with known {det J( f f f i |&#8226;)} N i=1 and domain distribution of z 0 z 0 z 0 with known p.d.f. p(z 0 z 0 z 0 ), we obtain likelihood p(x x x) for each object x x x = z N z N z N . This way, the parameters {&#952; i } N i=1 of NF model f f f can be fitted by explicit maximizing the likelihood p(x x x) for training objects x x x &#8712; X. In practice, Monte-Carlo estimate of log p(X) = log &#928; x&#8712;X p(x) = &#931; x&#8712;X log p(x) is</ns0:formula><ns0:p>optimized, which is an equivalent optimization procedure. Also, the likelihood p(X) can be used as a metric of how well the NF model f f f fits given data X.</ns0:p><ns0:p>The main bottleneck of that scheme is located in that det</ns0:p><ns0:formula xml:id='formula_7'>J(&#8226;|&#8226;) computation, which is O(n 3 ) in a</ns0:formula><ns0:p>common case (n is the dimension of variable z z z). In order to deal with that problem, specific normalizing flows with specific families of transformations f f f are used, for which Jacobian computation is way faster <ns0:ref type='bibr' target='#b37'>(Rezende and Mohamed, 2015;</ns0:ref><ns0:ref type='bibr' target='#b29'>Papamakarios et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Kingma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b11'>Chen et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>ALGORITHM</ns0:head><ns0:p>The suggested NF-based AD method (NFAD) is a two-step procedure. In the first step, we train normalizing flow on normal samples to sample new surrogate anomalies. Here, we assume that anomalies differ from normal samples, and its likelihood p NF (x &#8722; |C + ) is less than likelihood of normal samples p NF (x + |C + ). 
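To illustrate Equation (5) and the maximum-likelihood fitting described above, the sketch below applies the change-of-variables rule to a single elementwise affine transformation x = z·exp(s) + t, whose Jacobian is diagonal and therefore trivially cheap. This toy layer is only a stand-in for the NSF, IAF and ResFlow architectures cited above.

import torch

def affine_flow_log_prob(x, s, t):
    # log p(x) = log p(z) - log|det J(f|z)| for x = f(z) = z * exp(s) + t with z ~ N(0, I).
    z = (x - t) * torch.exp(-s)                              # inverse transformation
    log_p_z = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=1)
    log_det_jacobian = s.sum()                               # diagonal Jacobian exp(s)
    return log_p_z - log_det_jacobian

# Maximum-likelihood fit of the two parameters on toy 2-dimensional data.
x = 2.0 * torch.randn(512, 2) + 1.0
s = torch.zeros(2, requires_grad=True)
t = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([s, t], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    nll = -affine_flow_log_prob(x, s, t).mean()              # Monte-Carlo estimate of -log p(X)
    nll.backward()
    opt.step()

After fitting, exp(s) and t recover the scale and shift of the toy data, and the same Monte-Carlo log-likelihood estimate can be used as a goodness-of-fit metric, as noted above.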
In the second step, we sample new surrogate anomalies from tails of normal samples distribution using NF and train an arbitrary binary classifier on normal samples and a mixture of real and sampled surrogate anomalies.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Step 1. Training normalizing flow</ns0:head><ns0:p>We train normalizing flow on normal samples. It can be trained by a standard for normalizing flows scheme of maximization the log-likelihood (see Section 3):</ns0:p><ns0:formula xml:id='formula_8'>max &#952; L NF (6) L NF = E x&#8764;C + log p f (x) (7) = E z&#8764; f &#8722;1 (C + ;&#952; ) log p(z) &#8722; log | det J( f |z)| ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_9'>f (&#8226;; &#952; ) is NF transformation with parameters &#952; , J( f |z) is Jacobian of transformation f (z; &#952; ) at point z, z are samples from multivariate standard normal domain distribution p(z) = N (z|0, I), x are normal samples from the training dataset, p f (x) = p(z) J( f |z) z= f &#8722;1 (x;&#952; ) .</ns0:formula><ns0:p>After NF for sampling is trained, it can be used to sample new anomalies. To produce new anomalies, we sample z from tails of normal domain distribution, where p-value of tails is a hyperparameter (see Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:2:0:NEW 28 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Jacobian:</ns0:p><ns0:formula xml:id='formula_10'>L J = E z&#8764;N (0,1) log(| det J( f |z)|) 2 (9) max &#952; L NF &#8722; &#955; * L J , &#955; &#8805; 0, (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>)</ns0:formula><ns0:p>where &#955; denotes the regularization hyperparameter. We estimate the regularization term L J in (9) by direct Then &#8704;&#949; &gt; 0 &#8707;&#955; &#8805; 0 such that</ns0:p><ns0:formula xml:id='formula_12'>E z&#8764;D log(| det J( f |z)|) 2 &#952; * &lt; &#949; &#8704;z &#8764; &#8486; &#8712; R d , where &#952; * &#8712; arg min &#952; &#8722; E x&#8764;C + log p f (x) + &#955; E z&#8764;D log(| det J( f |z)|) 2 , p f (x) = p(z) J( f |z) z= f &#8722;1 (x;&#952; ) .</ns0:formula><ns0:p>Proof. Suppose the opposite. Let &#8707;&#949; &gt; 0 s.t. &#8704;&#955; &#8805; 0 :</ns0:p><ns0:formula xml:id='formula_13'>E z&#8764;D log(|J( f |z)|) 2 &#952; * &#8805; &#949; for all &#952; * &#8712; arg min &#952; &#8722; E x&#8764;C + log p f (x) + &#955; E z&#8764;D log(| det J( f |z)|) 2 .</ns0:formula><ns0:p>Since &#8707;&#952; 0 :</ns0:p><ns0:formula xml:id='formula_14'>f (&#8226;; &#952; 0 ) = I, p f ( f (z; &#952; 0 )) = p(z) &#8704;z &#8764; &#8486;, the term E z&#8764;D log(| det J( f |z)|) 2 &#952; 0 = 0 since p(z) p( f (z; &#952; 0 )) = | det J( f |z)| &#952; 0 = 1 &#8704;z &#8712; &#8486; Let &#8722; E x&#8764;C + log p f (x) &#952; 0 = &#8722; E z&#8764;C + log p(z) = c 0 , min &#952; &#8722; E x&#8764;C + log p f (x) = c min &lt; c 0 (</ns0:formula><ns0:p>minimum exists since negative log likelihood is lower bounded by 0). 
Then &#8704;&#955; : Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>c 0 &gt; c min + &#955; E z&#8764;D log(| det J( f |z)|) 2 &#952; * &#8805; c min + &#955; &#949; But &#955; &gt; c 0 &#8722;c min</ns0:formula><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Final algorithm</ns0:head><ns0:p>The final scheme of the algorithm is shown in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>RESULTS</ns0:head><ns0:p>We evaluate the proposed method on the following tabular and image datasets: KDD-99 <ns0:ref type='bibr' target='#b44'>(Stolfo et al., 1999)</ns0:ref>, SUSY <ns0:ref type='bibr' target='#b45'>(Whiteson, 2014)</ns0:ref>, HIGGS <ns0:ref type='bibr' target='#b2'>(Baldi et al., 2014)</ns0:ref>, <ns0:ref type='bibr'>MNIST (LeCun et al., 1998)</ns0:ref>, Omniglot <ns0:ref type='bibr' target='#b21'>(Lake et al., 2015)</ns0:ref> and CIFAR <ns0:ref type='bibr' target='#b20'>(Krizhevsky et al., 2009)</ns0:ref>. In order to reflect typical AD cases behind the approach, we derive multiple tasks from each dataset by varying sizes of anomalous datasets.</ns0:p><ns0:p>As the proposed method targets problems that are intermediate between one-class and two-class problems, we compare the proposed approach with the following algorithms:</ns0:p><ns0:p>&#8226; one-class methods: Robust AutoEncoder (RAE-OC, <ns0:ref type='bibr' target='#b10'>(Chalapathy et al., 2017)</ns0:ref>) and Deep SVDD <ns0:ref type='bibr' target='#b39'>(Ruff et al., 2018)</ns0:ref>.</ns0:p><ns0:p>&#8226; conventional two-class classification;</ns0:p><ns0:p>&#8226; semi-supervised methods: dimensionality reduction by an Deep AutoEncoder followed by twoclass classification (DAE), Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection (FEAWAD, <ns0:ref type='bibr' target='#b49'>(Zhou et al., 2021)</ns0:ref>), DevNet <ns0:ref type='bibr' target='#b28'>(Pang et al., 2019b)</ns0:ref>, 1 + &#949; method <ns0:ref type='bibr' target='#b5'>(Borisyak et al., 2020)</ns0:ref> ('*ope'), Deep SAD <ns0:ref type='bibr' target='#b38'>(Ruff et al., 2019)</ns0:ref> and Deep Weakly-supervised Anomaly Detection (PRO, <ns0:ref type='bibr' target='#b27'>(Pang et al., 2019a</ns0:ref>))</ns0:p><ns0:p>We compare the algorithms using the ROC AUC metric to avoid unnecessary optimization for threshold-dependent metrics like accuracy, precision, or F1. 
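As a recap of the two-step pipeline of Figure 3 that is evaluated below, the following sketch (hypothetical and simplified: the trained flow is replaced here by an identity map, and names such as sample_gaussian_tails and the choice of classifier are illustrative assumptions, not the authors' implementation) draws latent points from the tails of the standard normal domain distribution, treats their images under the flow as surrogate anomalies, and trains an off-the-shelf classifier on the resulting mixture.

import numpy as np
from scipy.stats import chi2
from sklearn.ensemble import GradientBoostingClassifier

def sample_gaussian_tails(n, d, p, rng):
    # For z ~ N(0, I) in d dimensions, ||z||^2 ~ chi^2(d): draw radii from the upper
    # tail of total mass p and directions uniformly on the unit sphere.
    u = rng.uniform(1.0 - p, 1.0, size=n)
    radii = np.sqrt(chi2.ppf(u, df=d))
    directions = rng.normal(size=(n, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return radii[:, None] * directions

rng = np.random.default_rng(0)
x_normal = rng.normal(size=(5000, 2))             # placeholder normal training data
x_known_anomalies = rng.normal(size=(20, 2)) + 4  # placeholder known anomalies

z_tail = sample_gaussian_tails(n=2000, d=2, p=0.05, rng=rng)
# x_surrogate = flow.forward(z_tail)              # hypothetical call: push tails through the trained NF
x_surrogate = z_tail                              # identity flow stands in for the trained NF here

X = np.vstack([x_normal, x_known_anomalies, x_surrogate])
y = np.concatenate([np.zeros(len(x_normal)),
                    np.ones(len(x_known_anomalies) + len(x_surrogate))])
clf = GradientBoostingClassifier().fit(X, y)      # any off-the-shelf binary classifier can be used

The tail mass p directly controls how far from the bulk of the normal data the surrogate anomalies are placed, which is the hyperparameter illustrated in Figure 1.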
Tables <ns0:ref type='table' target='#tab_1'>1, 2 and 3</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science one class 100 1000 10000 1000000 rae-oc 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 0.531 &#177; 0.000 deep-svdd-oc 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 0.513 &#177; 0.000 two-class -0.504 &#177; 0.017 0.529 &#177; 0.007 0.566 &#177; 0.006 0.858 &#177; 0.002 dae -0.502 &#177; 0.003 0.522 &#177; 0.003 0.603 &#177; 0.002 0.745 &#177; 0.005 brute-force-ope 0.508 &#177; 0.000 0.500 &#177; 0.009 0.520 &#177; 0.003 0.572 &#177; 0.005 0.859 &#177; 0.001 hmc-eope 0.509 &#177; 0.000 0.523 &#177; 0.005 0.567 &#177; 0.008 0.648 &#177; 0.005 0.848 &#177; 0.001 rmsprop-eope 0.503 &#177; 0.000 0.506 &#177; 0.008 0.531 &#177; 0.008 0.593 &#177; 0.011 0.861 &#177; 0.000 deep-eope 0.531 &#177; 0.000 0.537 &#177; 0.011 0.560 &#177; 0.008 0.628 &#177; 0.005 0.860 &#177; 0.001 devnet -0.565 &#177; 0.011 0.697 &#177; 0.006 0.748 &#177; 0.004 0.748 &#177; 0.003 feawad -0.551 &#177; 0.009 0.555 &#177; 0.014 0.554 &#177; 0.020 0.549 &#177; 0.018 deep-sad 0.502 &#177; 0.010 0.511 &#177; 0.006 0.561 &#177; 0.016 0.740 &#177; 0.011 0.833 &#177; 0.002 pro -0.533 &#177; 0.022 0.569 &#177; 0.011 0.570 &#177; 0.012 0.582 &#177; 0.015 nfad (iaf) 0.572 &#177; 0.009 0.574 &#177; 0.008 0.586 &#177; 0.009 0.623 &#177; 0.007 0.750 &#177; 0.008 nfad (nsf) 0.531 &#177; 0.010 0.519 &#177; 0.008 0.554 &#177; 0.009 0.659 &#177; 0.007 0.807 &#177; 0.007</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. ROC AUC on HIGGS dataset. 'nfad*' is our algorithm.</ns0:p><ns0:p>one class 100 1000 10000 1000000 rae-oc 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 0.586 &#177; 0.000 deep-svdd-oc 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 0.568 &#177; 0.000 two-class -0.652 &#177; 0.031 0.742 &#177; 0.011 0.792 &#177; 0.004 0.878 &#177; 0.000 dae -0.715 &#177; 0.020 0.766 &#177; 0.009 0.847 &#177; 0.002 0.876 &#177; 0.000 brute-force-ope 0.597 &#177; 0.000 0.672 &#177; 0.020 0.748 &#177; 0.012 0.792 &#177; 0.003 0.878 &#177; 0.000 hmc-eope 0.528 &#177; 0.000 0.738 &#177; 0.019 0.770 &#177; 0.012 0.816 &#177; 0.006 0.877 &#177; 0.000 rmsprop-eope 0.528 &#177; 0.000 0.714 &#177; 0.019 0.760 &#177; 0.016 0.807 &#177; 0.004 0.877 &#177; 0.000 deep-eope 0.652 &#177; 0.000 0.670 &#177; 0.054 0.746 &#177; 0.024 0.813 &#177; 0.003 0.878 &#177; 0.000 devnet -0.747 &#177; 0.023 0.849 &#177; 0.002 0.853 &#177; 0.002 0.854 &#177; 0.004 feawad -0.758 &#177; 0.019 0.760 &#177; 0.028 0.760 &#177; 0.022 0.762 &#177; 0.025 deep-sad 0.534 &#177; 0.022 0.581 &#177; 0.027 0.785 &#177; 0.014 0.860 &#177; 0.009 0.872 &#177; 0.008 pro -0.833 &#177; 0.008 0.861 &#177; 0.002 0.863 &#177; 0.001 0.863 &#177; 0.002 nfad (iaf) 0.701 &#177; 0.007 0.801 &#177; 0.007 0.829 &#177; 0.007 0.868 &#177; 0.006 0.880 &#177; 0.000 nfad (nsf) 0.785 &#177; 0.001 0.811 &#177; 0.013 0.855 &#177; 0.012 0.865 &#177; 0.001 0.876 &#177; 0.003 </ns0:p></ns0:div> <ns0:div><ns0:head n='6'>DISCUSSION</ns0:head><ns0:p>Our tests suggest that the best results are achieved when the normal class distribution has single mode and convex borders. These requirements are data-specific and can not be effectively addressed in our algorithm. 
The effects can be seen in Figure <ns0:ref type='figure'>2</ns0:ref>, where two modes result in the 'bridge' in the reconstructed standard class shape, and the non-convexity of the borders ends up in the worse separation line description.</ns0:p><ns0:p>Also, hyperparameters like Jacobian regularization &#955; and tail size p must be accurately chosen. This fact is illustrated in Figures <ns0:ref type='figure' target='#fig_1'>1 and 2</ns0:ref>, where we show the different samples quality and the performance of our algorithm for different hyperparameters values. To find suitable values, some heuristics can be used.</ns0:p><ns0:p>For instance, optimal tail location p can be estimated based on known anomalies from the training dataset, whereas Jacobian regularization &#955; in the NF training process can be linearly scheduled like KL factor in <ns0:ref type='bibr' target='#b14'>(Hasan et al., 2020)</ns0:ref>.</ns0:p><ns0:p>On tabular data (Tables 1, 2 and 3), the proposed NFAD method shows statistically significant improvement over other AD algorithms in many experiments, where the amount of anomalous samples is extremely low.</ns0:p><ns0:p>On image data (Tables 4, 5, 6), the proposed method shows competitive quality along with other state-of-the-art AD methods, significantly outperforming the existing algorithms on CIFAR dataset.</ns0:p><ns0:p>Our experiments suggest the main reason for the proposed method to have lower performance with respect to others on image data is a tendency of normalizing flows to estimate the likelihood of images by its local features instead of common semantics, as described by <ns0:ref type='bibr' target='#b19'>(Kirichenko et al., 2020)</ns0:ref>. We also find Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>that the overfitting of the classifier must be carefully monitored and addressed, as this might lead to the deterioration of the algorithm.</ns0:p><ns0:p>However, the results obtained on HIGGS, KDD, SUSY and CIFAR-10 datasets demonstrated the big potential of the proposed method over previous AD algorithms. With the advancement of new ways of NF application to images, the results are expected to improve for this class of datasets as well. In particular, we believe our method to be widely applicable in the industrial environment, where the task of AD can take advantage of both tabular and image-like datasets.</ns0:p><ns0:p>It also should be emphasized that unlike state-of-the-art AD algorithms <ns0:ref type='bibr' target='#b27'>(Pang et al., 2019a;</ns0:ref><ns0:ref type='bibr' target='#b49'>Zhou et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b38'>Ruff et al., 2019)</ns0:ref>, we propose a model-agnostic data augmentation algorithm that does not modify AD model training scheme and architecture. It enriches the input training anomalies set requiring only normal samples in the augmentation process (Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this work, we present a new model-agnostic anomaly detection training scheme that deals efficiently with hard-to-address problems both by one-class or two-class methods. The solution combines the best features of one-class and two-class approaches. 
In contrast to one-class approaches, the proposed method makes the classifier effectively utilize any number of known anomalous examples, but, unlike conventional two-class classification, does not require an extensive number of anomalous samples. The proposed algorithm significantly outperforms the existing anomaly detection algorithms in most realistic anomaly detection cases. This approach is especially beneficial for anomaly detection problems, in which anomalous data is non-representative, or might drift over time.</ns0:p><ns0:p>The proposed method is fast, stable and flexible both in terms of training and inference stages; unlike previous methods, any classifier can be used in the scheme with any number of anomalies in the training dataset. Such a universal augmentation scheme opens wide prospects for further anomaly detection study and makes it possible to use any classifier on any kind of data. Also, the results on datasets with images are improvable with new techniques of normalizing flows become available.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:2:0:NEW 28 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. NF bijection between tails of standard normal domain distribution (left) and 2D Moon dataset (Pedregosa et al., 2011) samples (right). Rows represent different tail p-values choices. The value of the ROC AUC of the anomaly classifier is shown on the right side. The classifier is trained on the mixture of C + samples from the Moon dataset and surrogate anomalies sampled from the tails.</ns0:figDesc><ns0:graphic coords='5,141.73,63.77,413.58,413.58' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:2:0:NEW 28 Sep 2021) Manuscript to be reviewed Computer Science sampling of z from the domain distribution N (0, I) to cover the whole sampling space. The theorem below proofs that any level of expected distortion can be obtained with such a regularization: Theorem 4.1. Let &#8486; &#8834; R d a sample space with probability (domain) distribution D, C + &#8834; &#8486; a class of normal samples, f (&#8226;; &#952; ) : R d &#8594; R d is a set of invertible transformations parametrized by &#952; and &#8707;&#952; 0 : f (&#8226;; &#952; 0 ) = I (identical transformation exists).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>&#949;</ns0:head><ns0:label /><ns0:figDesc>leads to contradiction. In this work, we use Neural Spline Flows (NSF, Durkan et al., 2019) and Inverse (IAF, Kingma et al., 2016) Autoregressive Flows for tabular anomalies sampling. We also use Residual Flow (ResFlow, Chen et al., 2019) for anomalies sampling on image datasets. All the flows satisfy the conditions of Theorem 4.1. The proposed algorithms are called 'nfad-nsf', 'nfad-iaf' and 'nfad-resflow' respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>( a )Figure 2 .</ns0:head><ns0:label>a2</ns0:label><ns0:figDesc>Figure 2. Density distortion of normalizing flows on the Moon dataset<ns0:ref type='bibr' target='#b34'>(Pedregosa et al., 2011)</ns0:ref>. Without extra regularization distribution density of domain distribution (2a) significantly differs from the target distribution (2b) because of non-unit Jacobian. 
To preserve the distribution density after NF transformations, Jacobian regularization (9) can be used (2c and 2d respectively)</ns0:figDesc><ns0:graphic coords='6,147.71,512.75,99.25,56.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Normalizing flows for anomaly detection (NFAD). Surrogate anomalies are sampled from the tails of gaussian distribution and transformed by NF to be mixed into real samples. Then, any classifier can be trained on that mixture.</ns0:figDesc><ns0:graphic coords='7,193.68,112.77,309.67,233.81' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>show the experimental results on tabular data. Tables 4, 5 and 6 show the experimental results on image data. Also, some of the aforementioned algorithms like DevNet are applicable only to tabular data and not reported on image data. In these tables, columns represent tasks with a varying number of negative samples presented in the training set: numbers in the header indicate either number of classes that form negative class (in case of</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='5'>0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006 0.972 &#177; 0.006</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='5'>0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014 0.939 &#177; 0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.571 &#177; 0.213 0.700 &#177; 0.182 0.687 &#177; 0.268 0.619 &#177; 0.257</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.685 &#177; 0.258 0.531 &#177; 0.286 0.758 &#177; 0.171 0.865 &#177; 0.087</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>brute-force-ope 0.564 &#177; 0.122 0.667 &#177; 0.175 0.606 &#177; 0.261 0.737 &#177; 0.187 0.541 &#177; 0.257</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='5'>0.739 &#177; 0.245 0.885 &#177; 0.152 0.919 &#177; 0.055 0.863 &#177; 0.094 0.958 &#177; 0.023</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='5'>0.765 &#177; 0.216 0.960 &#177; 0.017 0.854 &#177; 0.187 0.964 &#177; 0.016 0.976 &#177; 0.011</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='5'>0.602 &#177; 0.279 0.701 &#177; 0.230 0.528 &#177; 0.300 0.749 &#177; 0.209 0.785 &#177; 0.259</ns0:cell></ns0:row><ns0:row><ns0:cell>devnet</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.557 &#177; 0.104 0.594 &#177; 0.111 0.698 &#177; 0.163 0.812 &#177; 0.164</ns0:cell></ns0:row><ns0:row><ns0:cell>feawad</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.862 &#177; 0.088 0.913 &#177; 0.069 0.892 &#177; 0.101 0.937 &#177; 0.083</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='5'>0.803 &#177; 0.236 0.868 &#177; 0.182 0.942 &#177; 0.022 0.943 &#177; 0.069 0.968 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='4'>0.726 &#177; 0.179 0.728 &#177; 0.163 0.870 &#177; 0.128 0.905 &#177; 0.106</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (iaf)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>KDD, CIFAR, OMNIGLOT and MNIST 
datasets) or a number of negative samples used (HIGGS and SUSY); 'one-class' denotes the absence of known anomalous samples. As one-class algorithms do not take into account negative samples, their results are identical for the tasks with any number of known anomalies. The best score in each column is highlighted in bold font. 0.981 &#177; 0.001 0.984 &#177; 0.002 0.993 &#177; 0.002 0.997 &#177; 0.002 0.997 &#177; 0.002 nfad (nsf) 0.704 &#177; 0.007 0.875 &#177; 0.121 0.901 &#177; 0.082 0.926 &#177; 0.041 0.945 &#177; 0.022Table 1. ROC AUC on KDD-99 dataset. 'nfad*' is our algorithm. 7/13 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:60998:2:0:NEW 28 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>ROC AUC on SUSY dataset. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>nn-oc</ns0:cell><ns0:cell cols='4'>0.787 &#177; 0.139 0.787 &#177; 0.139 0.787 &#177; 0.139 0.787 &#177; 0.139</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='4'>0.978 &#177; 0.017 0.978 &#177; 0.017 0.978 &#177; 0.017 0.978 &#177; 0.017</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='4'>0.641 &#177; 0.086 0.641 &#177; 0.086 0.641 &#177; 0.086 0.641 &#177; 0.086</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.879 &#177; 0.108 0.957 &#177; 0.050 0.987 &#177; 0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.934 &#177; 0.035 0.964 &#177; 0.032 0.984 &#177; 0.012</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>brute-force-ope 0.783 &#177; 0.120 0.915 &#177; 0.096 0.968 &#177; 0.041 0.986 &#177; 0.015</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='4'>0.694 &#177; 0.167 0.933 &#177; 0.060 0.974 &#177; 0.023 0.989 &#177; 0.011</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='4'>0.720 &#177; 0.186 0.933 &#177; 0.062 0.977 &#177; 0.023 0.990 &#177; 0.009</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='4'>0.793 &#177; 0.129 0.942 &#177; 0.048 0.979 &#177; 0.016 0.991 &#177; 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='4'>0.636 &#177; 0.114 0.859 &#177; 0.094 0.908 &#177; 0.071 0.947 &#177; 0.059</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.911 &#177; 0.096 0.944 &#177; 0.065 0.952 &#177; 0.079</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (resflow)</ns0:cell><ns0:cell cols='4'>0.682 &#177; 0.115 0.909 &#177; 0.959 0.935 &#177; 0.111 0.972 &#177; 0.019</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>ROC AUC on MNIST dataset. 'nfad*' is our algorithm. 
&#177; 0.101 0.532 &#177; 0.101 0.532 &#177; 0.101 0.532 &#177; 0.101 rae-oc 0.585 &#177; 0.126 0.585 &#177; 0.126 0.585 &#177; 0.126 0.585 &#177; 0.126 deep-svdd-oc 0.546 &#177; 0.058 0.546 &#177; 0.058 0.546 &#177; 0.058 0.546 &#177; 0.058 two-class -0.659 &#177; 0.093 0.708 &#177; 0.086 0.748 &#177; 0.082 dae -0.587 &#177; 0.109 0.634 &#177; 0.109 0.671 &#177; 0.093 brute-force-ope 0.540 &#177; 0.101 0.688 &#177; 0.087 0.719 &#177; 0.079 0.757 &#177; 0.073 hmc-eope 0.547 &#177; 0.116 0.678 &#177; 0.091 0.709 &#177; 0.084 0.739 &#177; 0.074 rmsprop-eope 0.565 &#177; 0.111 0.678 &#177; 0.081 0.715 &#177; 0.083 0.746 &#177; 0.069 deep-eope 0.564 &#177; 0.094 0.674 &#177; 0.100 0.690 &#177; 0.092 0.719 &#177; 0.099 deep-sad 0.532 &#177; 0.061 0.653 &#177; 0.072 0.680 &#177; 0.069 0.689 &#177; 0.065 pro -0.635 &#177; 0.081 0.653 &#177; 0.075 0.670 &#177; 0.069 nfad (resflow) 0.597 &#177; 0.083 0.800 &#177; 0.095 0.863 &#177; 0.042 0.877 &#177; 0.045</ns0:figDesc><ns0:table><ns0:row><ns0:cell>8/13</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>ROC AUC on CIFAR-10 dataset. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>one class</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>nn-oc</ns0:cell><ns0:cell cols='4'>0.521 &#177; 0.166 0.521 &#177; 0.166 0.521 &#177; 0.166 0.521 &#177; 0.166</ns0:cell></ns0:row><ns0:row><ns0:cell>rae-oc</ns0:cell><ns0:cell cols='4'>0.771 &#177; 0.221 0.771 &#177; 0.221 0.771 &#177; 0.221 0.771 &#177; 0.221</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-svdd-oc</ns0:cell><ns0:cell cols='4'>0.640 &#177; 0.153 0.640 &#177; 0.153 0.640 &#177; 0.153 0.640 &#177; 0.153</ns0:cell></ns0:row><ns0:row><ns0:cell>two-class</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.799 &#177; 0.162 0.862 &#177; 0.115 0.855 &#177; 0.125</ns0:cell></ns0:row><ns0:row><ns0:cell>dae</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.737 &#177; 0.134 0.821 &#177; 0.104 0.805 &#177; 0.121</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>brute-force-ope 0.503 &#177; 0.213 0.724 &#177; 0.222 0.765 &#177; 0.208 0.825 &#177; 0.126</ns0:cell></ns0:row><ns0:row><ns0:cell>hmc-eope</ns0:cell><ns0:cell cols='4'>0.710 &#177; 0.178 0.801 &#177; 0.139 0.842 &#177; 0.112 0.842 &#177; 0.115</ns0:cell></ns0:row><ns0:row><ns0:cell>rmsprop-eope</ns0:cell><ns0:cell cols='4'>0.678 &#177; 0.274 0.821 &#177; 0.143 0.855 &#177; 0.112 0.863 &#177; 0.111</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-eope</ns0:cell><ns0:cell cols='4'>0.696 &#177; 0.172 0.808 &#177; 0.140 0.851 &#177; 0.110 0.842 &#177; 0.122</ns0:cell></ns0:row><ns0:row><ns0:cell>deep-sad</ns0:cell><ns0:cell cols='4'>0.832 &#177; 0.123 0.856 &#177; 0.123 0.885 &#177; 0.095 0.884 &#177; 0.091</ns0:cell></ns0:row><ns0:row><ns0:cell>pro</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>0.750 &#177; 0.160 0.765 &#177; 0.163 0.787 &#177; 0.153</ns0:cell></ns0:row><ns0:row><ns0:cell>nfad (resflow)</ns0:cell><ns0:cell cols='4'>0.567 &#177; 0.108 0.727 &#177; 0.188 0.868 &#177; 0.111 0.870 &#177; 0.102</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>ROC AUC on Omniglot dataset. Note that for this task only Greek, Futurama and Braille alphabets were considered as normal classes. 'nfad*' is our algorithm.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Editor PeerJ Computer Science September 28, 2021 National Research University Higher School of Economics 20 Myasnitskaya Ulitsa, Moscow 101000 Russia, aryzhikov@hse.ru Dear Editor, first of all, we would like to thank you and the reviewers for taking the prevision revision into account and giving a new feedback. Below, we would like to respond to the Yingjie Zhou’ commentaries and indicate the changes made in the paper. The revision contains an additional Figure and Discussion changes clarifying for the reviewer’ comments. Also, we fixed the references format in References section according to the comment. Reviewer #1 (Yingjie Zhou) Basic reporting for the two references, ”(1) Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection, IEEE Transactions on Neural Networks and Learning Systems, 2021. (2) Deep Weakly-supervised Anomaly Detection. arXiv preprint: 1910.13601 (2019).” What is the major technical differences between the references and the proposed method? . . . Thank you for the comment! The main difference is that, unlike to the existing anomaly detection methods, our algorithm represents model-agnostic dataset augmentation scheme which can be used with an arbitrary binary classifier (including the decision trees and other conventional classification algorithms) and arbitrary number of training anomalies in the anomaly detection task. This fact is noted in the end of Conclusion section. Now we also added some related notes in the end of the Discussion section as well. . . . I guess that the authors did not display the bad cases for the proposed method, which the authors should present and high- light them. Thank you for your comment! We designed some toy experiments in order to show the bad cases of the proposed method and added some more analysis in the Discussion section (first two paragraphs) based on your comment. some references are not complete, e.g., ““Zhou, Y., Song, X., Zhang, Y., Liu, F., Zhu, C., and Liu, L. (2021). Feature encoding with autoencoders for weakly-supervised anomaly detection.””, which lacks the publication title and etc. ”241 Chen, R. T. Q., Behrmann, J., Duvenaud, D., and Jacobsen, J.-H. (2019). Residual flows for invertible 242 generative modeling.” We do apologise for such an inaccurate citation. Now we fixed all the cited references along with the bibliography format. a detailed lists of revisions are needed in the response letter for each coments in the previous review, so that the reviewer can see the revisions more clear. Originally we attached such an annotated revised manuscript. However, the editor recommended us to attach the automatically generated diff only along with a response letter. During this revision, we also attached such a ”.tex” diff, but would be also glad to additionally provide a changes annotated manuscript if you need and if the editor is ok with that. Reviewer #2 Thank you for reading our previous revision and the acceptance of all the previous changes. We will be glad to answer and fix any additional comments if you have ones. 2 Summary We do appreciate all your comments and took into account all of them preparing the next revision. We hope, that after addressing these valuable commentaries, the paper now provides a clear picture of the proposed method and its advantages, and we wish to thank the reviewers once again. Sincerely, Artem Ryzhikov on behalf of authors. 3 "
Here is a paper. Please give your review comments after reading it.
262
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We describe a method for assessing data set complexity based on the estimation of the underlining probability distribution and Hellinger distance. Contrary to some popular measures it is not focused on the shape of decision boundary in a classification task but on the amount of available data with respect to attribute structure. Complexity is expressed in terms of graphical plot, which we call complexity curve. We use it to propose a new variant of learning curve plot called generalisation curve. Generalisation curve is a standard learning curve with x-axis rescaled according to the data set complexity curve. It is a classifier performance measure, which shows how well the information present in the data is utilised. We perform theoretical and experimental examination of properties of the introduced complexity measure and show its relation to the variance component of classification error. We compare it with popular data complexity measures on 81 diverse data sets and show that it can contribute to explaining the performance of specific classifiers on these sets. Then we apply our methodology to a panel of benchmarks of standard machine learning algorithms on typical data sets, demonstrating how it can be used in practice to gain insights into data characteristics and classifier behaviour.</ns0:p><ns0:p>Moreover, we show that complexity curve is an effective tool for reducing the size of the training set (data pruning), allowing to significantly speed up the learning process without reducing classification accuracy. Associated code is available to download at: https://github.com/zubekj/complexity_curve (open source Python implementation).</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>It is common knowledge in machine learning community that the difficulty of classification problems varies greatly. Sometimes it is enough to use simple out of the box classifier to get a very good result and sometimes careful preprocessing and model selection are needed to get any non-trivial result at all. The difficulty of a classification task clearly stems from certain properties of the data set, yet we still have problems with defining those properties in general.</ns0:p><ns0:p>Bias-variance decomposition <ns0:ref type='bibr' target='#b8'>(Domingos, 2000)</ns0:ref> demonstrates that the error of a predictor can be attributed to three sources: bias, coming from inability of an algorithm to build an adequate model for the relationship present in data, variance, coming from inability to estimate correct model parameters from an imperfect data sample, and some irreducible noise. Following this line of reasoning, difficulty of a classification problem may come partly from the complexity of the relation between dependent variable and explanatory variables, partly from the scarcity of information in the training sample, and partly from an overlap between classes. This is identical to sources of classification difficulty identified by <ns0:ref type='bibr' target='#b16'>Ho and Basu (2002)</ns0:ref>, who labelled the three components: 'complex decision boundary', 'small sample size and dimensionality induced sparsity' and 'ambiguous classes'.</ns0:p><ns0:p>In this article we introduce a new measure of data complexity targeted at sample sparsity, which 1 is mostly associated with variance error component. 
We aim to measure information saturation of a data set without making any assumptions on the form of relation between dependent variable and the rest of variables, so explicitly disregarding shape of decision boundary and classes ambiguity. Our complexity measure takes into account the number of samples, the number of attributes and attributes internal structure, under a simplifying assumption of attribute independence. The key idea is to check how well a data set can be approximated by its subsets. If the probability distribution induced by a small data sample is very similar to the probability distribution induced by the whole data set we say that the set is saturated with information and presents an opportunity to learn the relationship between variables without promoting the variance. To operationalise this notion we introduce two kinds of plots:</ns0:p><ns0:p>&#8226; Complexity curve -a plot presenting how well subsets of growing size approximate distribution of attribute values. It is a basic method applicable to clustering, regression and classification problems.</ns0:p><ns0:p>&#8226; Conditional complexity curve -a plot presenting how well subsets of growing size approximate distribution of attribute values conditioned on class. It is applicable to classification problems and more robust against class imbalance or differences in attributes structure between classes.</ns0:p><ns0:p>Since the proposed measure characterise the data sample itself without making any assumptions as to how that sample will be used it should be applicable to all kinds of problems involving reasoning from data. In this work we focus on classification tasks since this is the context in which data complexity measures were previously applied. We compare area under the complexity curve with popular data complexity measures and show how it complements the existing metrics. We also demonstrate that it is useful for explaining classifier performance by showing that the area under the complexity curve is correlated with the area under the receiver operating characteristic (AUC ROC) for popular classifiers tested on 81 benchmark data sets.</ns0:p><ns0:p>We propose two immediate applications of the developed method. The first one is connected with the fundamental question: how much of the original sample is needed to build a successful predictor? We pursue this topic by proposing a data pruning strategy based on complexity curve and evaluating it on large data sets. We show that it can be considered as an alternative to progressive sampling strategies <ns0:ref type='bibr' target='#b32'>(Provost et al., 1999)</ns0:ref>.</ns0:p><ns0:p>The second proposed application is classification algorithm comparison. Knowing characteristics of benchmark data sets it is possible to check which algorithms perform well in the context of scarce data. To fully utilise this information, we present a graphical performance measure called generalisation curve. It is based on learning curve concept and allows to compare the learning process of different algorithms while controlling the variance of the data. To demonstrate its validity we apply it to a set of popular algorithms. We show that the analysis of generalisation curves points to important properties of the learning algorithms and benchmark data sets, which were previously suggested in the literature.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED LITERATURE</ns0:head><ns0:p>Problem of measuring data complexity in the context of machine learning is broadly discussed. 
Our beliefs are similar to <ns0:ref type='bibr' target='#b15'>Ho (2008)</ns0:ref>, who stated the need for including data complexity analysis in algorithm comparison procedures. The same need is also discussed in fields outside machine learning, for example in combinatorial optimisation <ns0:ref type='bibr' target='#b43'>(Smith-Miles and Lopes, 2012)</ns0:ref>.</ns0:p><ns0:p>The general idea is to select a sufficiently diverse set of problems to demonstrate both strengths and weaknesses of the analysed algorithms. The importance of this step was stressed by <ns0:ref type='bibr' target='#b27'>Maci&#224; et al. (2013)</ns0:ref>, who demonstrated how algorithm comparison may be biased by benchmark data sets selection, and showed how the choice my guided by complexity measures. Characterising problem space with some metrics makes it possible to estimate regions in which certain algorithms perform well <ns0:ref type='bibr' target='#b26'>(Luengo and Herrera, 2013)</ns0:ref>, and this opens up possibilities of meta-learning <ns0:ref type='bibr' target='#b42'>(Smith-Miles et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this context complexity measures are used not only as predictors of classifier performance but more importantly as diversity measures capturing various properties of the data sets. It is useful when the measures themselves are diverse and focus on different aspects of the data to give as complete characterisation of the problem space as possible. In the later part of the article we demonstrate that complexity curve fits well into the landscape of currently used measures, offering new insights into data characteristics.</ns0:p></ns0:div> <ns0:div><ns0:head>2/34</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Measuring data complexity</ns0:head><ns0:p>A set of practical measures of data complexity with regard to classification was introduced by <ns0:ref type='bibr' target='#b16'>Ho and Basu (2002)</ns0:ref>, and later extended by <ns0:ref type='bibr' target='#b17'>Ho et al. (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b30'>Orriols-Puig et al. (2010)</ns0:ref>. It is routinely used in tasks involving classifier evaluation <ns0:ref type='bibr' target='#b27'>(Maci&#224; et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b26'>Luengo and Herrera, 2013)</ns0:ref> and meta-learning <ns0:ref type='bibr' target='#b11'>(D&#237;ez-Pastor et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Mantovani et al., 2015)</ns0:ref>. Some of these measures are based on the overlap of values of specific attributes, examples include Fisher's discriminant ratio, volume of overlap region, attribute efficiency etc. The others focus directly on class separability, this groups includes measures such as the fraction of points on the boundary, linear separability, the ratio of intra/inter class distance. In contrast to our method, such measures focus on specific properties of the classification problem, measuring decision boundary and class overlap. Topological measures concerned with data sparsity, such as ratio of attributes to observations, attempt to capture similar properties as complexity curve. <ns0:ref type='bibr' target='#b24'>Li and Abu-Mostafa (2006)</ns0:ref> defined data set complexity in the context of classification using the general concept of Kolmogorov complexity. 
They proposed a way to measure data set complexity using the number of support vectors in support vector machine (SVM) classifier. They analysed the problems of data decomposition and data pruning using above methodology. A graphical representation of the data set complexity called the complexity-error plot was also introduced. The main problem with their approach is the selection of very specific and complex machine learning algorithms, which may render the results in less universal way, and which is prone to biases specific for SVMs. This make their method unsuitable for diverse machine learning algorithms comparison.</ns0:p><ns0:p>Another approach to data complexity is to analyse it on instance level. This kind of analysis is performed by <ns0:ref type='bibr' target='#b41'>Smith et al. (2013)</ns0:ref> who attempted to identify which instances are misclassified by various classification algorithm. They devised local complexity measures calculated with respect to single instances and later tried to correlate average instance hardness with global data complexity measures of <ns0:ref type='bibr' target='#b16'>Ho and Basu (2002)</ns0:ref>. They discovered that is mostly correlated with class overlap. This makes our work complementary, since in our complexity measure we deliberately ignore class overlap and individual instance composition to isolate another source of difficulty, namely data scarcity. <ns0:ref type='bibr' target='#b49'>Yin et al. (2013)</ns0:ref> proposed a method of feature selection based on Hellinger distance (a measure of similarity between probability distributions). The idea was to choose features, which conditional distributions (depending on the class) have minimal affinity. In the context of our framework this could be interpreted as measuring data complexity for single features. The authors demonstrated experimentally that for the high-dimensional imbalanced data sets their method is superior to popular feature selection methods using Fisher criterion, or mutual information.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluating classifier performance</ns0:head><ns0:p>The basic schema of classifier evaluation is to train a model on one data sample (training set) and then collect its predictions on another, independent data set (testing set). Overall performance is then calculated using some measure taking into account errors made on the testing set. The most intuitive measure is accuracy, but other measures such as precision, recall or F-measure are widely used. When we are interested in comparing classification algorithms, not just trained classifiers, this basic schema is limited.</ns0:p><ns0:p>It allows only to perform a static comparison of different algorithms under specified conditions. All algorithms' parameters are fixed, so are the data sets. The results may not be conclusive since the same algorithm may perform very well or very poor depending on the conditions. Such analysis provides a static view of classification task -there is little to be concluded on the dynamics of the algorithm: its sensitivity to the parameter tuning, requirements regarding the sample size etc.</ns0:p><ns0:p>A different approach, which preserves some of the dynamics, is receiver operating characteristic (ROC) curve <ns0:ref type='bibr' target='#b12'>(Fawcett, 2006)</ns0:ref>. It is possible to perform ROC analysis for any binary classifier, which returns continuous decisions. 
The fraction of correctly classified examples in class A is plotted against the fraction of incorrectly classified in class B for different values of the classification threshold. The ROC curve captures not only the sole performance of a classifier, but also its sensitivity to the threshold value selection.</ns0:p><ns0:p>Another graphical measure of classifier performance, which visualises its behaviour depending on a threshold value, is cost curve introduced by <ns0:ref type='bibr' target='#b9'>Drummond and Holte (2006)</ns0:ref>. They claim that their method is more convenient to use because it allows to visualise confidence intervals and statistical significance of differences between classifiers. However, it still measures the performance of a classifier in a relatively static situation where only threshold value changes. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Both ROC curves and cost curves are applicable only to classifiers with continuous outputs and to two class problems, which limits their usage. What is important is the key idea behind them: instead of giving the user a final solution they give freedom to choose an optimal classifier according to some criteria from a range of options.</ns0:p><ns0:p>The learning curve technique presents in a similar fashion the impact of the sample size on the classification accuracy. The concept itself originates from psychology. It is defined as a plot of learner's performance against the amount of effort invested in learning. Such graphs are widely used in medicine <ns0:ref type='bibr' target='#b35'>(Schlachta et al., 2001)</ns0:ref>, economics <ns0:ref type='bibr' target='#b29'>(Nemet, 2006)</ns0:ref>, education <ns0:ref type='bibr' target='#b22'>(Karpicke and Roediger, 2008)</ns0:ref>, or engineering <ns0:ref type='bibr' target='#b19'>(Jaber and Glock, 2013)</ns0:ref>. They allow to describe the amount of training required for an employee to perform certain job. They are also used in entertainment industry to scale difficulty level of video games <ns0:ref type='bibr' target='#b44'>(Sweetser and Wyeth, 2005)</ns0:ref>. In machine learning context they are sometimes referred to as the performance curve <ns0:ref type='bibr' target='#b38'>(Sing et al., 2005)</ns0:ref>. The effort in such curve is measured with the number of examples in the training set.</ns0:p><ns0:p>Learning curve is a visualisation of an incremental learning process in which data is accumulated and the accuracy of the model increases. It captures the algorithm's generalisation capabilities: using the curve it is possible to estimate what amount of data is needed to successfully train a classifier and when collecting additional data does not introduce any significant improvement. This property is referred to in literature as the sample complexity -a minimal size of the training set required to achieve acceptable performance.</ns0:p><ns0:p>As it was noted above, standard learning curve in machine learning expresses the effort in terms of the training set size. However, for different data sets the impact of including an additional data sample may be different. Also, within the same set the effect of including first 100 samples and last 100 samples is very different. 
Generalisation curve -an extension of learning curve proposed in this article -deals with these problems by using an effort measure founded on data complexity instead of raw sample size.</ns0:p></ns0:div> <ns0:div><ns0:head>DEFINITIONS</ns0:head><ns0:p>In the following sections we define formally all measures used throughout the paper. Basic intuitions, assumptions, and implementation choices are discussed. Finally, algorithms for calculating complexity curve, conditional complexity curve, and generalisation curve are given.</ns0:p></ns0:div> <ns0:div><ns0:head>Measuring data complexity with samples</ns0:head><ns0:p>In a typical machine learning scenario we want to use information contained in a collected data sample to solve a more general problem which our data describe. Problem complexity can be naturally measured by the size of a sample needed to describe the problem accurately. We call the problem complex, if we need to collect a lot of data in order to get any results. On the other hand, if a small amount of data suffices we say the problem has low complexity.</ns0:p><ns0:p>How to determine if a data sample describes the problem accurately? Any problem can be described with a multivariate probability distribution P of a random vector X. From P we sample our finite data sample D. Now, we can use D to build the estimated probability distribution of X -P D . P D is the approximation of P. If P and P D are identical we know that data sample D describes the problem perfectly and collecting more observations would not give us any new information. Analogously, if P D is very different from P we can be certain that the sample is too small.</ns0:p><ns0:p>To measure similarity between probability distributions we use Hellinger distance. For two continuous distributions P and P D with probability density functions p and p D it is defined as:</ns0:p><ns0:formula xml:id='formula_0'>H 2 (P, P D ) = 1 2 p(x) &#8722; p D (x) 2 dx</ns0:formula><ns0:p>The minimum possible distance 0 is achieved when the distributions are identical, the maximum 1 is achieved when any event with non-zero probability in P has probability 0 in P D and vice versa. Simplicity and naturally defined 0-1 range make Hellinger distance a good measure for capturing sample information content.</ns0:p><ns0:p>In Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the reference distribution. Any subset S &#8834; D can be treated as a data sample and a probability distribution P S estimated from it will be an approximation of P D . By calculating H 2 (P D , P S ) we can assess how well a given subset represent the whole available data, i.e. determine its information content.</ns0:p><ns0:p>Obtaining a meaningful estimation of a probability distribution from a data sample poses difficulties in practice. The probability distribution we are interested in is the joint probability on all attributes. In that context most of the realistic data sets should be regarded as extremely sparse and na&#239;ve probability estimation using frequencies of occurring values would result in mostly flat distribution. This can be called the curse of dimensionality. Against this problem we apply a na&#239;ve assumption that all attributes are independent. This may seem like a radical simplification but, as we will demonstrate later, it yields good results in practice and constitute a reasonable baseline for common machine learning techniques. 
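As a concrete illustration of the quantity H2(PD, PS) for a single attribute, the following sketch (illustrative only; the implementation described below relies on kernel density estimation, whereas this toy version uses a shared histogram) estimates the squared Hellinger distance between the empirical distribution of a full sample and that of a small random subset.

import numpy as np

def hellinger_1d(a, b, bins=30):
    # Discrete approximation of H^2 = 1 - sum_i sqrt(p_i * q_i) on a shared binning.
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, edges = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 1.0 - np.sum(np.sqrt(p * q))

rng = np.random.default_rng(0)
full_sample = rng.normal(size=5000)                        # D: one attribute of the whole data set
subset = rng.choice(full_sample, size=50, replace=False)   # S: a small random subset of D
print(hellinger_1d(full_sample, subset))                   # approaches 0 as the subset grows

The distance shrinks towards zero as the subset grows, which operationalises the notion of a sample becoming saturated with information.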
Under the independence assumption we can calculate the joint probability density function f from the marginal density functions f 1 , . . . , f n :</ns0:p><ns0:formula xml:id='formula_1'>f (x) = f 1 (x 1 ) f 2 (x 2 ) . . . f n (x n )</ns0:formula><ns0:p>We will now show the derived formula for Hellinger distance under the independence assumption.</ns0:p><ns0:p>Observe that the Hellinger distance for continuous variables can be expressed in another form:</ns0:p><ns0:formula xml:id='formula_2'>1 2 f (x) &#8722; g(x) 2 dx = 1 2 f (x) &#8722; 2 f (x)g(x) + g(x) dx = 1 2 f (x) dx &#8722; f (x)g(x) dx + 1 2 g(x) dx = 1 &#8722; f (x)g(x) dx</ns0:formula><ns0:p>In the last step we used the fact the that the integral of a probability density over its domain must be one.</ns0:p><ns0:p>We will consider two multivariate distributions F and G with density functions:</ns0:p><ns0:formula xml:id='formula_3'>f (x 1 , . . . , x n ) = f 1 (x 1 ) . . . f n (x n ) g(x 1 , . . . , x n ) = g 1 (x 1 ) . . . g n (x n )</ns0:formula><ns0:p>The last formula for Hellinger distance will now expand:</ns0:p><ns0:formula xml:id='formula_4'>1 &#8722; &#8226; &#8226; &#8226; f (x 1 , . . . , x n )g(x 1 , . . . , x n ) dx 1 . . . dx n = 1 &#8722; &#8226; &#8226; &#8226; f 1 (x 1 ) . . . f n (x n )g 1 (x 1 ) . . . g n (x n ) dx 1 . . . dx n = 1 &#8722; f 1 (x 1 )g 1 (x 1 ) dx 1 . . . f n (x n )g n (x n ) dx n</ns0:formula><ns0:p>In this form variables are separated and parts of the formula can be calculated separately.</ns0:p></ns0:div> <ns0:div><ns0:head>Practical considerations</ns0:head><ns0:p>Calculating the introduced measure of similarity between data set in practice poses some difficulties.</ns0:p><ns0:p>First, in the derived formula direct multiplication of probabilities occurs, which leads to problems with numerical stability. We increased the stability by switching to the following formula: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_5'>1 &#8722; f 1 (x 1 )g 1 (x 1 ) dx 1 . . . f n (x n )g n (x n ) dx n = 5/</ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_6'>1 &#8722; 1 &#8722; 1 2 f 1 (x 1 ) &#8722; g 1 (x 1 ) 2 dx 1 . . . 1 &#8722; 1 2 f n (x n ) &#8722; g n (x n ) 2 dx 2 = 1 &#8722; 1 &#8722; H 2 (F 1 , G 1 ) . . . 1 &#8722; H 2 (F n , G n )</ns0:formula><ns0:p>For continuous variables probability density function is routinely done with kernel density estimation (KDE) -a classic technique for estimating the shape continuous probability density function from a finite data sample <ns0:ref type='bibr' target='#b37'>(Scott, 1992)</ns0:ref>. For sample (x 1 , x 2 , . . . , x n ) estimated density function has a form:</ns0:p><ns0:formula xml:id='formula_7'>fh (x) = 1 nh n &#8721; i=1 K x &#8722; x i h</ns0:formula><ns0:p>where K is the kernel function and h is a smoothing parameter -bandwidth. In our experiments we used Gaussian function as the kernel. This is a popular choice, which often yields good results in practice. The bandwidth was set according to the modified Scott's rule <ns0:ref type='bibr' target='#b37'>(Scott, 1992)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>h = 1 2 n &#8722; 1 d+4 ,</ns0:formula><ns0:p>where n is the number of samples and d number of dimensions.</ns0:p><ns0:p>In many cases the independence assumption can be supported by preprocessing input data in a certain way. A very common technique, which can be applied in this situation is the whitening transform. It transforms any set of random variables into a set of uncorrelated random variables. 
For a random vector X with a covariance matrix &#931; a new uncorrelated vector Y can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_9'>&#931; = PDP &#8722;1 W = PD &#8722; 1 2 P &#8722;1 Y = XW</ns0:formula><ns0:p>where D is diagonal matrix containing eigenvalues and P is matrix of right eigenvectors of &#931;. Naturally, lack of correlation does not implicate independence but it nevertheless reduces the error introduced by our independence assumption. Furthermore, it blurs the difference between categorical variables and continuous variables putting them on an equal footing. In all further experiments we use whitening transform preprocessing and then treat all variables as continuous.</ns0:p><ns0:p>A more sophisticated method is a signal processing technique known as Independent Component Analysis (ICA) <ns0:ref type='bibr' target='#b18'>(Hyv&#228;rinen and Oja, 2000)</ns0:ref>. It assumes that all components of an observed multivariate signal are mixtures of some independent source signals and that the distribution of the values in each source signal is non-gaussian. Under these assumption the algorithm attempts to recreate the source signals by splitting the observed signal into the components as independent as possible. Even if the assumptions are not met, ICA technique can reduce the impact of attributes interdependencies. Because of its computational complexity we used it as an optional step in our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine learning task difficulty</ns0:head><ns0:p>Our data complexity measure can be used for any type of problem described through a multivariate data sample. It is applicable to regression, classification and clustering tasks. The relation between the defined data complexity and the difficulty of a specific machine learning task needs to be investigated. We will focus on supervised learning case. Classification error will be measured as mean 0-1 error. Data complexity will be measured as mean Hellinger distance between real and estimated probability distributions of attributes conditioned on target variable:</ns0:p><ns0:formula xml:id='formula_10'>1 m m &#8721; i=1 H 2 (P(X|Y = y i ), P D (X|Y = y i ))</ns0:formula><ns0:p>where X -vector of attributes, Y -target variable, y 1 , y 2 , . . . y m -values taken by Y .</ns0:p><ns0:p>It has been shown that error of an arbitrary classification or regression model can be decomposed into three parts: <ns0:ref type='table' target='#tab_9'>-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:ref> Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b8'>Domingos (2000)</ns0:ref> proposed an universal scheme of decomposition, which can be adapted for different loss functions. For a classification problem and 0-1 loss L expected error on sample x for which the true label is t, and the predicted label given a traning set D is y can be expressed as:</ns0:p><ns0:formula xml:id='formula_11'>Error = Bias + Variance + Noise 6/34 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:formula xml:id='formula_12'>E D,t [1(t = y)] = 1(E t [t] = E D [y]) + c 2 E D [1(y = E D [y])] + c 1 E t [1(t = E t [t])] = B(x) + c 2 V (x) + c 1 N(x)</ns0:formula><ns0:p>where B -bias, V -variance, N -noise. Coefficients c 1 and c 2 are added to make the decomposition consistent for different loss functions. 
In this case they are equal to:</ns0:p><ns0:formula xml:id='formula_13'>c 1 = P D (y = E t [t]) &#8722; P D (y = E t [t])P t (y = t | E t [t] = t) c 2 = 1 if E t [t] = E D [y] &#8722; P D (y = E t [t] | y = E D [y])</ns0:formula><ns0:p>otherwise.</ns0:p><ns0:p>Bias This intuition can be supported by comparing our complexity measure with the error of the Bayes classifier. We will show that they are closely related. Let Y be the target variable taking on values v 1 , v 2 , . . . , v m , f i (x) an estimation of P(X = x|Y = v i ) from a finite sample D, and g(y) an estimation of P(Y = y). In such setting 0-1 loss of the Bayes classifier on a sample x with the true label t is:</ns0:p><ns0:formula xml:id='formula_14'>1(t = y) = 1 t = arg max i (g(v i ) f i (x))</ns0:formula><ns0:p>Let assume that t = v j . Observe that:</ns0:p><ns0:formula xml:id='formula_15'>v j = arg max i (g(v i ) f i (x)) &#8660; &#8704; i g(v j ) f j (x) &#8722; g(v i ) f i (x) &#8805; 0</ns0:formula><ns0:p>which for the case of equally frequent classes reduces to:</ns0:p><ns0:formula xml:id='formula_16'>&#8704; i f j (x) &#8722; f i (x) &#8805; 0</ns0:formula><ns0:p>We can simultanously add and substract term P</ns0:p><ns0:formula xml:id='formula_17'>(X = x |Y = v j ) &#8722; P(X = x |Y = v i ) to obtain: &#8704; i ( f j (x) &#8722; P(X = x |Y = v j )) + (P(X = x |Y = v i ) &#8722; f i (x)) + (P(X = x |Y = v j ) &#8722; P(X = x |Y = v i )) &#8805; 0 We know that P(X = x | Y = v j ) &#8722; P(X = x | Y = v i ) &#8805; 0,</ns0:formula><ns0:p>so as long as estimations f i (x), f j (x) do not deviate too much from real distributions the inequality is satisfied. It will not be satisfied (i.e. an error will take place) only if the estimations deviate from the real distributions in a certain way (i.e.</ns0:p><ns0:formula xml:id='formula_18'>f j (x) &lt; P(X = x|Y = v j ) and f i (x) &gt; P(X = x|Y = v i ))</ns0:formula><ns0:p>and the sum of these deviations is greater than <ns0:ref type='table' target='#tab_9'>-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:ref> Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_19'>7/34 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:formula xml:id='formula_20'>P(X = x|Y = v j ) &#8722; P(X = x|Y = v i ).</ns0:formula><ns0:p>The Hellinger distance between f i (x) and P(X = x|Y = v i ) measures the deviation. This shows that by minimising Hellinger distance we are also minimising error of the Bayes classifier. Converse may not be true: not all deviations of probability estimates result in classification error.</ns0:p><ns0:p>In the introduced complexity measure we assumed independency of all attributes, which is analogous to the assumption of na&#239;ve Bayes. Small Hellinger distance between class-conditioned attribute distributions induced by sets A and B means that na&#239;ve Bayes trained on set A and tested on set B will have only very slight variance error component. Of course, if the indepedence assumption is broken bias error component may still be substantial.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve</ns0:head><ns0:p>Complexity curve is a graphical representation of a data set complexity. 
It is a plot presenting the expected Hellinger distance between a subset and the whole set versus subset size:</ns0:p><ns0:formula xml:id='formula_21'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>where P is the empirical probability distribution estimated from the whole set and Q n is the probability distribution estimated from a random subset of size n &#8804; |D|. Let us observe that CC(|D|) = 0 because P = Q |D| . Q 0 is undefined, but for the sake of convenience we assume CC(0) = 1.</ns0:p><ns0:p>Algorithm 1 Procedure for calculating complexity curve. D -original data set, K -number of random subsets of the specified size.</ns0:p><ns0:p>1. Transform D with whitening transform and/or ICA to obtain D I .</ns0:p><ns0:p>2. Estimate probability distribution for each attribute of D I and calculate joint probability distribution -P.</ns0:p><ns0:p>3. For i in 1 . . . |D I | (with an optional step size d):</ns0:p><ns0:p>(a) For j in 1 . . . K:</ns0:p><ns0:p>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Estimate probability distribution for each attribute of S j i and calculate joint probability distribution -Q j i . iii. Calculate Hellinger distance:</ns0:p><ns0:formula xml:id='formula_22'>l j i = H 2 (P, Q j i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_23'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Complexity curve is a plot of m i &#177; s i vs i.</ns0:p><ns0:p>To estimate complexity curve in practice, for each subset size K random subsets are drawn and the mean value of Hellinger distance, along with standard error, is marked on the plot. The Algorithm 1 presents the exact procedure. Parameters K (the number of samples of a specified size) and d (sampling means that the subset size has a far greater impact on the Hellinger distance that the composition of the individual subsets.</ns0:p><ns0:p>The shape of the complexity curve captures the information on the complexity of the data set. If the data is simple, it is possible to represent it relatively well with just a few instances. In such case, the complexity curve is very steep at the beginning and flattens towards the end of the plot. If the data is complex, the initial steepness of the curve is smaller. That information can be aggregated into a single parameter -the area under the complexity curve (AUCC). If we express the subset size as the fraction of the whole data set, then the value of the area under the curve becomes limited to the range [0, 1] and can be used as an universal measure for comparing complexity of different data sets.</ns0:p></ns0:div> <ns0:div><ns0:head>Conditional complexity curve</ns0:head><ns0:p>The complexity curve methodology presented so far deals with the complexity of a data set as a whole.</ns0:p><ns0:p>While this approach gives information about data structure, it may assess complexity of the classification task incorrectly. This is because data distribution inside each of the classes may vary greatly from the overall distribution. For example, when the number of classes is larger, or the classes are imbalanced, a random sample large enough to represent the whole data set may be too small to represent some of the classes. To take this into account we introduce conditional complexity curve. We calculate it by splitting each data sample according to the class value and taking the arithmetic mean of the complexities of each sub-sample. 
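A condensed Python sketch of the sampling loop of Algorithm 1 is given below (it reuses the hypothetical hellinger_independent helper from the earlier sketch and assumes the data have already been preprocessed); the conditional variant of Algorithm 2, presented next, applies the same loop to each class separately and averages the distances:

import numpy as np

def complexity_curve(data, K=20, steps=60, rng=np.random.default_rng(0)):
    # Sketch of Algorithm 1: for increasing subset sizes draw K random subsets,
    # measure H^2 between each subset and the whole data set, and record the
    # mean and the spread of the distances.
    n = len(data)
    sizes = np.linspace(max(2, n // steps), n, steps, dtype=int)
    means, errors = [], []
    for size in sizes:
        dists = [hellinger_independent(data[rng.choice(n, size, replace=False)], data)
                 for _ in range(K)]
        means.append(np.mean(dists))
        errors.append(np.std(dists))
    return sizes, np.array(means), np.array(errors)

# Area under the complexity curve with subset size expressed as a fraction of |D|:
# sizes, means, _ = complexity_curve(whitened_data)
# aucc = np.trapz(means, sizes / len(whitened_data))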
Algorithm 2 presents the exact procedure.</ns0:p><ns0:p>Comparison of standard complexity curve and conditional complexity curve for iris data set is given by Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>. This data set has 3 distinct classes. Our expectation is that estimating conditional distributions for each class would require larger data samples than estimating the overall distribution. Shape of the conditional complexity curve is consistent with this expectation: it is less steep than the standard curve and has larger AUCC value.</ns0:p></ns0:div> <ns0:div><ns0:head>Generalisation curve</ns0:head><ns0:p>Generalisation curve is the proposed variant of learning curve based on data set complexity. It is the plot presenting accuracy of a classifier trained on a data subset versus subset's information content, i.e. its Hellinger distance from the whole set. To construct the plot, a number of subsets of a specified size are drawn, the mean Hellinger distance and the mean classifier accuracy are marked on the plot. Trained classifiers are always evaluated on the whole data set, which represents the source of full information.</ns0:p><ns0:p>Using such resubstitution in the evaluation procedure may be unintuitive since the obtained scores do not represent true classifier performance on independent data. However this strategy corresponds to information captured by complexity curve and allows to utilise full data set for evaluation without relying (a) For j in 1 . . . K:</ns0:p><ns0:formula xml:id='formula_24'>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Split S j</ns0:formula><ns0:p>i according to the class into S j,1 i , S j,2 i , . . . , S j,C i . iii. From S j,1 i , S j,2 i , . . . , S j,C i estimate probability distributions Q j,1 i , Q j,2 i , . . . , Q j,C i . iv. Calculate mean Hellinger distance:</ns0:p><ns0:formula xml:id='formula_25'>l j i = 1 C &#8721; C k=1 H 2 (P k , Q j,k i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_26'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Conditional complexity curve is a plot of m i &#177; s i vs i. iii. Train the classifier on S j i and evaluate it on D to get its accuracy a j i .</ns0:p><ns0:p>(b) Calculate mean l i and mean a i :</ns0:p><ns0:formula xml:id='formula_27'>l i = 1 K K &#8721; j=1 l j i a i = 1 K K &#8721; j=1 a j i</ns0:formula><ns0:p>Generalisation curve is a plot of a i vs l i .</ns0:p><ns0:p>Standard learning curve and generalisation curve for the same data and classifier are depicted in Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>. The generalisation curve gives more insight into algorithm learning dynamics, because it emphasises initial learning phases in which new information is acquired. In the case of k-neighbours classifier we can see that it is unable to generalise if the training sample is too small. Then it enters a rapid learning phase which gradually shifts to a final plateau, when the algorithm is unable to incorporate any new information into the model.</ns0:p><ns0:p>In comparison with standard learning curve, generalisation curve should be less dependent on data characteristics and more suitable for the comparison of algorithms. 
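A minimal sketch of this computation for a scikit-learn style classifier is shown below (hellinger_independent is again the hypothetical helper from the earlier sketch; note that very small subsets may contain a single class, which some classifiers refuse to fit):

import numpy as np

def generalisation_curve(clf, X, y, K=20, steps=60, rng=np.random.default_rng(0)):
    # For each subset size: draw K subsets, record the mean Hellinger distance
    # of the subset from the whole set and the mean accuracy of a classifier
    # trained on the subset and evaluated on the whole set (resubstitution).
    n = len(X)
    curve = []
    for size in np.linspace(max(2, n // steps), n, steps, dtype=int):
        dists, accs = [], []
        for _ in range(K):
            idx = rng.choice(n, size, replace=False)
            dists.append(hellinger_independent(X[idx], X))
            accs.append(clf.fit(X[idx], y[idx]).score(X, y))
        curve.append((np.mean(dists), np.mean(accs)))
    return np.array(curve)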
Again the score, which can be easily obtained from such plot is the area under the curve.</ns0:p></ns0:div> <ns0:div><ns0:head>PROPERTIES</ns0:head><ns0:p>To support validity of the proposed method, we perform an in-depth analysis of its properties. We start from purely mathematical analysis giving some intuitions on complexity curve convergence rate and identifying border cases. Then we perform experiments with toy artificial data sets testing basic assumptions behind complexity curve. After that we compare it experimentally with other complexity data measures and show its usefulness in explaining classifier performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Mathematical properties</ns0:head><ns0:p>Drawing a random subset S n from a finite data set D of size N corresponds to sampling without replacement. Let assume that the data set contains k distinct values {v 1 , v 2 , . . . , v k } occurring with frequencies P = (p 1 , p 2 , . . . , p k ). Q n = (q 1 , q 2 , . . . , q k ) will be a random vector which follows a multivariate hypergeometric distribution.</ns0:p><ns0:formula xml:id='formula_28'>q i = 1 n &#8721; y&#8712;S n 1{y = v i }</ns0:formula><ns0:p>The expected value for any single element is: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_29'>E[q i ] = p i</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The probability of obtaining any specific vector of frequencies:</ns0:p><ns0:formula xml:id='formula_30'>P (Q n = (q 1 , q 2 , . . . , q k )) = p 1 N q 1 n p 2 N q 2 n &#8226; &#8226; &#8226; p k N q k n N n with &#8721; k i=1 q i = 1.</ns0:formula><ns0:p>322</ns0:p><ns0:p>We will consider the simplest case of discrete probability distribution estimated through frequency counts without using the independence assumption. In such case complexity curve is by definition:</ns0:p><ns0:formula xml:id='formula_31'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>It is obvious that CC(N) = 0 because when n = N we draw all available data. This means that complexity curve always converges. We can ask whether it is possible to say anything about the rate of this convergence. This is the question about the upper bound on the tail of hypergeometric distribution. Such bound is given by Hoeffding-Chv&#225;tal inequality <ns0:ref type='bibr' target='#b6'>(Chv&#225;tal, 1979;</ns0:ref><ns0:ref type='bibr' target='#b40'>Skala, 2013)</ns0:ref>. For the univariate case it has the following form:</ns0:p><ns0:formula xml:id='formula_32'>P (|q i &#8722; p i | &#8805; &#948; ) &#8804; 2e &#8722;2&#948; 2 n</ns0:formula><ns0:p>which generalises to a multivariate case as:</ns0:p><ns0:formula xml:id='formula_33'>P (|Q n &#8722; P| &#8805; &#948; ) &#8804; 2ke &#8722;2&#948; 2 n</ns0:formula><ns0:p>where |Q n &#8722; P| is the total variation distance. Since H 2 (P, Q n ) &#8804; |Q n &#8722; P| this guarantees that complexity 323 curve converges at least as fast.</ns0:p></ns0:div> <ns0:div><ns0:head>324</ns0:head><ns0:p>Now we will consider a special case when n = 1. In this situation the multivariate hypergeometric distribution is reduced to a simple categorical distribution P. 
In such case the expected Hellinger distance is:</ns0:p><ns0:formula xml:id='formula_34'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i &#8730; 2 k &#8721; j=1 &#8730; p j &#8722; 1{ j = k} 2 = k &#8721; i=1 p i &#8730; 2 1 &#8722; p i + ( &#8730; p i &#8722; 1) 2 = k &#8721; i=1 p i 1 &#8722; &#8730; p i</ns0:formula><ns0:p>This corresponds to the first point of complexity curve and determines its overall steepness.</ns0:p></ns0:div> <ns0:div><ns0:head>325</ns0:head><ns0:p>Theorem: E[H 2 (P, Q 1 )] is maximal for a given k when P is an uniform categorical distribution over k categories, i.e.:</ns0:p><ns0:formula xml:id='formula_35'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; 1 &#8722; 1 k Proof:</ns0:formula><ns0:p>We will consider an arbitrary distribution P and the expected Hellinger distance E[H 2 (P, Q 1 )].</ns0:p><ns0:p>We can modify this distribution by choosing two states l and k occurring with probabilities p l and p k such as that p l &#8722; p k is maximal among all pairs of states. We will redistribute the probability mass between the two states creating a new distribution P . The expected Hellinger distance for the distribution P will be:</ns0:p><ns0:formula xml:id='formula_36'>E[H 2 (P , Q 1 )] = k &#8721; i=1,i =k,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l &#8722; a</ns0:formula><ns0:p>where a and p k + p l &#8722; a are new probabilities of the two states in P . We will consider a function</ns0:p><ns0:formula xml:id='formula_37'>f (a) = a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l</ns0:formula><ns0:p>and look for its maxima.</ns0:p><ns0:formula xml:id='formula_38'>&#8706; f (x) &#8706; a = &#8722; 1 &#8722; &#8730; p k + p l &#8722; a + &#8730; p k + p l &#8722; a 4 1 &#8722; &#8730; p k + p l &#8722; a + 1 &#8722; &#8730; a &#8722; &#8730; a 4 1 &#8722; &#8730; a 13/34</ns0:formula><ns0:p>The derivative is equal to 0 if and only if a = p k +p l 2 . We can easily see that:</ns0:p><ns0:formula xml:id='formula_39'>f (0) = f (p k + p l ) = (p k + p l ) 1 &#8722; &#8730; p k + p l &lt; (p k + p l ) 1 &#8722; p k + p l 2</ns0:formula><ns0:p>This means that f (a) reaches its maximum for a = p k +p l 2 . From that we can conclude that for any distribution P if we produce distribution P by redistributing probability mass between two states equally the following holds:</ns0:p><ns0:formula xml:id='formula_40'>E[H 2 (P , Q 1 )] &#8805; E[H 2 (P, Q 1 )]</ns0:formula><ns0:p>If we repeat such redistribution arbitrary number of times the outcome distribution converges to uniform distribution. This proves that the uniform distribution leads to the maximal expected Hellinger distance for a given number of states.</ns0:p><ns0:p>Theorem: Increasing the number of categories by dividing an existing category into two new categories always increases the expected Hellinger distance, i.e.</ns0:p><ns0:formula xml:id='formula_41'>k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; k &#8721; i=1,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p l &#8722; a) 1 &#8722; &#8730; p l &#8722; a Proof:</ns0:formula><ns0:p>Without the loss of generality we can assume that a &lt; 0.5p l . 
We can subtract terms occurring on both sides of the inequality obtaining:</ns0:p><ns0:formula xml:id='formula_42'>p l 1 &#8722; &#8730; p l &#8804; a 1 &#8722; &#8730; a + (p l &#8722; a) 1 &#8722; &#8730; p l &#8722; a p l 1 &#8722; &#8730; p l &#8804; a 1 &#8722; &#8730; a + p l 1 &#8722; &#8730; p l &#8722; a &#8722; a 1 &#8722; &#8730; p l &#8722; a p l 1 &#8722; &#8730; p l + a 1 &#8722; &#8730; p l &#8722; a &#8804; a 1 &#8722; &#8730; a + p l 1 &#8722; &#8730; p l &#8722; a</ns0:formula><ns0:p>Now we can see that:</ns0:p><ns0:formula xml:id='formula_43'>p l 1 &#8722; &#8730; p l &#8804; p l 1 &#8722; &#8730; p l &#8722; a and a 1 &#8722; &#8730; p l &#8722; a &#8804; a 1 &#8722; &#8730; a</ns0:formula><ns0:p>which concludes the proof.</ns0:p><ns0:p>From the properties stated by these two theorems we can gain some intuitions about complexity curves in general. First, by looking at the formula for the uniform distribution E[H 2 (P,</ns0:p><ns0:formula xml:id='formula_44'>Q 1 )] = 1 &#8722; 1 k we can see that when k = 1 E[H 2 (P, Q 1 )] = 0 and when k &#8594; &#8734; E[H 2 (P, Q 1 )] &#8594; 1.</ns0:formula><ns0:p>The complexity curve will be less steep if the variables in the data set take multiple values and each value occurs with equal probability. This is consistent with our intuition: we need a larger sample to cover such space and collect information. For smaller number of distinct values or distributions with mass concentrated mostly in a few points smaller sample will be sufficient to represent most of the information in the data set.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve and the performance of an unbiased model</ns0:head><ns0:p>To confirm validity of the assumptions behind complexity curve we performed experiments with artificial data generated according to the known model. Error of the corresponding classifier trained on such data does not contain bias component, so it is possible to observe if variance error component is indeed upper bounded by the complexity curve. We used the same scenario as when calculating the complexity curve:</ns0:p><ns0:p>classifiers were trained on random subsets and tested on the whole data set. We matched first and last points of complexity curve and learning curve and observed their relation in between.</ns0:p></ns0:div> <ns0:div><ns0:head>14/34</ns0:head><ns0:p>PeerJ The first kind of data followed the logistic model (logit data set). Matrix X (1000 observations, 12 attributes) contained values drawn from normal distribution with mean 0 and standard deviation 1. Class vector Y was defined as follows:</ns0:p><ns0:formula xml:id='formula_45'>P(Y |x) = e &#946; x 1 + e &#946; x</ns0:formula><ns0:p>where &#946; = (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0). All attributes were independent and conditionally independent. Since Y values were not deterministic, there was some noise present -classification error of the logistic regression classifier trained and tested on the full data set was larger than zero. for values lesser than 0.25 or greater than 0.75 the class was 1, for other values the class was 0. 
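Both generators can be written down in a few lines of NumPy (a sketch; the description above does not state which attribute determines the class in the stripes data, so the first one is used here):

import numpy as np

rng = np.random.default_rng(0)

# logit data: 12 independent standard normal attributes, logistic class probability.
X_logit = rng.standard_normal((1000, 12))
beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0])
p = 1.0 / (1.0 + np.exp(-X_logit @ beta))
y_logit = (rng.random(1000) < p).astype(int)

# stripes data: 10 uniform attributes on [0, 1); class 1 outside [0.25, 0.75].
X_stripes = rng.random((1000, 10))
y_stripes = ((X_stripes[:, 0] < 0.25) | (X_stripes[:, 0] > 0.75)).astype(int)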
This kind of relation can be naturally modelled by a decision tree, and all the attributes are again independent and conditionally independent.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> presents complexity curve and adjusted error of decision tree classifier on the generated data.</ns0:p><ns0:p>Once again the assumptions of complexity curve methodology are satisfied and the complexity curve is indeed an upper bound for the error.</ns0:p><ns0:p>What would happen if the attribute conditional independence assumption was broken? To answer this question we generated another type of data modelled after multidimensional chessboard (chessboard data set). X matrix contained 1000 observations and 2, 3 attributes drawn from an uniform distribution on range [0, 1). Class vector Y had the following values:</ns0:p><ns0:formula xml:id='formula_46'>0 if &#931; m i=0 x i</ns0:formula><ns0:p>s is even 1 otherwise where s was a grid step in our experiments set to 0.5. There is clearly strong attribute dependence, but since all parts of decision boundary are parallel to one of the attributes this kind of data can be modelled with a decision tree with no bias. . Complexity curves for whitened data (dashed lines) and not whitened data (solid lines). Areas under the curves are given in the legend. 8I -set of 8 independent random variables with Student's t distribution. 8R -one random variable with Student's t distribution repeated 8 times. 8I w -whitened 8I. 8R w -whitened 8R.</ns0:p><ns0:p>dimensions, the more dependencies between attributes violating complexity curve assumptions. For 3 dimensional chessboard the classification problem becomes rather hard and the observed error decreases slowly, but the complexity curve remains almost the same as for 2 dimensional case.</ns0:p><ns0:p>Results of experiments with controlled artificial data sets are consistent with our theoretical expectations. Basing on them we can introduce a general interpretation of the difference between complexity curve and learning curve: learning curve below the complexity curve is an indication that the algorithm is able to build a good model without sampling the whole domain, limiting the variance error component.</ns0:p><ns0:p>On the other hand, learning curve above the complexity curve is an indication that the algorithm includes complex attributes dependencies in the constructed model, promoting the variance error component.</ns0:p></ns0:div> <ns0:div><ns0:head>Impact of whitening and ICA</ns0:head><ns0:p>To evaluate the impact of the proposed preprocessing techniques (whitening and ICA -Independent Component Analysis) on complexity curves we performed experiments with artificial data. In the first experiment we generated two data sets of 300 observations and with 8 attributes distributed according to Student's t distribution with 1.5 degrees of freedom. In one data set all attributes were independent, in the other the same attribute was repeated 8 times. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure'>7</ns0:ref> shows complexity curves calculated before and after whitening transform. We can see that whitening had no significant effect on the complexity curve of the independent set. In the case of the dependent set complexity curve calculated after whitening decreases visibly faster and the area under the curve is smaller. 
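A sketch of this setup is given below, with the whitening transform implemented directly from the eigendecomposition given earlier; the noise scale is our assumption, as the text only requires it to be small:

import numpy as np

rng = np.random.default_rng(0)

def whiten(X):
    # Whitening transform: Sigma = P D P^-1, W = P D^(-1/2) P^-1, Y = X W,
    # applied here after centring the data.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    return (X - X.mean(axis=0)) @ W

noise = lambda: 0.01 * rng.standard_normal((300, 8))            # assumed "small" noise
X_8I = rng.standard_t(1.5, size=(300, 8)) + noise()             # eight independent attributes
X_8R = np.repeat(rng.standard_t(1.5, size=(300, 1)), 8, axis=1) + noise()  # one attribute repeated

# Complexity curves are then computed for X_8I, X_8R, whiten(X_8I) and whiten(X_8R).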
This is consistent with our intuitive notion of complexity: data set with repeated attributes should be significantly less complex.</ns0:p><ns0:p>In the second experiment two data sets with 100 observations and 4 attributes were generated. The first data set was generated from the continuous uniform distribution on interval [0, 2], the second one from the discrete (categorical) uniform distribution on the same interval. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure' target='#fig_11'>8</ns0:ref> presents complexity curves for original, whitened and ICA-transformed data.</ns0:p><ns0:p>Among the original data sets the intuitive notion of complexity is preserved: area under the complexity curve for categorical data is smaller. The difference disappears for the whitened data but is again visible in the ICA-transformed data. </ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve variability and outliers</ns0:head><ns0:p>Complexity curve is based on the expected Hellinger distance and the estimation procedure includes some variance. The natural assumption is that the variability caused by the sample size is greater than the variability resulting from a specific composition of a sample. Otherwise averaging over samples of the same size would not be meaningful. This assumption is already present in standard learning curve methodology, when classifier accuracy is plotted against training set size. We expect that the exact variability of the complexity curve will be connected with the presence of outliers in the data set. Such influential observations will have a huge impact depending whether they will be included in a sample or not.</ns0:p><ns0:p>To verify whether these intuitions were true, we constructed two new data sets by introducing artificially outliers to WINE data set. In WINE001 we modified 1% of values by multiplying them by a random number from range (&#8722;10, 10). In WINE005 5% of values were modified in such manner.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref> presents conditional complexity curves for all three data sets. WINE001 curve has indeed a higher variance and is less regular than WINE curve. WINE005 curve is characterised not only by a higher variance but also by a larger AUCC value. This means that adding so much noise increased the overall complexity of the data set significantly.</ns0:p><ns0:p>The result support our hypothesis that large variability of complexity curve signify an occurrence of highly influential observations in the data set. This makes complexity curve a valuable diagnostic tool for such situations. However, it should be noted that our method is unable to distinguish between important outliers and plain noise. To obtain this kind of insight one has to employ different methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison with other complexity measures</ns0:head><ns0:p>The set of data complexity measures developed by <ns0:ref type='bibr' target='#b16'>Ho and Basu (2002)</ns0:ref> and extended by <ns0:ref type='bibr' target='#b17'>Ho et al. (2006)</ns0:ref> continues to be used in experimental studies to explain performance of various classifiers <ns0:ref type='bibr' target='#b11'>(D&#237;ez-Pastor et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Mantovani et al., 2015)</ns0:ref>. We decided to compare experimentally complexity curve with those measures. 
Descriptions of the measures used are given in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>.</ns0:p><ns0:p>According to our hypothesis conditional complexity curve should be robust in the context of class imbalance. To demonstrate this property we used for the comparison 88 imbalanced data sets used previously in the study by <ns0:ref type='bibr' target='#b11'>D&#237;ez-Pastor et al. (2015)</ns0:ref>. These data sets come originally from HDDT <ns0:ref type='bibr' target='#b7'>(Cieslak et al., 2011)</ns0:ref> and KEEL <ns0:ref type='bibr' target='#b0'>(Alcal&#225; et al., 2010)</ns0:ref> repositories. We selected only binary classification problems.</ns0:p><ns0:p>The list of data sets with their properties is presented as Tables <ns0:ref type='table' target='#tab_5'>2, 3</ns0:ref>. </ns0:p><ns0:formula xml:id='formula_47'>AUCC F1 F1v F2 F3 F4 L1 L2 L3 N1 N2 N3 N4 T1 T2 log(T2) log(T2) T2 T1 N4 N3 N2 N1 L3 L2 L1 F4 F3 F2 F1v F1 AUCC</ns0:formula><ns0:p>-0.79 -0.091-0.079-0.033 -0.27 -0.41 0.2 -0.17 -0.22 -0.15 -0.16 -0.14 0.33 0.092 0.75 1 -0.49 -0.087-0.047 -0.05 -0.13 -0.22 0.093 -0.14 -0.11 -0.16 -0.19 -0.15 0.15 0.099 1 0.75 -0.023 -0.81 -0.51 0.15 -0.62 -0.42 0.095 0.32 0.55 0.36 0.5 0.37 0.51 1 0.099 0.092 -0.37 -0.49 -0.23 0.25 -0.55 -0.59 0.015 0.3 0.31 0.53 0.51 0.51 1 0.51 0.15 0.33 -0.017 -0.35 -0.16 0.51 -0.49 -0.59 0.34 0.74 0.2 0.98 0.89 1 0.51 0.37 -0.15 -0.14 0.055 -0.47 -0.28 0.33 -0.55 -0.56 0.21 0.61 0.26 0.85 1 0.89 0.51 0.5 -0.19 -0.16 -0.058 -0.36 -0.16 0.52 -0.46 -0.56 0.38 0.81 0.21 1 0.85 0.98 0.53 0.36 -0.16 -0.15 0.25 -0.63 -0.21 0.017 -0.2 -0.07 -0.46 0.25 1 0.21 0.26 0.2 0.31 0.55 -0.11 -0.22 -0.11 -0.31 -0.17 0.38 -0.33 -0.38 0.51 1 0.25 0.81 0.61 0.74 0.3 0.32 -0.14 -0.17 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science For each data set we calculated area under the complexity curve using the previously described procedure and the values of other data complexity measures using DCOL software <ns0:ref type='bibr' target='#b30'>(Orriols-Puig et al., 2010</ns0:ref>). Pearson's correlation was then calculated for all the measures. As T2 measure seemed to have non-linear characteristics destroying the correlation additional column log T2 was added to comparison.</ns0:p><ns0:p>Results are presented as Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref>. Clearly AUCC is mostly correlated with log T2 measure. This is to be expected as both measures are concerned with sample size in relation to attribute structure. The difference is that T2 takes into account only the number of attributes while AUCC considers also the complexity of distributions of the individual attributes. Correlations of AUCC with other measures are much lower and it can be assumed that they capture different aspects of data complexity and may be potentially complementary.</ns0:p><ns0:p>The next step was to show that information captured by AUCC is useful for explaining classifier performance. In order to do so we trained a number of different classifiers on the 81 benchmark data sets and evaluated their performance using random train-test split with proportion 0.5 repeated 10 times. The performance measure used was the area under ROC curve. 
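This protocol can be written compactly with scikit-learn (a sketch; loading of the benchmark sets and the AUCC values are assumed to be available from the earlier steps):

import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import ShuffleSplit, cross_val_score

def mean_auc(clf, X, y, seed=0):
    # Ten random 50/50 train-test splits, scored with the area under ROC curve.
    cv = ShuffleSplit(n_splits=10, test_size=0.5, random_state=seed)
    return np.mean(cross_val_score(clf, X, y, cv=cv, scoring="roc_auc"))

# Pearson's correlation between AUCC and classifier performance across data sets:
# r, _ = pearsonr(aucc_values, [mean_auc(clf, X, y) for X, y in benchmark_sets])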
We selected three linear classifiers -na&#239;ve Bayes with gaussian kernel, linear discriminant analysis (LDA) and logistic regression -and two families of non-linear classifiers of varying complexity: k-nearest neighbour classifier (k-NN) with different values of parameter k and decision tree (CART) with the limit on maximal tree depth. The intuition was as follows: the linear classifiers do not model attributes interdependencies, which is in line with complexity curve assumptions. Selected non-linear classifiers on the other hand are -depending on the parametrisation -more prone to variance error, which should be captured by complexity curve.</ns0:p><ns0:p>Correlations between AUCC, log T2, and classifier performance are presented in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>. Most of the correlations are weak and do not reach statistical significance, however some general tendencies can be observed. As can be seen, AUC ROC scores of linear classifiers have very little correlation with AUCC and log T2. This may be explained by the high-bias and low-variance nature of these classifiers: they are not strongly affected by data scarcity but their performance depends on other factors. This is especially true for LDA classifier, which has the weakest correlation among linear classifiers.</ns0:p><ns0:p>In k-NN classifier complexity depends on k parameter: with low k values it is more prone to variance error, with larger k it is prone to bias if the sample size is not large enough <ns0:ref type='bibr' target='#b8'>(Domingos, 2000)</ns0:ref>. Both AUCC and log T2 seem to capture the effect of sample size in case of large k value well (correlations -0.2249 and 0.2395 for 35-NN). However, for k = 1 the correlation with AUCC is stronger (-0.1256 vs 0.0772).</ns0:p><ns0:p>Depth parameter in decision tree also regulates complexity: the larger the depth the more classifier is prone to variance error and less to bias error. This suggests that AUCC should be more strongly correlated with performance of deeper trees. On the other hand, complex decision trees explicitly model attribute interdependencies ignored by complexity curve, which may weaken the correlation. This is observed in the obtained results: for a decision stub (tree of depth 1), which is low-variance high-bias classifier, correlation with AUCC and log T2 is very weak. For d = 3 and d = 5 it becomes visibly stronger, and then for larger tree depth it again decreases. It should be noted that with large tree depth, as with small k values in k-NN, AUCC has stronger correlation with the classifier performance than log T2.</ns0:p><ns0:p>A slightly more sophisticated way of applying data complexity measures is an attempt to explain classifier performance relative to some other classification method. In our experiments LDA is a good candidate for reference method since it is simple, has low variance and is not correlated with either AUCC or log T2. Table <ns0:ref type='table'>5</ns0:ref> presents correlations of both measures with classifier performance relative to LDA.</ns0:p><ns0:p>Here we can see that correlations for AUCC are generally higher than for log T2 and reach significance for the majority of classifiers. 
Especially in the case of decision tree AUCC explains relative performance better than log T2 (correlation 0.1809 vs -0.0303 for d = inf).</ns0:p><ns0:p>Results of the presented correlation analyses demonstrate the potential of complexity curve to complement the existing complexity measures in explaining classifier performance. As expected from theoretical considerations, there is a relation between how well AUCC correlates with classifier performance and the classifier's position in bias-variance spectrum. It is worth noting that despite the attribute independence assumption of complexity curve method it proved useful for explaining performance of complex non-linear classifiers. Table <ns0:ref type='table'>7</ns0:ref>. Areas under conditional complexity curve (AUCC) for microarray data sets along AUC ROC values for different classifiers. k-NNk-nearest neighbour, DT -CART decision tree, LDA -linear discriminant analysis, NB -na&#239;ve Bayes, LR -logistic regression.</ns0:p></ns0:div> <ns0:div><ns0:head>Large p, small n problems</ns0:head><ns0:p>There is a special category of machine learning problems in which the number of attributes p is large with respect to the number of samples n, perhaps even order of magnitudes larger. Many important biological data sets, most notably data from microarray experiments, fall into this category <ns0:ref type='bibr' target='#b20'>(Johnstone and Titterington, 2009)</ns0:ref>. To test how our complexity measure behaves in such situations, we calculated AUCC scores for a few microarray data sets and compared them with AUC ROC scores of some simple classifiers. Classifiers were evaluated as in the previous section. Detailed information about the data sets is given by Table <ns0:ref type='table' target='#tab_7'>6</ns0:ref>.</ns0:p><ns0:p>Results of the experiment are presented in Table <ns0:ref type='table'>7</ns0:ref>. As expected, with the number of attributes much larger than the number of observations data is considered by our metric as extremely scarce -values of AUCC are in all cases above 0.95. On the other hand, AUC ROC classification performance is very varied between data sets with scores approaching or equal to 1.0 for LEUKEMIA and LYMPHOMA data sets, and scores around 0.5 baseline for PROSTATE. This is because despite the large number of dimensions the form of the optimal decision function can be very simple, utilising only a few of available dimensions.</ns0:p><ns0:p>Complexity curve does not consider the shape of decision boundary at all and thus does not reflect differences in classification performance.</ns0:p><ns0:p>From this analysis we concluded that complexity curve is not a good predictor of classifier performance for data sets containing a large number of redundant attributes, as it does not differentiate between important and unimportant attributes. The logical way to proceed in such case would be to perform some form of feature selection or dimensionality reduction on the original data, and then calculate complexity curve in the reduced dimensions.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATIONS Interpreting complexity curves</ns0:head><ns0:p>In order to prove the practical applicability of the proposed methodology, and show how complexity curve plot can be interpreted, we performed experiments with six simple data sets from UCI Machine Learning Repository <ns0:ref type='bibr' target='#b13'>(Frank and Asuncion, 2010)</ns0:ref> Looking at the individual graphs, it is now possible to compare complexity of different sets. 
From the sets considered, MONKS-1 and CAR are dense data sets with a lot of instances and medium number of attributes. The information they contain can be to a large extend recovered from relatively small subsets.</ns0:p><ns0:p>Such sets are natural candidates for data pruning. On the other hand, WINE and GLASS are small data sets with a larger number of attributes or classes -they can be considered complex, with no redundant information.</ns0:p><ns0:p>Besides the slope of the complexity curve we can also analyse its variability. We can see that the shape of WINE complexity curve is very regular with small variance in each point, while the GLASS curve displays much higher variance. This mean that the observations in GLASS data set are more diverse and some observations (or their combinations) are more important for representing data structure than the other.</ns0:p></ns0:div> <ns0:div><ns0:head>Data pruning with complexity curves</ns0:head><ns0:p>The problem of data pruning in the context of machine learning is defined as reducing the size of training sample in order to reduce classifier training time and still achieve satisfactory performance. It becomes extremely important as the data grows and a) does not fit the memory of a single machine, b) training times of more complex algorithms become very long.</ns0:p><ns0:p>A classic method for performing data pruning is progressive sampling -training the classifier on data samples of increasing size as long as its performance increases. <ns0:ref type='bibr' target='#b32'>Provost et al. (1999)</ns0:ref> analysed various schedules for progressive sampling and recommended geometric sampling, in which sample size is multiplied by a specified constant in each iteration, as the reasonable strategy in most cases. Geometric sampling uses samples of sizes a i n 0 , where n 0 -initial sample size, a -multiplier, i -iteration number.</ns0:p><ns0:p>In our method instead of training classifier on the drawn data sample we are probing the complexity curve. We are not detecting convergence of classifier accuracy, but searching for a point on the curve corresponding to some reasonably small Hellinger distance value, e.g. 0.005. This point designates the smallest data subset which still contains the required amount of information.</ns0:p><ns0:p>In this setting we were not interested in calculating the whole complexity curve but just in finding the minimal data subset, which still contains most of the original information. The search procedure should be as fast as possible, since the goal of the data pruning is to save time spent on training classifiers. To comply with these requirements we constructed a criterion function of the form f</ns0:p><ns0:formula xml:id='formula_48'>(x) = H 2 (G x , D) &#8722; t,</ns0:formula><ns0:p>where D denotes a probability distribution induced by the whole data set, G x a distribution induced by random subset of size x and t is the desired Hellinger distance. We used classic Brent method <ns0:ref type='bibr' target='#b4'>(Brent, 1973)</ns0:ref> to find a root of the criterion function. In this way data complexity was calculated only for the points visited by Brent's algorithm. To speed up the procedure even further we used standard complexity curve instead of the conditional one and settled for whitening transform as the only preprocessing technique. To verify if this idea is of practical use, we performed an experiment with three bigger data sets from UCI repository. 
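The search itself takes only a few lines (a sketch; hellinger_independent is the hypothetical helper from the earlier sketch, and because the criterion is stochastic the sign change required by Brent's method may occasionally fail to bracket):

import numpy as np
from scipy.optimize import brentq

def pruned_size(data, t=0.005, rng=np.random.default_rng(0)):
    # Find the smallest subset size x for which H^2(G_x, D) is close to t by
    # locating a root of f(x) = H^2(G_x, D) - t with Brent's method.
    n = len(data)
    def criterion(x):
        idx = rng.choice(n, size=int(max(x, 2)), replace=False)
        return hellinger_independent(data[idx], data) - t
    return int(brentq(criterion, 2, n, xtol=0.01 * n))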
Their basic properties are given by Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>.</ns0:p><ns0:p>For all data sets we performed a stratified 10 fold cross validation experiment. The training part of a split was pruned according to our criterion function with t = 0.005 (CC pruning) or using geometric progressive sampling with multiplier a = 2 and initial sample size n 0 = 100 (PS pruning). Achieving the same accuracy as with CC pruning was used as a stop criterion for progressive sampling. Classifiers were trained on pruned and unpruned data and evaluated on the testing part of each cross validation split.</ns0:p><ns0:p>Standard error was calculated for the obtained values. We have used machine learning algorithms from scikit-learn library <ns0:ref type='bibr' target='#b31'>(Pedregosa et al., 2011)</ns0:ref> and the rest of the procedure was implemented in Python with the help of NumPy and SciPy libraries. Calculations were done on a workstation with 8 core Intel R</ns0:p><ns0:p>Core TM i7-4770 3.4 Ghz CPU working under Arch GNU/Linux. These results present complexity curve pruning as a reasonable model-free alternative to progressive sampling. It is more stable and often less demanding computationally. It does not require additional convergence detection strategy, which is always an important consideration when applying progressive sampling in practice. What is more, complexity curve pruning can also be easily applied in the context of online learning, when the data is being collected on the fly. After appending a batch of new examples to the data set, Hellinger distance between the old data set and the extended one can be calculated. If the distance is smaller than the chosen threshold, the process of data collection can be stopped.</ns0:p></ns0:div> <ns0:div><ns0:head>Generalisation curves for benchmark data sets</ns0:head><ns0:p>Another application of the proposed methodology is comparison of classification algorithms based on generalisation curves. We evaluated a set of standard algorithms available in scikit-learn library <ns0:ref type='bibr' target='#b31'>(Pedregosa et al., 2011)</ns0:ref>. As benchmark data sets we used the same sets from UCI repository as in section demonstrating interpretability of complexity curves. The following classification algorithms were evaluated:</ns0:p><ns0:p>&#8226; MajorityClassifier -artificial classifier which always returns the label of the most frequent class in the training set,</ns0:p><ns0:p>&#8226; GaussianNB -na&#239;ve Bayes classifier with Gaussian kernel probability estimate,</ns0:p><ns0:p>&#8226; KNeighborsClassifier -k-nearest neighbours, k = 5,</ns0:p><ns0:p>&#8226; DecisionTreeClassifier -CART decision tree algorithm,</ns0:p><ns0:p>&#8226; RandomForestClassifier -random forest with 10 CART trees,</ns0:p><ns0:p>&#8226; LinearSVC -linear spport vector machine (without kernel transformation), cost parameter C = 1, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; SVC -support vector machine with radial basis function kernel (RBF):</ns0:p><ns0:formula xml:id='formula_49'>exp(&#8722; 1 n |x &#8722; x | 2 ), n -number of features, cost parameter C = 1.</ns0:formula><ns0:p>Generalisation curves were calculated for all classifiers with the same random seed, to make sure that the algorithms are trained on exactly the same data samples. 
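In scikit-learn terms the evaluated models correspond roughly to the following instantiations (a sketch; the majority classifier is expressed through DummyClassifier):

from sklearn.dummy import DummyClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC, SVC

classifiers = {
    "Majority": DummyClassifier(strategy="most_frequent"),
    "GaussianNB": GaussianNB(),
    "5-NN": KNeighborsClassifier(n_neighbors=5),
    "CART": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=10),
    "LinearSVC": LinearSVC(C=1.0),
    "RBF SVC": SVC(C=1.0, gamma="auto"),   # gamma = 1 / n_features, as in the kernel above
}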
The obtained curves are presented in Figure <ns0:ref type='figure' target='#fig_5'>12</ns0:ref>.</ns0:p><ns0:p>The performance of the majority classifier is used as a baseline. We expect that the worst-case performance of any classifier should be at least at the baseline level. This is indeed observed in the plots:</ns0:p><ns0:p>most classifiers start at the baseline level and then their accuracy steadily increases as more data are accumulated. The notable exception is the CAR data set, where the accuracy of decision tree and linear SVM stays below the accuracy of the majority classifier in the initial phase of learning. We attribute this to the phenomena known as anti-learning <ns0:ref type='bibr' target='#b23'>(Kowalczyk and Chapelle, 2005)</ns0:ref> In an ideal situation the learning algorithm is able to utilise every bit of additional information identified by the complexity curve to improve the classification and the accuracy gain is linear. The generalisation curve should be then a straight line. Convex generalisation curve indicates that complexity curve is only a loose upper bound on classifier variance, in other words algorithm is able to fit a model using less information than indicated by the complexity curve. On the other hand, concave generalisation curve corresponds to a situation when the independence assumption is broken and including information on attributes interdependencies, not captured by complexity curve, is necessary for successful classification.</ns0:p><ns0:p>On most of the benchmark data sets generalisation curves are generally convex, which means that the underlining complexity curves constitute proper upper bounds on the variance error component. The bound is relatively tight in the case of GLASS data set, looser in the case of IRIS, and the loosest for WINE and BREAST-CANCER-WISCONSIN data. A natural conclusion is that a lot of variability contained in this last data set and captured by the Hellinger distance is irrelevant to the classification task. The most straightforward explanation would be the presence of unnecessary attributes uncorrelated with class, which can be ignored altogether. This is consistent with the results of various studies in feature selection. <ns0:ref type='bibr' target='#b5'>Choubey et al. (1996)</ns0:ref> identified that in GLASS data 7-8 attributes (78-89%) are relevant, in IRIS data 3 attributes (75%), and in BREAST-CANCER-WISCONSIN 5-7 attributes (56-78%). Similar results were obtained for BREAST-CANCER-WISCONSIN in other studies, which found that only 4 of the original attributes (44%) contribute to the classification <ns0:ref type='bibr' target='#b34'>(Ratanamahatana and Gunopulos, 2003;</ns0:ref><ns0:ref type='bibr' target='#b25'>Liu et al., 1998)</ns0:ref>. <ns0:ref type='bibr' target='#b10'>Dy and Brodley (2004)</ns0:ref> obtained best classification results for WINE data set with 7 attributes (54%).</ns0:p><ns0:p>On MONKS-1 and CAR data generalisation curves for all algorithms besides na&#239;ve Bayes and linear SVM are concave. This is an indication of models relying heavily on attribute interdependencies to determine the correct class. This is not the case for na&#239;ve Bayes and linear SVM because these methods are unable to model attribute interactions. This is not surprising: both MONKS-1 and CAR are artificial data sets with discrete attributes devised for evaluation of rule-based and tree-based classifiers <ns0:ref type='bibr'>Thrun et al. 
(1991)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Bohanec and Rajkovi&#269; (1988)</ns0:ref>. Classes are defined with logical formulas utilising relations of multiple attributes rather than single values -clearly the attributes are interdependent.</ns0:p><ns0:p>An interesting case is RBF SVM on WINE data set. Even though it is possible to model the problem basing on a relatively small sample, it overfits strongly by trying to include unnecessary interdependencies. This is a situation when variance of a model is greater than indicated by the complexity curve.</ns0:p><ns0:p>To compare performance of different classifiers, we computed areas under generalisation curves (AUGC) for all data sets. Results are presented in Table <ns0:ref type='table' target='#tab_12'>11</ns0:ref>. Random forest classifier obtained the highest scores on all data sets except MONKS-1 where single decision tree performed the best. On WINE data set na&#239;ve Bayes achieved AUGC comparable with random forest.</ns0:p><ns0:p>AUGC values obtained on different data sets are generally not comparable, especially when the base level -majority classifier performance -differs. Therefore, to obtain a total ranking we ranked classifiers separately on each data set and averaged the ranks. According to this criteria random forest is the best classifier on these data sets, followed by decision tree and support vector machine with radial basis function kernel.</ns0:p><ns0:p>Comparison of algorithms using AUGC favours an algorithm which is characterised simultaneously by good accuracy and small sample complexity (ability to draw conclusions from a small sample). The proposed procedure helps to avoid applying an overcomplicated model and risking overfitting when a </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article we introduced a measure of data complexity targeted specifically at data sparsity. This distinguish it from other measures focusing mostly on the shape of optimal decision boundary in classification problems. The introduced measure has a form of graphical plot -complexity curve. We showed that it exhibits desirable properties through a series of experiments on both artificially constructed and real-world data sets. We proved that complexity curve capture non-trivial characteristics of the data sets and is useful for explaining the performance of high-variance classifiers. With conditional complexity curve it was possible to perform a meaningful analysis even with heavily imbalanced data sets.</ns0:p><ns0:p>Then we demonstrated how complexity curve can be used in practice for data pruning (reducing the size of training set) and that it is a feasible alternative to progressive sampling technique. This result is immediately applicable to all the situations when data overabundance starts to pose a problem.</ns0:p><ns0:p>For instance, it is possible to perform a quick exploration study on a pruned data set before fitting computationally expensive models on the whole set. Pruning result may also provide a suggestion for choosing proper train-test split ratio or number of folds of cross-validation in the evaluation procedure.</ns0:p><ns0:p>Knowing data sparseness is useful both for evaluating the trained classifiers and classification algorithms in general. Using the concept of the complexity curve, we developed a new performance measurean extension of learning curve called generalisation curve. 
This method presents classifier generalisation capabilities in a way that depends on the data set information content rather than its size. It provided more insights into classification algorithm dynamics than commonly used approaches.</ns0:p><ns0:p>We argue that new performance metrics, such as generalisation curves, are needed to move away from a relatively static view of classification task to a more dynamic one. It is worth to investigate how various algorithms are affected by certain data manipulations, for example when new data become available or the underlying distribution shifts. This would facilitate the development of more adaptive and universal algorithms capable of working in a dynamically changing environment.</ns0:p><ns0:p>Experiments showed that in the presence of large number of redundant attributes not contributing to the classification task complexity curve does not correlate well with classifier performance. It correctly identifies dimensional sparseness of the data, but that is misleading since the actual decision boundary may still be very simple. Because of this as the next step in our research we plan to apply similar probabilistic approach to measure information content of different attributes in a data set and use that knowledge for performing feature selection. Graphs analogical to complexity curves and generalisation curves would be valuable tools for understanding characteristics of data sets and classification algorithms related to attribute structure.</ns0:p><ns0:p>Our long-term goal is to gain a better understanding of the impact of data set structure, both in Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>ensembles. We hope that a better control over data sets used in experiments will allow to perform a more systematic study of classifier diversity and consensus methods.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>comes from an inability of the applied model to represent the true relation present in data, variance comes from an inability to estimate optimal model parameters from the data sample, noise is inherent to the solved task and irreducible. Since our complexity measure is model agnostic it clearly does not include bias component. As it does not take into account the dependent variable, it cannot measure noise either. All that is left to investigate is the relation between our complexity measure and variance component of the classification error. The variance error component is connected with overfitting, when the model fixates over specific properties of a data sample and looses generalisation capabilities over the whole problem domain. If the training sample represented the problem perfectly and the model was fitted with perfect optimisation procedure variance would be reduced to zero. The less representative the training sample is for the whole problem domain, the larger the chance for variance error.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>step size) controls the trade-off between the precision of the calculated curve and the computation time. In all experiments, unless stated otherwise, we used values K = 20, d = |D| 60 . Regular shapes of the obtained curves did not suggest the need for using larger values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 Figure 1 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure1presents a sample complexity curve. 
It demonstrates how by drawing larger subsets of the data we get better approximations of the original distribution, as indicated by the decreasing Hellinger distance.The logarithmic decrease of the distance is characteristic: it means that with a relatively small number of samples we can recover general characteristics of the distribution, but to model the details precisely we need a lot more data points. The shape of the curve is very regular, with just minimal variations. It</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016) Manuscript to be reviewed Computer Science Algorithm 2 Procedure for calculating conditional complexity curve. D -original data set, C -number of classes, N -number of subsets, K -number of samples. 1. Transform D with whitening transform and/or ICA to obtain D I . 2. Split D I according to the class into D 1 I , D 2 I , . . . , D C I . 3. From D 1 I , D 2 I , . . . , D C I estimate probability distributions P 1 , P 2 , . . . , P C . 4. For i in 1 . . . |D I | with a step size |D I | N :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>CC (AUCC: 0.10) Conditional CC (AUCC: 0.19)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Complexity curve (solid) and conditional complexity curve (dashed) for iris data set.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Learning curve (A) and generalisation curve (B) for data set IRIS and k-neighbours classifier (k = 5).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Complexity curve and learning curve of the logistic regression on the logit data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 presents complexity curve and adjusted error of logistic regression for the generated data. After ignoring noise error component, we can see that the variance error component is indeed upper bounded by the complexity curve. Different kind of artificial data represented multidimensional space with parallel stripes in one dimension (stripes data set). It consisted of X matrix with 1000 observations and 10 attributes drawn from an uniform distribution on range [0, 1). Class values Y dependent only on value of one of the attributes:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 Figure 5 .</ns0:head><ns0:label>65</ns0:label><ns0:figDesc>Figure 6 presents complexity curves and error curves for different dimensionalities of chessboard data. Indeed here classification error becomes larger than indicated by complexity curve. The more</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Complexity curve and learning curve of the decision tree on the chessboard data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Complexity curves for whitened data (dashed lines), not whitened data (solid lines) and ICA-transformed data (dotted lines). Areas under the curves are given in the legend. U -data sampled from uniform distribution. C -data sampled from categorical distribution. 
U w -whitened U. C wwhitened C. U ICA -U w after ICA. C ICA -C w after ICA.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Complexity curves for WINE and its counterparts with introduced outliers. For the sake of clarity only contours were drawn.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Pearson's correlations between complexity measures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Decision tree d = 10 -0.1035 0.0695 Decision tree d = 15 -0.0995 0.0375 Decision tree d = 20 -0.0921 0.0394 Decision tree d = 25 -0.0757 0.0298 Decision tree d = 30 -0.0677 0.0227 Decision tree d = inf -0.0774 0.0345</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>tree d = 3 0.3209 -0.1852 LDA -Decision tree d = 5 0.3184 -0.2362 LDA -Decision tree d = 10 0.2175 -0.0838 LDA -Decision tree d = 15 0.2146 -0.0356 LDA -Decision tree d = 20 0.2042 -0.0382 LDA -Decision tree d = 25 0.1795 -0.0231 LDA -Decision tree d = 30 0.1636 -0.0112 LDA -Decision tree d = inf 0.1809 -0.0303Table 5. Pearson's correlations coefficients between classifier AUC ROC performances relative to LDA performance and complexity measures. Values larger than 0.22 or smaller than -0.22 are significant at &#945; = 0.05 significance level.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Conditional complexity curves for six different data sets from UCI Machine Learning repository with areas under complexity curve (AUCC) reported: A -CAR, AUCC: 0.08, B -MONKS-1, AUCC: 0.05, C -IRIS, AUCC: 0.19, D -BREAST-CANCER-WISCONSIN, AUCC: 0.13, E -GLASS, AUCC: 0.44, F -WINE, AUCC: 0.35.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>. It occurs in certain situations, when the sample size is smaller than the number of attributes, and correct classification of the examples in the training set may lead to an inverted classification of the examples in the testing set.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>terms of contained examples and attributes, and use that knowledge to build heterogeneous classification 31/34 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>most cases we do not know the underlining probability distribution P representing the problem and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P D as</ns0:figDesc><ns0:table><ns0:row><ns0:cell>4/34</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Data complexity measures used in experiments.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Id</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>F1</ns0:cell><ns0:cell>Maximum Fisher's discriminant ratio</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>F1v Directional-vector maximumline Fisher's discriminant ratio</ns0:cell></ns0:row><ns0:row><ns0:cell>F2</ns0:cell><ns0:cell>Overlap of the per-classline bounding boxes</ns0:cell></ns0:row><ns0:row><ns0:cell>F3</ns0:cell><ns0:cell>Maximum individual feature efficiency</ns0:cell></ns0:row><ns0:row><ns0:cell>F4</ns0:cell><ns0:cell>Collective feature efficiency</ns0:cell></ns0:row><ns0:row><ns0:cell>L1</ns0:cell><ns0:cell>Minimized sum of the error distance of a linear classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>L2</ns0:cell><ns0:cell>Training error of a linear classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>L3</ns0:cell><ns0:cell>Nonlinearity of a linear classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>N1</ns0:cell><ns0:cell>Fraction of points on the class boundary</ns0:cell></ns0:row><ns0:row><ns0:cell>N2</ns0:cell><ns0:cell>Ratio of average intra/inter class nearest neighbor distance</ns0:cell></ns0:row><ns0:row><ns0:cell>N3</ns0:cell><ns0:cell>Leave-one-out error rate of the one-nearest neighbor classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>N4</ns0:cell><ns0:cell>Nonlinearity of the one-nearest neighbor classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>T1</ns0:cell><ns0:cell>Fraction of maximum covering spheres</ns0:cell></ns0:row><ns0:row><ns0:cell>T2</ns0:cell><ns0:cell>Average number of points per dimension</ns0:cell></ns0:row></ns0:table><ns0:note>19/34 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Properties of HDDT data sets used in experiments.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Properties of KEEL data sets used in experiments.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>1.86</ns0:cell></ns0:row></ns0:table><ns0:note>21/34PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Pearson's correlations coefficients between classifier AUC ROC performances and complexity measures. Values larger than 0.22 or smaller than -0.22 are significant at &#945; = 0.05 significance level.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Properties of microarray data sets used in experiments.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>23/34PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>. The sets were chosen only as illustrative examples. 
They have Basic properties of the benchmark data sets. no missing values and represent only classification problems, not regression ones. Basic properties of the data sets are given in Table 8. For each data set we calculated conditional complexity curve, as it should capture data properties in the context of classification better than standard complexity curve. The curves are presented in Figure 11. Shape of the complexity curve portrays the learning process. The initial examples are the most important since there is a huge difference between having some information and having no information at all. After some point including additional examples still improves probability estimation, but does not introduce such a dramatic change.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>24/34</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>13, E -GLASS, AUCC: 0.44, F -WINE, AUCC: 0.35. Basic properties of the data pruning benchmark data sets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>26/34</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 10</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>presents measured times and obtained accuracies. As can be seen, the difference in classification accuracies between pruned and unpruned training data is negligible. CC compression rate differs for the three data sets, which suggests that they are of different complexity: for LED data only 5% is needed to perform successful classification, while ADULT data is pruned at 33%. CC compression rate is rather stable with only small standard deviation, but PS compression rate is characterised with huge variance. In this regard, complexity curve pruning is preferable as a more stable pruning criterion.In all cases when training a classifier on the unpruned data took more than 10 seconds, we observed huge speed-ups. With the exception of SVC on LED data set, complexity curve pruning performed better than progressive sampling in such cases. Unsurprisingly, real speed-ups were visible only for computationally intensive methods such as Support Vector Machines, Random Forest and Gradient Boosted Decision Trees. For simple methods such as Na&#239;ve Bayes, Decision Tree or Logistic Regression fitting the model on the unpruned data is often faster than applying pruning strategy.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>CAR 0.70 (7) 0.71 (5.5) 0.76 (4) 0.79 (2.5) 0.80 (1) 0.71 (5.5) 0.79 (2.5) MONKS-1 0.50 (7) 0.57 (6) 0.58 (5) 0.63 (1) Areas under generalisation curves for various algorithms. Values given in brackets are ranks among all algorithms (ties solved by ranking randomly and averaging ranks). M -majority classifier, NB -na&#239;ve Bayes, kNN -k-nearest neighbours, DT -decision tree, RF -random forest, SVM r -support vector machine with RBF kernel, SVM l -linear support vector machine. simpler model is adequate. It takes into account algorithm properties ignored by standard performance metrics.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifiers</ns0:cell></ns0:row></ns0:table><ns0:note>30/34 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:1:2:CHECK 31 May 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> </ns0:body> "
"# Editor *** Please thoroughly address all the comments from the reviewers, especially in relation to experimentally comparing this work with previous work on data complexity (not just citing them) as well as using a much larger set of benchmark datasets. (...) Many more datasets, larger datasets (in both #instances and #attributes), and datasets facing some particular challenge (class imbalance, sparseness, multi-class, multi-label, etc) should be used. *** We made an effort to address all the comments. Following your suggestion, we added a full new section comparing our method to existing complexity measures on 88 benchmark data sets representing imbalanced problems of various size. We also included examples concerning extremely sparse microarray data. # Reviewer: Núria Macià *** They claim to introduce a universal measure of complexity. What do they mean by universal? A single complexity measure can neither characterize the problem’s difficulty nor fully capture the relationship between the problem's difficulty and the learner’s ability. *** We reworded the article introduction to avoid confusion. Citing the revised manuscript: ' In this article we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with variance error component. We aim to measure information saturation of a data set without making any assumptions on the form of relation between dependent variable and the rest of variables, so explicitly disregarding shape of decision boundary and classes ambiguity. ' Our measure is universal only in that sense that is targeted at identifying data sparsity, which affects all learning algorithms. Of course it does not identify all sources of problem difficulty. *** I would suggest that the authors thoroughly review the literature on data complexity. *** We are grateful for that suggestion and literature references. We extended the related literature section of the article. *** The authors claim that their measure can be applied not only to classification, regression, and clustering tasks but also to online learning tasks. However, they only provide empirical results on classification problems. *** Indeed, while the general form of our measure suggest its wider applicability, we focused on analysing complexity in the context of classification performance, as this is the context in which complexity measures were applied previously. *** The authors performed experiments on a test bed composed of six data sets from the UCI Machine Learning Repository, which has 255 classification problems. How did the authors select these six data sets? *** The experiments with simple UCI data sets were intended only as an illustration how complexity curve plot can be interpreted. To reflect this we renamed the section 'Interpreting complexity curves'. The larger scale experiments involving 88 data sets were added as a new section 'Comparison with other complexity measures'. *** What would be the impact on bioinformatic data sets usually described by hundreds of attributes? *** We performed an initial investigation of this problem in an added section 'Large p, small n problems'. Complexity curves correctly identify such data sets as extremely sparse, which, however, does not automatically translates to poor classification performance. *** We observe that there is no correlation between the complexity measure and the performance obtained by the classifiers. 
*** Because sample sparsity is only one of the sources of classification difficulty, connected with variance error component, the correlation is not expected to be very strong. In the revised manuscript we calculated Pearson's correlation over 81 data sets between the area under the complexity curve and AUC ROC (see section: 'Comparison with other complexity measures'). We hope these results are more convincing. *** A decision tree algorithm is run on the stripe and chessboard data sets. Which algorithm in particular? C4.5? Why did the authors choose this combination: stripe/chessboard and decision tree? *** We clarified that we use CART decision trees in all experiments. Because of the shape of decision boundaries, both stripe and chessboards data sets can be modelled with decision trees without bias, so expected that the observed classification error comes from variance component. # Reviewer 2 *** One of the main issues with this paper is related to the introduction. Although the aim is clearly stated, this is not precisely described by clear objectives. *** We are grateful for drawing attention to this issue. We rewrote the introduction to describe the objectives more clearly and link them to experimental results. *** (...) firstly, when discussing evaluating classifier perform much focus is on ROC curves, where not all classification approaches have a threshold parameter/value in order to generate such a curve. *** We are aware of this limitation of ROC analysis. We refer to this method because of the conceptual similarities to generalisation curve. We hope that the following paragraph added to the revised manuscript makes it clear: ' Both ROC curves and cost curves are applicable only to classifiers with continuous outputs and to two class problems, which limits their usage. What is important is the key idea behind them: instead of giving the user a final solution they give freedom to choose an optimal classifier according to some criteria from a range of options. ' *** In general, the topic of epistasis in datasets should be much more thoroughly addressed. Both toy problem domains, e.g. multiplexer problems, and real world problem domains e.g. bioinformatics, are highly epistatic. *** We performed an experiment with an artificial data set 'chessboard' with strong attribute dependencies (see section section 'Complexity curve and the performance of an unbiased model'). In the 'Generalisation curves for benchmark data sets' we added references supporting our claim of attribute interdependencies in monks-1 and car data sets. Moreover, we added two new sections ('Comparison with other complexity measures' and 'Large p, small n problems') in which we present experimental results on a much larger collection of data sets, among them microarray data important for bioinformatics. *** It is often considered that the BCW has irrelevant information, which the results indicate (page 24). It is recommended to cite past work where classification algorithms have identified the considered irrelevant data. *** The adequate references were included. *** The background could be strengthened on recent work on determining problem difficulty, e.g Smith-Miles, K. A., and Lopes, L. B., “Measuring Instance Difficulty for Combinatorial Optimization Problems”, *** We extended the literature section and included the suggested references. *** How does the system address noise on the target variable instead of simply on the source variables? Please comment on whether the type of noise have any significant effect? 
*** No, our measure disregards this type of noise. It is only concerned with sample sparsity. To make it clear, we added the following to the introduction: ' In this article we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with variance error component. We aim to measure information saturation of a data set without making any assumptions on the form of relation between dependent variable and the rest of variables, so explicitly disregarding shape of decision boundary and classes ambiguity. ' *** Is the speculation that there are no redundant variables in the wine or glass datasets supported by external evidence? *** We added relevant references to studies concerning feature selection. However, our speculation was that wine data contain some redundant variables, which is consistent with the results from literature. *** How does the introduced technique perform with datasets containing many outliers? *** This kind of analysis is performed in the section 'Complexity curve variability and outliers'. # Reviewer 1 *** In terms of English, this work contains multiple grammar mistakes (...), and the writing needs to be improved, avoiding some awkward sentences. *** We strived to improve the language of the manuscript. Introduction and conclusions sections were heavily rewritten. *** The literature covered in this work needs to be enlarged including related works in the application areas that the authors are then using in the experimental validation. *** In the revised manuscript, we extended the literature section including also references to more recent works. *** To assess the validity of the findings of this new kind of plot, the authors need to include a comparison with other data complexity measures in the application domains considered. *** We added an extensive comparison with popular data complexity measures in the context of classification. The section is labelled 'Comparison with other complexity measures'. *** Apart from including comparison methods, I think that the authors should include discussion (new section) on how this plot can help us when dealing for instance with imbalanced data, high dimensional data or unlabeled data (semi-supervised learning)? *** Imbalanced data and high dimensional data were included in the new experiments. Immediate implications are discussed in the respective sections. "
Here is a paper. Please give your review comments after reading it.
263
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We describe a method for assessing data set complexity based on the estimation of the underlining probability distribution and Hellinger distance. In contrast to some popular complexity measures it is not focused on the shape of a decision boundary in a classification task but on the amount of available data with respect to the attribute structure. Complexity is expressed in terms of graphical plot, which we call complexity curve. It demonstrates the relative increase of available information with the growth of sample size. We perform theoretical and experimental examination of properties of the introduced complexity mea-sure and show its relation to the variance component of classification error. Then we compare it with popular data complexity measures on 81 diverse data sets and show that it can contribute to explaining performance of specific classifiers on these sets. We also apply our methodology to a panel of simple benchmark data sets, demonstrating how it can be used in practice to gain insights into data characteris-tics. Moreover, we show that complexity curve is an effective tool for reducing the size of the training set (data pruning), allowing to significantly speed up the learning process without compromising classification accuracy. Associated code is available to download at: https://github.com/zubekj/complexity_curve (open source Python implementation).</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>It is common knowledge in machine learning community that the difficulty of classification problems varies greatly. Sometimes it is enough to use simple out of the box classifier to get a very good result and sometimes careful preprocessing and model selection are needed to get any non-trivial result at all. The difficulty of a classification task clearly stems from certain properties of the data set, yet we still have problems with defining those properties in general.</ns0:p><ns0:p>Bias-variance decomposition <ns0:ref type='bibr' target='#b7'>(Domingos, 2000)</ns0:ref> demonstrates that the error of a predictor can be attributed to three sources: bias, coming from inability of an algorithm to build an adequate model for the relationship present in data, variance, coming from inability to estimate correct model parameters from an imperfect data sample, and the irreducible error component commonly called noise. Following this line of reasoning, difficulty of a classification problem may come partly from the complexity of the relation between dependent variable and explanatory variables, partly from the scarcity of information in the training sample, and partly from class ambiguity (due to noise in the target variable or an overlap between classes). This is identical to sources of classification difficulty identified by <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>, who labelled the three components: 'complex decision boundary', 'small sample size and dimensionality induced sparsity' and 'ambiguous classes'.</ns0:p><ns0:p>In this article we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with variance error component. 
We aim to measure information saturation of a data set Since the proposed measure characterise the data sample itself without making any assumptions as to how that sample will be used it should be applicable to all kinds of problems involving reasoning from data. In this work we focus on classification tasks since this is the context in which data complexity measures were previously applied. We compare area under the complexity curve with popular data complexity measures and show how it complements the existing metrics. We also demonstrate that it is useful for explaining classifier performance by showing that the area under the complexity curve is correlated with the area under the receiver operating characteristic (AUC ROC) for popular classifiers tested on 81 benchmark data sets.</ns0:p><ns0:p>We propose an immediate application of the developed method connected with the fundamental question: how large data sample is needed to build a successful predictor? We pursue this topic by proposing a data pruning strategy based on complexity curve and evaluating it on large data sets. We show that it can be considered as an alternative to progressive sampling strategies <ns0:ref type='bibr' target='#b23'>(Provost et al., 1999)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED LITERATURE</ns0:head><ns0:p>Problem of measuring data complexity in the context of machine learning is broadly discussed. Our beliefs are similar to <ns0:ref type='bibr' target='#b11'>Ho (2008)</ns0:ref>, who stated the need for including data complexity analysis in algorithm comparison procedures. Similar needs are also discussed in fields outside machine learning, for example in combinatorial optimisation <ns0:ref type='bibr' target='#b30'>(Smith-Miles and Lopes, 2012)</ns0:ref>.</ns0:p><ns0:p>The general idea is to select a sufficiently diverse set of problems to demonstrate both strengths and weaknesses of the analysed algorithms. The importance of this step was stressed by <ns0:ref type='bibr' target='#b19'>Maci&#224; et al. (2013)</ns0:ref>, who demonstrated how algorithm comparison may be biased by benchmark data set selection, and showed how the choice may be guided by complexity measures. Characterising problem space with some metrics makes it possible to estimate regions in which certain algorithms perform well <ns0:ref type='bibr' target='#b18'>(Luengo and Herrera, 2013)</ns0:ref>, and this opens up possibilities of meta-learning <ns0:ref type='bibr' target='#b29'>(Smith-Miles et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this context complexity measures are used not only as predictors of classifier performance but more importantly as diversity measures capturing various properties of the data sets. It is useful when the measures themselves are diverse and focus on different aspects of the data to give as complete characterisation of the problem space as possible. In the later part of the article we demonstrate that complexity curve fits well into the landscape of currently used measures, offering new insights into data characteristics.</ns0:p><ns0:p>A set of practical measures of data complexity with regard to classification was introduced by <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>, and later extended by <ns0:ref type='bibr' target='#b13'>Ho et al. (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b21'>Orriols-Puig et al. (2010)</ns0:ref>. 
It is routinely used in tasks involving classifier evaluation <ns0:ref type='bibr' target='#b19'>(Maci&#224; et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Luengo and Herrera, 2013)</ns0:ref> and meta-learning <ns0:ref type='bibr' target='#b9'>(D&#237;ez-Pastor et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Mantovani et al., 2015)</ns0:ref>. Some of these measures are based on the overlap of values of specific attributes, examples include Fisher's discriminant ratio, volume of overlap region, attribute efficiency etc. The others focus directly on class separability, this groups includes measures such as the fraction of points on the decision boundary, linear separability, the ratio of intra/inter class distance.</ns0:p><ns0:p>In contrast to our method, such measures focus on specific properties of the classification problem, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>with data sparsity, such as ratio of attributes to observations, attempt to capture similar properties as our complexity curve. <ns0:ref type='bibr' target='#b16'>Li and Abu-Mostafa (2006)</ns0:ref> defined data set complexity in the context of classification using the general concept of Kolmogorov complexity. They proposed a way to measure data set complexity using the number of support vectors in support vector machine (SVM) classifier. They analysed the problems of data decomposition and data pruning using above methodology. A graphical representation of the data set complexity called the complexity-error plot was also introduced. The main problem with their approach is the selection of very specific and complex machine learning algorithms, which may render the results in less universal way, and which is prone to biases specific for SVMs. This make their method unsuitable for diverse machine learning algorithms comparison.</ns0:p><ns0:p>Another approach to data complexity is to analyse it on the level of individual instances. This kind of analysis is performed by <ns0:ref type='bibr' target='#b27'>Smith et al. (2013)</ns0:ref> who attempted to identify which instances are misclassified by various classification algorithm. They devised local complexity measures calculated with respect to single instances and later tried to correlate average instance hardness with global data complexity measures of <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>. They discovered that it is mostly correlated with class overlap. This makes our work complementary, since in our complexity measure we deliberately ignore class overlap and individual instance composition to isolate another source of difficulty, namely data scarcity. <ns0:ref type='bibr' target='#b32'>Yin et al. (2013)</ns0:ref> proposed a method of feature selection based on Hellinger distance (a measure of similarity between probability distributions). The idea was to choose features, which conditional distributions (depending on the class) have minimal affinity. In the context of our framework this could be interpreted as measuring data complexity for single features. The authors demonstrated experimentally that for the high-dimensional imbalanced data sets their method is superior to popular feature selection methods using Fisher criterion, or mutual information.</ns0:p></ns0:div> <ns0:div><ns0:head>DEFINITIONS</ns0:head><ns0:p>In the following sections we define formally all measures used throughout the paper. Basic intuitions, assumptions, and implementation choices are discussed. 
Finally, algorithms for calculating complexity curve, conditional complexity curve, and generalisation curve are given.</ns0:p></ns0:div> <ns0:div><ns0:head>Measuring data complexity with samples</ns0:head><ns0:p>In a typical machine learning scenario we want to use information contained in a collected data sample to solve a more general problem which our data describe. Problem complexity can be naturally measured by the size of a sample needed to describe the problem accurately. We call the problem complex, if we need to collect a lot of data in order to get any results. On the other hand, if a small amount of data suffices we say the problem has low complexity.</ns0:p><ns0:p>How to determine if a data sample describes the problem accurately? Any problem can be described with a multivariate probability distribution P of a random vector X. From P we sample our finite data sample D. Now, we can use D to build the estimated probability distribution of X -P D . P D is the approximation of P. If P and P D are identical we know that data sample D describes the problem perfectly and collecting more observations would not give us any new information. Analogously, if P D is very different from P we can be almost certain that the sample is too small.</ns0:p><ns0:p>To measure similarity between probability distributions we use Hellinger distance. For two continuous distributions P and P D with probability density functions p and p D it is defined as:</ns0:p><ns0:formula xml:id='formula_0'>H 2 (P, P D ) = 1 2 p(x) &#8722; p D (x) 2 dx</ns0:formula><ns0:p>The minimum possible distance 0 is achieved when the distributions are identical, the maximum 1 is achieved when any event with non-zero probability in P has probability 0 in P D and vice versa. Simplicity and naturally defined 0-1 range make Hellinger distance a good measure for capturing sample information content.</ns0:p><ns0:p>In most cases we do not know the underlining probability distribution P representing the problem and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P D as the reference distribution. Any subset S &#8834; D can be treated as a data sample and a probability distribution Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>P S estimated from it will be an approximation of P D . By calculating H 2 (P D , P S ) we can assess how well a given subset represent the whole available data, i.e. determine its information content.</ns0:p><ns0:p>Obtaining a meaningful estimation of a probability distribution from a data sample poses difficulties in practice. The probability distribution we are interested in is the joint probability on all attributes. In that context most of the realistic data sets should be regarded as extremely sparse and na&#239;ve probability estimation using frequencies of occurring values would result in mostly flat distribution. This can be called the curse of dimensionality. Against this problem we apply a na&#239;ve assumption that all attributes are independent. This may seem like a radical simplification but, as we will demonstrate later, it yields good results in practice and constitute a reasonable baseline for common machine learning techniques. Under the independence assumption we can calculate the joint probability density function f from the marginal density functions f 1 , . . . 
, f n :</ns0:p><ns0:formula xml:id='formula_1'>f (x) = f 1 (x 1 ) f 2 (x 2 ) . . . f n (x n )</ns0:formula><ns0:p>We will now show the derived formula for Hellinger distance under the independence assumption.</ns0:p><ns0:p>Observe that the Hellinger distance for continuous variables can be expressed in another form:</ns0:p><ns0:formula xml:id='formula_2'>1 2 f (x) &#8722; g(x) 2 dx = 1 2 f (x) &#8722; 2 f (x)g(x) + g(x) dx = 1 2 f (x) dx &#8722; f (x)g(x) dx + 1 2 g(x) dx = 1 &#8722; f (x)g(x) dx</ns0:formula><ns0:p>In the last step we used the fact the that the integral of a probability density over its domain must equal one.</ns0:p><ns0:p>We will consider two multivariate distributions F and G with density functions:</ns0:p><ns0:formula xml:id='formula_3'>f (x 1 , . . . , x n ) = f 1 (x 1 ) . . . f n (x n ) g(x 1 , . . . , x n ) = g 1 (x 1 ) . . . g n (x n )</ns0:formula><ns0:p>The last formula for Hellinger distance will now expand:</ns0:p><ns0:formula xml:id='formula_4'>1 &#8722; &#8226; &#8226; &#8226; f (x 1 , . . . , x n )g(x 1 , . . . , x n ) dx 1 . . . dx n = 1 &#8722; &#8226; &#8226; &#8226; f 1 (x 1 ) . . . f n (x n )g 1 (x 1 ) . . . g n (x n ) dx 1 . . . dx n = 1 &#8722; f 1 (x 1 )g 1 (x 1 ) dx 1 . . . f n (x n )g n (x n ) dx n</ns0:formula><ns0:p>In this form variables are separated and parts of the formula can be calculated separately.</ns0:p></ns0:div> <ns0:div><ns0:head>Practical considerations</ns0:head><ns0:p>Calculating the introduced measure of similarity between data set in practice poses some difficulties.</ns0:p><ns0:p>First, in the derived formula direct multiplication of probabilities occurs, which leads to problems with numerical stability. We increased the stability by switching to the following formula: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_5'>1 &#8722; f 1 (x 1 )g 1 (x 1 ) dx 1 . . . f n (x n )g n (x n ) dx n = 1 &#8722; 1 &#8722; 1 2 f 1 (x 1 ) &#8722; g 1 (x 1 ) 2 dx 1 . . . 1 &#8722; 1 2 f n (x n ) &#8722; g n (x n ) 2 dx 2 = 4/</ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_6'>1 &#8722; 1 &#8722; H 2 (F 1 , G 1 ) . . . 1 &#8722; H 2 (F n , G n )</ns0:formula><ns0:p>For continuous variables probability density function is routinely done with kernel density estimation (KDE) -a classic technique for estimating the shape continuous probability density function from a finite data sample <ns0:ref type='bibr' target='#b25'>(Scott, 1992)</ns0:ref>. For a sample (x 1 , x 2 , . . . , x n ) estimated density function has a form:</ns0:p><ns0:formula xml:id='formula_7'>fh (x) = 1 nh n &#8721; i=1 K x &#8722; x i h</ns0:formula><ns0:p>where K is the kernel function and h is a smoothing parameter -bandwidth. In our experiments we used Gaussian function as the kernel. This is a popular choice, which often yields good results in practice. The bandwidth was set according to the modified Scott's rule <ns0:ref type='bibr' target='#b25'>(Scott, 1992)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>h = 1 2 n &#8722; 1 d+4 ,</ns0:formula><ns0:p>where n is the number of samples and d number of dimensions.</ns0:p><ns0:p>In many cases the independence assumption can be supported by preprocessing input data in a certain way. A very common technique, which can be applied in this situation is the whitening transform. It transforms any set of random variables into a set of uncorrelated random variables. 
For a random vector X with a covariance matrix &#931; a new uncorrelated vector Y can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_9'>&#931; = PDP &#8722;1 W = PD &#8722; 1 2 P &#8722;1 Y = XW</ns0:formula><ns0:p>where D is diagonal matrix containing eigenvalues and P is matrix of right eigenvectors of &#931;. Naturally, lack of correlation does not imply independence but it nevertheless reduces the error introduced by our independence assumption. Furthermore, it blurs the difference between categorical variables and continuous variables putting them on an equal footing. In all further experiments we use whitening transform preprocessing and then treat all variables as continuous.</ns0:p><ns0:p>A more sophisticated method is a signal processing technique known as Independent Component Analysis (ICA) <ns0:ref type='bibr' target='#b14'>(Hyv&#228;rinen and Oja, 2000)</ns0:ref>. It assumes that all components of an observed multivariate signal are mixtures of some independent source signals and that the distribution of the values in each source signal is non-gaussian. Under these assumption the algorithm attempts to recreate the source signals by splitting the observed signal into the components as independent as possible. Even if the assumptions are not met, ICA technique can reduce the impact of attributes interdependencies. Because of its computational complexity we used it as an optional step in our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine learning task difficulty</ns0:head><ns0:p>Our data complexity measure can be used for any type of problem described through a multivariate data sample. It is applicable to regression, classification and clustering tasks. The relation between the defined data complexity and the difficulty of a specific machine learning task has to be investigated. We will focus on the supervised learning case. Classification error will be measured as mean 0-1 error (accuracy). Data complexity will be measured as mean Hellinger distance between the real and the estimated probability distributions of attributes conditioned on the target variable:</ns0:p><ns0:formula xml:id='formula_10'>1 m m &#8721; i=1 H 2 (P(X|Y = y i ), P D (X|Y = y i ))</ns0:formula><ns0:p>where X -vector of attributes, Y -target variable, y 1 , y 2 , . . . Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b7'>Domingos (2000)</ns0:ref> proposed an universal scheme of decomposition, which can be adapted for different loss functions. For a classification problem and 0-1 loss L expected error on sample x for which the true label is t, and the predicted label given a training set D is y can be expressed as:</ns0:p><ns0:formula xml:id='formula_11'>E D,t [1(t = y)] = 1(E t [t] = E D [y]) + c 2 E D [1(y = E D [y])] + c 1 E t [1(t = E t [t])] = B(x) + c 2 V (x) + c 1 N(x)</ns0:formula><ns0:p>where B -bias, V -variance, N -noise. Coefficients c 1 and c 2 are added to make the decomposition consistent for different loss functions. In this case they are equal to:</ns0:p><ns0:formula xml:id='formula_12'>c 1 = P D (y = E t [t]) &#8722; P D (y = E t [t])P t (y = t | E t [t] = t) c 2 = 1 if E t [t] = E D [y] &#8722; P D (y = E t [t] | y = E D [y])</ns0:formula><ns0:p>otherwise.</ns0:p><ns0:p>Bias This intuition can be supported by comparing our complexity measure with the error of the Bayes classifier. We will show that they are closely related. Let Y be the target variable taking on values v 1 , v 2 , . . . 
, v m , f i (x) an estimation of P(X = x|Y = v i ) from a finite sample D, and g(y) an estimation of P(Y = y). In such setting 0-1 loss of the Bayes classifier on a sample x with the true label t is:</ns0:p><ns0:formula xml:id='formula_13'>1(t = y) = 1 t = arg max i (g(v i ) f i (x))</ns0:formula><ns0:p>Let assume that t = v j . Observe that:</ns0:p><ns0:formula xml:id='formula_14'>v j = arg max i (g(v i ) f i (x)) &#8660; &#8704; i g(v j ) f j (x) &#8722; g(v i ) f i (x) &#8805; 0</ns0:formula><ns0:p>which for the case of equally frequent classes reduces to:</ns0:p><ns0:formula xml:id='formula_15'>&#8704; i f j (x) &#8722; f i (x) &#8805; 0</ns0:formula><ns0:p>We can simultaneously add and subtract term P</ns0:p><ns0:formula xml:id='formula_16'>(X = x |Y = v j ) &#8722; P(X = x |Y = v i ) to obtain: &#8704; i ( f j (x) &#8722; P(X = x |Y = v j )) + (P(X = x |Y = v i ) &#8722; f i (x)) + (P(X = x |Y = v j ) &#8722; P(X = x |Y = v i )) &#8805; 0 We know that P(X = x | Y = v j ) &#8722; P(X = x | Y = v i ) &#8805; 0,</ns0:formula><ns0:p>so as long as estimations f i (x), f j (x) do not deviate too much from real distributions the inequality is satisfied. It will not be satisfied (i.e. an error will take place) only if the estimations deviate from the real distributions in a certain way (i.e.</ns0:p><ns0:formula xml:id='formula_17'>f j (x) &lt; P(X = x|Y = v j ) and f i (x) &gt; P(X = x|Y = v i ))</ns0:formula><ns0:p>and the sum of these deviations is greater than <ns0:ref type='table' target='#tab_5'>-2016:03:9443:2:1:NEW 29 Jun 2016)</ns0:ref> Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_18'>6/25 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:formula xml:id='formula_19'>P(X = x|Y = v j ) &#8722; P(X = x|Y = v i ).</ns0:formula><ns0:p>The Hellinger distance between f i (x) and P(X = x|Y = v i ) measures the deviation. This shows that by minimising Hellinger distance we are also minimising error of the Bayes classifier. Converse may not be true: not all deviations of probability estimates result in classification error.</ns0:p><ns0:p>In the introduced complexity measure we assumed independency of all attributes, which is analogous to the assumption of na&#239;ve Bayes. Small Hellinger distance between class-conditioned attribute distributions induced by sets A and B means that na&#239;ve Bayes trained on set A and tested on set B will have only very slight variance error component. Of course, if the independence assumption is broken bias error component may still be substantial.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve</ns0:head><ns0:p>Complexity curve is a graphical representation of a data set complexity. It is a plot presenting the expected Hellinger distance between a subset and the whole set versus subset size:</ns0:p><ns0:formula xml:id='formula_20'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>where P is the empirical probability distribution estimated from the whole set and Q n is the probability distribution estimated from a random subset of size n &#8804; |D|. Let us observe that CC(|D|) = 0 because P = Q |D| . Q 0 is undefined, but for the sake of convenience we assume CC(0) = 1.</ns0:p><ns0:p>Algorithm 1 Procedure for calculating complexity curve. D -original data set, K -number of random subsets of the specified size.</ns0:p><ns0:p>1. Transform D with whitening transform and/or ICA to obtain D I .</ns0:p><ns0:p>2. 
Estimate probability distribution for each attribute of D I and calculate joint probability distribution -P.</ns0:p><ns0:p>3. For i in 1 . . . |D I | (with an optional step size d):</ns0:p><ns0:p>(a) For j in 1 . . . K:</ns0:p><ns0:p>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Estimate probability distribution for each attribute of S j i and calculate joint probability distribution -Q j i . iii. Calculate Hellinger distance:</ns0:p><ns0:formula xml:id='formula_21'>l j i = H 2 (P, Q j i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_22'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Complexity curve is a plot of m i &#177; s i vs i.</ns0:p><ns0:p>To estimate complexity curve in practice, for each subset size K random subsets are drawn and the mean value of Hellinger distance, along with standard error, is marked on the plot. The Algorithm 1 presents the exact procedure. Parameters K (the number of samples of a specified size) and d (sampling The shape of the complexity curve captures the information on the complexity of the data set. If the data is simple, it is possible to represent it relatively well with just a few instances. In such case, the complexity curve is very steep at the beginning and flattens towards the end of the plot. If the data is complex, the initial steepness of the curve is smaller. That information can be aggregated into a single parameter -the area under the complexity curve (AUCC). If we express the subset size as the fraction of the whole data set, then the value of the area under the curve becomes limited to the range [0, 1] and can be used as an universal measure for comparing complexity of different data sets.</ns0:p></ns0:div> <ns0:div><ns0:head>Conditional complexity curve</ns0:head><ns0:p>The complexity curve methodology presented so far deals with the complexity of a data set as a whole.</ns0:p><ns0:p>While this approach gives information about data structure, it may assess complexity of the classification task incorrectly. This is because data distribution inside each of the classes may vary greatly from the overall distribution. For example, when the number of classes is larger, or the classes are imbalanced, a random sample large enough to represent the whole data set may be too small to represent some of the classes. To take this into account we introduce conditional complexity curve. We calculate it by splitting each data sample according to the class value and taking the arithmetic mean of the complexities of each sub-sample. Algorithm 2 presents the exact procedure.</ns0:p><ns0:p>Comparison of standard complexity curve and conditional complexity curve for iris data set is given by Figure <ns0:ref type='figure'>1</ns0:ref>. This data set has 3 distinct classes. Our expectation is that estimating conditional distributions for each class would require larger data samples than estimating the overall distribution. Shape of the conditional complexity curve is consistent with this expectation: it is less steep than the standard curve and has larger AUCC value.</ns0:p></ns0:div> <ns0:div><ns0:head>PROPERTIES</ns0:head><ns0:p>To support validity of the proposed method, we perform an in-depth analysis of its properties. We start from purely mathematical analysis giving some intuitions on complexity curve convergence rate and identifying border cases. 
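To make the procedure described here concrete, a minimal Python sketch of Algorithm 1 and the AUCC summary is given below. It is not the released implementation: the whitening/ICA preprocessing step is omitted, per-attribute densities are estimated with scipy's Gaussian KDE (whose default bandwidth follows the standard Scott rule rather than the modified rule quoted earlier), attributes are assumed continuous and non-degenerate, and the function names are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger_1d(a, b, grid_size=200):
    # Squared Hellinger distance between two 1-D samples, via Gaussian KDE
    # evaluated on a common grid and numerical integration.
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    pa, pb = gaussian_kde(a)(grid), gaussian_kde(b)(grid)
    return 0.5 * np.trapz((np.sqrt(pa) - np.sqrt(pb)) ** 2, grid)

def hellinger_independent(full, subset):
    # Multivariate distance under the attribute-independence assumption:
    # H^2 = 1 - prod_j (1 - H_j^2), as derived in the Definitions section.
    h2 = [hellinger_1d(full[:, j], subset[:, j]) for j in range(full.shape[1])]
    return 1.0 - np.prod(1.0 - np.clip(h2, 0.0, 1.0))

def complexity_curve(data, n_subsets=20, n_points=30, seed=0):
    # Algorithm 1 without the preprocessing step: mean Hellinger distance
    # between K random subsets of each size and the whole data set.
    rng = np.random.default_rng(seed)
    sizes = np.unique(np.linspace(2, len(data), n_points, dtype=int))
    means = []
    for size in sizes:
        dists = [hellinger_independent(data,
                                       data[rng.choice(len(data), size, replace=False)])
                 for _ in range(n_subsets)]
        means.append(np.mean(dists))
    return sizes, np.array(means)

def aucc(sizes, means, n_total):
    # Area under the complexity curve, with subset size expressed as a
    # fraction of the full data set so the value falls in [0, 1].
    return np.trapz(means, sizes / n_total)
```

A conditional variant in the spirit of Algorithm 2 can be obtained by calling `complexity_curve` on each class separately and averaging the resulting curves.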
Then we perform experiments with toy artificial data sets testing basic assumptions behind complexity curve. After that we compare it experimentally with other complexity data measures and show its usefulness in explaining classifier performance. (a) For j in 1 . . . K:</ns0:p><ns0:formula xml:id='formula_23'>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Split S j</ns0:formula><ns0:p>i according to the class into S j,1 i , S j,2 i , . . . , S j,C i . iii. From S j,1 i , S j,2 i , . . . , S j,C i estimate probability distributions Q j,1 i , Q j,2 i , . . . , Q j,C i . iv. Calculate mean Hellinger distance:</ns0:p><ns0:formula xml:id='formula_24'>l j i = 1 C &#8721; C k=1 H 2 (P k , Q j,k i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_25'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Conditional complexity curve is a plot of m i &#177; s i vs i.</ns0:p></ns0:div> <ns0:div><ns0:head>Mathematical properties 245</ns0:head><ns0:p>Drawing a random subset S n from a finite data set D of size N corresponds to sampling without replacement.</ns0:p><ns0:p>Let assume that the data set contains k distinct values {v 1 , v 2 , . . . , v k } occurring with frequencies P = (p 1 , p 2 , . . . , p k ). Q n = (q 1 , q 2 , . . . , q k ) will be a random vector which follows a multivariate hypergeometric distribution.</ns0:p><ns0:formula xml:id='formula_26'>q i = 1 n &#8721; y&#8712;S n 1{y = v i }</ns0:formula><ns0:p>The expected value for any single element is:</ns0:p><ns0:formula xml:id='formula_27'>E[q i ] = p i</ns0:formula><ns0:p>The probability of obtaining any specific vector of frequencies:</ns0:p><ns0:formula xml:id='formula_28'>P (Q n = (q 1 , q 2 , . . . , q k )) = p 1 N q 1 n p 2 N q 2 n &#8226; &#8226; &#8226; p k N q k n N n with &#8721; k i=1 q i = 1.</ns0:formula></ns0:div> <ns0:div><ns0:head>246</ns0:head><ns0:p>We will consider the simplest case of discrete probability distribution estimated through frequency counts without using the independence assumption. In such case complexity curve is by definition:</ns0:p><ns0:formula xml:id='formula_29'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>It is obvious that CC(N) = 0 because when n = N we draw all available data. This means that complexity curve always converges. We can ask whether it is possible to say anything about the rate of this convergence. This is the question about the upper bound on the tail of hypergeometric distribution. Such bound is given by Hoeffding-Chv&#225;tal inequality <ns0:ref type='bibr' target='#b5'>(Chv&#225;tal, 1979;</ns0:ref><ns0:ref type='bibr' target='#b26'>Skala, 2013)</ns0:ref>. For the univariate case it has the following form: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_30'>P (|q i &#8722; p i | &#8805; &#948; ) &#8804; 2e &#8722;2&#948;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>which generalises to a multivariate case as:</ns0:p><ns0:formula xml:id='formula_31'>P (|Q n &#8722; P| &#8805; &#948; ) &#8804; 2ke &#8722;2&#948; 2 n</ns0:formula><ns0:p>where |Q n &#8722; P| is the total variation distance. Since H 2 (P, Q n ) &#8804; |Q n &#8722; P| this guarantees that complexity 247 curve converges at least as fast.</ns0:p></ns0:div> <ns0:div><ns0:head>248</ns0:head><ns0:p>Now we will consider a special case when n = 1. In this situation the multivariate hypergeometric distribution is reduced to a simple categorical distribution P. 
In such case the expected Hellinger distance is:</ns0:p><ns0:formula xml:id='formula_32'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i &#8730; 2 k &#8721; j=1 &#8730; p j &#8722; 1{ j = k} 2 = k &#8721; i=1 p i &#8730; 2 1 &#8722; p i + ( &#8730; p i &#8722; 1) 2 = k &#8721; i=1 p i 1 &#8722; &#8730; p i</ns0:formula><ns0:p>This corresponds to the first point of complexity curve and determines its overall steepness.</ns0:p></ns0:div> <ns0:div><ns0:head>249</ns0:head><ns0:p>Theorem: E[H 2 (P, Q 1 )] is maximal for a given k when P is an uniform categorical distribution over k categories, i.e.:</ns0:p><ns0:formula xml:id='formula_33'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; 1 &#8722; 1 k Proof:</ns0:formula><ns0:p>We will consider an arbitrary distribution P and the expected Hellinger distance</ns0:p><ns0:formula xml:id='formula_34'>E[H 2 (P, Q 1 )].</ns0:formula><ns0:p>We can modify this distribution by choosing two states l and k occurring with probabilities p l and p k such as that p l &#8722; p k is maximal among all pairs of states. We will redistribute the probability mass between the two states creating a new distribution P . The expected Hellinger distance for the distribution P will be:</ns0:p><ns0:formula xml:id='formula_35'>E[H 2 (P , Q 1 )] = k &#8721; i=1,i =k,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l &#8722; a</ns0:formula><ns0:p>where a and p k + p l &#8722; a are new probabilities of the two states in P . We will consider a function</ns0:p><ns0:formula xml:id='formula_36'>f (a) = a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l</ns0:formula><ns0:p>and look for its maxima.</ns0:p><ns0:formula xml:id='formula_37'>&#8706; f (x) &#8706; a = &#8722; 1 &#8722; &#8730; p k + p l &#8722; a + &#8730; p k + p l &#8722; a 4 1 &#8722; &#8730; p k + p l &#8722; a + 1 &#8722; &#8730; a &#8722; &#8730; a 4 1 &#8722; &#8730; a</ns0:formula><ns0:p>The derivative is equal to 0 if and only if a = p k +p l 2 . We can easily see that:</ns0:p><ns0:formula xml:id='formula_38'>f (0) = f (p k + p l ) = (p k + p l ) 1 &#8722; &#8730; p k + p l &lt; (p k + p l ) 1 &#8722; p k + p l 2</ns0:formula><ns0:p>This means that f (a) reaches its maximum for a = p k +p l 2 . From that we can conclude that for any distribution P if we produce distribution P by redistributing probability mass between two states equally the following holds:</ns0:p><ns0:formula xml:id='formula_39'>E[H 2 (P , Q 1 )] &#8805; E[H 2 (P, Q 1 )]</ns0:formula><ns0:p>If we repeat such redistribution arbitrary number of times the outcome distribution converges to uniform 250 distribution. This proves that the uniform distribution leads to the maximal expected Hellinger distance 251 for a given number of states.</ns0:p></ns0:div> <ns0:div><ns0:head>252</ns0:head><ns0:p>Theorem: Increasing the number of categories by dividing an existing category into two new categories always increases the expected Hellinger distance, i.e.</ns0:p><ns0:formula xml:id='formula_40'>k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; k &#8721; i=1,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p l &#8722; a) 1 &#8722; &#8730; p l &#8722; a 10/25 a 1 &#8722; &#8730; p l &#8722; a &#8804; a 1 &#8722; &#8730; a</ns0:formula><ns0:p>which concludes the proof.</ns0:p><ns0:p>From the properties stated by these two theorems we can gain some intuitions about complexity curves in general. 
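The n = 1 case can also be checked by simulation. The short sketch below (ours, purely illustrative) draws single observations from an arbitrary categorical distribution, treats each draw as a point-mass estimate Q1, and compares the averaged Hellinger distance with the closed form derived above, i.e. the sum over categories of p_i * sqrt(1 - sqrt(p_i)).

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.15, 0.05])          # arbitrary categorical distribution

closed_form = np.sum(p * np.sqrt(1.0 - np.sqrt(p)))

dists = []
for j in rng.choice(len(p), size=100_000, p=p):
    q = np.zeros_like(p)
    q[j] = 1.0                                 # Q_1 puts all mass on the drawn value
    dists.append(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

print(closed_form, np.mean(dists))             # the two numbers agree closely
```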
First, by looking at the formula for the uniform distribution E[H 2 (P,</ns0:p><ns0:formula xml:id='formula_41'>Q 1 )] = 1 &#8722; 1 k we can see that when k = 1 E[H 2 (P, Q 1 )] = 0 and when k &#8594; &#8734; E[H 2 (P, Q 1 )] &#8594; 1.</ns0:formula><ns0:p>The complexity curve will be less steep if the variables in the data set take multiple values and each value occurs with equal probability. This is consistent with our intuition: we need a larger sample to cover such space and collect information. For smaller number of distinct values or distributions with mass concentrated mostly in a few points smaller sample will be sufficient to represent most of the information in the data set.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve and the performance of an unbiased model</ns0:head><ns0:p>To confirm validity of the assumptions behind complexity curve we performed experiments with artificial data generated according to known models. For each of the data set we selected an appropriate classifier which is known to be unbiased with respect to the given model. In this way it was possible to observe if the variance error component is indeed upper bounded by the complexity curve. To train the classifiers we used the same setting as when calculating the complexity curve: classifiers were trained on random subsets and tested on the whole data set. We fitted the learning curve to the complexity curve by matching first and last points of both curves. Then we observed the relation of the two curves in between.</ns0:p><ns0:p>The first generated data set followed the logistic model (logit data set). Matrix X (1000 observations, 12 attributes) contained values drawn from the normal distribution with mean 0 and standard deviation 1. Class vector Y was defined as follows:</ns0:p><ns0:formula xml:id='formula_42'>P(Y |x) = e &#946; x 1 + e &#946; x</ns0:formula><ns0:p>where &#946; = (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0). All attributes were independent and conditionally independent. Since Y values were determined in a non-deterministic way, there was some noise presentclassification error of the logistic regression classifier trained and tested on the full data set was larger than zero. What would happen if the attribute conditional independence assumption was broken? To answer this question we generated another type of data modelled after multidimensional chessboard (chessboard data set). X matrix contained 1000 observations and 2, 3 attributes drawn from an uniform distribution on range [0, 1). Class vector Y had the following values:</ns0:p><ns0:formula xml:id='formula_43'>0 if &#931; m i=0 x i</ns0:formula><ns0:p>s is even 1 otherwise where s was a grid step in our experiments set to 0.5. There is clearly strong attribute dependence, but since all parts of decision boundary are parallel to one of the attributes this kind of data can be modelled with a decision tree with no bias. Figure <ns0:ref type='figure'>4</ns0:ref> presents complexity curves and error curves for different dimensionalities of chessboard data.</ns0:p><ns0:p>Here the classification error becomes larger than indicated by complexity curve. The more dimensions, the more dependencies between attributes violating complexity curve assumptions. For 3 dimensional chessboard the classification problem becomes rather hard and the observed error decreases slowly, but the complexity curve remains almost the same as for 2 dimensional case. 
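For concreteness, a minimal sketch of how such a chessboard sample can be generated and probed with the protocol used above (training on a random subset, testing on the whole data set). The class rule below takes the parity of the summed grid indices ⌊x_i / s⌋, which is one plausible reading of the class definition given above; the exact generator used in the original experiments may differ in details.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def make_chessboard(n=1000, m=2, s=0.5, seed=0):
    # attributes uniform on [0, 1); class = parity of the summed grid indices
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, m))
    y = (np.floor(X / s).sum(axis=1) % 2).astype(int)
    return X, y

def subset_error(X, y, size, seed=0):
    # train on a random subset, test on the full data set,
    # mirroring the learning-curve protocol described above
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=size, replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
    return 1.0 - clf.score(X, y)

X3, y3 = make_chessboard(m=3)          # 3-dimensional chessboard
for size in (50, 200, 800):
    print(size, round(subset_error(X3, y3, size), 3))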
This shows that the complexity curve is not expected to be a good predictor of classification accuracy in the problems where a lot of high-dimensional attribute dependencies occur, for example in epistatic domains in which the importance of one attribute depends on the values of the other.</ns0:p><ns0:p>Results of experiments with controlled artificial data sets are consistent with our theoretical expectations. Basing on them we can introduce a general interpretation of the difference between complexity curve and learning curve: learning curve below the complexity curve is an indication that the algorithm is able to build a good model without sampling the whole domain, limiting the variance error component.</ns0:p><ns0:p>On the other hand, learning curve above the complexity curve is an indication that the algorithm includes complex attributes dependencies in the constructed model, promoting the variance error component.</ns0:p></ns0:div> <ns0:div><ns0:head>Impact of whitening and ICA</ns0:head><ns0:p>To evaluate the impact of the proposed preprocessing techniques (whitening and ICA -Independent Component Analysis) on complexity curves we performed experiments with artificial data. In the first experiment we generated two data sets of 300 observations and with 8 attributes distributed according to Student's t distribution with 1.5 degrees of freedom. In one data set all attributes were independent, in the 8I w (AUCC: 0.17) 8R w (AUCC: 0.12) Figure <ns0:ref type='figure'>5</ns0:ref>. Complexity curves for whitened data (dashed lines) and not whitened data (solid lines). Areas under the curves are given in the legend. 8I -set of 8 independent random variables with Student's t distribution. 8R -one random variable with Student's t distribution repeated 8 times. 8I w -whitened 8I. 8R w -whitened 8R.</ns0:p><ns0:p>other the same attribute was repeated 8 times. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure'>5</ns0:ref> shows complexity curves calculated before and after whitening transform. We can see that whitening had no significant effect on the complexity curve of the independent set. In the case of the dependent set complexity curve calculated after whitening decreases visibly faster and the area under the curve is smaller. This is consistent with our intuitive notion of complexity: a data set with highly correlated or duplicated attributes should be significantly less complex.</ns0:p><ns0:p>In the second experiment two data sets with 100 observations and 4 attributes were generated. The first data set was generated from the continuous uniform distribution on interval [0, 2], the second one from the discrete (categorical) uniform distribution on the same interval. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> presents complexity curves for original, whitened and ICA-transformed data.</ns0:p><ns0:p>Among the original data sets the intuitive notion of complexity is preserved: area under the complexity curve for categorical data is smaller. 
The difference disappears for the whitened data but is again visible in the ICA-transformed data.</ns0:p><ns0:p>These simple experiments are by no means exhaustive but they confirm usefulness of the chosen signal processing techniques (data whitening and Independent Component Analysis) in complexity curve analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve variability and outliers</ns0:head><ns0:p>Complexity curve is based on the expected Hellinger distance and the estimation procedure includes some variance. The natural assumption is that the variability caused by the sample size is greater than the variability resulting from a specific composition of a sample. Otherwise averaging over samples of the same size would not be meaningful. This assumption is already present in standard learning curve methodology where classifier accuracy is plotted against training set size. We expect that the exact variability of the complexity curve will be connected with the presence of outliers in the data set. Such influential observations will have a huge impact depending whether they will be included in a sample or not.</ns0:p><ns0:p>To verify whether these intuitions were true, we constructed two new data sets by introducing artificially outliers to WINE data set. In WINE001 we modified 1% of the attribute values by multiplying them by a random number from range (&#8722;10, 10). In WINE005 5% of the values were modified in such manner. Average number of points per dimension Table <ns0:ref type='table'>1</ns0:ref>. Data complexity measures used in experiments.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> presents conditional complexity curves for all three data sets. WINE001 curve has indeed a higher variance and is less regular than WINE curve. WINE005 curve is characterised not only by a higher variance but also by a larger AUCC value. This means that adding so much noise increased the overall complexity of the data set significantly.</ns0:p><ns0:p>The result support our hypothesis that large variability of complexity curve signify an occurrence of highly influential observations in the data set. This makes complexity curve a valuable diagnostic tool for such situations. However, it should be noted that our method is unable to distinguish between important outliers and plain noise. To obtain this kind of insight one has to employ different methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison with other complexity measures</ns0:head><ns0:p>The set of data complexity measures developed by <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref> and extended by <ns0:ref type='bibr' target='#b13'>Ho et al. (2006)</ns0:ref> continues to be used in experimental studies to explain performance of various classifiers <ns0:ref type='bibr' target='#b9'>(D&#237;ez-Pastor et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Mantovani et al., 2015)</ns0:ref>. We decided to compare experimentally complexity curve with those measures. Descriptions of the measures used are given in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>According to our hypothesis conditional complexity curve should be robust in the context of class imbalance. To demonstrate this property we used for the comparison 81 imbalanced data sets used previously in the study by <ns0:ref type='bibr' target='#b9'>D&#237;ez-Pastor et al. (2015)</ns0:ref>. 
These data sets come originally from HDDT <ns0:ref type='bibr' target='#b6'>(Cieslak et al., 2011)</ns0:ref> and KEEL <ns0:ref type='bibr' target='#b0'>(Alcal&#225; et al., 2010)</ns0:ref> repositories. We selected only binary classification problems.</ns0:p><ns0:p>The list of data sets with their properties is presented in Supplementary document S1 as Table <ns0:ref type='table'>S1</ns0:ref> and Table <ns0:ref type='table'>S2</ns0:ref>.</ns0:p><ns0:p>For each data set we calculated area under the complexity curve using the previously described procedure and the values of other data complexity measures using DCOL software <ns0:ref type='bibr' target='#b21'>(Orriols-Puig et al., 2010</ns0:ref>). Pearson's correlation was then calculated for all the measures. As T2 measure seemed to have non-linear characteristics destroying the correlation additional column log T2 was added to comparison.</ns0:p><ns0:p>Results are presented as Figure <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>. Clearly AUCC is mostly correlated with log T2 measure. This is to be expected as both measures are concerned with sample size in relation to attribute structure. The difference is that T2 takes into account only the number of attributes while AUCC considers also the shape of distributions of the individual attributes. Correlations of AUCC with other measures are much lower and it can be assumed that they capture different aspects of data complexity and may be potentially complementary.</ns0:p><ns0:p>The next step was to show that information captured by AUCC is useful for explaining classifier performance. In order to do so we trained a number of different classifiers on the 81 benchmark data sets and evaluated their performance using random train-test split with proportion 0.5 repeated 10 times. The performance measure used was the area under ROC curve. We selected three linear classifiers -na&#239;ve Bayes with gaussian kernel, linear discriminant analysis (LDA) and logistic regression -and two families Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_44'>AUCC F1 F1v F2 F3 F4 L1 L2 L3 N1 N2 N3 N4 T1 T2 log(T2) log(T2) T2 T1 N4 N3 N2 N1 L3 L2 L1 F4 F3 F2 F1v F1 AUCC</ns0:formula><ns0:p>-0.79 -0.091-0.079-0.033 -0.27 -0.41 0.2 -0.17 -0.22 -0.15 -0.16 -0.14 0.33 0.092 0.75 1 -0.49 -0.087-0.047 -0.05 -0.13 -0.22 0.093 -0.14 -0.11 -0.16 -0.19 -0.15 0.15 0.099 1 0.75 -0.023 -0.81 -0.51 0.15 -0.62 -0.42 0.095 0.32 0.55 0.36 0.5 0.37 0.51 1 0.099 0.092 -0.37 -0.49 -0.23 0.25 -0.55 -0.59 0.015 0.3 0.31 0.53 0.51 0.51 1 0.51 0.15 0.33 -0.017 -0.35 -0.16 0.51 -0.49 -0.59 0.34 0.74 0.2 0.98 0.89 1 0.51 0.37 -0.15 -0.14 0.055 -0.47 -0.28 0.33 -0.55 -0.56 0.21 0.61 0.26 0.85 1 0.89 0.51 0.5 -0.19 -0.16 -0.058 -0.36 -0.16 0.52 -0.46 -0.56 0.38 0.81 0.21 1 0.85 0.98 0.53 0.36 -0.16 -0.15 0.25 -0.63 -0.21 0.017 -0.2 -0.07 -0.46 0.25 1 0.21 0.26 0.2 0.31 0.55 -0.11 -0.22 -0.11 -0.31 -0.17 0.38 -0.33 -0.38 0.51 1 0.25 0.81 0.61 0.74 0.3 0.32 -0.14 -0.17 Correlations between AUCC, log T2, and classifier performance are presented in Table <ns0:ref type='table'>2</ns0:ref>. Most of the correlations are weak and do not reach statistical significance, however some general tendencies can be observed. As can be seen, AUC ROC scores of linear classifiers have very little correlation with AUCC and log T2. This may be explained by the high-bias and low-variance nature of these classifiers: they are not strongly affected by data scarcity but their performance depends on other factors. 
This is especially true for LDA classifier, which has the weakest correlation among linear classifiers.</ns0:p><ns0:p>In k-NN classifier complexity depends on k parameter: with low k values it is more prone to variance error, with larger k it is prone to bias if the sample size is not large enough <ns0:ref type='bibr' target='#b7'>(Domingos, 2000)</ns0:ref>. Both AUCC and log T2 seem to capture the effect of sample size in the case of large k values well (correlations -0.2249 and 0.2395 for 35-NN). However, for k = 1 the correlation with AUCC is stronger (-0.1256 vs 0.0772).</ns0:p><ns0:p>Depth parameter in decision tree also regulates complexity: the larger the depth the more classifier is prone to variance error and less to bias error. This suggests that AUCC should be more strongly correlated with performance of deeper trees. On the other hand, complex decision trees explicitly model attribute interdependencies ignored by complexity curve, which may weaken the correlation. This is observed in the obtained results: for a decision stub (tree of depth 1), which is low-variance high-bias classifier, correlation with AUCC and log T2 is very weak. For d = 3 and d = 5 it becomes visibly stronger, and then for larger tree depth it again decreases. It should be noted that with large tree depth, as with small k values in k-NN, AUCC has stronger correlation with the classifier performance than log T2.</ns0:p><ns0:p>A slightly more sophisticated way of applying data complexity measures is an attempt to explain classifier performance relative to some other classification method. In our experiments LDA is a good candidate for reference method since it is simple, has low variance and is not correlated with either AUCC or log T2. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> presents correlations of both measures with classifier performance relative to LDA. Here we can see that correlations for AUCC are generally higher than for log T2 and reach significance for the majority of classifiers. Especially in the case of decision tree AUCC explains relative performance better than log T2 (correlation 0.1809 vs -0.0303 for d = inf).</ns0:p><ns0:p>Results of the presented correlation analyses demonstrate the potential of complexity curve to complement the existing complexity measures in explaining classifier performance. As expected from theoretical considerations, there is a relation between how well AUCC correlates with classifier performance and the classifier's position in the bias-variance spectrum. It is worth noting that despite the attribute independence assumption the complexity curve method proved useful for explaining performance of complex non-linear classifiers.</ns0:p></ns0:div> <ns0:div><ns0:head>Large p, small n problems</ns0:head><ns0:p>There is a special category of machine learning problems in which the number of attributes p is large with respect to the number of samples n, perhaps even order of magnitudes larger. Many important biological data sets, most notably data from microarray experiments, fall into this category <ns0:ref type='bibr' target='#b15'>(Johnstone and Titterington, 2009)</ns0:ref>. To test how our complexity measure behaves in such situations, we calculated AUCC scores for a few microarray data sets and compared them with AUC ROC scores of some simple classifiers. Classifiers were evaluated as in the previous section. 
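For reference, the evaluation protocol described above (the area under the ROC curve averaged over 10 random train-test splits with proportion 0.5) amounts to roughly the following sketch. The classifier and the synthetic data set are placeholders, and the use of stratified splits is an assumption made here for robustness on imbalanced data rather than a detail stated in the text.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mean_auc(clf, X, y, repeats=10, seed=0):
    # AUC ROC averaged over `repeats` random 50/50 train-test splits
    scores = []
    for r in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, stratify=y, random_state=seed + r)
        clf.fit(X_tr, y_tr)
        scores.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return float(np.mean(scores))

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
print(mean_auc(LogisticRegression(max_iter=1000), X, y))

The same averaging over repeated random splits underlies both the correlation analyses above and the microarray comparison below.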
Detailed information about the data sets is given in Supplementary document S1 as Table <ns0:ref type='table' target='#tab_4'>S3</ns0:ref>.</ns0:p><ns0:p>Results of the experiment are presented in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>. As expected, with the number of attributes much larger than the number of observations data is considered by our metric as extremely scarce -values of AUCC are in all cases above 0.95. On the other hand, AUC ROC classification performance is very varied between data sets with scores approaching or equal to 1.0 for LEUKEMIA and LYMPHOMA data sets, and scores around 0.5 baseline for PROSTATE. This is because despite the large number of dimensions the form of the optimal decision function can be very simple, utilising only a few of available dimensions.</ns0:p><ns0:p>Complexity curve does not consider the shape of decision boundary at all and thus does not reflect differences in classification performance.</ns0:p><ns0:p>From this analysis we concluded that complexity curve is not a good predictor of classifier performance for data sets containing a large number of redundant attributes, as it does not differentiate between important and unimportant attributes. The logical way to proceed in such case would be to perform some form of feature selection or dimensionality reduction on the original data, and then calculate complexity curve in the reduced dimensions.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATIONS Interpreting complexity curves</ns0:head><ns0:p>In order to prove the practical applicability of the proposed methodology, and show how complexity curve plot can be interpreted, we performed experiments with six simple data sets from UCI Machine Learning Repository <ns0:ref type='bibr' target='#b10'>(Frank and Asuncion, 2010)</ns0:ref>. The sets were chosen only as illustrative examples.</ns0:p><ns0:p>Basic properties of the data sets are given in Supplementary document as Table <ns0:ref type='table' target='#tab_5'>S4</ns0:ref>. For each data set we calculated conditional complexity curve. The curves are presented in Figure <ns0:ref type='figure' target='#fig_13'>9</ns0:ref>. Learning curves of CART decision tree (DT) were included for comparison. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>On most of the benchmark data sets we can see that complexity curve upper bounds the DT learning curve. The bound is relatively tight in the case of GLASS and IRIS, and looser for BREAST-CANCER-WISCONSIN and WINE data set. A natural conclusion is that a lot of variability contained in this last data set and captured by the Hellinger distance is irrelevant to the classification task. The most straightforward explanation would be the presence of unnecessary attributes not correlated with the class which can be ignored altogether. This is consistent with the results of various studies in feature selection. <ns0:ref type='bibr' target='#b3'>Choubey et al. (1996)</ns0:ref> identified that in GLASS data 7-8 attributes (78-89%) are relevant, in IRIS data 3 attributes (75%), and in BREAST-CANCER-WISCONSIN 5-7 attributes (56-78%). Similar results were obtained for BREAST-CANCER-WISCONSIN in other studies, which found that only 4 of the original attributes (44%) contribute to the classification <ns0:ref type='bibr' target='#b24'>(Ratanamahatana and Gunopulos, 2003;</ns0:ref><ns0:ref type='bibr' target='#b17'>Liu et al., 1998)</ns0:ref>. 
<ns0:ref type='bibr' target='#b8'>Dy and Brodley (2004)</ns0:ref> obtained best classification results for WINE data set with 7 attributes (54%).</ns0:p><ns0:p>On MONKS-1 and CAR complexity curve is no longer a proper upper bound on DT learning curve.</ns0:p><ns0:p>This is an indication of models relying heavily on attribute interdependencies to determine the correct class. This is not surprising: both MONKS-1 and CAR are artificial data sets with discrete attributes devised for evaluation of rule-based and tree-based classifiers <ns0:ref type='bibr' target='#b31'>Thrun et al. (1991)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Bohanec and Rajkovi&#269; (1988)</ns0:ref>.</ns0:p><ns0:p>Classes are defined with logical formulas utilising relations of multiple attributes rather than single values -clearly the attributes are interdependent. In that context complexity curve can be treated as a baseline for independent attribute situation and generalisation curve as diagnostic tool indicating the presence of interdependencies.</ns0:p><ns0:p>Besides the slope of the complexity curve we can also analyse its variability. We can see that the shape of WINE complexity curve is very regular with small variance in each point, while the GLASS curve displays much higher variance. This mean that the observations in GLASS data set are more diverse and some observations (or their combinations) are more important for representing data structure than the other.</ns0:p></ns0:div> <ns0:div><ns0:head>Data pruning with complexity curves</ns0:head><ns0:p>The problem of data pruning in the context of machine learning is defined as reducing the size of training sample in order to reduce classifier training time and still achieve satisfactory performance. It becomes extremely important as the data grows and a) does not fit the memory of a single machine, b) training times of more complex algorithms become very long.</ns0:p><ns0:p>A classic method for performing data pruning is progressive sampling -training the classifier on data samples of increasing size as long as its performance increases. <ns0:ref type='bibr' target='#b23'>Provost et al. (1999)</ns0:ref> analysed various schedules for progressive sampling and recommended geometric sampling, in which sample size is multiplied by a specified constant in each iteration, as the reasonable strategy in most cases. Geometric sampling uses samples of sizes a i n 0 , where n 0 -initial sample size, a -multiplier, i -iteration number.</ns0:p><ns0:p>In our method instead of training classifier on the drawn data sample we are probing the complexity curve. We are not trying to detect the convergence of classifier accuracy, but just search for a point on the curve corresponding to some reasonably small Hellinger distance value, e.g. 0.005. This point designates the smallest data subset which still contains the required amount of information.</ns0:p><ns0:p>In this setting we were not interested in calculating the whole complexity curve but just in finding the minimal data subset, which still contains most of the original information. The search procedure should be as fast as possible, since the goal of the data pruning is to save time spent on training classifiers. 
To comply with these requirements we constructed a criterion function of the form f</ns0:p><ns0:formula xml:id='formula_45'>(x) = H 2 (G x , D) &#8722; t,</ns0:formula><ns0:p>where D denotes a probability distribution induced by the whole data set, G x a distribution induced by random subset of size x and t is the desired Hellinger distance. We used classic Brent method <ns0:ref type='bibr' target='#b2'>(Brent, 1973)</ns0:ref> to find a root of the criterion function. In this way data complexity was calculated only for the points visited by Brent's algorithm. To speed up the procedure even further we used standard complexity curve instead of the conditional one and settled for whitening transform as the only preprocessing technique.</ns0:p><ns0:p>To verify if this idea is of practical use, we performed an experiment with three bigger data sets from UCI repository. Their basic properties are given in Supplementary document S1 as Table <ns0:ref type='table' target='#tab_8'>S5</ns0:ref>.</ns0:p><ns0:p>For all data sets we performed a stratified 10 fold cross validation experiment. The training part of a split was pruned according to our criterion function with t = 0.005 (CC pruning) or using geometric progressive sampling with multiplier a = 2 and initial sample size n 0 = 100 (PS pruning). Achieving the same accuracy as with CC pruning was used as a stop criterion for progressive sampling. Classifiers were trained on pruned and unpruned data and evaluated on the testing part of each cross validation split. Standard error was calculated for the obtained values. We have used machine learning algorithms from scikit-learn library <ns0:ref type='bibr' target='#b22'>(Pedregosa et al., 2011)</ns0:ref> and the rest of the procedure was implemented in Python with the help of NumPy and SciPy libraries. Calculations were done on a workstation with 8 core Intel R</ns0:p><ns0:p>Core TM i7-4770 3.4 Ghz CPU working under Arch GNU/Linux. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science fitting the model on the unpruned data is often faster than applying pruning strategy.</ns0:p><ns0:p>These results present complexity curve pruning as a reasonable model-free alternative to progressive sampling. It is more stable and often less demanding computationally. It does not require additional convergence detection strategy, which is always an important consideration when applying progressive sampling in practice. What is more, complexity curve pruning can also be easily applied in the context of online learning, when the data is being collected on the fly. After appending a batch of new examples to the data set, Hellinger distance between the old data set and the extended one can be calculated. If the distance is smaller than the chosen threshold, the process of data collection can be stopped.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article we introduced a measure of data complexity targeted specifically at data sparsity. This distinguish it from other measures focusing mostly on the shape of optimal decision boundary in classification problems. The introduced measure has a form of graphical plot -complexity curve. We showed that it exhibits desirable properties through a series of experiments on both artificially constructed and real-world data sets. We proved that complexity curve capture non-trivial characteristics of the data sets and is useful for explaining the performance of high-variance classifiers. 
With conditional complexity curve it was possible to perform a meaningful analysis even with heavily imbalanced data sets.</ns0:p><ns0:p>Then we demonstrated how complexity curve can be used in practice for data pruning (reducing the size of training set) and that it is a feasible alternative to progressive sampling technique. This result is immediately applicable to all the situations when data overabundance starts to pose a problem.</ns0:p><ns0:p>For instance, it is possible to perform a quick exploration study on a pruned data set before fitting computationally expensive models on the whole set. Pruning result may also provide a suggestion for choosing proper train-test split ratio or number of folds of cross-validation in the evaluation procedure.</ns0:p><ns0:p>We argue that new measures of data characteristics, such as complexity curves, are needed to move away from a relatively static view of classification task to a more dynamic one. It is worth to investigate how various algorithms are affected by certain data manipulations, for example when new data become available or the underlying distribution shifts. This would facilitate the development of more adaptive and universal algorithms capable of working in a dynamically changing environment.</ns0:p><ns0:p>Experiments showed that in the presence of large number of redundant attributes not contributing to the classification task complexity curve does not correlate well with classifier performance. It correctly identifies dimensional sparseness of the data, but that is misleading since the actual decision boundary may still be very simple. Because of this as the next step in our research we plan to apply similar probabilistic approach to measure information content of different attributes in a data set and use that knowledge for performing feature selection. Graphs analogical to complexity curves and generalisation curves would be valuable tools for understanding characteristics of data sets and classification algorithms related to attribute structure.</ns0:p><ns0:p>Another limitation our method is the assumption of lack of attributes interdependencies. While the presence of small dependencies does not disrupt the analysis, when strong high dimensional dependencies are present the complexity curve does not correlate with classifier performance well. This means that it is infeasible to use for some domains, for example highly epistatic problems in bioinformatics.</ns0:p><ns0:p>Our long-term goal is to gain a better understanding of the impact of data set structure, both in terms of contained examples and attributes, and use that knowledge to build heterogeneous classification ensembles. We hope that a better control over data sets used in experiments will allow to perform a more systematic study of classifier diversity and consensus methods.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>measuring shape of the decision boundary and the amount class overlap. Topological measures concerned 2/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:2:1:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>y m -values taken by Y . It has been shown that error of an arbitrary classification or regression model can be decomposed into three parts: Error = Bias + Variance + Noise 5/25 PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>step size) control the trade-off between the precision of the calculated curve and the computation time. In all experiments, unless stated otherwise, we used the values K = 20, d = |D|/60. Regular shapes of the obtained curves did not suggest the need for using larger values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 presents a sample complexity curve (solid lines). It demonstrates how by drawing larger subsets of the data we get better approximations of the original distribution, as indicated by the decreasing Hellinger distance. The logarithmic decrease of the distance is characteristic: it means that with a relatively small number of samples we can recover general characteristics of the distribution, but to model the details precisely we need a lot more data points. The shape of the curve is very regular, with just</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 2. Procedure for calculating the conditional complexity curve. D - original data set, C - number of classes, N - number of subsets, K - number of samples. 1. Transform D with the whitening transform and/or ICA to obtain D_I. 2. Split D_I according to the class into D_I^1, D_I^2, ..., D_I^C. 3. From D_I^1, D_I^2, ..., D_I^C estimate probability distributions P^1, P^2, ..., P^C. 4. For i in 1 ... |D_I| with a step size |D_I|/N:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2. Figure 3. Figure 4.</ns0:head><ns0:label>2, 3, 4</ns0:label><ns0:figDesc>Figure 2 presents the complexity curve and the adjusted error of logistic regression for the generated data. After ignoring the noise error component, we can see that the variance error component is indeed upper bounded by the complexity curve. A different kind of artificial data represented a multidimensional space with parallel stripes in one dimension (stripes data set). It consisted of an X matrix with 1000 observations and 10 attributes drawn from a uniform distribution defined on the range [0, 1). Class values Y depended only on the values of one of the attributes: for values less than 0.25 or greater than 0.75 the class was 1, for other values the class was 0. This kind of relation can be naturally modelled with a decision tree. All the attributes are again independent and conditionally independent.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 presents the complexity curve and the adjusted error of the decision tree classifier on the generated data. Once again the assumptions of the complexity curve methodology are satisfied and the complexity curve indeed upper bounds the classification error.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Complexity curves for whitened data (dashed lines), not whitened data (solid lines) and ICA-transformed data (dotted lines). Areas under the curves are given in the legend.
U -data sampled from uniform distribution. C -data sampled from categorical distribution. U w -whitened U. C wwhitened C. U ICA -U w after ICA. C ICA -C w after ICA.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Complexity curves for WINE and its counterparts with introduced outliers. For the sake of clarity only contours were drawn.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Pearson's correlations between complexity measures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Decision tree d = 10 -0.1035 0.0695 Decision tree d = 15 -0.0995 0.0375 Decision tree d = 20 -0.0921 0.0394 Decision tree d = 25 -0.0757 0.0298 Decision tree d = 30 -0.0677 0.0227 Decision tree d = inf -0.0774 0.0345 Table 2. Pearson's correlations coefficients between classifier AUC ROC performances and complexity measures. Values larger than 0.22 or smaller than -0.22 are significant at &#945; = 0.05 significance level. tree d = 10 0.2175 -0.0838 LDA -Decision tree d = 15 0.2146 -0.0356 LDA -Decision tree d = 20 0.2042 -0.0382 LDA -Decision tree d = 25 0.1795 -0.0231 LDA -Decision tree d = 30 0.1636 -0.0112 LDA -Decision tree d = inf 0.1809 -0.0303</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Conditional complexity curves for six different data sets from UCI Machine Learning repository with areas under complexity curve (AUCC) reported: A -CAR, AUCC: 0.08, B -MONKS-1, AUCC: 0.05, C -IRIS, AUCC: 0.19, D -BREAST-CANCER-WISCONSIN, AUCC: 0.13, E -GLASS, AUCC: 0.44, F -WINE, AUCC: 0.35.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>comes from an inability of the applied model to represent the true relation present in data, variance comes from an inability to estimate the optimal model parameters from the data sample, noise is inherent to the solved task and irreducible. Since our complexity measure is model agnostic it clearly does not include bias component. As it does not take into account the dependent variable, it cannot measure noise either. All that is left to investigate is the relation between our complexity measure and variance component of the classification error.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The variance error component is connected with overfitting, when the model fixates over specific</ns0:cell></ns0:row><ns0:row><ns0:cell>properties of a data sample and looses generalisation capabilities over the whole problem domain. If the</ns0:cell></ns0:row><ns0:row><ns0:cell>training sample represented the problem perfectly and the model was fitted with perfect optimisation</ns0:cell></ns0:row><ns0:row><ns0:cell>procedure variance would be reduced to zero. The less representative the training sample is for the whole</ns0:cell></ns0:row><ns0:row><ns0:cell>problem domain, the larger the chance for variance error.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Pearson's correlations coefficients between classifier AUC ROC performances relative to LDA performance and complexity measures. 
Values larger than 0.22 or smaller than -0.22 are significant at &#945; = 0.05 significance level.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>18/25PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:2:1:NEW 29 Jun 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Areas under conditional complexity curve (AUCC) for microarray data sets along AUC ROC values for different classifiers. k-NNk-nearest neighbour, DT -CART decision tree, LDA -linear discriminant analysis, NB -na&#239;ve Bayes, LR -logistic regression.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>rate: 0.19 &#177; 0.02 Mean CC compression time: 4.01 &#177; 0.14 LinearSVC 0.86 &#177; 0.00 0.86 &#177; 0.00 27.71 &#177; 0.35 6.69 &#177; 0.52 10.73 &#177; 8.65 0.55 &#177; 0.49 GaussianNB 0.80 &#177; 0.01 0.80 &#177; 0.01</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell>Score</ns0:cell><ns0:cell>CC score</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>CC time</ns0:cell><ns0:cell>PS time</ns0:cell><ns0:cell>PS rate</ns0:cell></ns0:row><ns0:row><ns0:cell>waveform</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>Mean CC compression 0.02 &#177; 0.00 RF 0.86 &#177; 0.00 0.85 &#177; 0.00 33.49 &#177; 0.04 SVC 0.86 &#177; 0.00 0.86 &#177; 0.00 211.98 &#177; 0.93 Tree 0.78 &#177; 0.00 0.77 &#177; 0.00 3.06 &#177; 0.06 Logit 0.86 &#177; 0.00 0.86 &#177; 0.00 1.75 &#177; 0.06 GBC 0.86 &#177; 0.00 0.86 &#177; 0.00 112.34 &#177; 0.12 24.59 &#177; 2.30 66.66 &#177; 37.99 0.53 &#177; 0.43 4.02 &#177; 0.14 0.03 &#177; 0.01 0.01 &#177; 0.00 9.29 &#177; 0.76 18.06 &#177; 10.75 0.46 &#177; 0.37 9.08 &#177; 1.21 21.22 &#177; 28.34 0.33 &#177; 0.42 4.50 &#177; 0.20 1.40 &#177; 0.70 0.37 &#177; 0.28 4.21 &#177; 0.17 0.60 &#177; 0.62 0.30 &#177; 0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>led</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>Mean CC compression rate: 0.04 &#177; 0.01 Mean CC compression time: 1.38 &#177; 0.03</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>LinearSVC GaussianNB 0.74 &#177; 0.00 0.73 &#177; 0.00 0.74 &#177; 0.00 0.74 &#177; 0.00 RF 0.74 &#177; 0.00 0.73 &#177; 0.00 SVC 0.74 &#177; 0.00 0.74 &#177; 0.00 Tree 0.74 &#177; 0.00 0.73 &#177; 0.00 Logit 0.74 &#177; 0.00 0.74 &#177; 0.00 GBC 0.74 &#177; 0.00 0.73 &#177; 0.00</ns0:cell><ns0:cell>4.68 &#177; 0.10 0.02 &#177; 0.00 1.77 &#177; 0.01 82.16 &#177; 0.86 0.03 &#177; 0.00 2.03 &#177; 0.08 51.26 &#177; 0.40</ns0:cell><ns0:cell cols='3'>1.49 &#177; 0.04 1.38 &#177; 0.03 1.47 &#177; 0.03 1.56 &#177; 0.07 10.04 &#177; 17.52 0.26 &#177; 0.44 0.47 &#177; 1.04 0.13 &#177; 0.34 0.07 &#177; 0.02 0.26 &#177; 0.44 0.83 &#177; 0.25 0.05 &#177; 0.04 1.38 &#177; 0.03 0.04 &#177; 0.01 0.09 &#177; 0.10 1.42 &#177; 0.03 0.30 &#177; 0.44 0.17 &#177; 0.33 3.57 &#177; 0.30 6.32 &#177; 4.05 0.04 &#177; 0.04</ns0:cell></ns0:row><ns0:row><ns0:cell>adult</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>Mean CC compression rate: 0.33 &#177; 0.02 Mean CC compression time: 0.93 &#177; 0.05</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>LinearSVC GaussianNB 0.81 &#177; 0.01 0.81 &#177; 0.01 
0.69 &#177; 0.19 0.67 &#177; 0.20 RF 0.86 &#177; 0.01 0.85 &#177; 0.01 SVC 0.76 &#177; 0.00 0.76 &#177; 0.00 Tree 0.81 &#177; 0.00 0.81 &#177; 0.01 Logit 0.80 &#177; 0.00 0.80 &#177; 0.00 GBC 0.86 &#177; 0.00 0.86 &#177; 0.00</ns0:cell><ns0:cell cols='2'>1.79 &#177; 0.08 0.01 &#177; 0.00 2.04 &#177; 0.01 81.70 &#177; 0.56 10.52 &#177; 2.31 1.53 &#177; 0.08 0.93 &#177; 0.05 1.60 &#177; 0.09 0.12 &#177; 0.00 0.97 &#177; 0.05 0.08 &#177; 0.01 0.96 &#177; 0.05 2.33 &#177; 0.01 1.80 &#177; 0.09</ns0:cell><ns0:cell cols='2'>0.30 &#177; 0.84 0.18 &#177; 0.52 0.01 &#177; 0.00 0.02 &#177; 0.02 2.11 &#177; 1.18 0.69 &#177; 0.59 5.06 &#177; 7.17 0.16 &#177; 0.19 0.10 &#177; 0.08 0.72 &#177; 0.72 0.05 &#177; 0.07 0.42 &#177; 0.68 2.37 &#177; 1.22 0.67 &#177; 0.57</ns0:cell></ns0:row></ns0:table><ns0:note>21/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:2:1:NEW 29 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Obtained accuracies and training times of different classification algorithms on unpruned and pruned data sets. Score corresponds to classifier accuracy, time to classifier training time (including pruning procedure), rate to compression rate. CC corresponds to data pruning with complexity curves, PS to data pruning with progressive sampling. LinearSVC -linear support vector machine, GaussianNBna&#239;ve Bayes with gaussian kernel, RF -random forest 100 CART trees, SVC -support vector machine with radial basis function kernel, Tree -CART decision tree, Logit -logistic regression, GBC -gradient boosting classifier with 100 CART trees.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>presents measured times and obtained accuracies. As can be seen, the difference in classification accuracies between pruned and unpruned training data is negligible. CC compression rate differs for the three data sets, which suggests that they are of different complexity: for LED data only 5% is needed to perform successful classification, while ADULT data is pruned at 33%. CC compression rate is rather stable with only small standard deviation, but PS compression rate is characterised with huge variance. In this regard, complexity curve pruning is preferable as a more stable pruning criterion.In all cases when training a classifier on the unpruned data took more than 10 seconds, we observed huge speed-ups. With the exception of SVC on LED data set, complexity curve pruning performed better than progressive sampling in such cases. Unsurprisingly, real speed-ups were visible only for computationally intensive methods such as Support Vector Machines, Random Forest and Gradient Boosted Decision Trees. For simple methods such as Na&#239;ve Bayes, Decision Tree or Logistic Regression</ns0:figDesc><ns0:table /><ns0:note>22/25PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:2:1:NEW 29 Jun 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='25'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:2:1:NEW 29 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" Dear Jaume Bacardit, We thank for opportunity to present our work in PeerJ Computer Science journal. We are glad that reviewer 1 suggested the publication of the manuscript. We also thank reviewer 2 for providing final remarks. We made additional changes in the manuscript addressing those points. After considering previous reviewers comments and the current length of manuscript we decided to significantly shorten it to improve readability. We made the following changes: • Removed references to generalization curves and focused on complexity curves and their properties alone. • Merged section “Interpreting complexity curves” with section “Generalisation curves for benchmark data sets” to preserve important insights. • Moved all the tables presenting properties of the data sets to supplementary material. Below we provide the detailed answers to reviewers’ comments. Response to the Reviewer 1 ''' The authors have successfully addressed most of my previous comments. IN my opinion this new version of the manuscript is ready to be published. ''' We thank Reviewer 1 for finalizing the review. Response to the Reviewer 2 ''' 1. The discussion of noise on the target variable is still not as clear as it could be presented. ''' We followed your suggestion and added the clarifications to the text: *** Bias-variance decomposition (Domingos, 2000) demonstrates that the error of a predictor can be attributed to three sources: bias, coming from inability of an algorithm to build an adequate model for the relationship present in data, variance, coming from inability to estimate correct model parameters from an imperfect data sample, and the irreducible error component commonly called noise. Following this line of reasoning, difficulty of a classification problem may come partly from the complexity of the relation between dependent variable and explanatory variables, partly from the scarcity of information in the training sample, and partly from class ambiguity (due to noise in the target variable or an overlap between classes). *** and *** We aim to measure information saturation of a data set without making any assumptions on the form of the relation between dependent variable and the rest of variables, so explicitly disregarding shape of the decision boundary and classes ambiguity (e.g. caused by noise on the target variable). *** ''' 2. Epistasis and attribute dependence are not the same concept (although related). (...) Please test your system on a range of multiplexer problems or explicitly state that the attribute independence assumption makes the approach unlikely to be suitable for highly epistatic problems. ''' We are grateful for this explanation. We believe that the difficulties with applying our method to problems with a lot of attribute interdependencies make it also unsuitable for epistatic problems. We stated the explicitly in the revised manuscript. In Complexity curve and the performance of an unbiased model: *** This shows that the complexity curve is not expected to be a good predictor of classification accuracy in the problems where a lot of high-dimensional attribute dependencies occur, for example in epistatic domains in which the importance of one attribute depends on the values of the other. *** In Conclusions: *** Another limitation our method is the assumption of lack of attributes interdependencies. 
While the presence of small dependencies does not disrupt the analysis, when strong high dimensional dependencies are present the complexity curve does not correlate with classifier performance well. This means that it is infeasible to use for some domains, for example highly epistatic problems in bioinformatics. *** Sincerely, Dariusz Plewczynski and Julian Zubek "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We describe a method for assessing data set complexity based on the estimation of the underlining probability distribution and Hellinger distance. In contrast to some popular complexity measures it is not focused on the shape of a decision boundary in a classification task but on the amount of available data with respect to the attribute structure. Complexity is expressed in terms of graphical plot, which we call complexity curve. It demonstrates the relative increase of available information with the growth of sample size. We perform theoretical and experimental examination of properties of the introduced complexity mea-sure and show its relation to the variance component of classification error. Then we compare it with popular data complexity measures on 81 diverse data sets and show that it can contribute to explaining performance of specific classifiers on these sets. We also apply our methodology to a panel of simple benchmark data sets, demonstrating how it can be used in practice to gain insights into data characteris-tics. Moreover, we show that complexity curve is an effective tool for reducing the size of the training set (data pruning), allowing to significantly speed up the learning process without compromising classification accuracy. Associated code is available to download at: https://github.com/zubekj/complexity_curve (open source Python implementation).</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>It is common knowledge in machine learning community that the difficulty of classification problems varies greatly. Sometimes it is enough to use simple out of the box classifier to get a very good result and sometimes careful preprocessing and model selection are needed to get any non-trivial result at all. The difficulty of a classification task clearly stems from certain properties of the data set, yet we still have problems with defining those properties in general.</ns0:p><ns0:p>Bias-variance decomposition <ns0:ref type='bibr' target='#b7'>(Domingos, 2000)</ns0:ref> demonstrates that the error of a predictor can be attributed to three sources: bias, coming from inability of an algorithm to build an adequate model for the relationship present in data, variance, coming from inability to estimate correct model parameters from an imperfect data sample, and the irreducible error component commonly called noise. Following this line of reasoning, difficulty of a classification problem may come partly from the complexity of the relation between dependent variable and explanatory variables, partly from the scarcity of information in the training sample, and partly from class ambiguity (due to noise in the target variable or an overlap between classes). This is identical to sources of classification difficulty identified by <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>, who labelled the three components: 'complex decision boundary', 'small sample size and dimensionality induced sparsity' and 'ambiguous classes'.</ns0:p><ns0:p>In this article we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with variance error component. 
We aim to measure information saturation of a data set Since the proposed measure characterise the data sample itself without making any assumptions as to how that sample will be used it should be applicable to all kinds of problems involving reasoning from data. In this work we focus on classification tasks since this is the context in which data complexity measures were previously applied. We compare area under the complexity curve with popular data complexity measures and show how it complements the existing metrics. We also demonstrate that it is useful for explaining classifier performance by showing that the area under the complexity curve is correlated with the area under the receiver operating characteristic (AUC ROC) for popular classifiers tested on 81 benchmark data sets.</ns0:p><ns0:p>We propose an immediate application of the developed method connected with the fundamental question: how large data sample is needed to build a successful predictor? We pursue this topic by proposing a data pruning strategy based on complexity curve and evaluating it on large data sets. We show that it can be considered as an alternative to progressive sampling strategies <ns0:ref type='bibr' target='#b23'>(Provost et al., 1999)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED LITERATURE</ns0:head><ns0:p>Problem of measuring data complexity in the context of machine learning is broadly discussed. Our beliefs are similar to <ns0:ref type='bibr' target='#b11'>Ho (2008)</ns0:ref>, who stated the need for including data complexity analysis in algorithm comparison procedures. Similar needs are also discussed in fields outside machine learning, for example in combinatorial optimisation <ns0:ref type='bibr' target='#b30'>(Smith-Miles and Lopes, 2012)</ns0:ref>.</ns0:p><ns0:p>The general idea is to select a sufficiently diverse set of problems to demonstrate both strengths and weaknesses of the analysed algorithms. The importance of this step was stressed by <ns0:ref type='bibr' target='#b19'>Maci&#224; et al. (2013)</ns0:ref>, who demonstrated how algorithm comparison may be biased by benchmark data set selection, and showed how the choice may be guided by complexity measures. Characterising problem space with some metrics makes it possible to estimate regions in which certain algorithms perform well <ns0:ref type='bibr' target='#b18'>(Luengo and Herrera, 2013)</ns0:ref>, and this opens up possibilities of meta-learning <ns0:ref type='bibr' target='#b29'>(Smith-Miles et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this context complexity measures are used not only as predictors of classifier performance but more importantly as diversity measures capturing various properties of the data sets. It is useful when the measures themselves are diverse and focus on different aspects of the data to give as complete characterisation of the problem space as possible. In the later part of the article we demonstrate that complexity curve fits well into the landscape of currently used measures, offering new insights into data characteristics.</ns0:p><ns0:p>A set of practical measures of data complexity with regard to classification was introduced by <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>, and later extended by <ns0:ref type='bibr' target='#b13'>Ho et al. (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b21'>Orriols-Puig et al. (2010)</ns0:ref>. 
It is routinely used in tasks involving classifier evaluation <ns0:ref type='bibr' target='#b19'>(Maci&#224; et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Luengo and Herrera, 2013)</ns0:ref> and meta-learning <ns0:ref type='bibr' target='#b9'>(D&#237;ez-Pastor et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Mantovani et al., 2015)</ns0:ref>. Some of these measures are based on the overlap of values of specific attributes, examples include Fisher's discriminant ratio, volume of overlap region, attribute efficiency etc. The others focus directly on class separability, this groups includes measures such as the fraction of points on the decision boundary, linear separability, the ratio of intra/inter class distance.</ns0:p><ns0:p>In contrast to our method, such measures focus on specific properties of the classification problem, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>with data sparsity, such as ratio of attributes to observations, attempt to capture similar properties as our complexity curve. <ns0:ref type='bibr' target='#b16'>Li and Abu-Mostafa (2006)</ns0:ref> defined data set complexity in the context of classification using the general concept of Kolmogorov complexity. They proposed a way to measure data set complexity using the number of support vectors in support vector machine (SVM) classifier. They analysed the problems of data decomposition and data pruning using above methodology. A graphical representation of the data set complexity called the complexity-error plot was also introduced. The main problem with their approach is the selection of very specific and complex machine learning algorithms, which may render the results in less universal way, and which is prone to biases specific for SVMs. This make their method unsuitable for diverse machine learning algorithms comparison.</ns0:p><ns0:p>Another approach to data complexity is to analyse it on the level of individual instances. This kind of analysis is performed by <ns0:ref type='bibr' target='#b28'>Smith et al. (2013)</ns0:ref> who attempted to identify which instances are misclassified by various classification algorithm. They devised local complexity measures calculated with respect to single instances and later tried to correlate average instance hardness with global data complexity measures of <ns0:ref type='bibr' target='#b12'>Ho and Basu (2002)</ns0:ref>. They discovered that it is mostly correlated with class overlap. This makes our work complementary, since in our complexity measure we deliberately ignore class overlap and individual instance composition to isolate another source of difficulty, namely data scarcity. <ns0:ref type='bibr' target='#b32'>Yin et al. (2013)</ns0:ref> proposed a method of feature selection based on Hellinger distance (a measure of similarity between probability distributions). The idea was to choose features, which conditional distributions (depending on the class) have minimal affinity. In the context of our framework this could be interpreted as measuring data complexity for single features. The authors demonstrated experimentally that for the high-dimensional imbalanced data sets their method is superior to popular feature selection methods using Fisher criterion, or mutual information.</ns0:p></ns0:div> <ns0:div><ns0:head>DEFINITIONS</ns0:head><ns0:p>In the following sections we define formally all measures used throughout the paper. Basic intuitions, assumptions, and implementation choices are discussed. 
Finally, algorithms for calculating the complexity curve, the conditional complexity curve, and the generalisation curve are given.

Measuring data complexity with samples

In a typical machine learning scenario we want to use information contained in a collected data sample to solve a more general problem which our data describe. Problem complexity can be naturally measured by the size of a sample needed to describe the problem accurately. We call the problem complex if we need to collect a lot of data in order to get any results. On the other hand, if a small amount of data suffices, we say the problem has low complexity.

How to determine if a data sample describes the problem accurately? Any problem can be described with a multivariate probability distribution P of a random vector X. From P we sample our finite data sample D. Now, we can use D to build the estimated probability distribution of X, denoted P_D. P_D is an approximation of P. If P and P_D are identical, we know that the data sample D describes the problem perfectly and collecting more observations would not give us any new information. Analogously, if P_D is very different from P, we can be almost certain that the sample is too small.

To measure similarity between probability distributions we use the Hellinger distance. For two continuous distributions P and P_D with probability density functions p and p_D it is defined as:

$$H^2(P, P_D) = \frac{1}{2}\int \left(\sqrt{p(x)} - \sqrt{p_D(x)}\right)^2 dx$$

The minimum possible distance 0 is achieved when the distributions are identical; the maximum 1 is achieved when any event with non-zero probability in P has probability 0 in P_D and vice versa. Simplicity and the naturally defined 0-1 range make the Hellinger distance a good measure for capturing sample information content.

In most cases we do not know the underlying probability distribution P representing the problem and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P_D as the reference distribution. Any subset S of D can be treated as a data sample, and a probability distribution P_S estimated from it will be an approximation of P_D. By calculating H^2(P_D, P_S) we can assess how well a given subset represents the whole available data, i.e. determine its information content.

Obtaining a meaningful estimation of a probability distribution from a data sample poses difficulties in practice. The probability distribution we are interested in is the joint probability over all attributes. In that context most realistic data sets should be regarded as extremely sparse, and naïve probability estimation using frequencies of occurring values would result in a mostly flat distribution. This can be called the curse of dimensionality. To counter this problem we apply the naïve assumption that all attributes are independent. This may seem like a radical simplification but, as we will demonstrate later, it yields good results in practice and constitutes a reasonable baseline for common machine learning techniques. Under the independence assumption we can calculate the joint probability density function f from the marginal density functions f_1, ..., f_n:

$$f(x) = f_1(x_1) f_2(x_2) \cdots f_n(x_n)$$
We will now derive the formula for the Hellinger distance under the independence assumption. Observe that the Hellinger distance for continuous variables can be expressed in another form:

$$\frac{1}{2}\int \left(\sqrt{f(x)} - \sqrt{g(x)}\right)^2 dx = \frac{1}{2}\int \left(f(x) - 2\sqrt{f(x)g(x)} + g(x)\right) dx = \frac{1}{2}\int f(x)\, dx - \int \sqrt{f(x)g(x)}\, dx + \frac{1}{2}\int g(x)\, dx = 1 - \int \sqrt{f(x)g(x)}\, dx$$

In the last step we used the fact that the integral of a probability density over its domain must equal one.

We will consider two multivariate distributions F and G with density functions:

$$f(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n), \qquad g(x_1, \ldots, x_n) = g_1(x_1) \cdots g_n(x_n)$$

The last formula for the Hellinger distance now expands to:

$$1 - \int \cdots \int \sqrt{f(x_1, \ldots, x_n)\, g(x_1, \ldots, x_n)}\, dx_1 \ldots dx_n = 1 - \int \cdots \int \sqrt{f_1(x_1) \cdots f_n(x_n)\, g_1(x_1) \cdots g_n(x_n)}\, dx_1 \ldots dx_n = 1 - \int \sqrt{f_1(x_1) g_1(x_1)}\, dx_1 \cdots \int \sqrt{f_n(x_n) g_n(x_n)}\, dx_n$$

In this form the variables are separated and the parts of the formula can be calculated independently.

Practical considerations

Calculating the introduced measure of similarity between data sets poses some difficulties in practice. First, in the derived formula a direct multiplication of probabilities occurs, which leads to problems with numerical stability. We increased the stability by switching to the following formula:

$$1 - \int \sqrt{f_1(x_1) g_1(x_1)}\, dx_1 \cdots \int \sqrt{f_n(x_n) g_n(x_n)}\, dx_n = 1 - \left(1 - \frac{1}{2}\int \left(\sqrt{f_1(x_1)} - \sqrt{g_1(x_1)}\right)^2 dx_1\right) \cdots \left(1 - \frac{1}{2}\int \left(\sqrt{f_n(x_n)} - \sqrt{g_n(x_n)}\right)^2 dx_n\right) = 1 - \left(1 - H^2(F_1, G_1)\right) \cdots \left(1 - H^2(F_n, G_n)\right)$$

For continuous variables, estimation of the probability density function is routinely done with kernel density estimation (KDE), a classic technique for estimating the shape of a continuous probability density function from a finite data sample (Scott, 1992). For a sample (x_1, x_2, ..., x_n) the estimated density function has the form:

$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)$$

where K is the kernel function and h is a smoothing parameter, the bandwidth. In our experiments we used the Gaussian function as the kernel. This is a popular choice, which often yields good results in practice. The bandwidth was set according to the modified Scott's rule (Scott, 1992):

$$h = \frac{1}{2}\, n^{-\frac{1}{d+4}},$$

where n is the number of samples and d the number of dimensions.

In many cases the independence assumption can be supported by preprocessing the input data in a certain way. A very common technique which can be applied in this situation is the whitening transform. It transforms any set of random variables into a set of uncorrelated random variables.
For a random vector X with a covariance matrix &#931; a new uncorrelated vector Y can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_9'>&#931; = PDP &#8722;1 W = PD &#8722; 1 2 P &#8722;1 Y = XW</ns0:formula><ns0:p>where D is diagonal matrix containing eigenvalues and P is matrix of right eigenvectors of &#931;. Naturally, lack of correlation does not imply independence but it nevertheless reduces the error introduced by our independence assumption. Furthermore, it blurs the difference between categorical variables and continuous variables putting them on an equal footing. In all further experiments we use whitening transform preprocessing and then treat all variables as continuous.</ns0:p><ns0:p>A more sophisticated method is a signal processing technique known as Independent Component Analysis (ICA) <ns0:ref type='bibr' target='#b14'>(Hyv&#228;rinen and Oja, 2000)</ns0:ref>. It assumes that all components of an observed multivariate signal are mixtures of some independent source signals and that the distribution of the values in each source signal is non-gaussian. Under these assumption the algorithm attempts to recreate the source signals by splitting the observed signal into the components as independent as possible. Even if the assumptions are not met, ICA technique can reduce the impact of attributes interdependencies. Because of its computational complexity we used it as an optional step in our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine learning task difficulty</ns0:head><ns0:p>Our data complexity measure can be used for any type of problem described through a multivariate data sample. It is applicable to regression, classification and clustering tasks. The relation between the defined data complexity and the difficulty of a specific machine learning task has to be investigated. We will focus on the supervised learning case. Classification error will be measured as mean 0-1 error (accuracy). Data complexity will be measured as mean Hellinger distance between the real and the estimated probability distributions of attributes conditioned on the target variable:</ns0:p><ns0:formula xml:id='formula_10'>1 m m &#8721; i=1 H 2 (P(X|Y = y i ), P D (X|Y = y i ))</ns0:formula><ns0:p>where X -vector of attributes, Y -target variable, y 1 , y 2 , . . . Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b7'>Domingos (2000)</ns0:ref> proposed an universal scheme of decomposition, which can be adapted for different loss functions. For a classification problem and 0-1 loss L expected error on sample x for which the true label is t, and the predicted label given a training set D is y can be expressed as:</ns0:p><ns0:formula xml:id='formula_11'>E D,t [1(t = y)] = 1(E t [t] = E D [y]) + c 2 E D [1(y = E D [y])] + c 1 E t [1(t = E t [t])] = B(x) + c 2 V (x) + c 1 N(x)</ns0:formula><ns0:p>where B -bias, V -variance, N -noise. Coefficients c 1 and c 2 are added to make the decomposition consistent for different loss functions. In this case they are equal to:</ns0:p><ns0:formula xml:id='formula_12'>c 1 = P D (y = E t [t]) &#8722; P D (y = E t [t])P t (y = t | E t [t] = t) c 2 = 1 if E t [t] = E D [y] &#8722; P D (y = E t [t] | y = E D [y])</ns0:formula><ns0:p>otherwise.</ns0:p><ns0:p>Bias This intuition can be supported by comparing our complexity measure with the error of the Bayes classifier. We will show that they are closely related. Let Y be the target variable taking on values v 1 , v 2 , . . . 
, v m , f i (x) an estimation of P(X = x|Y = v i ) from a finite sample D, and g(y) an estimation of P(Y = y). In such setting 0-1 loss of the Bayes classifier on a sample x with the true label t is:</ns0:p><ns0:formula xml:id='formula_13'>1(t = y) = 1 t = arg max i (g(v i ) f i (x))</ns0:formula><ns0:p>Let assume that t = v j . Observe that:</ns0:p><ns0:formula xml:id='formula_14'>v j = arg max i (g(v i ) f i (x)) &#8660; &#8704; i g(v j ) f j (x) &#8722; g(v i ) f i (x) &#8805; 0</ns0:formula><ns0:p>which for the case of equally frequent classes reduces to:</ns0:p><ns0:formula xml:id='formula_15'>&#8704; i f j (x) &#8722; f i (x) &#8805; 0</ns0:formula><ns0:p>We can simultaneously add and subtract term P</ns0:p><ns0:formula xml:id='formula_16'>(X = x |Y = v j ) &#8722; P(X = x |Y = v i ) to obtain: &#8704; i ( f j (x) &#8722; P(X = x |Y = v j )) + (P(X = x |Y = v i ) &#8722; f i (x)) + (P(X = x |Y = v j ) &#8722; P(X = x |Y = v i )) &#8805; 0 We know that P(X = x | Y = v j ) &#8722; P(X = x | Y = v i ) &#8805; 0,</ns0:formula><ns0:p>so as long as estimations f i (x), f j (x) do not deviate too much from real distributions the inequality is satisfied. It will not be satisfied (i.e. an error will take place) only if the estimations deviate from the real distributions in a certain way (i.e.</ns0:p><ns0:formula xml:id='formula_17'>f j (x) &lt; P(X = x|Y = v j ) and f i (x) &gt; P(X = x|Y = v i ))</ns0:formula><ns0:p>and the sum of these deviations is greater than <ns0:ref type='table' target='#tab_2'>-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:ref> Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_18'>6/25 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:formula xml:id='formula_19'>P(X = x|Y = v j ) &#8722; P(X = x|Y = v i ).</ns0:formula><ns0:p>The Hellinger distance between f i (x) and P(X = x|Y = v i ) measures the deviation. This shows that by minimising Hellinger distance we are also minimising error of the Bayes classifier. Converse may not be true: not all deviations of probability estimates result in classification error.</ns0:p><ns0:p>In the introduced complexity measure we assumed independency of all attributes, which is analogous to the assumption of na&#239;ve Bayes. Small Hellinger distance between class-conditioned attribute distributions induced by sets A and B means that na&#239;ve Bayes trained on set A and tested on set B will have only very slight variance error component. Of course, if the independence assumption is broken bias error component may still be substantial.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve</ns0:head><ns0:p>Complexity curve is a graphical representation of a data set complexity. It is a plot presenting the expected Hellinger distance between a subset and the whole set versus subset size:</ns0:p><ns0:formula xml:id='formula_20'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>where P is the empirical probability distribution estimated from the whole set and Q n is the probability distribution estimated from a random subset of size n &#8804; |D|. Let us observe that CC(|D|) = 0 because P = Q |D| . Q 0 is undefined, but for the sake of convenience we assume CC(0) = 1.</ns0:p><ns0:p>Algorithm 1 Procedure for calculating complexity curve. D -original data set, K -number of random subsets of the specified size.</ns0:p><ns0:p>1. Transform D with whitening transform and/or ICA to obtain D I .</ns0:p><ns0:p>2. 
Estimate probability distribution for each attribute of D I and calculate joint probability distribution -P.</ns0:p><ns0:p>3. For i in 1 . . . |D I | (with an optional step size d):</ns0:p><ns0:p>(a) For j in 1 . . . K:</ns0:p><ns0:p>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Estimate probability distribution for each attribute of S j i and calculate joint probability distribution -Q j i . iii. Calculate Hellinger distance:</ns0:p><ns0:formula xml:id='formula_21'>l j i = H 2 (P, Q j i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_22'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Complexity curve is a plot of m i &#177; s i vs i.</ns0:p><ns0:p>To estimate complexity curve in practice, for each subset size K random subsets are drawn and the mean value of Hellinger distance, along with standard error, is marked on the plot. The Algorithm 1 presents the exact procedure. Parameters K (the number of samples of a specified size) and d (sampling The shape of the complexity curve captures the information on the complexity of the data set. If the data is simple, it is possible to represent it relatively well with just a few instances. In such case, the complexity curve is very steep at the beginning and flattens towards the end of the plot. If the data is complex, the initial steepness of the curve is smaller. That information can be aggregated into a single parameter -the area under the complexity curve (AUCC). If we express the subset size as the fraction of the whole data set, then the value of the area under the curve becomes limited to the range [0, 1] and can be used as an universal measure for comparing complexity of different data sets.</ns0:p></ns0:div> <ns0:div><ns0:head>Conditional complexity curve</ns0:head><ns0:p>The complexity curve methodology presented so far deals with the complexity of a data set as a whole.</ns0:p><ns0:p>While this approach gives information about data structure, it may assess complexity of the classification task incorrectly. This is because data distribution inside each of the classes may vary greatly from the overall distribution. For example, when the number of classes is larger, or the classes are imbalanced, a random sample large enough to represent the whole data set may be too small to represent some of the classes. To take this into account we introduce conditional complexity curve. We calculate it by splitting each data sample according to the class value and taking the arithmetic mean of the complexities of each sub-sample. Algorithm 2 presents the exact procedure.</ns0:p><ns0:p>Comparison of standard complexity curve and conditional complexity curve for iris data set is given by Figure <ns0:ref type='figure'>1</ns0:ref>. This data set has 3 distinct classes. Our expectation is that estimating conditional distributions for each class would require larger data samples than estimating the overall distribution. Shape of the conditional complexity curve is consistent with this expectation: it is less steep than the standard curve and has larger AUCC value.</ns0:p></ns0:div> <ns0:div><ns0:head>PROPERTIES</ns0:head><ns0:p>To support validity of the proposed method, we perform an in-depth analysis of its properties. We start from purely mathematical analysis giving some intuitions on complexity curve convergence rate and identifying border cases. 
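Before turning to these analyses, the estimation procedure described above (whitening, per-attribute Gaussian KDE, and Hellinger distances averaged over random subsets) can be made concrete with a short sketch. This is a hedged illustration rather than the reference implementation used in the experiments: the helper names are ours, SciPy's gaussian_kde is used with its default Scott bandwidth instead of the modified rule, the optional ICA step is omitted, and numerical details such as grid integration and eigenvalue clipping are simplifications.

```python
import numpy as np
from scipy.stats import gaussian_kde


def whiten(X, eps=1e-12):
    """Whitening transform Y = XW built from the eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    vals, P = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = P @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ P.T
    return Xc @ W


def hellinger_independent(a, b, grid_size=200):
    """Squared Hellinger distance between two samples under the attribute independence
    assumption: H^2 = 1 - prod_i (1 - H_i^2), with each one-dimensional H_i^2
    computed from Gaussian KDEs evaluated on a common grid."""
    prod = 1.0
    for j in range(a.shape[1]):
        lo = min(a[:, j].min(), b[:, j].min())
        hi = max(a[:, j].max(), b[:, j].max())
        grid = np.linspace(lo, hi, grid_size)
        pa = gaussian_kde(a[:, j])(grid)
        pb = gaussian_kde(b[:, j])(grid)
        pa /= np.trapz(pa, grid)            # renormalise on the finite grid
        pb /= np.trapz(pb, grid)
        h2 = 0.5 * np.trapz((np.sqrt(pa) - np.sqrt(pb)) ** 2, grid)
        prod *= max(0.0, 1.0 - h2)
    return 1.0 - prod


def complexity_curve(X, K=20, steps=30, seed=0):
    """Mean Hellinger distance between random subsets and the whole (whitened) set,
    in the spirit of Algorithm 1."""
    rng = np.random.default_rng(seed)
    Xw = whiten(np.asarray(X, dtype=float))
    curve = []
    for n in np.linspace(3, len(Xw), steps, dtype=int):
        dists = [hellinger_independent(Xw, Xw[rng.choice(len(Xw), n, replace=False)])
                 for _ in range(K)]
        curve.append((n, np.mean(dists), np.std(dists) / np.sqrt(K)))
    return curve   # (subset size, mean distance, standard error) triples
```

The area under the curve can then be approximated by integrating the mean distances over the subset fraction, which corresponds to the AUCC values reported later.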
Then we perform experiments with toy artificial data sets testing basic assumptions behind complexity curve. After that we compare it experimentally with other complexity data measures and show its usefulness in explaining classifier performance. (a) For j in 1 . . . K:</ns0:p><ns0:formula xml:id='formula_23'>i. Draw subset S j i &#8838; D I such that |S j i | = i. ii. Split S j</ns0:formula><ns0:p>i according to the class into S j,1 i , S j,2 i , . . . , S j,C i . iii. From S j,1 i , S j,2 i , . . . , S j,C i estimate probability distributions Q j,1 i , Q j,2 i , . . . , Q j,C i . iv. Calculate mean Hellinger distance:</ns0:p><ns0:formula xml:id='formula_24'>l j i = 1 C &#8721; C k=1 H 2 (P k , Q j,k i ).</ns0:formula><ns0:p>(b) Calculate mean m i and standard error s i :</ns0:p><ns0:formula xml:id='formula_25'>m i = 1 K K &#8721; j=1 l j i s i = 1 K K &#8721; j=1 m i &#8722; l j i 2</ns0:formula><ns0:p>Conditional complexity curve is a plot of m i &#177; s i vs i.</ns0:p></ns0:div> <ns0:div><ns0:head>Mathematical properties 245</ns0:head><ns0:p>Drawing a random subset S n from a finite data set D of size N corresponds to sampling without replacement.</ns0:p><ns0:p>Let assume that the data set contains k distinct values {v 1 , v 2 , . . . , v k } occurring with frequencies P = (p 1 , p 2 , . . . , p k ). Q n = (q 1 , q 2 , . . . , q k ) will be a random vector which follows a multivariate hypergeometric distribution.</ns0:p><ns0:formula xml:id='formula_26'>q i = 1 n &#8721; y&#8712;S n 1{y = v i }</ns0:formula><ns0:p>The expected value for any single element is:</ns0:p><ns0:formula xml:id='formula_27'>E[q i ] = p i</ns0:formula><ns0:p>The probability of obtaining any specific vector of frequencies:</ns0:p><ns0:formula xml:id='formula_28'>P (Q n = (q 1 , q 2 , . . . , q k )) = p 1 N q 1 n p 2 N q 2 n &#8226; &#8226; &#8226; p k N q k n N n with &#8721; k i=1 q i = 1.</ns0:formula></ns0:div> <ns0:div><ns0:head>246</ns0:head><ns0:p>We will consider the simplest case of discrete probability distribution estimated through frequency counts without using the independence assumption. In such case complexity curve is by definition:</ns0:p><ns0:formula xml:id='formula_29'>CC(n) = E[H 2 (P, Q n )]</ns0:formula><ns0:p>It is obvious that CC(N) = 0 because when n = N we draw all available data. This means that complexity curve always converges. We can ask whether it is possible to say anything about the rate of this convergence. This is the question about the upper bound on the tail of hypergeometric distribution. Such bound is given by Hoeffding-Chv&#225;tal inequality <ns0:ref type='bibr' target='#b5'>(Chv&#225;tal, 1979;</ns0:ref><ns0:ref type='bibr' target='#b27'>Skala, 2013)</ns0:ref>. For the univariate case it has the following form: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_30'>P (|q i &#8722; p i | &#8805; &#948; ) &#8804; 2e &#8722;2&#948;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>which generalises to a multivariate case as:</ns0:p><ns0:formula xml:id='formula_31'>P (|Q n &#8722; P| &#8805; &#948; ) &#8804; 2ke &#8722;2&#948; 2 n</ns0:formula><ns0:p>where |Q n &#8722; P| is the total variation distance. Since H 2 (P, Q n ) &#8804; |Q n &#8722; P| this guarantees that complexity 247 curve converges at least as fast.</ns0:p></ns0:div> <ns0:div><ns0:head>248</ns0:head><ns0:p>Now we will consider a special case when n = 1. In this situation the multivariate hypergeometric distribution is reduced to a simple categorical distribution P. 
In such case the expected Hellinger distance is:</ns0:p><ns0:formula xml:id='formula_32'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i &#8730; 2 k &#8721; j=1 &#8730; p j &#8722; 1{ j = k} 2 = k &#8721; i=1 p i &#8730; 2 1 &#8722; p i + ( &#8730; p i &#8722; 1) 2 = k &#8721; i=1 p i 1 &#8722; &#8730; p i</ns0:formula><ns0:p>This corresponds to the first point of complexity curve and determines its overall steepness.</ns0:p></ns0:div> <ns0:div><ns0:head>249</ns0:head><ns0:p>Theorem: E[H 2 (P, Q 1 )] is maximal for a given k when P is an uniform categorical distribution over k categories, i.e.:</ns0:p><ns0:formula xml:id='formula_33'>E[H 2 (P, Q 1 )] = k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; 1 &#8722; 1 k Proof:</ns0:formula><ns0:p>We will consider an arbitrary distribution P and the expected Hellinger distance</ns0:p><ns0:formula xml:id='formula_34'>E[H 2 (P, Q 1 )].</ns0:formula><ns0:p>We can modify this distribution by choosing two states l and k occurring with probabilities p l and p k such as that p l &#8722; p k is maximal among all pairs of states. We will redistribute the probability mass between the two states creating a new distribution P . The expected Hellinger distance for the distribution P will be:</ns0:p><ns0:formula xml:id='formula_35'>E[H 2 (P , Q 1 )] = k &#8721; i=1,i =k,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l &#8722; a</ns0:formula><ns0:p>where a and p k + p l &#8722; a are new probabilities of the two states in P . We will consider a function</ns0:p><ns0:formula xml:id='formula_36'>f (a) = a 1 &#8722; &#8730; a + (p k + p l &#8722; a) 1 &#8722; &#8730; p k + p l</ns0:formula><ns0:p>and look for its maxima.</ns0:p><ns0:formula xml:id='formula_37'>&#8706; f (x) &#8706; a = &#8722; 1 &#8722; &#8730; p k + p l &#8722; a + &#8730; p k + p l &#8722; a 4 1 &#8722; &#8730; p k + p l &#8722; a + 1 &#8722; &#8730; a &#8722; &#8730; a 4 1 &#8722; &#8730; a</ns0:formula><ns0:p>The derivative is equal to 0 if and only if a = p k +p l 2 . We can easily see that:</ns0:p><ns0:formula xml:id='formula_38'>f (0) = f (p k + p l ) = (p k + p l ) 1 &#8722; &#8730; p k + p l &lt; (p k + p l ) 1 &#8722; p k + p l 2</ns0:formula><ns0:p>This means that f (a) reaches its maximum for a = p k +p l 2 . From that we can conclude that for any distribution P if we produce distribution P by redistributing probability mass between two states equally the following holds:</ns0:p><ns0:formula xml:id='formula_39'>E[H 2 (P , Q 1 )] &#8805; E[H 2 (P, Q 1 )]</ns0:formula><ns0:p>If we repeat such redistribution arbitrary number of times the outcome distribution converges to uniform 250 distribution. This proves that the uniform distribution leads to the maximal expected Hellinger distance 251 for a given number of states.</ns0:p></ns0:div> <ns0:div><ns0:head>252</ns0:head><ns0:p>Theorem: Increasing the number of categories by dividing an existing category into two new categories always increases the expected Hellinger distance, i.e.</ns0:p><ns0:formula xml:id='formula_40'>k &#8721; i=1 p i 1 &#8722; &#8730; p i &#8804; k &#8721; i=1,i =l p i 1 &#8722; &#8730; p i + a 1 &#8722; &#8730; a + (p l &#8722; a) 1 &#8722; &#8730; p l &#8722; a 10/25 a 1 &#8722; &#8730; p l &#8722; a &#8804; a 1 &#8722; &#8730; a</ns0:formula><ns0:p>which concludes the proof.</ns0:p><ns0:p>From the properties stated by these two theorems we can gain some intuitions about complexity curves in general. 
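The gist of the first theorem, namely that a uniform categorical distribution yields the largest expected one-draw distance, can also be checked with a small simulation. The sketch below uses the squared Hellinger distance as defined at the beginning of the section; the helper name and the example distributions are ours.

```python
import numpy as np


def expected_one_draw_distance(p, draws=200_000, seed=0):
    """Monte-Carlo estimate of E[H^2(P, Q_1)] for a categorical distribution P,
    where Q_1 puts all its mass on a single category drawn from P.  For such a
    point mass the squared Hellinger distance is H^2(P, delta_j) = 1 - sqrt(p_j)."""
    p = np.asarray(p, dtype=float)
    j = np.random.default_rng(seed).choice(len(p), size=draws, p=p)
    return np.mean(1.0 - np.sqrt(p[j]))


print(expected_one_draw_distance([0.25, 0.25, 0.25, 0.25]))   # uniform over 4 states, approx. 0.50
print(expected_one_draw_distance([0.70, 0.10, 0.10, 0.10]))   # skewed over 4 states, approx. 0.32
```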
First, by looking at the formula for the uniform distribution E[H 2 (P,</ns0:p><ns0:formula xml:id='formula_41'>Q 1 )] = 1 &#8722; 1 k we can see that when k = 1 E[H 2 (P, Q 1 )] = 0 and when k &#8594; &#8734; E[H 2 (P, Q 1 )] &#8594; 1.</ns0:formula><ns0:p>The complexity curve will be less steep if the variables in the data set take multiple values and each value occurs with equal probability. This is consistent with our intuition: we need a larger sample to cover such space and collect information. For smaller number of distinct values or distributions with mass concentrated mostly in a few points smaller sample will be sufficient to represent most of the information in the data set.</ns0:p></ns0:div> <ns0:div><ns0:head>Complexity curve and the performance of an unbiased model</ns0:head><ns0:p>To confirm validity of the assumptions behind complexity curve we performed experiments with artificial data generated according to known models. For each of the data set we selected an appropriate classifier which is known to be unbiased with respect to the given model. In this way it was possible to observe if the variance error component is indeed upper bounded by the complexity curve. To train the classifiers we used the same setting as when calculating the complexity curve: classifiers were trained on random subsets and tested on the whole data set. We fitted the learning curve to the complexity curve by matching first and last points of both curves. Then we observed the relation of the two curves in between.</ns0:p><ns0:p>The first generated data set followed the logistic model (logit data set). Matrix X (1000 observations, 12 attributes) contained values drawn from the normal distribution with mean 0 and standard deviation 1. Class vector Y was defined as follows:</ns0:p><ns0:formula xml:id='formula_42'>P(Y |x) = e &#946; x 1 + e &#946; x</ns0:formula><ns0:p>where &#946; = (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0). All attributes were independent and conditionally independent. Since Y values were determined in a non-deterministic way, there was some noise presentclassification error of the logistic regression classifier trained and tested on the full data set was larger than zero. What would happen if the attribute conditional independence assumption was broken? To answer this question we generated another type of data modelled after multidimensional chessboard (chessboard data set). X matrix contained 1000 observations and 2, 3 attributes drawn from an uniform distribution on range [0, 1). Class vector Y had the following values:</ns0:p><ns0:formula xml:id='formula_43'>0 if &#931; m i=0 x i</ns0:formula><ns0:p>s is even 1 otherwise where s was a grid step in our experiments set to 0.5. There is clearly strong attribute dependence, but since all parts of decision boundary are parallel to one of the attributes this kind of data can be modelled with a decision tree with no bias. Figure <ns0:ref type='figure'>4</ns0:ref> presents complexity curves and error curves for different dimensionalities of chessboard data.</ns0:p><ns0:p>Here the classification error becomes larger than indicated by complexity curve. The more dimensions, the more dependencies between attributes violating complexity curve assumptions. For 3 dimensional chessboard the classification problem becomes rather hard and the observed error decreases slowly, but the complexity curve remains almost the same as for 2 dimensional case. 
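For reference, the two kinds of toy data described above can be generated along the following lines. This is a sketch assuming NumPy; the chessboard labelling follows the parity-of-cell-indices reading of the definition, and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# logit data: 1000 observations, 12 standard-normal attributes, labels drawn
# from the logistic model P(Y | x) = exp(beta x) / (1 + exp(beta x))
beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
X_logit = rng.normal(size=(1000, 12))
prob = 1.0 / (1.0 + np.exp(-X_logit @ beta))
y_logit = (rng.random(1000) < prob).astype(int)


# chessboard data: uniform attributes on [0, 1), class given by the parity of
# the sum of cell indices on a grid with step s = 0.5
def chessboard(n=1000, dims=2, step=0.5):
    X = rng.random((n, dims))
    y = (np.floor(X / step).sum(axis=1) % 2).astype(int)
    return X, y


X_chess2, y_chess2 = chessboard(dims=2)
X_chess3, y_chess3 = chessboard(dims=3)
```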
This shows that the complexity curve is not expected to be a good predictor of classification accuracy in the problems where a lot of high-dimensional attribute dependencies occur, for example in epistatic domains in which the importance of one attribute depends on the values of the other.</ns0:p><ns0:p>Results of experiments with controlled artificial data sets are consistent with our theoretical expectations. Basing on them we can introduce a general interpretation of the difference between complexity curve and learning curve: learning curve below the complexity curve is an indication that the algorithm is able to build a good model without sampling the whole domain, limiting the variance error component.</ns0:p><ns0:p>On the other hand, learning curve above the complexity curve is an indication that the algorithm includes complex attributes dependencies in the constructed model, promoting the variance error component.</ns0:p></ns0:div> <ns0:div><ns0:head>Impact of whitening and ICA</ns0:head><ns0:p>To evaluate the impact of the proposed preprocessing techniques (whitening and ICA -Independent Component Analysis) on complexity curves we performed experiments with artificial data. In the first experiment we generated two data sets of 300 observations and with 8 attributes distributed according to Student's t distribution with 1.5 degrees of freedom. In one data set all attributes were independent, in the 8I w (AUCC: 0.17) 8R w (AUCC: 0.12) Figure <ns0:ref type='figure'>5</ns0:ref>. Complexity curves for whitened data (dashed lines) and not whitened data (solid lines). Areas under the curves are given in the legend. 8I -set of 8 independent random variables with Student's t distribution. 8R -one random variable with Student's t distribution repeated 8 times. 8I w -whitened 8I. 8R w -whitened 8R.</ns0:p><ns0:p>other the same attribute was repeated 8 times. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure'>5</ns0:ref> shows complexity curves calculated before and after whitening transform. We can see that whitening had no significant effect on the complexity curve of the independent set. In the case of the dependent set complexity curve calculated after whitening decreases visibly faster and the area under the curve is smaller. This is consistent with our intuitive notion of complexity: a data set with highly correlated or duplicated attributes should be significantly less complex.</ns0:p><ns0:p>In the second experiment two data sets with 100 observations and 4 attributes were generated. The first data set was generated from the continuous uniform distribution on interval [0, 2], the second one from the discrete (categorical) uniform distribution on the same interval. To both sets small Gaussian noise was added. Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> presents complexity curves for original, whitened and ICA-transformed data.</ns0:p><ns0:p>Among the original data sets the intuitive notion of complexity is preserved: area under the complexity curve for categorical data is smaller. 
The difference disappears for the whitened data but is again visible in the ICA-transformed data.

These simple experiments are by no means exhaustive, but they confirm the usefulness of the chosen signal processing techniques (data whitening and Independent Component Analysis) in complexity curve analysis.

Complexity curve variability and outliers

The complexity curve is based on the expected Hellinger distance, and the estimation procedure includes some variance. The natural assumption is that the variability caused by the sample size is greater than the variability resulting from a specific composition of a sample. Otherwise averaging over samples of the same size would not be meaningful. This assumption is already present in standard learning curve methodology, where classifier accuracy is plotted against training set size. We expect that the exact variability of the complexity curve will be connected with the presence of outliers in the data set. Such influential observations will have a huge impact depending on whether they are included in a sample or not.

To verify whether these intuitions were true, we constructed two new data sets by artificially introducing outliers into the WINE data set. In WINE001 we modified 1% of the attribute values by multiplying them by a random number from the range (-10, 10). In WINE005 5% of the values were modified in this manner.

Table 1. Data complexity measures used in experiments.

Figure 7 presents conditional complexity curves for all three data sets. The WINE001 curve indeed has a higher variance and is less regular than the WINE curve. The WINE005 curve is characterised not only by a higher variance but also by a larger AUCC value. This means that adding so much noise increased the overall complexity of the data set significantly.

These results support our hypothesis that large variability of the complexity curve signifies the presence of highly influential observations in the data set. This makes the complexity curve a valuable diagnostic tool for such situations. However, it should be noted that our method is unable to distinguish between important outliers and plain noise. To obtain this kind of insight one has to employ different methods.

Comparison with other complexity measures

The set of data complexity measures developed by Ho and Basu (2002) and extended by Ho et al. (2006) continues to be used in experimental studies to explain the performance of various classifiers (Díez-Pastor et al., 2015; Mantovani et al., 2015). We decided to compare the complexity curve experimentally with those measures. Descriptions of the measures used are given in Table 1.

According to our hypothesis, the conditional complexity curve should be robust in the context of class imbalance. To demonstrate this property we used for the comparison 81 imbalanced data sets used previously in the study by Díez-Pastor et al. (2015).
These data sets come originally from the HDDT (Cieslak et al., 2011) and KEEL (Alcalá et al., 2010) repositories. We selected only binary classification problems. The list of data sets with their properties is presented in Supplementary document S1 as Table S1 and Table S2.

For each data set we calculated the area under the complexity curve using the previously described procedure and the values of the other data complexity measures using the DCOL software (Orriols-Puig et al., 2010). Pearson's correlation was then calculated for all the measures. As the T2 measure seemed to have non-linear characteristics destroying the correlation, an additional column, log T2, was added to the comparison. Results are presented in Figure 8. Clearly, AUCC is mostly correlated with the log T2 measure. This is to be expected, as both measures are concerned with sample size in relation to attribute structure. The difference is that T2 takes into account only the number of attributes, while AUCC also considers the shape of the distributions of the individual attributes. Correlations of AUCC with the other measures are much lower, and it can be assumed that they capture different aspects of data complexity and may be potentially complementary.

The next step was to show that the information captured by AUCC is useful for explaining classifier performance. In order to do so, we trained a number of different classifiers on the 81 benchmark data sets and evaluated their performance using a random train-test split with proportion 0.5, repeated 10 times. The performance measure used was the area under the ROC curve. We selected three linear classifiers (naïve Bayes with a Gaussian kernel, linear discriminant analysis (LDA), and logistic regression) and two families of non-linear classifiers whose complexity is controlled by a parameter: k-nearest neighbours (k-NN) with different values of k and decision trees with different depth limits.

Correlations between AUCC, log T2, and classifier performance are presented in Table 2. Most of the correlations are weak and do not reach statistical significance; however, some general tendencies can be observed. As can be seen, AUC ROC scores of linear classifiers have very little correlation with AUCC and log T2. This may be explained by the high-bias and low-variance nature of these classifiers: they are not strongly affected by data scarcity, but their performance depends on other factors. This is especially true for the LDA classifier, which has the weakest correlation among the linear classifiers.

In the k-NN classifier complexity depends on the k parameter: with low k values it is more prone to variance error, with larger k it is prone to bias if the sample size is not large enough (Domingos, 2000). Both AUCC and log T2 seem to capture the effect of sample size well in the case of large k values (correlations -0.2249 and 0.2395 for 35-NN). However, for k = 1 the correlation with AUCC is stronger (-0.1256 vs 0.0772).

The depth parameter in a decision tree also regulates complexity: the larger the depth, the more the classifier is prone to variance error and the less to bias error. This suggests that AUCC should be more strongly correlated with the performance of deeper trees.
On the other hand, complex decision trees explicitly model attribute interdependencies ignored by complexity curve, which may weaken the correlation. This is observed in the obtained results: for a decision stub (tree of depth 1), which is low-variance high-bias classifier, correlation with AUCC and log T2 is very weak. For d = 3 and d = 5 it becomes visibly stronger, and then for larger tree depth it again decreases. It should be noted that with large tree depth, as with small k values in k-NN, AUCC has stronger correlation with the classifier performance than log T2.</ns0:p><ns0:p>A slightly more sophisticated way of applying data complexity measures is an attempt to explain classifier performance relative to some other classification method. In our experiments LDA is a good candidate for reference method since it is simple, has low variance and is not correlated with either AUCC or log T2. Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> presents correlations of both measures with classifier performance relative to LDA. Here we can see that correlations for AUCC are generally higher than for log T2 and reach significance for the majority of classifiers. Especially in the case of decision tree AUCC explains relative performance better than log T2 (correlation 0.1809 vs -0.0303 for d = inf).</ns0:p><ns0:p>Results of the presented correlation analyses demonstrate the potential of complexity curve to complement the existing complexity measures in explaining classifier performance. As expected from theoretical considerations, there is a relation between how well AUCC correlates with classifier performance and the classifier's position in the bias-variance spectrum. It is worth noting that despite the attribute independence assumption the complexity curve method proved useful for explaining performance of complex non-linear classifiers.</ns0:p></ns0:div> <ns0:div><ns0:head>Large p, small n problems</ns0:head><ns0:p>There is a special category of machine learning problems in which the number of attributes p is large with respect to the number of samples n, perhaps even order of magnitudes larger. Many important biological data sets, most notably data from microarray experiments, fall into this category <ns0:ref type='bibr' target='#b15'>(Johnstone and Titterington, 2009)</ns0:ref>. To test how our complexity measure behaves in such situations, we calculated AUCC scores for a few microarray data sets and compared them with AUC ROC scores of some simple classifiers. Classifiers were evaluated as in the previous section. Detailed information about the data sets is given in Supplementary document S1 as Table <ns0:ref type='table' target='#tab_1'>S3</ns0:ref>.</ns0:p><ns0:p>Results of the experiment are presented in Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>. As expected, with the number of attributes much larger than the number of observations data is considered by our metric as extremely scarce -values of AUCC are in all cases above 0.95. On the other hand, AUC ROC classification performance is very varied between data sets with scores approaching or equal to 1.0 for LEUKEMIA and LYMPHOMA data sets, and scores around 0.5 baseline for PROSTATE. 
This is because despite the large number of dimensions the form of the optimal decision function can be very simple, utilising only a few of available dimensions.</ns0:p><ns0:p>Complexity curve does not consider the shape of decision boundary at all and thus does not reflect differences in classification performance.</ns0:p><ns0:p>From this analysis we concluded that complexity curve is not a good predictor of classifier performance for data sets containing a large number of redundant attributes, as it does not differentiate between important and unimportant attributes. The logical way to proceed in such case would be to perform some form of feature selection or dimensionality reduction on the original data, and then calculate complexity curve in the reduced dimensions.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATIONS Interpreting complexity curves</ns0:head><ns0:p>In order to prove the practical applicability of the proposed methodology, and show how complexity curve plot can be interpreted, we performed experiments with six simple data sets from UCI Machine Learning Repository <ns0:ref type='bibr' target='#b10'>(Frank and Asuncion, 2010)</ns0:ref>. The sets were chosen only as illustrative examples.</ns0:p><ns0:p>Basic properties of the data sets are given in Supplementary document as Table <ns0:ref type='table' target='#tab_2'>S4</ns0:ref>. For each data set we calculated conditional complexity curve. The curves are presented in Figure <ns0:ref type='figure' target='#fig_15'>9</ns0:ref>. Learning curves of CART decision tree (DT) were included for comparison. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>On most of the benchmark data sets we can see that complexity curve upper bounds the DT learning curve. The bound is relatively tight in the case of GLASS and IRIS, and looser for BREAST-CANCER-WISCONSIN and WINE data set. A natural conclusion is that a lot of variability contained in this last data set and captured by the Hellinger distance is irrelevant to the classification task. The most straightforward explanation would be the presence of unnecessary attributes not correlated with the class which can be ignored altogether. This is consistent with the results of various studies in feature selection. <ns0:ref type='bibr' target='#b4'>Choubey et al. (1996)</ns0:ref> identified that in GLASS data 7-8 attributes (78-89%) are relevant, in IRIS data 3 attributes (75%), and in BREAST-CANCER-WISCONSIN 5-7 attributes (56-78%). Similar results were obtained for BREAST-CANCER-WISCONSIN in other studies, which found that only 4 of the original attributes (44%) contribute to the classification <ns0:ref type='bibr' target='#b24'>(Ratanamahatana and Gunopulos, 2003;</ns0:ref><ns0:ref type='bibr' target='#b17'>Liu et al., 1998)</ns0:ref>. <ns0:ref type='bibr' target='#b8'>Dy and Brodley (2004)</ns0:ref> obtained best classification results for WINE data set with 7 attributes (54%).</ns0:p><ns0:p>On MONKS-1 and CAR complexity curve is no longer a proper upper bound on DT learning curve.</ns0:p><ns0:p>This is an indication of models relying heavily on attribute interdependencies to determine the correct class. This is not surprising: both MONKS-1 and CAR are artificial data sets with discrete attributes devised for evaluation of rule-based and tree-based classifiers <ns0:ref type='bibr' target='#b31'>Thrun et al. 
(1991)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Bohanec and Rajkovi&#269; (1988)</ns0:ref>.</ns0:p><ns0:p>Classes are defined with logical formulas utilising relations of multiple attributes rather than single values -clearly the attributes are interdependent. In that context complexity curve can be treated as a baseline for independent attribute situation and generalisation curve as diagnostic tool indicating the presence of interdependencies.</ns0:p><ns0:p>Besides the slope of the complexity curve we can also analyse its variability. We can see that the shape of WINE complexity curve is very regular with small variance in each point, while the GLASS curve displays much higher variance. This mean that the observations in GLASS data set are more diverse and some observations (or their combinations) are more important for representing data structure than the other.</ns0:p><ns0:p>This short analysis demonstrate how to use complexity curves to compare properties of different data sets. Here only decision tree was used as reference classifier. The method can be easily extended to include multiple classifiers and compare their performance. We present such an extended analysis in Supplementary Document S2.</ns0:p></ns0:div> <ns0:div><ns0:head>Data pruning with complexity curves</ns0:head><ns0:p>The problem of data pruning in the context of machine learning is defined as reducing the size of training sample in order to reduce classifier training time and still achieve satisfactory performance. It becomes extremely important as the data grows and a) does not fit the memory of a single machine, b) training times of more complex algorithms become very long.</ns0:p><ns0:p>A classic method for performing data pruning is progressive sampling -training the classifier on data samples of increasing size as long as its performance increases. <ns0:ref type='bibr' target='#b23'>Provost et al. (1999)</ns0:ref> analysed various schedules for progressive sampling and recommended geometric sampling, in which sample size is multiplied by a specified constant in each iteration, as the reasonable strategy in most cases. Geometric sampling uses samples of sizes a i n 0 , where n 0 -initial sample size, a -multiplier, i -iteration number.</ns0:p><ns0:p>In our method instead of training classifier on the drawn data sample we are probing the complexity curve. We are not trying to detect the convergence of classifier accuracy, but just search for a point on the curve corresponding to some reasonably small Hellinger distance value, e.g. 0.005. This point designates the smallest data subset which still contains the required amount of information.</ns0:p><ns0:p>In this setting we were not interested in calculating the whole complexity curve but just in finding the minimal data subset, which still contains most of the original information. The search procedure should be as fast as possible, since the goal of the data pruning is to save time spent on training classifiers. To comply with these requirements we constructed a criterion function of the form f</ns0:p><ns0:formula xml:id='formula_44'>(x) = H 2 (G x , D) &#8722; t,</ns0:formula><ns0:p>where D denotes a probability distribution induced by the whole data set, G x a distribution induced by random subset of size x and t is the desired Hellinger distance. We used classic Brent method <ns0:ref type='bibr' target='#b3'>(Brent, 1973)</ns0:ref> to find a root of the criterion function. 
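A minimal sketch of this search is given below. It assumes the whiten and hellinger_independent helpers from the earlier sketch, smooths the stochastic criterion by averaging a few subsets per evaluated size, and assumes that the distance at the lower bracket exceeds t so that Brent's method sees a sign change; the function name and parameter defaults are illustrative.

```python
import numpy as np
from scipy.optimize import brentq


def prune_by_complexity(X, t=0.005, repeats=3, seed=0):
    """Find a small subset whose expected Hellinger distance to the whole (whitened)
    data set drops below t, by locating a root of f(x) = H^2(G_x, D) - t with
    Brent's method."""
    rng = np.random.default_rng(seed)
    Xw = whiten(np.asarray(X, dtype=float))

    def criterion(size):
        size = int(round(size))
        dists = [hellinger_independent(Xw, Xw[rng.choice(len(Xw), size, replace=False)])
                 for _ in range(repeats)]
        return np.mean(dists) - t

    n_min = int(round(brentq(criterion, 10, len(Xw), xtol=0.01 * len(Xw))))
    return X[rng.choice(len(X), n_min, replace=False)]
```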
In this way data complexity was calculated only for the points visited by Brent's algorithm. To speed up the procedure even further, we used the standard complexity curve instead of the conditional one and settled for the whitening transform as the only preprocessing technique.

To verify if this idea is of practical use, we performed an experiment with three bigger data sets from the UCI repository. Their basic properties are given in Supplementary document S1 as Table S5.

For all data sets we performed a stratified 10-fold cross-validation experiment. The training part of each split was pruned according to our criterion function with t = 0.005 (CC pruning) or using geometric progressive sampling with multiplier a = 2 and initial sample size n_0 = 100 (PS pruning). Achieving the same accuracy as with CC pruning was used as a stop criterion for progressive sampling. Classifiers were trained on pruned and unpruned data and evaluated on the testing part of each cross-validation split. Standard error was calculated for the obtained values. We used machine learning algorithms from the scikit-learn library (Pedregosa et al., 2011), and the rest of the procedure was implemented in Python with the help of the NumPy and SciPy libraries. Calculations were done on a workstation with an 8-core Intel Core i7-4770 3.4 GHz CPU working under Arch GNU/Linux.

In several cases pruning resulted in huge speed-ups. With the exception of SVC on the LED data set, complexity curve pruning performed better than progressive sampling in such cases. Unsurprisingly, real speed-ups were visible only for computationally intensive methods such as Support Vector Machines, Random Forest and Gradient Boosted Decision Trees. For simple methods such as Naïve Bayes, Decision Tree or Logistic Regression, fitting the model on the unpruned data is often faster than applying a pruning strategy.

These results present complexity curve pruning as a reasonable model-free alternative to progressive sampling. It is more stable and often less demanding computationally. It does not require an additional convergence detection strategy, which is always an important consideration when applying progressive sampling in practice. What is more, complexity curve pruning can also be easily applied in the context of online learning, when the data is being collected on the fly. After appending a batch of new examples to the data set, the Hellinger distance between the old data set and the extended one can be calculated. If the distance is smaller than the chosen threshold, the process of data collection can be stopped.

CONCLUSIONS

In this article we introduced a measure of data complexity targeted specifically at data sparsity. This distinguishes it from other measures focusing mostly on the shape of the optimal decision boundary in classification problems. The introduced measure has the form of a graphical plot: the complexity curve. We showed that it exhibits desirable properties through a series of experiments on both artificially constructed and real-world data sets. We demonstrated that the complexity curve captures non-trivial characteristics of the data sets and is useful for explaining the performance of high-variance classifiers. With the conditional complexity curve it was possible to perform a meaningful analysis even with heavily imbalanced data sets.

Then we demonstrated how the complexity curve can be used in practice for data pruning (reducing the size of the training set) and that it is a feasible alternative to the progressive sampling technique. This result is immediately applicable to all situations in which data overabundance starts to pose a problem. For instance, it is possible to perform a quick exploration study on a pruned data set before fitting computationally expensive models on the whole set.
Pruning result may also provide a suggestion for choosing proper train-test split ratio or number of folds of cross-validation in the evaluation procedure.</ns0:p><ns0:p>We argue that new measures of data characteristics, such as complexity curves, are needed to move away from a relatively static view of classification task to a more dynamic one. It is worth to investigate how various algorithms are affected by certain data manipulations, for example when new data become available or the underlying distribution shifts. This would facilitate the development of more adaptive and universal algorithms capable of working in a dynamically changing environment.</ns0:p><ns0:p>Experiments showed that in the presence of large number of redundant attributes not contributing to the classification task complexity curve does not correlate well with classifier performance. It correctly identifies dimensional sparseness of the data, but that is misleading since the actual decision boundary may still be very simple. Because of this as the next step in our research we plan to apply similar probabilistic approach to measure information content of different attributes in a data set and use that knowledge for performing feature selection. Graphs analogical to complexity curves and generalisation curves would be valuable tools for understanding characteristics of data sets and classification algorithms related to attribute structure.</ns0:p><ns0:p>Another limitation our method is the assumption of lack of attributes interdependencies. While the presence of small dependencies does not disrupt the analysis, when strong high dimensional dependencies are present the complexity curve does not correlate with classifier performance well. This means that it is infeasible to use for some domains, for example highly epistatic problems in bioinformatics.</ns0:p><ns0:p>Our long-term goal is to gain a better understanding of the impact of data set structure, both in terms of contained examples and attributes, and use that knowledge to build heterogeneous classification ensembles. We hope that a better control over data sets used in experiments will allow to perform a more systematic study of classifier diversity and consensus methods.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>measuring shape of the decision boundary and the amount class overlap. Topological measures concerned 2/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>most cases we do not know the underlining probability distribution P representing the problem and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P D as the reference distribution. Any subset S &#8834; D can be treated as a data sample and a probability distribution 3/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>y m -values taken by Y . It has been shown that error of an arbitrary classification or regression model can be decomposed into three parts: Error = Bias + Variance + Noise 5/25 PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>step size) control the trade-off between the precision of the calculated curve and the computation time. In all experiments, unless stated otherwise, we used values K = 20, d = |D| 60 . Regular shapes of the obtained curves did not suggest the need for using larger values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1 Figure 1 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure1presents a sample complexity curve (solid lines). It demonstrates how by drawing larger subsets of the data we get better approximations of the original distribution, as indicated by the decreasing Hellinger distance. The logarithmic decrease of the distance is characteristic: it means that with a relatively small number of samples we can recover general characteristics of the distribution, but to model the details precisely we need a lot more data points. The shape of the curve is very regular, with just</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016) Manuscript to be reviewed Computer Science Algorithm 2 Procedure for calculating conditional complexity curve. D -original data set, C -number of classes, N -number of subsets, K -number of samples. 1. Transform D with whitening transform and/or ICA to obtain D I . 2. Split D I according to the class into D 1 I , D 2 I , . . . , D C I . 3. From D 1 I , D 2 I , . . . , D C I estimate probability distributions P 1 , P 2 , . . . , P C . 4. For i in 1 . . . |D I | with a step size |D I | N :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 2 Figure 2 .Figure 3 .Figure 4 .</ns0:head><ns0:label>2234</ns0:label><ns0:figDesc>Figure 2 presents the complexity curve and the adjusted error of logistic regression for the generated data. After ignoring the noise error component, we can see that the variance error component is indeed upper bounded by the complexity curve. Different kind of artificial data represented multidimensional space with parallel stripes in one dimension (stripes data set). It consisted of X matrix with 1000 observations and 10 attributes drawn from an uniform distribution defined on the range [0, 1). Class values Y depended only on the values of one of the attributes: for values lesser than 0.25 or greater than 0.75 the class was 1, for other values the class was 0. This kind of relation can be naturally modelled with a decision tree. All the attributes are again independent and conditionally independent.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 presents complexity curve and the adjusted error of decision tree classifier on the generated data. Once again the assumptions of complexity curve methodology are satisfied and the complexity curve indeed an upper bounds the classification error.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Complexity curves for whitened data (dashed lines), not whitened data (solid lines) and ICA-transformed data (dotted lines). Areas under the curves are given in the legend. 
U -data sampled from uniform distribution. C -data sampled from categorical distribution. U w -whitened U. C wwhitened C. U ICA -U w after ICA. C ICA -C w after ICA.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Complexity curves for WINE and its counterparts with introduced outliers. For the sake of clarity only contours were drawn.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>79 -0.091-0.079-0.033 -0.27 -0.41 0.2 -0.17 -0.22 -0.15 -0.16 -0.14 0.33 0.092 0.75 1 -0.49 -0.087-0.047 -0.05 -0.13 -0.22 0.093 -0.14 -0.11 -0.16 -0.19 -0.15 0.15 0.099 1 0.75 -0.023 -0.81 -0.51 0.15 -0.62 -0.42 0.095 0.32 0.55 0.36 0.5 0.37 0.51 1 0.099 0.092 -0.37 -0.49 -0.23 0.25 -0.55 -0.59 0.015 0.3 0.31 0.53 0.51 0.51 1 0.51 0.15 0.33 -0.017 -0.35 -0.16 0.51 -0.49 -0.59 0.34 0.74 0.2 0.98 0.89 1 0.51 0.37 -0.15 -0.14 0.055 -0.47 -0.28 0.33 -0.55 -0.56 0.21 0.61 0.26 0.85 1 0.89 0.51 0.5 -0.19 -0.16 -0.058 -0.36 -0.16 0.52 -0.46 -0.56 0.38 0.81 0.21 1 0.85 0.98 0.53 0.36 -0.16 -0.15 0.25 -0.63 -0.21 0.017 -0.2 -0.07 -0.46 0.25 1 0.21 0.26 0.2 0.31 0.55 -0.11 -0.22</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Pearson's correlations between complexity measures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Decision tree d = 10 -0.1035 0.0695 Decision tree d = 15 -0.0995 0.0375 Decision tree d = 20 -0.0921 0.0394 Decision tree d = 25 -0.0757 0.0298 Decision tree d = 30 -0.0677 0.0227 Decision tree d = inf -0.0774 0.0345 Table 2. Pearson's correlations coefficients between classifier AUC ROC performances and complexity measures. The largest absolute value in each row is printed in bold. tree d = 10 0.2175 -0.0838 LDA -Decision tree d = 15 0.2146 -0.0356 LDA -Decision tree d = 20 0.2042 -0.0382 LDA -Decision tree d = 25 0.1795 -0.0231 LDA -Decision tree d = 30 0.1636 -0.0112 LDA -Decision tree d = inf 0.1809 -0.0303</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Conditional complexity curves for six different data sets from UCI Machine Learning repository with areas under complexity curve (AUCC) reported: A -CAR, AUCC: 0.08, B -MONKS-1, AUCC: 0.05, C -IRIS, AUCC: 0.19, D -BREAST-CANCER-WISCONSIN, AUCC: 0.13, E -GLASS, AUCC: 0.44, F -WINE, AUCC: 0.35.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>huge speed-ups. With the exception of SVC on LED data set, complexity curve pruning performed better than progressive sampling in such cases. Unsurprisingly, real speed-ups were visible only for computationally intensive methods such as Support Vector Machines, Random Forest and Gradient Boosted Decision Trees. For simple methods such as Na&#239;ve Bayes, Decision Tree or Logistic Regression fitting the model on the unpruned data is often faster than applying pruning strategy.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>comes from an inability of the applied model to represent the true relation present in data, variance comes from an inability to estimate the optimal model parameters from the data sample, noise is inherent to the solved task and irreducible. Since our complexity measure is model agnostic it clearly does not include bias component. 
As it does not take into account the dependent variable, it cannot measure noise either. All that is left to investigate is the relation between our complexity measure and variance component of the classification error.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The variance error component is connected with overfitting, when the model fixates over specific</ns0:cell></ns0:row><ns0:row><ns0:cell>properties of a data sample and looses generalisation capabilities over the whole problem domain. If the</ns0:cell></ns0:row><ns0:row><ns0:cell>training sample represented the problem perfectly and the model was fitted with perfect optimisation</ns0:cell></ns0:row><ns0:row><ns0:cell>procedure variance would be reduced to zero. The less representative the training sample is for the whole</ns0:cell></ns0:row><ns0:row><ns0:cell>problem domain, the larger the chance for variance error.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Pearson's correlations coefficients between classifier AUC ROC performances relative to LDA performance and complexity measures. The largest absolute value in each row is printed in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>AUCC</ns0:cell><ns0:cell>1-NN</ns0:cell><ns0:cell cols='3'>5-NN DT d-10 DT d-inf</ns0:cell><ns0:cell>LDA</ns0:cell><ns0:cell>NB</ns0:cell><ns0:cell>LR</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>ADENOCARCINOMA 0.9621 0.6354 0.5542</ns0:cell><ns0:cell>0.5484</ns0:cell><ns0:cell cols='3'>0.5172 0.6995 0.5021 0.7206</ns0:cell></ns0:row><ns0:row><ns0:cell>BREAST2</ns0:cell><ns0:cell cols='3'>0.9822 0.5869 0.6572</ns0:cell><ns0:cell>0.6012</ns0:cell><ns0:cell cols='3'>0.6032 0.6612 0.5785 0.6947</ns0:cell></ns0:row><ns0:row><ns0:cell>BREAST3</ns0:cell><ns0:cell cols='3'>0.9830 0.6788 0.7344</ns0:cell><ns0:cell>0.6274</ns0:cell><ns0:cell cols='3'>0.6131 0.7684 0.6840 0.7490</ns0:cell></ns0:row><ns0:row><ns0:cell>COLON</ns0:cell><ns0:cell cols='3'>0.9723 0.7395 0.7870</ns0:cell><ns0:cell>0.6814</ns0:cell><ns0:cell cols='3'>0.6793 0.7968 0.5495 0.8336</ns0:cell></ns0:row><ns0:row><ns0:cell>LEUKEMIA</ns0:cell><ns0:cell cols='3'>0.9611 1.0000 0.9985</ns0:cell><ns0:cell>0.7808</ns0:cell><ns0:cell cols='3'>0.8715 0.9615 0.8300 1.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>LYMPHOMA</ns0:cell><ns0:cell cols='3'>0.9781 0.9786 0.9976</ns0:cell><ns0:cell>0.8498</ns0:cell><ns0:cell cols='3'>0.8660 0.9952 0.9700 1.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>PROSTATE</ns0:cell><ns0:cell cols='3'>0.9584 0.5931 0.4700</ns0:cell><ns0:cell>0.4969</ns0:cell><ns0:cell cols='3'>0.5238 0.4908 0.5000 0.4615</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Areas under conditional complexity curve (AUCC) for microarray data sets along AUC ROC values for different classifiers. 
k-NNk-nearest neighbour, DT -CART decision tree, LDA -linear discriminant analysis, NB -na&#239;ve Bayes, LR -logistic regression.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>rate: 0.19 &#177; 0.02 Mean CC compression time: 4.01 &#177; 0.14 LinearSVC 0.86 &#177; 0.00 0.86 &#177; 0.00 27.71 &#177; 0.35 6.69 &#177; 0.52 10.73 &#177; 8.65 0.55 &#177; 0.49 GaussianNB 0.80 &#177; 0.01 0.80 &#177; 0.01</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell>Score</ns0:cell><ns0:cell>CC score</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>CC time</ns0:cell><ns0:cell>PS time</ns0:cell><ns0:cell>PS rate</ns0:cell></ns0:row><ns0:row><ns0:cell>waveform</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>Mean CC compression 0.02 &#177; 0.00 RF 0.86 &#177; 0.00 0.85 &#177; 0.00 33.49 &#177; 0.04 SVC 0.86 &#177; 0.00 0.86 &#177; 0.00 211.98 &#177; 0.93 Tree 0.78 &#177; 0.00 0.77 &#177; 0.00 3.06 &#177; 0.06 Logit 0.86 &#177; 0.00 0.86 &#177; 0.00 1.75 &#177; 0.06 GBC 0.86 &#177; 0.00 0.86 &#177; 0.00 112.34 &#177; 0.12 24.59 &#177; 2.30 66.66 &#177; 37.99 0.53 &#177; 0.43 4.02 &#177; 0.14 0.03 &#177; 0.01 0.01 &#177; 0.00 9.29 &#177; 0.76 18.06 &#177; 10.75 0.46 &#177; 0.37 9.08 &#177; 1.21 21.22 &#177; 28.34 0.33 &#177; 0.42 4.50 &#177; 0.20 1.40 &#177; 0.70 0.37 &#177; 0.28 4.21 &#177; 0.17 0.60 &#177; 0.62 0.30 &#177; 0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>led</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>Mean CC compression rate: 0.04 &#177; 0.01 Mean CC compression time: 1.38 &#177; 0.03</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>LinearSVC GaussianNB 0.74 &#177; 0.00 0.73 &#177; 0.00 0.74 &#177; 0.00 0.74 &#177; 0.00 RF 0.74 &#177; 0.00 0.73 &#177; 0.00 SVC 0.74 &#177; 0.00 0.74 &#177; 0.00 Tree 0.74 &#177; 0.00 0.73 &#177; 0.00 Logit 0.74 &#177; 0.00 0.74 &#177; 0.00 GBC 0.74 &#177; 0.00 0.73 &#177; 0.00</ns0:cell><ns0:cell>4.68 &#177; 0.10 0.02 &#177; 0.00 1.77 &#177; 0.01 82.16 &#177; 0.86 0.03 &#177; 0.00 2.03 &#177; 0.08 51.26 &#177; 0.40</ns0:cell><ns0:cell cols='3'>1.49 &#177; 0.04 1.38 &#177; 0.03 1.47 &#177; 0.03 1.56 &#177; 0.07 10.04 &#177; 17.52 0.26 &#177; 0.44 0.47 &#177; 1.04 0.13 &#177; 0.34 0.07 &#177; 0.02 0.26 &#177; 0.44 0.83 &#177; 0.25 0.05 &#177; 0.04 1.38 &#177; 0.03 0.04 &#177; 0.01 0.09 &#177; 0.10 1.42 &#177; 0.03 0.30 &#177; 0.44 0.17 &#177; 0.33 3.57 &#177; 0.30 6.32 &#177; 4.05 0.04 &#177; 0.04</ns0:cell></ns0:row><ns0:row><ns0:cell>adult</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>Mean CC compression rate: 0.33 &#177; 0.02 Mean CC compression time: 0.93 &#177; 0.05</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>LinearSVC GaussianNB 0.81 &#177; 0.01 0.81 &#177; 0.01 0.69 &#177; 0.19 0.67 &#177; 0.20 RF 0.86 &#177; 0.01 0.85 &#177; 0.01 SVC 0.76 &#177; 0.00 0.76 &#177; 0.00 Tree 0.81 &#177; 0.00 0.81 &#177; 0.01 Logit 0.80 &#177; 0.00 0.80 &#177; 0.00 GBC 0.86 &#177; 0.00 0.86 &#177; 0.00</ns0:cell><ns0:cell cols='2'>1.79 &#177; 0.08 0.01 &#177; 0.00 2.04 &#177; 0.01 81.70 &#177; 0.56 10.52 &#177; 2.31 1.53 &#177; 0.08 0.93 &#177; 0.05 1.60 &#177; 0.09 0.12 &#177; 0.00 0.97 &#177; 0.05 0.08 &#177; 0.01 0.96 &#177; 0.05 2.33 &#177; 0.01 1.80 &#177; 0.09</ns0:cell><ns0:cell cols='2'>0.30 &#177; 0.84 0.18 &#177; 0.52 0.01 &#177; 0.00 
0.02 &#177; 0.02 2.11 &#177; 1.18 0.69 &#177; 0.59 5.06 &#177; 7.17 0.16 &#177; 0.19 0.10 &#177; 0.08 0.72 &#177; 0.72 0.05 &#177; 0.07 0.42 &#177; 0.68 2.37 &#177; 1.22 0.67 &#177; 0.57</ns0:cell></ns0:row></ns0:table><ns0:note>The training part of 21/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Obtained accuracies and training times of different classification algorithms on unpruned and pruned data sets. Score corresponds to classifier accuracy, time to classifier training time (including pruning procedure), rate to compression rate. CC corresponds to data pruning with complexity curves, PS to data pruning with progressive sampling. LinearSVC -linear support vector machine, GaussianNBna&#239;ve Bayes with gaussian kernel, RF -random forest 100 CART trees, SVC -support vector machine with radial basis function kernel, Tree -CART decision tree, Logit -logistic regression, GBC -gradient boosting classifier with 100 CART trees.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>presents measured times and obtained accuracies. As can be seen, the difference in classification accuracies between pruned and unpruned training data is negligible. CC compression rate differs for the three data sets, which suggests that they are of different complexity: for LED data only 5% is needed to perform successful classification, while ADULT data is pruned at 33%. CC compression rate is rather stable with only small standard deviation, but PS compression rate is characterised with huge variance. In this regard, complexity curve pruning is preferable as a more stable pruning criterion.In all cases when training a classifier on the unpruned data took more than 10 seconds, we observed</ns0:figDesc><ns0:table /><ns0:note>22/25PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='18'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016)</ns0:note> <ns0:note place='foot' n='25'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9443:3:0:NEW 30 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
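To make the complexity-curve idea discussed above concrete, the following is a minimal NumPy sketch: estimate a reference distribution from the whole data set, then measure with the Hellinger distance how well random subsets of increasing size reproduce it. It is only an illustration under stated assumptions: attributes are discretised into fixed-range histograms and treated as independent, and the per-attribute Hellinger distances are simply averaged, which may differ from the estimator and aggregation actually used in the paper.

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete distributions over the same bins.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def histogram_dist(values, bins, value_range):
    # Histogram-based probability estimate with a small smoothing term.
    counts, _ = np.histogram(values, bins=bins, range=value_range)
    counts = counts.astype(float) + 1e-9
    return counts / counts.sum()

def complexity_curve(X, n_points=20, n_repeats=20, bins=10, seed=0):
    """Mean Hellinger distance between subsample and full-data distributions vs. subset size."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    ranges = [(X[:, j].min(), X[:, j].max()) for j in range(d)]
    reference = [histogram_dist(X[:, j], bins, ranges[j]) for j in range(d)]
    sizes = np.linspace(max(2, n // n_points), n, n_points, dtype=int)
    curve = []
    for size in sizes:
        repeats = []
        for _ in range(n_repeats):
            idx = rng.choice(n, size=size, replace=False)
            per_attr = [hellinger(histogram_dist(X[idx, j], bins, ranges[j]), reference[j])
                        for j in range(d)]
            repeats.append(np.mean(per_attr))  # assumption: average over (independent) attributes
        curve.append(np.mean(repeats))
    return sizes, np.array(curve)

# Example: attribute matrix of the 'stripes' data set described above
# (1000 observations, 10 uniform attributes on [0, 1)).
X = np.random.default_rng(1).uniform(0.0, 1.0, size=(1000, 10))
sizes, curve = complexity_curve(X)
```

The decreasing curve returned by this toy version mirrors the logarithmic shape described in the paper: small subsets already recover the coarse characteristics of the distribution, while the fine detail requires many more samples.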
" Dear Jaume Bacardit, We are grateful for the latest comments. We understand your concerns regarding dropping generalisation curves from the manuscript. We now moved all the text and figures to Supplementary Document S2, which can be seen as an extended case study of complexity curves application. We believe these changes do not diminish the contribution of our work and make the paper easier to follow. The detailed responses are provided below. I can see from the manuscript changes that the authors have cut out a significant chunk of the manuscript, related to the generalisation curves. Hence, some of the initial claims about the contribution of this work are not valid anymore. I think that the authors need to make a case for which their paper still deserves publication. Indeed, we decided to shorten the manuscript by skipping less relevant analyses and background information not essential for the main message of the article. The focus our work is on complexity curve understood as a measure of data complexity. In the paper we provide theoretical justifications behind it, analyse its properties, compare it experimentally with the existing complexity measures, and demonstrate its applications to data pruning and analysing the properties of individual data sets. In our opinion those results are sufficient to show the merits of complexity curve technique. In the article we claim that data complexity analysis is beneficial for comparing classification algorithms, which is a view widely expressed in the existing literature on the subject. We believe that this claim is still valid and the analyses included in section “Comparison with other complexity measures” provide enough support for it. The generalisation curve methodology can be seen as an extended version of the analysis performed in section “Interpreting complexity curves”. In the revised manuscript we describe it as so. What is important is that all conclusions drawn from the analysis with generalisation curves can be drawn from analysing complexity curve and learning curve alone. We have included those in section “Interpreting complexity curves”. Moreover, I don't understand the line 'Values larger than 0.22 or smaller than -0.22 are significant at a = 0:05 significance level.' in the caption of tables 2 and 3. It does not match the rows marked as bold in the tables. Moreover, were corrections for multiple comparisons applied? Thank you for this comment. We now bolded the largest absolute value in each row. We also removed the reference to significance level: Pearson’s correlation was only used to calculate relative scores for different complexity measures and p-values are misleading in this context. We hope that you consider the revised manuscript appropriate for publication. Sincerely, Dariusz Plewczynski and Julian Zubek "
Here is a paper. Please give your review comments after reading it.
265
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Image data collection and labelling is costly or difficult in many real applications.</ns0:p><ns0:p>Generating diverse and controllable images using conditional generative adversarial networks (GANs) for data augmentation from a small dataset is promising but challenging as deep convolutional neural networks need a large training dataset to achieve reasonable performance in general. However, unlabeled and incomplete features (e.g., unintegral edges, simplified lines, hand-drawn sketches, discontinuous geometry shapes, etc.) can be conveniently obtained through pre-processing the training images and can be used for image data augmentation. This paper proposes a conditional GAN framework for facial image augmentation using a very small training dataset and incomplete or modified edge features as conditional input for diversity. The proposed method defines a new domain or space for refining interim images to prevent overfitting caused by using a very small training dataset and enhance the tolerance of distortions caused by incomplete edge features, which effectively improves the quality of facial image augmentation with diversity. Experimental results have shown that the proposed method can generate highquality images of good diversity when the GANs are trained using very sparse edges and a small number of training samples. Compared to the state-of-the-art edge-to-image translation methods that directly convert sparse edges to images, when using a small training dataset, the proposed conditional GAN framework can generate facial images with desirable diversity and acceptable distortions for dataset augmentation and significantly outperform the existing methods in terms of the quality of synthesised images, evaluated by Fr&#233;chet Inception Distance (FID) and Kernel Inception Distance (KID) scores.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Deep convolutional neural networks generally require a large amount of training data to improve accuracy on unseen data or testing data <ns0:ref type='bibr' target='#b47'>(Tan et al., 2018)</ns0:ref>. For many practical applications, it is expensive to collect a large amount of training data for deep learning. Data augmentation is often used to generate more training data. Traditional approaches for data augmentation include geometric transformations, such as translation, scaling, flip, rotation, etc., especially for image data <ns0:ref type='bibr' target='#b36'>(Mikolajczyk &amp; Grochowski, 2018)</ns0:ref>. However, the diversity introduced by traditional augmentation methods is limited and insufficient for many applications. The motivation of this work is to augment a small image dataset by making use of conditional edge features extracted from the available training images, and it can be expected that the synthesised images are more diverse and less distortive than those obtained from traditional methods. In recent years, conditional generative adversarial networks (GANs), which can generate photorealistic images from conditional data, have become one of the most popular research fields in image synthesis. 
Conditional inputs, such as edges, mark points, masks, semantic maps, labels and so on, can be used to manipulate the images generated by GANs, making the synthesised images not only diverse but also controllable <ns0:ref type='bibr' target='#b23'>(Isola et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b48'>Torrado et al., 2020)</ns0:ref>. Image-toimage translation methods using conditional GANs directly learn the pixel mapping relationship between conditional edge features and real images <ns0:ref type='bibr' target='#b32'>(Lin et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b51'>Wang &amp; Gupta, 2016)</ns0:ref>. Although image-to-image methods based on conditional GANs have been developed for controllable image synthesis, there are still several problems that should be resolved when applying them on a small training dataset: 1) Compared with unconditional GANs, a limitation of conditional GANs is that the output images must be generated from the corresponding conditional inputs, hence a clear mapping relationship between input and output should be correctly established. Corresponding mapping relationships are hard to be discovered, especially when only a very small training dataset is available for deep neural networks to learn. 2) With a small training dataset, the training process for image-to-image translation is easy to converge but difficult to obtain high-quality results due to the overfitting problem and insufficient information about the underlying data distribution, whether the used conditional features are of high quality or not <ns0:ref type='bibr' target='#b11'>(Dimitrakopoulos, Sfikas &amp; Nikou, 2020)</ns0:ref>, the discriminator will overfit the training data, and the generator would produce unexpected distortions in the generated images in the validation or application phase <ns0:ref type='bibr' target='#b0'>(Arjovsky &amp; Bottou, 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Gu et al., 2019)</ns0:ref>. 3) Training GANs using a small dataset must deal with the inevitable problem of mode collapse <ns0:ref type='bibr' target='#b58'>(Zhao, Cong &amp; Carin, 2020)</ns0:ref>, which implies that the GANs may learn the training data distribution from a limited number samples only but overlook other useful training data <ns0:ref type='bibr' target='#b45'>(Srivastava et al., 2017)</ns0:ref>. Other issues, such as non-convergence and instability, would also worsen the quality of the generated images <ns0:ref type='bibr' target='#b43'>(Salimans et al., 2016)</ns0:ref>. Clearly, it is a challenging task to synthesise photorealistic images using conditional GANs based on incomplete conditional features and a small number of training images. Edge-based image-to-image translation using conditional GANs has the advantage of introducing diversity in image data augmentation, but it is challenging in terms of generating high-quality photorealistic images. Extracted edges cannot be regarded as perfect conditional features that support various advanced visual tasks and contain all visual information of potential perceptual relevance <ns0:ref type='bibr' target='#b14'>(Elder, 1999)</ns0:ref>. 
Since edges generally contain incomplete information, such as unintegral geometry, simplified lines, discontinuous shapes, missing components, and undefined contours, it is hard for image-to-image translation methods to map edges to realistic images without clear conditional information.</ns0:p><ns0:p>To simply demonstrate the impact of the number of training images and incomplete conditional edges on the quality of the images generated using conditional GANs, some preliminary experimental results are shown in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>. It can be observed that the level of detail in the input edge features and the number of training images have considerable influence on the quality of both training and inference results, which is more serious on the inference results though because the input edge features used in the inference process have not been used in the training process. In this paper, a new image-to-image translation framework using conditional GANs is proposed, which can generate diverse photorealistic images from limited edge features after training with a small number of training images. Instead of deepening the convolutional layers or increasing the number of parameters, the proposed conditional GAN framework can learn additional relationships between incomplete edges and corresponding images, because regional binarisation and segmentation masks are used as new reference information, which can be obtained automatically by image processing. In particular, the proposed method can beneficially obtain extra pixel correlations between conditional edge inputs and the corresponding ground truth images to mitigate the influence of the overfitting problem during training. If the conditional GANs can efficiently learn from informative conditional inputs, such as colour, texture, edges, labels, etc., then it would be effortless to generate corresponding photorealistic image outputs <ns0:ref type='bibr' target='#b12'>(Dosovitskiy &amp; Brox, 2016;</ns0:ref><ns0:ref type='bibr' target='#b55'>Wei et al., 2018)</ns0:ref>. A new network structure is proposed in this paper to divide the image synthesis task into two stages: the first stage transforms conditional input with incomplete edges into refined images as the new conditional input for the second stage, whose pixel values are obtained by combining the information from segmentation masks and binarised images. The second stage transforms the refined images into photorealistic image outputs. The experimental results have demonstrated that the proposed method can generate high-quality diverse images even with a very small training dataset and very sparse edge features as conditional input. In addition, the generated images do not contain large distortions from incomplete or modified edge inputs for data augmentation purposes. 
The contributions of this paper are as follows:</ns0:p><ns0:p>&#61623; In order to deal with the problem of distortions in the images generated by conditional Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61623; For the first time, the proposed method uses the mixture of pixel values of both binary images and segmentation masks to enhance the conditional input in an interim domain for refining images, which can integrate facial components, including eyes, nose, mouth, etc., so as to introduce diversity and enhance the quality of the images generated by conditional GANs trained using a very small training dataset.</ns0:p><ns0:p>&#61623; A facial image augmentation method using conditional GANs has been proposed, which can generate photorealistic facial images of diversity from incomplete edges or handdrawn sketches. Compared with existing edge-to-image translation methods without ideal conditional inputs, the proposed method is tolerant to various incomplete edges as conditional inputs and able to generate diverse images of higher quality in terms of Fr&#233;chet Inception Distance (FID) <ns0:ref type='bibr' target='#b18'>(Heusel et al., 2017)</ns0:ref> and Kernel Inception Distance (KID) <ns0:ref type='bibr' target='#b4'>(Binkowski et al., 2018)</ns0:ref>. This paper is organised as follows: In section 2, the related work is reviewed, including methods for image data augmentation, image-to-image translation, challenges in training GANs using small datasets. In section 3, the proposed methods, including the proposed conditional GAN framework, image pre-processing for generating the interim domain, and training strategies, are described in detail. Section 4 describes the experiment design, including data preparation and implementation details. The experimental results are presented and the performance of the proposed conditional GAN framework is evaluated qualitatively and quantitatively in section 5. Finally, conclusions, limitations and future work suggestions are presented in section 6.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>The method proposed in this paper aims to use incomplete or modified edge features to augment small facial image datasets. The method was based on image-to-image translation using conditional GANs. This section introduces related techniques for image synthesis using conditional GANs.</ns0:p></ns0:div> <ns0:div><ns0:head>Image Data Augmentation</ns0:head><ns0:p>Image data augmentation is applied in many applications to increase dataset size and data diversity. Deep neural networks are not easy to be trained well using small training datasets due to overfitting problems <ns0:ref type='bibr' target='#b3'>(Bartlett et al., 2020)</ns0:ref>. One of the solutions to overcome overfitting is training data augmentation, which intends to boost the diversity of a small training dataset <ns0:ref type='bibr' target='#b44'>(Shorten &amp; Khoshgoftaar, 2019)</ns0:ref>. Traditional methods such as rotation, reflecting, translation, scaling, cropping, blurring, grey scaling and colour converting, have been commonly used for image data augmentation to reduce overfitting when training deep neural networks for image classification applications. Although these traditional techniques can produce similar images of high-quality, they rarely enlarge the feature diversity in the original images. 
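As a point of reference for the traditional augmentation techniques listed above (rotation, reflection, cropping, grey scaling, colour conversion, etc.), a minimal NumPy sketch is shown below. The function name and parameter choices are illustrative only; the sketch mainly shows why such transforms add limited feature diversity: every output is a rearrangement or rescaling of pixels already present in the source image.

```python
import numpy as np

def traditional_augment(img, rng=None):
    """Return a few label-preserving variants of one image.

    img: H x W x 3 uint8 array. These transforms only rearrange or rescale
    existing pixels, so they do not introduce genuinely new facial features.
    """
    rng = rng or np.random.default_rng(0)
    variants = []
    variants.append(img[:, ::-1])                       # horizontal flip (reflection)
    variants.append(np.rot90(img, k=1))                 # 90-degree rotation
    h, w = img.shape[:2]
    top, left = rng.integers(0, h // 8), rng.integers(0, w // 8)
    variants.append(img[top:top + 7 * h // 8, left:left + 7 * w // 8])  # random crop
    gray = img.mean(axis=2, keepdims=True).astype(img.dtype)
    variants.append(np.repeat(gray, 3, axis=2))         # grey scaling
    variants.append(np.clip(img.astype(np.int16) + 20, 0, 255).astype(np.uint8))  # brightness shift
    return variants
```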
Therefore, developing novel methods for image data augmentation, which can synthesise not only diverse but also photorealistic images based on a small image dataset, is a very important but challenging task.</ns0:p><ns0:p>Many GANs have been proposed for image synthesis <ns0:ref type='bibr' target='#b15'>(Gatys, Ecker &amp; Bethge, 2016;</ns0:ref><ns0:ref type='bibr' target='#b21'>Iizuka, Simo-Serra &amp; Ishikawa, 2017)</ns0:ref>. Since conditional GANs provide alternative methods to edit images as well as manipulate generative attributes, they have been applied to generate high-quality images of diverse features <ns0:ref type='bibr'>(Lin et al., 2018a)</ns0:ref>. Image synthesis using conditional GANs can boost data augmentation in many real applications, but it is hard to generate high-quality diverse images when the training dataset is small.</ns0:p></ns0:div> <ns0:div><ns0:head>Image-to-image Translation and Image Synthesis Using Conditional GANs</ns0:head><ns0:p>Image-to-image translation is a type of image synthesis using conditional GANs with specific forms of conditions, such as videos and images <ns0:ref type='bibr' target='#b2'>(Azadi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Szeto et al., 2021)</ns0:ref>, scenes <ns0:ref type='bibr' target='#b1'>(Ashual &amp; Wolf, 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Johnson, Gupta &amp; Li, 2018)</ns0:ref>, or segmentation masks <ns0:ref type='bibr' target='#b9'>(Cherian &amp; Sullivan, 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Park et al., 2019)</ns0:ref>. The conditional inputs can be transferred from a source domain to a target domain by using supervised learning techniques. The main concept of image-to-image translation is to learn the data mapping relationships <ns0:ref type='bibr' target='#b33'>(Liu, Breuel &amp; Kautz, 2017)</ns0:ref>. Image-to-image translation methods can automatically generate images between the corresponding domains <ns0:ref type='bibr' target='#b38'>(Mo, Cho &amp; Shin, 2019)</ns0:ref> and discover the dependencies within pairs of images to translate features into realistic images. Image-to-image translation methods provide a prominent approach to image synthesis with diverse results by using controllable conditional features <ns0:ref type='bibr' target='#b23'>(Isola et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b62'>Zhu et al., 2017)</ns0:ref>. With remarkable advantages in image-to-image translation, edge-to-image synthesis has achieved visually pleasing performance <ns0:ref type='bibr' target='#b56'>(Yi et al., 2017)</ns0:ref>. Compared with other conditional forms, edges are among the easiest and simplest features to obtain in computer vision, and hand-drawn sketches can be regarded as a specific form of edge features <ns0:ref type='bibr' target='#b8'>(Chen &amp; Hays, 2018)</ns0:ref>. Since edges usually contain critical information, such as gradients, shapes, contours, profiles and boundaries, they provide simple and direct depictions of objects <ns0:ref type='bibr' target='#b6'>(Chen et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Eitz et al., 2011)</ns0:ref>. The benefit of using edge features is that they are flexible to modify for introducing diversity in data augmentation.
However, in contrast to labelled images, edges as conditional inputs often preserve insufficient information (e.g., textures, colours, brightness, labels, etc.), which makes it hard to generate desirable high-quality images using edge-to-image translation. Face synthesis requires integral contours that faithfully reflect the inputs and connections to the realistic context <ns0:ref type='bibr' target='#b31'>(Li et al., 2020)</ns0:ref>. Incomplete features with missing parts or undefined components will affect the quality of images generated by conditional GANs <ns0:ref type='bibr' target='#b24'>(Jo &amp; Park, 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Karras et al., 2018)</ns0:ref>. To alleviate the influence of lacking ideal conditional inputs, this work proposes to reconstruct the missed features in the conditional input and transform limited edge information into refined images by introducing an interim domain to alleviate the negative effect of imperfect conditional features on the quality of the generated images.</ns0:p></ns0:div> <ns0:div><ns0:head>Challenges in Training GANs Using Small Training Dataset</ns0:head><ns0:p>Mode collapse is a major problem in training GANs using small training data, which makes it fail to switch training samples and thus the trained GANs would be over-optimised with a limited number of training samples only <ns0:ref type='bibr' target='#b39'>(Odena, Olah &amp; Shlens, 2017)</ns0:ref>. Due to the mode collapse problem, the generator is constructed with very limited training data, which makes the discriminator believe that the generative outputs are real instead of fake. This is one of the key PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:08:64945:1:0:NEW 20 Sep 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>causes that make it difficult for GANs to generate diverse results for data augmentation purposes <ns0:ref type='bibr' target='#b10'>(De Cao &amp; Kipf, 2018)</ns0:ref>. Another drawback in training GANs using a small training dataset is that it is hard to fine-tune all parameters to discover an optimal balance between the generator and discriminator. Both the generator and discriminator contain a large number of trainable parameters, which necessarily need enormous training data to prevent from overfitting problems <ns0:ref type='bibr' target='#b17'>(Gulrajani et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Mescheder, Geiger &amp; Nowozin, 2018)</ns0:ref>. Ideally, training a GAN requires a large amount of training data for optimising its parameters and reducing losses to generate photorealistic image outputs <ns0:ref type='bibr' target='#b54'>(Wang et al., 2018c)</ns0:ref>. Therefore, reducing the requirement for a large amount of training data is a grand challenge in using GANs for image data augmentation in which available training data is limited. Our previous preliminary work <ns0:ref type='bibr' target='#b19'>(Hung &amp; Gan, 2021)</ns0:ref> partly addressed the above challenges by proposing a new conditional GAN architecture. This paper substantially extends our preliminary work via further investigation and deeper analysis of much more experimental results.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods</ns0:head><ns0:p>Image-to-image translation methods find specific mapping relationships between source distribution and target distribution. 
In general, a small number of paired features may not comprehensively align with the source and target distributions using imperfect conditional inputs such as incomplete edges and a small training image dataset. Therefore, data refining in paired features can be adopted to expand mapping relationships based on a small training dataset. In this section, a new method is proposed for facial image synthesis based on a very small training image dataset. With incomplete conditional features in the source domain and small training data in the target domain, the mapping between source and target domains cannot be described by clear one-to-one relationships. The method proposed in this paper transfers the source domain to an interim domain for refining images with extra annotated information, in which newly defined images in the interim domain need to be generated based on a small training dataset. The interim domain can provide extra reference information to discover more mapping relationships between the source and target domains. Figure <ns0:ref type='figure'>2</ns0:ref> shows the proposed translation method using a small training dataset. It is difficult using a small training dataset to obtain a comprehensive view of correct mapping relationships between the source domain and target domain without sufficient representative training samples, as shown by the blue line in Figure <ns0:ref type='figure'>2</ns0:ref>. Even if changing the types of the conditional inputs, a similar situation remains as it is still difficult to learn correct mapping relationships, as demonstrated by the red line in Figure <ns0:ref type='figure'>2</ns0:ref>. For the purpose of comprehensively finding correct mapping relationships, extending the mapping relationships in an interim domain for refining images, as shown by the green dotted line in Figure <ns0:ref type='figure'>2</ns0:ref>, can reduce uncertainty caused by using a small training dataset and incomplete edge features as conditional inputs for data augmentation PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:08:64945:1:0:NEW 20 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science with diversity. This will be explained in more detail when introducing the proposed conditional GAN framework later. When training GANs using small training datasets, the following factors should be considered: 1) It is difficult to avoid distortions in the generated images and training imbalance with a small number of training images or insufficient diverse samples. 2) Through deep convolutional neural network structures, such as convolution, normalisation and downsampling, it is easy to lose spatial information and impractical to completely preserve the conditional information with a small number of training images <ns0:ref type='bibr' target='#b28'>(Kinoshita &amp; Kiya, 2020;</ns0:ref><ns0:ref type='bibr' target='#b49'>Wang et al., 2018a)</ns0:ref>. If the conditional inputs contain sparse, unclear, limited, discontinuous, or incomplete features, fine-tuning model parameters without distortions becomes much more difficult. 3) Using a small training dataset and limited conditional features will make the training easy to overfit but hard to obtain realistic results. Since many parameters in a deep convolutional neural network need to be fine-tuned, it is impossible to optimise all the parameter values using a small training dataset in terms of the generalisation ability of the trained deep neural network. 
To tackle the above problems in training GANs using a small number of training images, several training strategies are adopted in the proposed method, which are described as follows: Enlarging the diversity of source domain: The training of a GAN using a small training dataset can easily converge but frequently attain unrealistic inference results, mainly because of the overfitting problem. It is impossible for GANs to have a whole view of the target domain through training with a limited number of training images. For the goal of achieving photorealistic results, both the discriminator and generator should stay in an equilibrium balance. Increasing the data diversity and widening the mapping relationship between the source domain and target domain could help achieve the required balance between the generator and discriminator when using a small number of training images. In the proposed method, new reference information is created by image pre-processing, and the adoption of the interim domain for refining images can enlarge the diversity of the source domain. Double translation: Double translation strategy aims to decrease the chance of mode collapse in single translation approach and reduce the impact of the uncertainty due to using incomplete edges as conditional input, so as to alleviate the distortions caused by training with a small number of training images. For generating additional reference information, the proposed method combines binary images and segmentation masks to generate refined images as conditional input in the next translation. In the first translation, refined images with annotated facial components are generated from incomplete edge features. This translation is conducted between the source domain and the interim domain. The second translation is conducted between the interim domain and target domain, which can successively learn from the possible distortions in the first stage to avoid or alleviate negative distortions in the final outputs. Reusing the conditional information: Spatial information in the conditional edges can be easily lost during training in the convolutional neural layers, and the relationships between the source domain and target domain will become incomprehensive. In order to reduce the spatial information vanishing, edge features in the source domain can be reused in each translation.</ns0:p><ns0:p>Freezing weights: Weight freezing is a strategy to overcome the gradient vanishing problem during training, which often happens when using a small training dataset. If the provided training data cannot give the discriminator enough information to progress the generator, the gradient will become smaller and smaller when going from bottom to top layers of the network. Incomplete conditions would worsen the gradient vanishing problem and make it impossible to fine-tune the model parameters to obtain realistic results. Hence, freezing part of weights in separate training stages allows the discriminator to acquire information from each training stage rather than tuning all parameters at one time.</ns0:p></ns0:div> <ns0:div><ns0:head>The Proposed Conditional GAN Framework</ns0:head><ns0:p>To mitigate the output distortions caused by using a small training dataset and incomplete edge as conditional input, additional paired segmentation masks and regional binary images are used as reference information in the proposed method, which can enrich the mapping relationships between the source domain and target domain. 
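The double-translation and weight-freezing strategies described above can be summarised as a stage-wise training skeleton. The following PyTorch-style sketch is a hedged illustration, not the authors' implementation: the `g_step`/`d_step` callables, the dictionary-style batches and the epoch counts are placeholder assumptions; the point is only that the first U-net is trained against the refined (interim-domain) targets and then frozen while the second U-net is trained against the ground truth.

```python
import torch

def set_requires_grad(module, flag):
    # Freezing weights: exclude a sub-network's parameters from gradient updates.
    for p in module.parameters():
        p.requires_grad = flag

def train_two_stage(unet1, unet2, d1, d2, loader, g_step, d_step, epochs_per_stage=100):
    """g_step / d_step are callables performing one generator / discriminator update.

    Stage 1: edges -> refined images (interim domain), judged against the
             mixed binary / mask / ground-truth reference.
    Stage 2: refined images (plus reused edges) -> photorealistic outputs,
             with the stage-1 generator frozen.
    """
    for _ in range(epochs_per_stage):                      # stage 1
        for batch in loader:
            d_step(d1, unet1, batch, target="refined")
            g_step(d1, unet1, batch, target="refined")

    set_requires_grad(unet1, False)                        # freeze stage-1 weights
    unet1.eval()
    for _ in range(epochs_per_stage):                      # stage 2
        for batch in loader:
            with torch.no_grad():
                batch["refined_pred"] = unet1(batch["edges"])  # reuse conditional edges
            d_step(d2, unet2, batch, target="ground_truth")
            g_step(d2, unet2, batch, target="ground_truth")
```

All four strategies lean on the extra reference information produced by image pre-processing.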
Consequently, the proposed method creates additional data distributions from the small training dataset using image pre-processing, and the data in the interim domain provides more referable features than the original incomplete edges in the source domain. Two U-nets <ns0:ref type='bibr' target='#b20'>(Ibtehaz &amp; Rahman, 2020;</ns0:ref><ns0:ref type='bibr' target='#b41'>Ronneberger, Fischer &amp; Brox, 2015)</ns0:ref> are adopted in the proposed conditional GAN framework for image-to-image translation. This structure can achieve better performance when training conditional GANs using small training data for two reasons: on one hand, during the training process the U-nets create images based on the special concatenating structure, which is beneficial to retain the matched features from limited conditional features for integral perceptions in convolutional layers. On the other hand, the U-net structure is simple and beneficial to generate images without using very deep convolutional layers, which is critical to alleviate the gradient vanishing problem during training using small training data and incomplete edges. The proposed framework also reuses the conditional input information to strengthen the input features at each training stage, and freezing weights for separate networks at each training stage can prevent from gradient vanishing as well. To sum up, the proposed conditional GAN framework can alleviate the problems in training GANs using small training data by intensifying the conditional information in the source domain. An overview of the proposed conditional GAN framework is shown in Figure <ns0:ref type='figure'>3</ns0:ref> and described as follows.</ns0:p><ns0:p>The proposed model consists of three parts: 1) image pre-processing, 2) generators and 3) discriminators. The two generators use the same convolutional structure of the U-net, both of which down-sample and then up-sample to the original size of input images. All convolutional layers use convolution kernels of size 3 &#215; 3 <ns0:ref type='bibr' target='#b57'>(Yu et al., 2019)</ns0:ref>, and normalisation is applied to all convolutional layers except for the input and output layers <ns0:ref type='bibr' target='#b59'>(Zhou &amp; Yang, 2019)</ns0:ref>. In the training phase, the first generator is used to create refined images based on the original sparse edges and ground truth. The refined images are referred from image pre-processing, which contain features related to texture, colour, shape of different facial components. The second generator is designed to improve the synthetic process to generate photorealistic images from the interim domain. In the inference phase, the generators use the fine-tuned parameters to generate photorealistic images from conditional edges that may have not been seen during training. The two discriminators have the same task of distinguishing between real and fake images: the first one is to identify generated images in terms of refined images, and the other is in terms of the ground truth. Image Pre-processing and Refining Image refining is essential for providing informative conditional features since incomplete edges may contain much unidentical information representing the same facial component. This uncertainty makes it difficult for conditional GANs to comprehensively find pixel relevance in different domains. 
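A compact PyTorch sketch of a U-net generator along the lines described above: 3 × 3 convolutions, down-sampling followed by up-sampling back to the input resolution, skip connections by concatenation, and no normalisation on the input and output layers. The number of levels, the channel widths and the use of InstanceNorm are assumptions for illustration rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def down(in_ch, out_ch, norm=True):
    # 3x3 strided convolution halves the spatial resolution.
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(in_ch, out_ch):
    # Nearest-neighbour upsampling followed by a 3x3 convolution doubles the resolution.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=3, base=64):
        super().__init__()
        self.d1 = down(in_ch, base, norm=False)   # input block: no normalisation
        self.d2 = down(base, base * 2)
        self.d3 = down(base * 2, base * 4)
        self.d4 = down(base * 4, base * 8)
        self.u1 = up(base * 8, base * 4)
        self.u2 = up(base * 8, base * 2)          # input = u1 output concatenated with d3 skip
        self.u3 = up(base * 4, base)              # input = u2 output concatenated with d2 skip
        self.u4 = up(base * 2, base)              # input = u3 output concatenated with d1 skip
        self.out = nn.Conv2d(base, out_ch, kernel_size=3, padding=1)  # output: no normalisation

    def forward(self, x):
        s1 = self.d1(x)                              # e.g. 256 -> 128
        s2 = self.d2(s1)                             # 128 -> 64
        s3 = self.d3(s2)                             # 64 -> 32
        b = self.d4(s3)                              # 32 -> 16
        y = self.u1(b)                               # 16 -> 32
        y = self.u2(torch.cat([y, s3], dim=1))       # 32 -> 64
        y = self.u3(torch.cat([y, s2], dim=1))       # 64 -> 128
        y = self.u4(torch.cat([y, s1], dim=1))       # 128 -> 256
        return torch.tanh(self.out(y))
```

In the proposed framework two such generators would be instantiated, the first producing the refined interim images and the second producing the photorealistic outputs; how well the first stage works depends heavily on how informative the conditional edges are.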
For instance, an unclear 'black circle' with incomplete edges can represent a nose, an ear or an eye; even with a powerful network, it is difficult to learn well from such an ambiguous 'circle' as conditional input without any other crucial information (e.g., colour, type, angle, position, texture, sub-components, brightness, layout, shape, etc.). A refining process can be employed within an image-to-image translation method, which strengthens one-to-one mapping by providing close-to-ideal conditional input. However, there is no guarantee that ideal conditional inputs can be obtained in real applications, especially if the conditional inputs are incomplete or sparse edges. These uncertainties can result in unexpected distortions. If the interim domain for refining images can provide more specific information, the quality of the synthesised images will be improved. Therefore, enhancing conditional information is one of the important goals of image pre-processing and refining. Edge Extraction: Edges may contain incomplete features with many possible feature types, including undefined density, shape or geometry <ns0:ref type='bibr' target='#b42'>(Royer et al., 2017)</ns0:ref>. However, to achieve high performance, image-to-image translation methods need clear conditions <ns0:ref type='bibr' target='#b30'>(Lee et al., 2019)</ns0:ref>. In order to generate photorealistic images from limited conditional information, extending the translation relationships with proper reference images can make the mapping between the source and target domains more precise when only a small training dataset is available. As an example, the corresponding mapping relationships among the ground truth, conditional features, and refined image are shown in Figure <ns0:ref type='figure'>4</ns0:ref>. The ground truth image is responsible for providing not only realistic features but also reference images to compose refined images. The red boxes shown in Figure <ns0:ref type='figure'>4</ns0:ref> indicate the eye mapping in different domains, and the new relationships are expected to effectively reduce the uncertain mapping in image-to-image translation.</ns0:p></ns0:div> <ns0:div><ns0:head>Adoption of an Interim Domain:</ns0:head><ns0:p>In contrast to directly transforming the source edges to target results, the proposed conditional GAN framework converts conditional edges to refined images in an interim domain first. The interim domain tends to reconstruct possible missing information from incomplete conditional features using a U-net. The mode collapse problem may occur in the interim domain when incomplete edges are transferred to a refined image. Nevertheless, the translation at this stage is useful for facial component identification because the incomplete edges in the source domain are further processed. The refined images provide clearer auxiliary information than the original incomplete edges, even if they are converted into simplified samples when mode collapse happens. By trial and error, appropriate regional features as reference images can reduce distortions and feature mismatches when only very limited edges are available. Therefore, the refined images are constructed by combining binarised images and segmentation masks, as shown in Figure <ns0:ref type='figure'>3</ns0:ref>.
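The binarised images combined into the refined images above can be produced with ordinary thresholding during pre-processing. The sketch below uses Otsu's method via OpenCV; the paper only refers to 'appropriate thresholding', so the specific choice of Otsu and the blur kernel size are assumptions for illustration.

```python
import cv2

def binarise(image_bgr, blur_ksize=5):
    """Return a regional binary image that preserves coarse facial contours.

    image_bgr: H x W x 3 uint8 image (as loaded by cv2.imread).
    A light blur suppresses noise before Otsu's method picks the threshold
    automatically from the grey-level histogram.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary  # H x W uint8, values in {0, 255}
```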
In short, the main function of the interim domain is to refine the original data distribution and strengthen the incomplete conditional features from the source domain.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> (a) shows inference results of using uncertain edges to generate segmentation masks from 50 paired training images. To handle the incomplete edges in the conditional input, facial components can be reconstructed by a U-net in the interim domain. It is evident that the proposed conditional GAN can learn from only 50 segmentation masks to generate more integral face components, such as nose, eyebrow, hair and mouth. Figure <ns0:ref type='figure'>5 (b</ns0:ref>) illustrates examples where incorrect eye shapes are obtained, as shown in the red boxes, which would aggravate distortions in the target domain. What is worse, this situation is hard to be solved because it is difficult to increase the number of diverse samples based on a small dataset as GANs generally require more diverse data to be trained well. To resolve this problem, binary images with clear regional information are obtained through image pre-processing, which can enhance the contours and thus reduce distortions in facial components, as shown in Figure <ns0:ref type='figure'>4</ns0:ref>. In contrast to imprecisely depicting facial components in segmentation masks, binary images obtained by appropriate thresholding can produce more correct contours than segmentation masks and thus alleviate the problem caused by very limited training data. Figure <ns0:ref type='figure'>6</ns0:ref> shows that binary images can handle uncertain edge density in the inference phase to enhance crucial edge information with regional distributions. Binarised regional features can be extracted by the corresponding edge distribution from a small training dataset, which can not only integrate crucial contours, as shown in Figure <ns0:ref type='figure'>6</ns0:ref> (a), but also get rid of meaningless noise when various untrained edges may be unrecognisable in the inference phase, as shown in Figure <ns0:ref type='figure'>6</ns0:ref> (b). It is noteworthy that the results presented in Figures <ns0:ref type='figure'>5 and 6</ns0:ref> can be regarded as those from an ablation study, which shows that removing the component of combining binarised regional features in the proposed method will significantly deteriorated the performance of the proposed conditional GAN.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Training and Loss Functions Conditional Adversarial Loss:</ns0:head><ns0:p>During training the proposed conditional GAN framework, it is difficult to find a balance between the generator and discriminator, especially when only very limited training data is available. Using an appropriate loss function is critical to ensure good quality of the generated images. 
Firstly, to distinguish real images from fake ones, the following basic loss function is used for the two convolutional neural networks, which is known as conditional adversarial loss.</ns0:p><ns0:formula xml:id='formula_0'>&#8466; &#119886;&#119889;&#119907; ( &#119863;,&#119866; ) = &#120124; &#119868;,&#119878; [ &#119897;&#119900;&#119892; &#119863; ( &#119878;|&#119868; )] + &#120124; &#119868;,&#119868; ' [ &#119897;&#119900;&#119892; ( 1 -&#119863; ( &#119868;, &#119866;(&#119868;'|&#119868;) ))]</ns0:formula><ns0:p>where &#120124; represents expected value, G the generator, D the discriminator, S the source image, I the conditional edge feature input, and I' the generated image. In the first U-net, S should contain a mixture of pixels of binary image, segmentation mask and ground truth so as to distinguish Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>between real refined image and fake generated image. In the second U-net, S needs to be set as the ground truth only. Feature Matching Loss: As in the pix2pix GAN model <ns0:ref type='bibr' target='#b23'>(Isola et al., 2017)</ns0:ref>, the L 1 pixel loss as feature matching loss in the synthesised images is adopted. Since there are paired images in the training phase, the L 1 distance between the generated image (I') and source image (S) can be defined as follows:</ns0:p><ns0:formula xml:id='formula_1'>&#8466; &#119871;1 (&#119866;) = &#120124; &#119878;,&#119868;,&#119868;' [ &#8214; &#119878; -&#119866;(&#119868;'|&#119868;) &#8214; 1 ]</ns0:formula><ns0:p>Overall Loss: The main purpose of using the loss function is to help the generator to synthesise photorealistic images by minimising the loss value with a limited number of input images. The overall loss function is defined as</ns0:p><ns0:formula xml:id='formula_2'>&#119898;&#119894;&#119899; &#119866; &#119898;&#119886;&#119909; &#119863; &#8466; &#119886;&#119889;&#119907; ( &#119863;,&#119866; ) + &#120572;&#8466; &#119871;1 ( &#119866; )</ns0:formula><ns0:p>where &#120572; is a weight parameter. A larger value of &#120572; encourages the generator to synthesise images less blurring in terms of L 1 distance.</ns0:p><ns0:p>The second U-net uses the refined images and original sparse lines as inputs to generate photorealistic images with the same loss function but different training parameters and freezing weights. Another difference between these two networks is the source image S, which should be either the refined images or the ground truth images.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments with the Proposed Conditional GAN Framework</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preparation</ns0:head><ns0:p>A small set of randomly chosen images from CelebA-HD <ns0:ref type='bibr' target='#b34'>(Liu et al., 2015)</ns0:ref> formed the training image dataset in our experiments. CelebA-HD includes 30,000 high-resolution celebrity facial images. All the images were resized to 256 &#215; 256 for our proposed model. CelebAMask-HQ <ns0:ref type='bibr' target='#b29'>(Lee et al., 2020)</ns0:ref> is a face image dataset consisting of 30,000 high-resolution face images of size 512 &#215; 512 and 19 classes, including skin, nose, eyes, eyebrows, ears, mouth, lip, hair, hat, eyeglass, earring, necklace, neck, cloth and so on. All the images in CelebAMask-HQ were selected from the CelebA-HD dataset, and each image has segmentation masks of facial attributes corresponding to CelebA-HD. 
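A minimal PyTorch-style rendering of the losses defined earlier in this section: the conditional adversarial loss and the α-weighted L1 feature-matching term (α = 100 in the experiments). Concatenating the conditional edges with the judged image and applying binary cross-entropy to discriminator logits are pix2pix-style implementation assumptions rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, edges, source, fake):
    """Adversarial loss from D's side: real pairs (I, S) versus fake pairs (I, G(I'|I))."""
    real_logits = D(torch.cat([edges, source], dim=1))
    fake_logits = D(torch.cat([edges, fake.detach()], dim=1))
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_loss(D, edges, source, fake, alpha=100.0):
    """Adversarial term plus the alpha-weighted L1 feature-matching loss ||S - G(I'|I)||_1."""
    fake_logits = D(torch.cat([edges, fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    l1 = F.l1_loss(fake, source)
    return adv + alpha * l1
```

For the first U-net, `source` would be the refined reference image whose pixels mix the binary image, segmentation mask and ground truth; for the second U-net, `source` is the ground truth only.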
Since different numbers of segmentation masks were used to compare the performance of different methods with different numbers of training samples, the CelebAMask-HQ was used as the standard segmentation masks of reference images. If a very small training dataset is used, it would be fine to manually generate the segmentation masks by image pre-processing. In our experiments, the segmentation masks from CelebAMask-HQ were used as the common reference images of the corresponding training images.</ns0:p></ns0:div> <ns0:div><ns0:head>Implementation Details</ns0:head><ns0:p>The hyper-parameter values were determined through trial and error in our experiments. For training the proposed conditional GAN framework, the Adam optimiser was used to minimise the loss function with the initial learning rate set to 0.0002 and the momentum 0.5. The weight parameter in the loss function &#120572; was set to 100. All the experiments were conducted on a desktop computer with NVIDIA GeForce RTX 2080 GPU, Intel Core i7-6700 (3.4GHz) processor, and 16G RAM. Incomplete edges or hand-drawn sketches as conditional inputs usually represent abstract concepts, which are beneficial for generating diverse data augmentation results but it is difficult for conditional GANs to generate photorealistic images with limited conditional inputs based on small training data. In our experiments, edges were extracted by Canny edge detector <ns0:ref type='bibr' target='#b5'>(Canny, 1986)</ns0:ref>, which can obtain simple and continuous edges using a set of intensity gradients from realistic images. Two intensity gradient magnitudes are used in Canny edge detector as a threshold range to control the edge density, which is determined in our experiments by a threshold ratio. It is the ratio of the high threshold value to the maximum magnitude, and the low threshold value is 40% of the high threshold value. The edges produced by the Canny edge detector are more similar to hand-drawn sketches than those by other commonly used edge detectors, as shown in Figure <ns0:ref type='figure'>7</ns0:ref>. The appropriate threshold ratio for the Canny edge detector was chosen through trial and error in our experiment. The red box in Figure <ns0:ref type='figure'>7</ns0:ref> shows the edges extracted using the chosen threshold ratio, which has clear information about facial components without unexpected noise and meets the requirement for good conditional inputs.</ns0:p><ns0:p>In the design of the interim domain, the pixel values of the refined image were set by the following mixture ratios: 25% from binary image, 25% from segmentation mask, and 50% from original image. Figure <ns0:ref type='figure'>8</ns0:ref> shows the inference results of the refined images and the corresponding generated image outputs. The red boxes represent blending areas in the masks, binary images and texture features in the refined images, which reflect the brightness changes in the generated image outputs. The overlapped regions are visually darker and gloomier compared to other regions. Therefore, these blending areas from different reference images conduct transitions in brightness and lightness to synthesise realistic results. With the interim domain, the proposed conditional GAN can efficiently deal with both overlapped and non-overlapped mappings between segmentation masks and binary regions, leading to more photorealistic image outputs. Image blending with different styles is beneficial to augment training image datasets with diversity. 
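The two pre-processing steps parameterised above, Canny edge extraction driven by a threshold ratio (with the low threshold fixed at 40% of the high threshold) and the 25% binary / 25% segmentation mask / 50% original pixel mixture that forms the refined image, can be sketched as follows with OpenCV and NumPy. Reading the stated mixture ratios as a per-pixel weighted blend is our interpretation of the description, and the helper names are illustrative.

```python
import cv2
import numpy as np

def extract_edges(image_bgr, threshold_ratio=0.3):
    """Canny edges with the high threshold set as a fraction of the maximum gradient magnitude."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    high = threshold_ratio * magnitude.max()
    low = 0.4 * high                      # low threshold is 40% of the high threshold
    return cv2.Canny(gray, low, high)     # H x W uint8 edge map

def refine_image(original_bgr, binary, mask_bgr):
    """Mix reference pixels into a refined image: 25% binary, 25% segmentation mask, 50% original."""
    binary_bgr = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
    refined = (0.25 * binary_bgr.astype(np.float32)
               + 0.25 * mask_bgr.astype(np.float32)
               + 0.50 * original_bgr.astype(np.float32))
    return np.clip(refined, 0, 255).astype(np.uint8)
```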
In the proposed conditional GAN framework, generated images were controlled by conditional edge inputs. Exchanging or modifying edge features is an easy way to generate different images that increase data diversity and expand original facial features. Figure <ns0:ref type='figure'>9</ns0:ref> shows examples of the generated images with facial features swapped on a small training dataset by exchanging edge components in conditional inputs. It can be seen that the generated images can preserve facial features with clear conditional edges and reconstruct the critical components in incomplete or undefined areas.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Performance Evaluation</ns0:head><ns0:p>For performance evaluation, the proposed conditional GAN framework was used to generate images from different training images and conditional edge input settings. To demonstrate the performance of the proposed method, the state-of-the-art edge-to-image translation methods were compared both qualitatively by visual inspection of the quality and diversity of the generated images and quantitatively in terms of FID and KID scores.</ns0:p></ns0:div> <ns0:div><ns0:head>Diversity in Facial Image Augmentation Using the Proposed Conditional GAN</ns0:head><ns0:p>It is clear that the threshold ratio chosen for the Canny edge detector affects the density level of the extracted edges, which as conditional inputs would affect the quality of the images generated by the conditional GAN. It is desirable that the proposed conditional GAN can generate diverse images with the change of edge density levels in the conditional input but be robust in terms of the quality of the generated images. Figure <ns0:ref type='figure' target='#fig_3'>10</ns0:ref> shows the inference results with different density levels in the conditional edges, which were not included in the training phase except for those in the red box. It can be seen that the generated images are slightly different with different density levels in the conditional edge inputs and the distortions are small even when the GANs were trained using a small dataset of 50 training images. The generated images are more photorealistic when the conditional input contains less noise or unidentical edges, which correspond to those generated with the edge density level chosen in the training phase, as shown in the red box.</ns0:p><ns0:p>Fortunately, with the change of the density levels of the conditional edge inputs, the quality of the generated images is prevented from considerable deterioration because the refined images can integrally represent facial features at an acceptable level based on a small dataset. Consequently, as the conditional inputs to the second U-net in the second stage, they play an important role in reducing distortions in the generated facial image outputs.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>11</ns0:ref> shows examples of facial image augmentation results using 50 training images to train the proposed conditional GAN framework. Diverse new facial images can be generated from each training image with extracted edges modified for desirable facial features as conditional inputs. The modifications to the extracted edges include adding or deleting parts of the edges or changing facial expression or direction, as shown in Figure <ns0:ref type='figure' target='#fig_3'>11</ns0:ref>. 
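The feature-swapping idea described above can be emulated with a trivial array operation on two edge maps; the function name and the rectangle coordinates below are purely illustrative.

```python
import numpy as np


def swap_edge_region(edges_a, edges_b, box):
    """Exchange one facial component between two conditional edge maps.
    box = (top, bottom, left, right) in pixels; the values used are illustrative."""
    top, bottom, left, right = box
    new_a, new_b = edges_a.copy(), edges_b.copy()
    new_a[top:bottom, left:right] = edges_b[top:bottom, left:right]
    new_b[top:bottom, left:right] = edges_a[top:bottom, left:right]
    return new_a, new_b


# e.g. swap an assumed mouth region of two 256 x 256 edge maps
# new_a, new_b = swap_edge_region(edges_a, edges_b, box=(170, 210, 90, 170))
```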
It can be seen that the image data augmentation results using the proposed conditional GAN are more diverse than traditional augmentation methods and the generated images are of good quality due to the use of the interim domain. For data augmentation purposes, using deliberately modified edges as conditional inputs to the proposed conditional GAN framework can boost the data diversity on the basis of the available small set of training images. In general, it is difficult for image-to-image translation methods to generate high-quality images with conditional inputs that are not directly corresponding to features in the training images, such as hand-drawn sketches. In the previous experiments, it has been demonstrated that the interim domain is helpful to generate high-quality images with various edge density levels for diversity.</ns0:p><ns0:p>In our experiments, hand-drawn sketches were also used as conditional inputs for the proposed conditional GAN to generate facial images with specific facial expressions or features. Figure <ns0:ref type='figure' target='#fig_3'>13</ns0:ref> shows the inference results with hand-drawn sketches as conditional inputs, with the proposed conditional GAN trained using a dataset of 50 training images. It is obvious that, when giving unidentical or incomplete facial contours in the conditional inputs, the refined images generated by the first U-net in the proposed conditional GAN structure are responsible for reducing distortions in the generated images while keeping the diverse facial expressions introduced by the hand-drawn sketches.</ns0:p></ns0:div> <ns0:div><ns0:head>Qualitative Comparison</ns0:head><ns0:p>To evaluate the quality of the images generated by the proposed conditional GAN framework, the images generated by the proposed method were compared with those generated by the stateof-the-art edge-to-image translation methods, including pix2pix <ns0:ref type='bibr' target='#b23'>(Isola et al, 2017)</ns0:ref> and pix2pixHD <ns0:ref type='bibr' target='#b49'>(Wang et al., 2018a)</ns0:ref>, under the same training conditions and in terms how the generated images are comparable to the ground truth images. Figure <ns0:ref type='figure' target='#fig_8'>14</ns0:ref> shows representative images generated respectively by the three conditional GANs for comparison, trained using the same small dataset of 50 training images. Different sparse edges as conditional inputs were tested. The results in Figure <ns0:ref type='figure' target='#fig_8'>14</ns0:ref> demonstrate that the method proposed in this paper can generate more photorealistic facial images with fewer distortions than pix2pix and pix2pixHD, when the GANs were trained using small training data.</ns0:p></ns0:div> <ns0:div><ns0:head>Quantitative Comparison</ns0:head><ns0:p>Since the number of training images is small, the difference between the generated images and the corresponding ground truth images is noticeable by visual inspection. In order to quantitatively compare the quality of the images generated by different conditional GANs, FID and KID scores were adopted to evaluate the photorealistic scales of the generated images in our further experiments. FID is widely adopted to evaluate the visual quality of generated images, which calculates the Wasserstein distance between the generated images and the corresponding ground truth images. 
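For reference, a minimal NumPy/SciPy sketch of the two scores used in this comparison: FID as the Fréchet distance between Gaussian fits of Inception features, and KID as an unbiased MMD estimate with the cubic polynomial kernel. Extracting the Inception features themselves (e.g., 2048-dimensional pool features) is assumed to happen elsewhere.

```python
import numpy as np
from scipy import linalg


def fid(real_feats, fake_feats):
    """Frechet Inception Distance between two feature sets of shape [N, d]."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    sig_r, sig_f = np.cov(real_feats, rowvar=False), np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(sig_r @ sig_f, disp=False)
    if np.iscomplexobj(covmean):          # strip tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(sig_r + sig_f - 2.0 * covmean))


def kid(real_feats, fake_feats):
    """Unbiased MMD^2 with the cubic kernel k(x, y) = (x.y / d + 1)^3."""
    d = real_feats.shape[1]
    k = lambda a, b: (a @ b.T / d + 1.0) ** 3
    k_rr, k_ff, k_rf = k(real_feats, real_feats), k(fake_feats, fake_feats), k(real_feats, fake_feats)
    m, n = len(real_feats), len(fake_feats)
    return float((k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
                 + (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
                 - 2.0 * k_rf.mean())
```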
KID can be used similarly for image quality measurement, but KID scores are based on an unbiased estimator with a cubic kernel <ns0:ref type='bibr' target='#b4'>(Binkowski et al., 2018)</ns0:ref>. Clearly, lower FID and KID scores represent a better match between the generated images and the corresponding ground truth images.</ns0:p><ns0:p>To evaluate the effect of the interim domain adopted in the proposed conditional GAN framework on the quality of the generated images, the performance of double U-nets is compared with that of a single U-net in terms of FID and KID with different threshold ratios used in the Canny edge detector. In the training phase, one input type with threshold ratio = 0.4 and three input types with threshold ratios = 0.2, 0.4 and 0.6 were considered and 50 training images were used, whilst in the inference phase 11 different threshold ratios (0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9) were tested with 1000 generated images respectively. Figure <ns0:ref type='figure' target='#fig_3'>15</ns0:ref> shows the comparative results in terms of FID and KID scores. Three points can be made from the experimental results presented in this figure: Firstly, double U-nets achieved lower FID and KID scores than single U-net, indicating that the interim domain for refining images in the proposed conditional GAN can reduce distortions caused by incomplete conditional edges and small training dataset, and thus improve the quality of the generated images; Secondly, training with three conditional edge density levels, compared with only one conditional type, can achieve better and robust performance with various levels of conditional edge density (for better diversity); Thirdly, the most photorealistic performance was achieved when the edge density levels in the inference phase are close to those used in the training phase. The results presented in Figure <ns0:ref type='figure' target='#fig_3'>15</ns0:ref> can be interpreted from the perspective of ablation study because removing the interim domain adopted in the proposed method will considerably deteriorate the performance of the conditional GAN. This paper aims to generate photorealistic facial images using conditional GANs trained with a small set of training images for data augmentation. To evaluate the effect of the number of training images on the quality of the images generated by the proposed conditional GAN, pix2pix and pix2pixHD, different numbers (25, 50, 100, and 500) of training images were used to train each of the three conditional GANs separately. Moreover, in order to demonstrate the effect of different conditional edge density levels, both sparse edges (threshold ratio = 0.4) and dense edges (threshold ratio = 0.2) were used to generate 1000 images by each trained conditional GAN. The FID and KID scores of the images generated by the three conditional GANs were calculated respectively. Figure <ns0:ref type='figure' target='#fig_3'>16</ns0:ref> shows the changes of FID and KID scores with the different numbers of training images, from which the following three points can be made: 1) With the interim domain, the proposed conditional GAN achieved lower FID and KID scores than pix2pix and pix2pixHD when trained with the same number of training images. 2) Dense conditional edges achieved lower KID and FID scores than sparse edges, but the diversity in the generated images may be constrained. 
3) With the increase in the number of training images, the advantage of the proposed method over the existing methods becomes less obvious. This tendency indicates that the proposed conditional GAN framework is very effective when it is trained with a small number training samples and its performance would approach to that of the existing methods when the number of training samples becomes relatively large.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions and Future Work</ns0:head><ns0:p>In this paper, a new conditional GAN framework is proposed for edge-to-image translation based on a small set of training data, which can synthesise photorealistic diverse facial images using incomplete edges as conditional inputs for data augmentation purposes. In order to solve the problem in training conditional GANs using small training data, an interim domain for refining images is introduced in the proposed conditional GAN, which can effectively reduce unexpected distortions and thus improve the quality of the generated images. Experimental results have demonstrated that blending segmentation masks and regional binary images as refined reference images can reduce distortions in facial components of the images generated by the conditional GAN trained with a small training dataset. Compared with the existing edge-to-image translation methods, the proposed conditional GAN can not only automatically transfer incomplete conditional edges to reference images with more facial features in the interim domain but also effectively reduce unexpected distortions caused by small training data. Compared to directly transferring source domain into target domain, the proposed method can have a more comprehensive view to generate more photorealistic edge-to-image translation results when using various incomplete conditional edges for data augmentation. More informative reference images can be constructed in the interim domain from incomplete edge inputs to integrate useful facial components. The proposed conditional GAN trained using a small dataset can synthesise various photorealistic facial images by manipulating conditional edge features or using hand-drawn facial sketches for diverse image data augmentation.</ns0:p><ns0:p>Compared to the existing conditional GANs for image-to-image translation, the images generated by the proposed conditional GAN have less distortion and more diversity, which is desirable for data augmentation purposes. Due to limited GPU computing facilities available for conducting our experiments, it is hard to optimise the hyperparameters of the tested models, and the performance evaluation is based on the comparison with two state-of-the-art methods only. More extensive comparative study would be desirable in the future research to draw more reliable conclusions.</ns0:p><ns0:p>The advantage of the proposed conditional GAN framework over the existing methods becomes less obvious when the number of training samples are relatively large. For future work, the interim domain could be improved so that the proposed conditional GAN framework would also significantly outperform existing methods for image data augmentation when a reasonably large number of training image dataset is available. 
</ns0:p></ns0:div>
Figure 1.
Figure 2. The proposed translation method by defining an interim domain for refining images based on a small training dataset. Image pre-processing is adopted in the proposed conditional GAN to enhance the mapping relationship from the source domain to the target domain.
Figure 3. Overview of the proposed conditional GAN for translating edges to photorealistic images using two U-nets.
Figure 4. Corresponding mapping relationships among the conditional inputs, refined image and ground truth.
Figure 5. Inference results in translating sparse edges to labelled segmentation masks with 50 training images. (a) The outputs can roughly resume the missing facial components from incomplete layouts when given abstract inputs. (b) The red boxes indicate the corresponding indefinite contours in the original inputs and generated masks.
Figure 6. Inference results in translating sparse edges to binary regional images with 50 training images.
Figure 7.
Figure 8. Inference results with examples of refined images and final outputs. The red boxes represent blending areas in the refined region, which can be reflected by the brightness in the generated image outputs.
Figure 9. Results from exchanging conditional facial edges to generate diverse styles of facial images.
Figure 10. Inference results with example images in the source, interim, and target domains respectively. The various density levels in the conditional inputs were not included in the training phase except the one in the red box, generated by the Canny edge detector with threshold ratio = 0.4. The results are from GANs trained using 50 images only.
Figure 12. Examples of facial image data augmentation using 50 training images to train the proposed conditional GAN, with edge features from multiple training images swapped as conditional inputs. The red boxes indicate the original training images, and the other images in each row are generated by the proposed conditional GAN with swapped facial features (including eyes, eyebrows, nose and mouth) or hairstyles. By using mixed edge features from multiple training images as conditional inputs, the proposed conditional GAN can efficiently generate diverse facial images of good quality.
Figure 14. Inference results compared among pix2pix, pix2pixHD and the proposed method, trained with the same small dataset.
Figure 15. FID and KID scores of double U-nets with an interim domain and a single U-net with different levels of input edge density respectively. One input type (threshold ratio = 0.4) and three input types (threshold ratios = 0.2, 0.4, 0.6) in the source domain were used respectively during training with a small training dataset of 50 images. The FID and KID scores were calculated based on the same 1000 inference images at different edge density levels.
Figure 16. Changes of FID scores (first row) and KID scores (second row) with different numbers of training images. Comparison among three edge-to-image translation methods with sparse and dense edge inputs respectively: pix2pix, pix2pixHD and ours.
</ns0:body> "
"Authors’ Responses to the Reviews of “Small facial image dataset augmentation using conditional GANs based on incomplete edge feature input” (#CS-2021:08:64945:0:1:REVIEW) We are very grateful to the reviewers and the editors for the comments and suggestions, which have helped us to improve the manuscript. Our responses to the comments and suggestions are as follows. Response to Reviewer #1 In this paper, a new image-to-image translation framework using conditional GANs is proposed, which can generate diverse photorealistic images from limited edge features after training with a small number of training images. The article has a certain degree of innovation and novelty, the experiment is relatively sufficient, and the overall content is complete, but there are still some problems that the author hopes to correct: 1)Whether the pictures involved in the article can be directly reflected in the text to facilitate reading and understanding; Authors' Response: Thank you very much for your supportive comments. The pictures/figures have been explained in the text, but the figures and captions are separated from the text according to the manuscript submission requirements of PeerJ. 2)The quality of the picture is not very high, it is recommended to replace it, and some of the picture content is redundant, and it is recommended to delete it; Authors' Response: We have replaced all the figures with high-quality images (900*900 pixels at least) in the revised manuscript. We will submit the original high-quality figures as the supplement data. Moreover, the redundant samples in Figure 11and Figure 13 have been deleted in the revised manuscript. 3)Figure 16 compares the KID and FID values of different methods within 500 sheets, and the difference between the results of the other two methods and the method in this article is getting smaller and smaller. Can the author compare more images, or briefly Explain the change trend of subsequent FID and KID results; Authors' Response: We appreciate this observation. The method proposed in this paper aims to mitigate the overfitting and generative distortions caused by using small training datasets. With the increase in the number of training samples, the advantage of the proposed method over the existing methods becomes less obvious. We have explained this tendency at the end of the quantitative comparison section in the revised manuscript. Response to Reviewer #2 1) Basic reporting It is better to include a paragraph providing details about the structure of the paper (could be the last paragraph of the Introduction section). The related work section is very comprehensive (good work!). Include some details of additional image data augmentation works would further improve the related work section. Indicate potential future works of this research in the conclusion section. Authors' Response: We appreciate your review and valuable suggestions. A paragraph about the structure of the paper has been added at the end of the introduction section. More details on additional image data augmentation methods have been included in the related work section. Discussions on potential future work have been added in the conclusion section. 2) Experimental design Structuring the methods section is very important to enhance the readability of the paper. It is better to include individual subsections to three major parts of the proposed method. Lack of supportive arguments is the primary concern in the results section. 
Authors will get benefitted by including a comprehensive discussion on both qualitative and quantitative results. An ablative study is a must in this case to express the significance of each module. Authors' Response: Individual subsections for the three major parts of the proposed method have been organised in the revised manuscript. More discussions on the experimental results have been added at the end of both the qualitative comparison and quantitative comparison sections. We very much appreciate the suggestion of an ablation study, which is very important in validating machine learning models. Due to the tight deadline for making the major revision to this paper and the limited GPU facilities we have, it is difficult to carry out a comprehensive ablation study. However, some experimental results in this paper have provided evidence of the importance and necessity of the new components in the proposed model. For instance, Figure 15 shows the performance of the conditional GANs with and without the interim domain respectively by using a single U-net and double U-nets in the quantitative comparison section, indicating that without the proposed interim domain the quality of the images generated by the conditional GAN deteriorates considerably in terms of FID and KID scores. In addition, the adoption of different features in the interim domain and its effect are presented in the image pre-processing and refining section, as shown in Figure 5 and Figure 6. For future work, we would conduct a more comprehensive ablation study to investigate how effective the different conditional features in the proposed interim domain are in reducing distortions and improving the quality of the images generated by the proposed conditional GAN framework. 3) Validity of the findings Highlight the limitations of the work is important for researchers in the field to explore more in this domain of interest. Discuss some of them in the results section. Authors' Response: Thank you for this suggestion. Limitations of the work have been highlighted and discussed at the end of the conclusion section in the revised manuscript, which hopefully will motivate researchers in the field to explore more in this line of research. Response to Reviewer #3 1) Basic reporting There contains grammatical errors and typos in the manuscript. The authors should re-check and revise carefully. Authors' Response: We have carefully rechecked the manuscript and corrected the grammatical errors and typos in the previous version of the manuscript. "
Here is a paper. Please give your review comments after reading it.
266
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Due to the sophisticated entanglements of non-rigid deformation, generating person images from a source pose to a target pose is a challenging task. In this paper, we present a novel framework to generate person images with shape consistency and appearance consistency. The proposed framework leverages a Graph Network to infer the global relationship between the source pose and the target pose in a graph for better pose transfer. Moreover, we decompose the source image into different attributes (e.g., hair, clothes, pants and shoes) and combine them with the pose coding to generate a more realistic person image. We adopt an alternate updating strategy to promote mutual guidance between the pose modules and the appearance modules for better person image quality.</ns0:p><ns0:p>Qualitative and quantitative experiments were carried out on the DeepFashion dataset. The efficacy of the presented framework is verified.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>This may cause problems in obtaining the global relationship between the source pose and the target pose, which not only increases the computational cost but may also capture the relationship incompletely.</ns0:p><ns0:p>As shown in Figure 1, in this paper we propose a pose transfer framework based on a graph network and appearance decomposition. Through the graph network, we perform global reasoning between the target pose and the source pose and obtain a better pose transfer effect. Through the appearance attribute decomposition module, the generated person image can obtain more realistic and detailed appearance features. Inspired by <ns0:ref type='bibr' target='#b4'>(Chen et al. 2019)</ns0:ref>, we map the source pose and target pose to the same interaction space. After global reasoning in the interaction space, we map the different poses back to the original independent space. Specifically, as shown in Figure 2, we construct an interaction space for global reasoning, map the key points of the source pose and the target pose to the interaction space respectively, establish a fully connected graph connecting all the joint points in the space, and carry out relationship reasoning on the graph. After reasoning is completed, the updated joint points are remapped back to the original space. For the appearance code, we use a VGG-based pre-trained human parser to decompose the attributes of source images. Then these attributes are input into a texture encoder to reconstruct the style code, and finally the style code and the pose code are combined to obtain the generated images. In the training process, we use a pair of conditional discriminators, which combine a conditional discriminator and an appearance discriminator to improve the quality of the generated image. The proposed network outperforms prior works both qualitatively and quantitatively on challenging benchmarks.
In total,the proposed framework has the following contributions:</ns0:p><ns0:p>&#61548; We propose a novel generative adversarial network based on graph,which can infer the global relationship between different pose.Tackling the problem that CNN needs to overlay multiple convolution layers to expand the receptive field to cover all the joint points of source pose and target pose.</ns0:p><ns0:p>&#61548; We employ the human body parser to decompose the attributes of the human body images, and fuse the attribute coding with the pose coding. Therefore, the generated images are desirable.</ns0:p><ns0:p>The remainder of this paper is structured as follows.In Section 2, the related work of this paper is introduced. Section 3, details of the proposed framework are given. Section 4 presents distinct experiments on DeepFashion dataset. Finally, a summary is given in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Related Work</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Person image generation</ns0:head><ns0:p>With the continuous development of computer vision technology, image generation models have been developing at a high rate in recent years. The two mainstream methods are Variational auto-encoder(VAE) <ns0:ref type='bibr' target='#b12'>(Kingma &amp; Welling 2013;</ns0:ref><ns0:ref type='bibr' target='#b15'>Lassner et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Rezende et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b27'>Sohn et al. 2015)</ns0:ref> and Generative Adversarial Networks (GANs) <ns0:ref type='bibr' target='#b2'>(Balakrishnan et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Dong et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Honda 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Si et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Zanfir et al. 2018)</ns0:ref>. The former captures the relationship between different dimensions of the data by modeling the structure of the data to generate new data. The latter generates images through mutual game between the generator and the discriminator. Since the loss used by GANs is better than VAE, GANs can generate more vivid images and is sought after by more researchers.</ns0:p><ns0:p>Aiming at the human body image generation method based on the generative adversarial network, Ma et.al first proposed PG 2 <ns0:ref type='bibr' target='#b19'>(Ma et al. 2017)</ns0:ref> to achieve pose guided person body image generation, whose model is cascaded by two different generators. The first stage generates a blurry image under the target pose. The second stage improve the texture and color quality of the image generated in the first stage. Although the second stage improve the image quality to a certain extent, it is still unable to capture the changes in image distribution well, which makes the generated images lack of fine texture. To obtain better appearance texture, <ns0:ref type='bibr' target='#b8'>(Esser et al. 2018)</ns0:ref>exploited to combine VAE and U-Net to disentangle appearance and pose, using the decoupled posture information to generate pictures, and then integrate the appearance information of the source images into the generated pictures. However, it will cause the problem of feature offset caused by posture difference due to the U-Net based skip connections in the model. To tackle this problem, <ns0:ref type='bibr' target='#b26'>(Siarohin et al. 
2018</ns0:ref>) introduced deformable skip connections to transfer features of various parts of the body, which effectively alleviated the problem of feature migration. In order to control the attributes flexibly, <ns0:ref type='bibr' target='#b21'>(Men et al. 2020)</ns0:ref> proposed the Attribute-Decomposed GAN, which embeds the attribute codes of each part of the human body into the latent space independently and recombines these codes in a specific order to form a complete appearance code, so as to achieve flexible control of each attribute.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Graph-based Reasoning</ns0:head><ns0:p>A graph is a data structure that can model a group of objects (nodes) and their relationships (edges). In recent years, more and more attention has been paid to the study of graph analysis based on machine learning due to its powerful expressive ability. <ns0:ref type='bibr' target='#b13'>(Kipf &amp; Welling 2016</ns0:ref>) first proposed the graph convolutional network, which uses an efficient layer-wise propagation rule based on a first-order approximation of spectral convolutions on graphs. In order to pay dynamic attention to the features of adjacent nodes, graph attention networks <ns0:ref type='bibr'>(Veličković et al. 2017</ns0:ref>) have been proposed. <ns0:ref type='bibr' target='#b31'>(Wang et al. 2020</ns0:ref>) introduced a Global Relation Reasoning Graph Convolutional Network (GRR-GCN) to efficiently capture the global relations among different body joints; modelling these relations may mitigate challenges such as occlusion. In this paper, we introduce graph-based reasoning into a person image generation model.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Methods</ns0:head><ns0:p>In this section, we give a description of our network architecture. We start with some notation. $I \in \mathbb{R}^{3 \times H \times W}$ denotes a person image. Before training, the Human Pose Estimator (HPE) <ns0:ref type='bibr' target='#b8'>(Esser et al. 2018</ns0:ref>) is adopted to estimate the positions of 18 joint points in the images, and $P$ represents an 18-channel heat map that encodes the locations of the 18 joints of a human body. During training, the model requires the source and target images ($I_c$, $I_t$) and their corresponding heat maps ($P_c$, $P_t$) as input. Moreover, we adopt a VGG-based pre-trained human parser to decompose the attributes of the source images. More details are introduced below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Generator</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the architecture of the generator, which aims to transfer the pose of the person in $I_c$ from $P_c$ to $P_t$. At its core, the generator comprises two pathways, namely a pose pathway and an appearance pathway. The former consists of a series of pose blocks and the latter consists of several texture blocks.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1'>Style Encoder</ns0:head><ns0:p>Because the manifold structure formed by various human body images is very complex, it is difficult to encode the entire human body with detailed textures. Inspired by <ns0:ref type='bibr' target='#b21'>(Men et al. 2020)</ns0:ref>, we decompose the source image into different components and recombine their latent codes to build the complete style code. Firstly, a pre-trained human body parser based on VGG is used to obtain the semantic map of the source image $I_c$. Then, the semantic map is mapped to a $K$-channel heat map $M \in \mathbb{R}^{K \times H \times W}$. Each channel $i$ has a binary mask $M_i \in \mathbb{R}^{H \times W}$ corresponding to a different component. Multiplying the source image $I_c$ element-wise with the mask $M_i$ gives the decomposed person image for component $i$:</ns0:p><ns0:formula xml:id='formula_1'>I_c^i = I_c \odot M_i \quad (1)</ns0:formula><ns0:p>After that, $I_c^i$ is input into the appearance encoder to acquire the corresponding style code $F_{sty}^i$:</ns0:p><ns0:formula xml:id='formula_2'>F_{sty}^i = T_{enc}(I_c^i) \quad (2)</ns0:formula><ns0:p>where $T_{enc}$ is shared for all components, and all $F_{sty}^i$ are concatenated to obtain the full style code.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2'>Pose Encoder</ns0:head><ns0:p>In the pose pathway, the source pose $P_c$ and the target pose $P_t$ are embedded into the latent space as pose codes by a pose encoder, which consists of N down-sampling convolutional layers (N = 2 in our case). Note that we adopt the same shape encoder for $P_c$ and $P_t$; that is to say, the two shape encoders share their weights.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.3'>Pose Block</ns0:head><ns0:p>The pose block aims to reason about the crossing long-range relations between the source pose and the target pose in a graph and to output new shape codes. The main idea is to map the source pose and the target pose to the graph space, perform cross reasoning in the graph space, and finally map back to the original space to get the updated code. Firstly, we learn the projection function $\theta(\cdot)$ that maps the source pose and the target pose from the coordinate space to the graph space:</ns0:p><ns0:formula xml:id='formula_4'>H_{source} = \theta(F^{P_c}) \in \mathbb{R}^{C \times D} \quad (3)</ns0:formula><ns0:formula xml:id='formula_5'>H_{target} = \theta(F^{P_t}) \in \mathbb{R}^{C \times D} \quad (4)</ns0:formula><ns0:p>where $\theta(\cdot)$ is implemented by a 1 × 1 convolutional layer, and $C$ and $D$ represent the number of feature map channels and the number of nodes respectively. Then we can obtain the node features $V$ that carry the cross relationship between the source pose and the target pose. To update these node features, we use graph convolution for interactive reasoning. In particular, let $A$ denote the fully-connected adjacency matrix of the nodes for spreading information across the graph; the graph convolution is formulated as:</ns0:p><ns0:formula xml:id='formula_6'>Z = ((I - A)V)W \quad (6)</ns0:formula><ns0:p>Following the principle in <ns0:ref type='bibr' target='#b4'>(Chen et al. 2019)</ns0:ref>, Laplacian smoothing is used as the first step of the graph convolution. Both $A$ and $W$ adopt random initialization and are updated by gradient descent. Next, we need to map the inferred $Z$ back to the coordinate space. Similar to the first step, we adopt the projection matrix $H_{target}$ and a linear projection $\varphi(\cdot)$:</ns0:p><ns0:formula xml:id='formula_7'>F^{P_c} = \varphi(H_{target} Z) \quad (7)</ns0:formula></ns0:div>
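A rough PyTorch sketch of the pose-block reasoning in equations (3)–(7) follows: 1 × 1 convolutions project the two pose codes into a shared node space, a single graph convolution of the form Z = ((I − A)V)W updates the node states, and the reasoned nodes are projected back to the coordinate space. The channel and node counts, the residual connection, and the way source and target nodes are stacked are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossPoseGraphReasoning(nn.Module):
    """Sketch of the pose block: coordinate space -> shared node space -> graph conv -> back."""

    def __init__(self, channels=256, num_nodes=32):
        super().__init__()
        self.theta = nn.Conv2d(channels, num_nodes, kernel_size=1)                 # projection theta(.)
        self.adjacency = nn.Parameter(torch.zeros(2 * num_nodes, 2 * num_nodes))   # learned A
        self.weight = nn.Linear(channels, channels, bias=False)                    # node update W
        self.phi = nn.Conv2d(channels, channels, kernel_size=1)                    # back-projection phi(.)

    def forward(self, f_src, f_tgt):
        b, c, h, w = f_src.shape
        x_src, x_tgt = f_src.flatten(2), f_tgt.flatten(2)                              # [B, C, HW]
        h_src, h_tgt = self.theta(f_src).flatten(2), self.theta(f_tgt).flatten(2)      # [B, D, HW]
        # node features V gathered from both poses: [B, 2D, C]
        v = torch.cat([h_src @ x_src.transpose(1, 2), h_tgt @ x_tgt.transpose(1, 2)], dim=1)
        # graph convolution with Laplacian-style smoothing: Z = ((I - A) V) W
        eye = torch.eye(self.adjacency.shape[0], device=v.device)
        z = self.weight((eye - self.adjacency) @ v)                                    # [B, 2D, C]
        # project the reasoned target-pose nodes back and refine the pose code (cf. eq. 7)
        z_tgt = z[:, z.shape[1] // 2:, :]                                              # [B, D, C]
        out = (h_tgt.transpose(1, 2) @ z_tgt).transpose(1, 2).reshape(b, c, h, w)
        return f_src + self.phi(out)


# usage sketch (assumed shapes): pose codes [B, 256, H/4, W/4] from the pose encoder
# block = CrossPoseGraphReasoning(channels=256, num_nodes=32)
# f_pc = block(f_source_pose, f_target_pose)
```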
<ns0:div><ns0:head n='3.1.4'>Texture Block</ns0:head><ns0:p>The texture block aims to transfer pose and texture simultaneously and interactively. Each texture block consists of residual conv-blocks equipped with AdaIN. Firstly, we compute an attention mask $M_t$ with two convolutional layers. Mathematically,</ns0:p><ns0:formula xml:id='formula_8'>M_t = \sigma(\mathrm{Conv}(F^{P_c})) \quad (8)</ns0:formula><ns0:p>After getting the attention mask, the appearance code is updated by:</ns0:p><ns0:formula xml:id='formula_9'>F_{sty}^{i} = F_{sty}^{i-1} + M_t \odot F_{sty}^{i-1} \quad (9)</ns0:formula><ns0:p>The pose code is updated by:</ns0:p><ns0:formula xml:id='formula_10'>F^{P_c} = \mathrm{Conv}(F^{P_c} \parallel F_{sty}^{i}) \quad (10)</ns0:formula><ns0:p>where $\parallel$ denotes concatenation along the depth axis.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Decoder</ns0:head><ns0:p>The primary focus of the decoder is to generate a new image by decoding the codes. We finally take the texture code to generate a new person image. Following standard practice, the decoder produces the generated image via deconvolutional layers.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Discriminators</ns0:head><ns0:p>The main purpose of the discriminators is to push the generator to produce more realistic images by distinguishing the generated images from the real images. In the training process, we adopt a pose discriminator $D_P$ and a texture discriminator $D_t$ to assess shape consistency and appearance consistency. The discriminators are implemented as ResNet discriminators; each discriminator is trained independently, and all the discriminators can be analyzed and optimized separately.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Loss function</ns0:head></ns0:div> <ns0:div><ns0:head n='3.4.1'>Adversarial Loss</ns0:head><ns0:p>The goal of the adversarial loss is to guide the images generated by the generator to be close to the real images. This goal is achieved through the min-max confrontation between the generator and the discriminators: the discriminators need to maximize the probability of correctly telling real images from false images, while the task of the generator is to minimize the probability of the generated images being identified as false, so as to synthesize human body images in the target pose. The two keep competing and ultimately reach a Nash equilibrium. In this paper, an adversarial loss with $D_P$ and $D_t$ is used to help the generator optimize its parameters, while $D_P$ and $D_t$ judge whether a generated image is a false image:</ns0:p><ns0:formula xml:id='formula_12'>L_{adv} = \mathbb{E}\{\log[D_t(I_c, I_t) \cdot D_P(P_t, I_t)]\} + \mathbb{E}\{\log[(1 - D_t(I_c, \hat{I}_t)) \cdot (1 - D_P(P_t, \hat{I}_t))]\} \quad (11)</ns0:formula><ns0:p>where $\hat{I}_t$ denotes the image generated under the target pose.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.2'>Reconstruction Loss</ns0:head><ns0:p>The goal of the reconstruction loss is to improve the similarity between the original image and the generated image, avoid significant distortion of colors, and accelerate the convergence process. This paper uses the L1 reconstruction loss to calculate the pixel difference between the generated source image $\hat{I}_c$ and the source image $I_c$:</ns0:p><ns0:formula xml:id='formula_13'>L_{pixel\text{-}rec} = \lVert \hat{I}_c - I_c \rVert_1 \quad (12)</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.4.3'>Perceptual loss</ns0:head><ns0:p>Because the commonly used MSE loss tends to produce overly smooth outputs (losing the details / high-frequency part), we enhance the image details by adding a perceptual loss. The perceptual loss is computed as in <ns0:ref type='bibr' target='#b20'>(Ma et al. 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_14'>L_{per} = \frac{1}{W_j H_j C_j} \sum_{x=1}^{W_j} \sum_{y=1}^{H_j} \sum_{z=1}^{C_j} \left\lVert \phi_j(\hat{I}_t)_{x,y,z} - \phi_j(I_t)_{x,y,z} \right\rVert \quad (13)</ns0:formula><ns0:p>where $\phi_j$ is the output feature of the $j$-th layer of the VGG19 network, and $W_j$, $H_j$, $C_j$ are the spatial width, height and depth of $\phi_j$, respectively. The total loss function is denoted as:</ns0:p><ns0:formula xml:id='formula_15'>L_{total} = L_{adv} + \lambda_r L_{pixel\text{-}rec} + \lambda_p L_{per} \quad (14)</ns0:formula><ns0:p>where $\lambda_r$ and $\lambda_p$ are the weights of the reconstruction loss and the perceptual loss.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Datasets and Details</ns0:head><ns0:p>In this paper, we use the DeepFashion <ns0:ref type='bibr' target='#b17'>(Liu et al. 2016</ns0:ref>) dataset for performance evaluation. DeepFashion contains 52,712 images with a resolution of 256 × 256. Before training, we use the Human Pose Estimator (HPE) to remove noisy images from the dataset in which the human body cannot be detected by the HPE. Here we select 37,258 images for training and 12,000 images for testing. In particular, the test set does not contain the person identities in the training set, in order to objectively evaluate the generalization ability of the network. In addition, we implement the proposed framework in PyTorch using two NVIDIA Quadro P4000 GPUs with 16 GB memory. The generator contains 9 cascaded residual blocks. To optimize the network parameters, we adopt Rectified Adam (RAdam), which combines the fast convergence of Adam with the advantages of SGD. We train our network for about 120k iterations. The learning rate is initially set to 1e-5 and linearly decayed to zero after 60k iterations. The batch size for DeepFashion is set to 1. We alternately train the generator and the discriminators with the above configuration.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.6'>Metrics</ns0:head><ns0:p>The Inception Score (IS) <ns0:ref type='bibr' target='#b3'>(Barratt &amp; Sharma 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Salimans et al. 2016)</ns0:ref> and Structural Similarity (SSIM) <ns0:ref type='bibr' target='#b32'>(Wang et al. 2004</ns0:ref>) are the most commonly used indicators to evaluate the quality of generated images. The Inception Score uses the Inception Net V3 network to evaluate the quality of the generated images from two aspects: image clarity and diversity. Structural Similarity is a perception-based model that measures the similarity of two images from three aspects: brightness, contrast, and structure. However, IS relies only on the generated images themselves, ignoring the consistency between the generated and real images. Therefore, the Fréchet Inception Distance (FID) <ns0:ref type='bibr' target='#b9'>(Heusel et al. 2017</ns0:ref>) is additionally adopted to measure the realism of the generated images. This method first converts both the generated image and the real image into a feature space, and then calculates the Wasserstein-2 distance between the two images.
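The SSIM and Mask-SSIM numbers reported in the experiments can be computed with scikit-image as sketched below. Treating Mask-SSIM as SSIM evaluated after zeroing the background with a person mask is an assumption about the protocol, and older scikit-image releases use multichannel=True instead of channel_axis.

```python
import numpy as np
from skimage.metrics import structural_similarity


def ssim_score(generated, target):
    """SSIM between two uint8 RGB images of identical size."""
    return structural_similarity(generated, target, channel_axis=2, data_range=255)


def mask_ssim_score(generated, target, person_mask):
    """Mask-SSIM: SSIM computed after masking out the background region."""
    m = np.repeat(person_mask.astype(np.uint8)[..., None], 3, axis=2)
    return structural_similarity(generated * m, target * m, channel_axis=2, data_range=255)
```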
In addition to the above-mentioned objective evaluation indicators, a User Study was also conducted, and subjective indicators were formed by collecting volunteers' evaluation of the generated images.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Results and Discussion</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Quantitative and qualitative comparison</ns0:head><ns0:p>Since the judgment of the generated image is more subjective, We compare our method with several stare-of-the-art methods including PATN <ns0:ref type='bibr' target='#b37'>(Zhu et al. 2019</ns0:ref>)&#12289;ADGAN <ns0:ref type='bibr' target='#b21'>(Men et al. 2020</ns0:ref>)&#12289;PISE <ns0:ref type='bibr' target='#b36'>(Zhang et al. 2021)</ns0:ref>. The qualitative comparison results are shown in Figure <ns0:ref type='figure'>4</ns0:ref>. In terms of visual effects, our method achieves excellent performance. Our method avoids a lot of noise, such as the images in the first line of the figure, and other methods appear white noise points on the clothes, but our generated image has no noise and perfectly presents the style of the clothes in the source image. In addition,our method shows better details than other methods in hair and face, and is closer to the real image. For more details, zoom in on Figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>In order to verify the effectiveness of our method, we conducted experiments on four benchmarks. In order to get a more fair comparison, we reproduce PATN&#12289;ADGAN&#12289;PISE and test it with the test set in this paper. The results of comparison and the advantages of this method are clearly shown in Table <ns0:ref type='table'>1</ns0:ref>. Our method is superior to other methods in SSIM and mask SSIM, which verifies the effectiveness of the Graph-based generative adversarial network and maintains the consistency of the structure in the pose conversion process. Although the IS value is slightly lower than that of ADGAN, the FID value is comparable, indicating that our generated images are very close to the real images.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>User Study</ns0:head><ns0:p>Human subjective judgment is a very important indicator for generating images. This article relies on the questionnaire star website to do a difference test. In the experiment, 100 volunteers were asked to select the more realistic image from the generated images and the real images within one second. In order to ensure the confidence, following the rules in <ns0:ref type='bibr' target='#b20'>(Ma et al. 2018)</ns0:ref>, we randomly select 55 real images and 55 generated images for out-of-order processing, and then pick out 10 of them for volunteer practice, and the remaining 100 for evaluation and judgment. Each image was compared 3 times by different volunteers. The results are shown in Table <ns0:ref type='table'>2</ns0:ref>. The images generated by the method in this paper have achieved significant effects in human subjective evaluation.</ns0:p><ns0:p>R2G means the percentage of real images being rated as the generated w.r.t. all real images. G2R means the percentage of generated images rated as the real w.r.t. all generated images. The results of other methods are drawn from their papers.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Ablation Study</ns0:head><ns0:p>As shown in Figure <ns0:ref type='figure'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, the evaluation results of different versions of our proposed method are shown. 
We first compare the results using appearance decomposition to the results without using it. We remove the appearance decomposition part from the model, use the encoder similar to the PATN to encode the source image directly, and then transfer it to the generation network directly. By comparison, we find that the appearance decomposition module in our method can effectively improve the performance of the generator. It describes the spatial layout of the region level through the partition mapping, so as to guide the image generation with higher-level structural constraints. Then, we verify the role of graph-based global reasoning. In the pose pathway,we replace the graph-based global reasoning with the method used in <ns0:ref type='bibr'>(Zhu et al. 2019)</ns0:ref>, which use the super position of convolution layer to expand the receptive field gradually for pose transfer. From the Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, the graph-based global reasoning module can get higher SSIM value, which shows that the module can improve the structural consistency of the image.In addition, we also verify the influence of each objective function on the generated results. It can be seen that adding these objective functions together can effectively improve the performance of the generator.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Conclusion</ns0:head><ns0:p>In this paper, a generation model based on appearance decomposition and graph-based global reasoning is proposed for pose guided image generation. The task of pose transfer is divided into pose path and appearance path. We use graph network for global reasoning and appearance decomposition for texture synthesis simultaneously. Through several comparative experiments on Deepfashion dataset, our model shows superior performance in terms of subjective visual authenticity and objective quantitative indicators. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:p>The results of our method in the pose transfer task.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62215:1:2:NEW 9 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>Illustration of our idea. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:p>The qualitative results of ablation study.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62215:1:2:NEW 9 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>body. During the training, the model requires source images and target images ( , ) and c I t I their corresponding heat map ( , ) as input. Moreover, we adopt a VGG-based pre-trained c P t P</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Pose EncoderIn the pose pathway, the source pose and target pose are embedded into the latent c P t P space as the pose code and by pose encoder. Note that we adopt the same shape encoder</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>features, we use graph convolution for interactive reasoning. In particular,let denote the node fully-connected adjacency matrix for spreading information across</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The architecture of the generator.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3. The evaluation results of the ablation study on DeepFashion.</ns0:head><ns0:label>3</ns0:label><ns0:table>
Model                  IS      SSIM     Mask-SSIM   FID
w/o decomposition      3.128   0.781    0.930       14.862
w/o graph reasoning    3.025   0.778    0.929       17.306
w/o L_adv              3.168   0.776    0.932       13.394
w/o L_pixel-rec        3.164   0.774    0.931       12.672
w/o L_per              3.178   0.785    0.933       14.862
Full                   3.183   0.7916   0.933       12.649
</ns0:table></ns0:figure> </ns0:body> "
"Dear Juan Pedro Dominguez-Morales: We thank all the reviewers. And we apologize for typos mistakes. Thanks to all the reviewers for their time and feedback. We provide some specific responses and clarifications. Reviewer 1 (Anonymous) The paper is interesting and will be beneficial to a sizable amount of researchers and students. However the tying error even in the title of the paper ……Apperance ….. should be taken care of before submission. Such errors are also seen in the text (For example Line 120) 1. Line 78 and Lines 93-94, the two statements about the proposed method may confuse the readers. Are they the same or different contributions? 2. Lines 147-148 Can we get the 18 joint points automatically? 3. I wonder how we can know the probabilities defined in equation 11 are non-zero. The experimental results seem to be significant. 1. We checked the manuscript carefully and corrected all the misspellings. 2. The functions of the two methods are different. In order to avoid confusion, we have added introduction text to the article.(On lines 79-82 of the manuscript) 3. Through human pose estimator, we can automatically obtain the information of 18 human key points. 4. The formula is the objective function of generating countermeasure network. The formula represents the loss function of generator and discriminator. If one side is 0, the other side is 1. When the complete Nash equilibrium is reached between the generator and the discriminator, the loss of the objective function is maintained at about 0.5. To sum up, the formula is non-zero. Reviewer 2 (Eichi Takaya) The authors should check the manuscript once again by themselves. - First of all, there are spelling mistakes in the title, and the capitalization rules are not consistent. - The single-byte space after the commas and periods are not consistent. - The operators that are supposed to be Hadamard products are written as e. - Section 3.1 and 3.2 have the same titles. Perhaps 3.1 should be called 'Encoder'. The method is written in a relatively clear manner, including mathematical expressions, but it does not correspond to Figure 3. - For example, there is no mention of AdaIN, which is included in the Texture Block in the figure. - Also, the text says that the Pose Block contains GCN, but the Pose Block in the figure is not drawn in such a way as to show this. There is no mention of how the three Losses are combined. If the three are to be added together, it should be clearly stated as such. 1. We checked the manuscript carefully and corrected all the misspellings.The case rules of titles are unified. 2. Match the comma with the single byte space after the period. 3. Sorry, we didn't find a place to write the operator of Hadamard product as e. 4. We change the title of Section 3.2 to “Encoder” 5. We introduced the existence of AdaIN in the article and mark the location of GCN in the figure. 6. The three losses are to be added together,we explained it in lines 237 to 238 of the article "
Here is a paper. Please give your review comments after reading it.
267
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. The involvement of prosumers in the form of agricultural community groups has been acknowledged, and interest in it is increasing due to local food demand and quality of food. How to create a prosumer group? The definition of agro-prosumers and analysis of their behaviour, engaging new members to the existing groups, managing members and their goals are important factors to consider. Hence, to overcome this barrier and to improve the participation of prosumers, in this paper three key frameworks are presented to develop an Agro-Prosumer Community Group (APCGs) platform.</ns0:p><ns0:p>Methods A conceptual process that consist of strict multiple stages i.e. requirement analysis, design logic, theoretical fundamentals, implementation of prototype and verification, is used to build the frameworks for APCG. Different methods and approaches are used to design and develop framework's prototype. For instance, clustering algorithms are used to define and group agro-prosumer concept, an approach is developed that evaluates real-time production behaviour of new prosumers while engaging them to APCG. Finally, the goal-ranking techniques i.e. MCGP are used to build a goal management framework that effectively reaches a compromise between diverse goals of APCGs.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>Results for each framework is shown while verifying the prototype using prosumers data.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>An Agro-Prosumer Community Group addresses three key issues relevant to the development of an agro-prosumer community-based approach to manage the prosumers in local foodand carbon-sharing networks. The key contributions are 1) APCG concept, 2) Prosumer engagement framework, and 3) Goal management framework. Thus APCG platform provides a seamless structure for carbon and produce sharing network.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Prosuming, individually or as a group, is seen as a political act which is feasible, reduces unfavorable environmental changes and affects the economy by reducing centralized long-chain value in the supply chain <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Thus, prosuming in urban agriculture can be perceived as a sustainable step for the agriculture industry. With the growing interest in the supply of quality food, the urban prosumer can be seen as a leader in providing high quality, trustworthy produce. Prosumers' crop yields are not only considered high quality as generally they are home grown, but are also chemical free, grown in better soil (i.e., composting) or grown organically. Additionally, prosumer crops are more trustworthy and better quality than commercial produce as they are usually home grown from seeds or seedlings, as is the practice of the urban prosumer. Hence, an agro-prosumer has strong prospects of obtaining better value and exposure in the market by forming or joining a prosumer community group. An agro-prosumer community group (APCG) can be described as a community group network formed by 'using different agro-prosumers profile, personal motivations and unique characteristics'. 
Forming an agro-prosumer community group has a number of benefits such as it can improve economic value and offer rich socio-psychological experiences <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> to the agroprosumers by creating social relationships and self-pride, imparting new skills to members, generating knowledge, and contributing to community activities. In addition to high quality produce, agro-prosumers can generate carbon tokens depending upon the consumption of the total amount of carbon content consumed during the vegetation process, and trade it with industries. APCGs can also reduce the long supply chain work, thus improving transparency, security and sustainability.</ns0:p><ns0:p>To build an APCG network, first step is to define APCG and identify prerequisites. To achieve this, a framework is designed where agro-prosumers' profiles are assessed and analyzed to form different groups and derive each group's unique pre-requirements. These unique requirements will become the prerequisites of each group and will be utilized to classify prosumers into appropriate group. The framework utilizes agro-prosumers' production history when deciding the precondition criteria for each APCG. Furthermore, engaging new agro-prosumers is critical to make APCGs a sustainable network. Thus as a next step an APCG require a recruitment framework to add new agro-prosumer in the network. For new recruitments, it is important to evaluate the new agro-prosumers' real-time production profiles before offering them membership of their desired APCG. Thus, rather than relying on historic production behavior, it is important to use real-time production profiles, which will give a better understanding of the prosumer's commitment to supporting the APCG. Hence, we propose a framework for recruiting new agro-prosumers for an APCG. The recruitment is based on an evaluation of their real-time commitment conducted over a defined period. After initiating the community-based, produce-sharing network, one of the key requirements is to make APCGs goal-oriented. This can be done by determining the overall community objectives of the production-sharing network and, subsequently, managing diverse multiple goals of the various APCG groups.</ns0:p><ns0:p>Goal management can be challenging in a community-based network due to diverse conflicting issues such as demand constraints and cost constraints. Several situations can occur where one objective is achieved while leaving another. For instance, in order to improve carbon sequestration in soil by APCGs, organic/ecological farming ways must be practiced; however, organic farming methods yield less, which in turn affects the collective produce of the APCGs and subsequent income. Therefore, it is a requirement that a compromise be applied to the multiple goals with respect to the given constraints. Based on the above discussed factors an effective framework for the management of goals is essential. Thus another key framework is developed and termed as goal management framework. The paper presents a seamless structure to develop an agroprosumer community group network by proposing and verifying three sub frameworks i.e. 
APCG definition and prerequisites, APCG new prosumer recruitment and Goal management for APCGs.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Framework 1 APCG definition and prerequisites</ns0:head><ns0:p>The key input for the APCG's definition and pre-conditions-determining framework is agroprosumer's produce profile. Agro-prosumers' produce profiles are selected based on the suburb or postcode and types of crops grown in their garden, along with the quantity during different seasons, particularly winters and summers. Outputs derived from this framework will become the prerequisites for different APCGs. The pre-condition requirement for each member will be treated as a commitment to meet their group's prerequisites. The framework is divided into two parts as shown in Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>.Phase 1: clustering prosumer profiles and outlier detection, and Phase 2: optimizing prosumer clusters to define group's pre-conditions. Agro-prosumers' seasonal summer and winter data has been collected as an input for the first part of the framework. Kmeans algorithm has been worked out, however the objective here is to find out prosumers based on similar production behavior. Therefore, an hierarchical clustering algorithm shown in Figure <ns0:ref type='figure'>2</ns0:ref> is used to create clusters based on the homogeneity of prosumers' profiles, and to detect outliers. Homogeneity of prosumers with similar produce profiles will help in earning fair amount of incentives for all and will also support easy incentive distribution <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. Clusters are optimized and unique attributes are identified in the second part of the solution. The non-overlapping agro-prosumer clusters from the first part are optimized to achieve a feasible number of APCGs, and unique attributes are identified for each group and used as pre-requisites of APCGs.</ns0:p></ns0:div> <ns0:div><ns0:head>Phase 1 Prosumer Clustering</ns0:head><ns0:p>The first phase includes clustering of the prosumers' profiles and detecting any outliers using a hierarchical clustering method. Prosumers' seasonal profiles for two seasons (summer and winter) are taken as an input for this phase. There are three steps in this phase: creation of regional groups, building clusters, and outlier detection.</ns0:p><ns0:p>Step I Creating regional groups In this phase, agro-prosumer's postcode is taken as an input. The prosumers are partitioned into groups based on their postcodes within a certain region. This will mean that the delivery of prosumer produce can be done without the need for long-distance transportation. The output of this step will provide GL-clusters (geographical location based-clusters) based on postcodes and the neighborhood zone.</ns0:p><ns0:p>Step II Outlier detection In order to deal with outliers, a threshold is set: after calculating the distance between existing clusters, if the shortest distance is not further than the threshold, we assign the dataset to its closest cluster. If the shortest distance exceeds the threshold, this means that the cluster could belong to a minor group. The objects in the minor group are those that did not belong to any major groups. Objects in minor groups are data points, not outliers as they do not belong to any major groups. 
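To make the threshold rule just described concrete, the short sketch below assigns each profile either to its closest existing cluster or to a minor group when no cluster is close enough. The Euclidean distance, the 50-unit threshold and the toy seasonal profiles are illustrative assumptions, not values prescribed by the framework.

```python
import numpy as np

def assign_with_threshold(profiles, centroids, threshold):
    """Assign each prosumer profile to its closest existing cluster, or to the
    minor group when the closest centroid is farther than the distance threshold."""
    major, minor = {}, []
    for idx, profile in enumerate(profiles):
        dists = np.linalg.norm(centroids - profile, axis=1)  # distance to each cluster centroid
        nearest = int(np.argmin(dists))
        if dists[nearest] <= threshold:
            major.setdefault(nearest, []).append(idx)        # close enough: join the major cluster
        else:
            minor.append(idx)                                # too far: set aside as a minor-group object
    return major, minor

# toy (summer, winter) surplus quantities and two existing cluster centroids
profiles  = np.array([[120.0, 80.0], [125.0, 78.0], [60.0, 40.0], [400.0, 300.0]])
centroids = np.array([[122.0, 79.0], [58.0, 42.0]])
major, minor = assign_with_threshold(profiles, centroids, threshold=50.0)
print(major, minor)   # the (400, 300) profile falls outside both clusters
```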
Further clustering of objects in minor groups can be done for future analysis.</ns0:p><ns0:p>Step III Building clusters For each GL-cluster obtained from step one and after removing outliers, the corresponding agroprosumer profiles are considered in the next step, and clusters are formed based upon prosumers' production history. The hierarchical clustering method is used to decide the number of clusters. Initially, each prosumer profile is placed in a unique cluster. For each pair of clusters, some value of dissimilarity or distance is computed. In this case, minimum variance, i.e., Ward's criterion, is used to minimize the total within-cluster variance and find the pair of clusters that leads to minimum increase in total within-cluster variance after merging. In every step, the clusters with the minimum variance in the current clustering are merged until the whole dataset forms a single cluster. Hierarchical clustering helps in identifying groups in the dataset. Thus, the output from this step will be number of prosumer clusters based on their production similarity. Phase 2 Prosumer Cluster Optimization and forming pre-requisites: This phase involves the optimization of prosumer groups based on the number of prosumers in each group and their production amount. Firstly the clusters are optimized and pre-requisites for each cluster-group is formed. The optimization steps and pre-requisites are further illustrated in this section.</ns0:p><ns0:p>Step I Optimization of prosumer clusters Agro-prosumer cluster-groups created by using hierarchical clustering are optimized to produce sufficient number of clusters that will then represent different agro-prosumer community groups. The number of clusters produced by optimization, depends on the variation of production quantity. If the variation is large, too many clusters could be formed, which are not feasible to manage. Thus, this stage involves optimizing the clusters into a feasible number of APCGs by merging small clusters into one or splitting large clusters into smaller ones to obtain a feasible number of APCGs to satisfy market requirements. In order to determine the ideal number of clusters, firstly, suburb requirements are analyzed and the expectations of relevant APCGs are derived. To optimize APCGs; Let X be the population of suburb ABC and C is the per capita consumption of lemons. Assume that the APCG framework targets a minimum 1% of lemon market for a suburb ABC. Then the requirement (R expected ) of lemons for suburb ABC using APCGs can be calculated with R expected = X*C*0.01 Let L be the number of clusters formed using the clustering method, and R L represents every APCG's goal. R L = R expected /L After determining the suburb's requirements, next step optimizes the clusters by evaluating number of agro-prosumers present in each APCG (as shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). Let say P l and P h respectively be the lowest and highest number of prosumers expected in each group. Let R L be the minimum amount of production expected from each APCG. Prosumers count (P num ) and the production quantity (R obtained ) from a specific prosumer cluster is shown in equations 1 and 2. 
P l &lt;P num &lt;P h 1 R obtained &gt;R L 2 If the production is less than the expected amount (R obtained &lt;R L ), or the number of prosumers is lower than the ideal number of prosumers (P l &gt;P num ), that agro-prosumer cluster is merged with the closest prosumer cluster, and the same process continues until the prosumer cluster can meet the total production requirement (R obtained &gt;R L ) and the number of prosumers (P l &lt;P num &lt;P h ) defined for the APCG. Now, if too many prosumers form an agro-prosumer clusters, the clusters are further break down into small size clusters consisting of an most favourable number of prosumers and meeting the production goals i.e. R obtained &gt;R L and P l &lt;P num &lt;P h . The final output of optimization will result in the optimised prosumer clusters, which are then represented as APCGs. Now these APCGs are analysed to identify the unique production characteristics or pattern of each group which will be denoted as the pre-requisite of the APCGs.</ns0:p></ns0:div> <ns0:div><ns0:head>Pre-requisite formation</ns0:head><ns0:p>Introduction to APCGs includes formation of unique entry requirements for each group. The two key input, as discussed in the previous sections, to determine the prosumers' adherence are the 'lower threshold' (Lt) and the 'upper threshold' (Ut). The defined inputs used as pre-requisites of each APCGs will be:</ns0:p><ns0:formula xml:id='formula_0'>&#61623; Lower threshold: L t &#61623; Upper threshold: U t</ns0:formula></ns0:div> <ns0:div><ns0:head>Framework 2: Agro-Prosumers Recruitment Framework</ns0:head><ns0:p>A new recruitment framework is designed to evaluate real time behavior of new agro-prosumers and allocate them in specific APCG groups. The reason for designing this approach is to encourage participation of non-farmers and new gardeners, which not only help them to estimate production details, outline incentive benefits etc., but will also ease off the management of APCG in the long run. Additionally this framework requirement is new and won't be justified to use CSA methods which basically works on partnership basis and useful for a large piece of land <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref><ns0:ref type='bibr' target='#b3'>[4]</ns0:ref><ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. An overview of the framework is shown in Figure <ns0:ref type='figure'>4</ns0:ref>. New agro-prosumers who are interested in joining the APCGs, and their real-time behavior profiles, are collected as input for this framework. We term these agro-prosumers 'prospect agroprosumers' who are assumed to be new to the community sharing network; thus, because there are no previous production profiles, real-time production needs to be determined. The final outcome Manuscript to be reviewed Computer Science of this framework is the recruitment of prospect agro-prosumer to suitable APCGs. This stage is further divided into four components, which are explained below. The framework has four components 1.</ns0:p><ns0:p>An approach to evaluate agro-prosumers' production performance;</ns0:p><ns0:p>2.</ns0:p><ns0:p>Agro-prosumers' transaction assessment during the evaluation period</ns0:p><ns0:p>3.</ns0:p><ns0:p>An approach to analyse agro-prosumers' stability; and 4. Agro-prosumers recruitment to a specific APCG after the evaluation period. The varying nature of agro-prosumers' production behaviour is evaluated using the above approaches, and allocates them to a temporary 'variable APCG'. 
Later on, the prosumers' overall behaviour is stored and evaluated prior to recruitment to a specific APCG, i.e., to one of the final APCGs. The requirements for the proposed solution are covered via four components (listed above) discussed in detail below.</ns0:p></ns0:div> <ns0:div><ns0:head>Approach to evaluate prosumers' production performance</ns0:head><ns0:p>Finding an approach to estimate agro-prosumers' performance is the first component of the evaluation technique, which helps in understanding the evaluation period activities and evaluation inputs.</ns0:p></ns0:div> <ns0:div><ns0:head>Agro-prosumer evaluation measures</ns0:head><ns0:p>As discussed previously, the 'evaluation period' is an established period of consecutive seasons during which the production behavior of new agro-prosumers who are interested in joining an APCG, is evaluated. The evaluation period is divided into two seasons per year in Australia: winter (i.e., March-August) and summer (i.e., September-February). These winter and summer seasons show non-overlapping, mutually exclusive time periods and are assigned with a production transaction between agro-prosumer and the APCG module using production value. Agro-prosumers' production data is generated using Australian national average. Production data such as family size, farming methods (organic, inorganic), lemon variety (3 major lemon variety has been used) and number of trees (1-10 has been randomly used) and their respective ages (age of a tree is assumed from 5-100 years), are collected as input to evaluate their consumption pattern and production performance for two season or annually. Agro-prosumers' surplus production is considered as the final value for one season/year. Thus, prosumers' performance is estimated using that final value, and is evaluated for each season. Next section explains the approach used to determine the prosumers' performance for each season during the evaluation period.</ns0:p></ns0:div> <ns0:div><ns0:head>The proposed approach</ns0:head><ns0:p>This approach requires two inputs: the input from the agro-prosumer and the input from the APCG module as shown in Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>. Inputs from the agro-prosumer include production summary for a season and the prosumer's preferred APCG. The APCG module's input comprises the prerequisites of the available APCGs. A probabilistic approach is used here to evaluate agro-prosumers' production performance based on the pre-condition criteria of their preferred APCGs. Results of this approach are the 'performance indices' and variable APCG of the agro-prosumer for each season. Performance indices are used to anticipate the level of success and/or failure of an agro-prosumer in meeting the pre-condition criteria of his/her preferred APCGs. To utilize it, different levels of success and failure are represented using a four-point scale as shown in Table <ns0:ref type='table'>1</ns0:ref>. In fact, each performance index shows different value or success rate of performance in the production behavior. The performance scale ranges from 0 to 3, where 3 represents the complete success or match, and the minimum success rate is 80% for meeting the pre-condition criteria. If the success rate is less than 79%, it will be considered as a 'failure'. The performance scale used here has single-integer values. It is difficult to use extreme values, i.e., only high or low, to measure prosumer behavior. 
Hence, in order to determine and model the performance of prosumers more accurately, various levels of performance should be identified first. Moreover, to accurately determine prosumer performance, the various levels of performance must be identified. A performance score with a value from 0 to 3 will help to indicate prosumers' performance for APCGs development.</ns0:p><ns0:p>&#61623; Complete success: The highest point on the performance Score is 3, which indicates 'Complete success'. This score suggests 100% success rate in interacting with the prosumers' production-sharing process. This level of performance according to the PS suggests that the prosumer is strongly suited to his preferred APCG and meets the desired pre-condition criteria.</ns0:p><ns0:p>&#61623; Intermediate success: This level denotes 90-99% of success rate in interacting with prosumers' production behavior. Performance Score 2 shows that it is the 'medium success' level. This score suggests that in meeting the prosumers' preferred APCG requirements, prosumers' performance reliability is good.</ns0:p><ns0:p>&#61623; Entry success: Performance score 1 indicates 'Entry success'. This score suggests 80-89% success rate while satisfying the pre-requirements of the preferred APCG's. This performance index score suggests that the prosumer is slightly reliable in meeting the desired pre-condition criteria of his/her preferred APCG.</ns0:p><ns0:p>&#61623; Failure: 0 reflects the lowest score in performance, indicating 'failure'. This level depicts 0-79% rate of success in fulfilling the pre-requirements. Thus, this level shows that the prosumer's performance is not reliable enough to meet the pre-condition criteria for the APCG. Hence, the prosumer with this index could be matched with other APCG rather than the preferred one. The mathematical expression of performance indices is given in equation 3 For a season (j) of the evaluation period, the rate of success of the prosumer (P ij ) being allocated to prefer variable APCG (C p ): Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where P ij is an i th agro-prosumer's performance in the j th season, C p is the preferred APCG, E ij is the real time production commitment of i th agro-prosumer and L p is the production threshold of agro-prosumers preferred APCG.</ns0:p></ns0:div> <ns0:div><ns0:head>Agro-prosumers' transaction assessment during the evaluation period</ns0:head><ns0:p>For ongoing assessment during the evaluation period, agro-prosumer is aimed to assign into his chosen APCG for each season. The evaluation process is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. The key steps of the process are as follows: the prospect prosumer is asked to submit records of production in real time for 'n' seasons during the evaluation period. For each season, dynamic production amount is compared with the minimum threshold (E th ), which is the minimum requirement of any APCG. If the prosumers' production is equal or greater than the E th , the prosumer is viewed to be an eligible prosumer. Next, if a prospect agro-prosumer receives 'eligible prosumer' status during his/her first season, she/he will be promoted to the next season and then to following seasons. However, if she/he fails to meet the eligible agro-prosumer requirement, in the first production season, the evaluation period will be extended with more seasons. 
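As a concrete illustration of the seasonal scoring described above, the sketch below computes the rate of success of equation 3 (100% when the real-time commitment E_ij reaches the lower threshold L_p of the preferred APCG, and E_ij/L_p otherwise) and maps it onto the four-point performance scale. The threshold of 100 units and the sample commitments are assumed values used only for illustration.

```python
def success_rate(e_ij, l_p):
    """Equation 3: 100% when the commitment E_ij meets the preferred APCG's
    lower threshold L_p, otherwise the fraction E_ij / L_p."""
    return 1.0 if e_ij >= l_p else e_ij / l_p

def performance_index(rate):
    """Map a seasonal success rate onto the four-point performance scale."""
    if rate >= 1.0:
        return 3      # complete success (100%)
    if rate >= 0.90:
        return 2      # intermediate success (90-99%)
    if rate >= 0.80:
        return 1      # entry success (80-89%)
    return 0          # failure (below 80%)

l_p = 100.0                                   # assumed lower threshold of the preferred APCG
for e_ij in (120.0, 95.0, 82.0, 60.0):        # assumed seasonal commitments
    rate = success_rate(e_ij, l_p)
    print(e_ij, round(rate, 2), performance_index(rate))
```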
However, if the new agro-prosumer is not able to match the minimum threshold (E th ), then the prosumer's evaluation period is extended by another season and the prosumer remains under evaluation until succeeded. On the completion of the evaluation period, prospect agro-prosumers' stability will be analyzed using stability index, which is discussed next.</ns0:p></ns0:div> <ns0:div><ns0:head>An approach developed to analyze agro-prosumer stability</ns0:head><ns0:p>The stability of an agro-prosumers' reliability is estimated for his/her preferred APCG, as well as for those assigned throughout the evaluation period. Figure <ns0:ref type='figure'>7</ns0:ref> shows a process to obtain prosumers' stability for agro-prosumers' chosen APCG. During evaluation period, for each season, agro-prosumers' performance index values are taken as an input along with their temporary APCGs. Equations 4 and 5 formulates a mathematical equation for the approach. SI represents the stability index which is used to determine the feasibility that prosumers will remain in their preferred APCG. The output for I index is between 0 and 3, and a higher I shows high chances of prosumers remaining in their preferred APCG:</ns0:p><ns0:formula xml:id='formula_1'>4 &#119868; &#119901;&#119894; = &#8721; &#119899;&#119904; &#119895; = 1 &#119875;&#119883; &#119894;&#119895;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#119899;&#119904;</ns0:head><ns0:p>Above I pi is the stability index of the i th prosumer with respect to chosen APCG (C p ), PX ij is an i th prosumers' performance index in the j th season and ns is the number of seasons where the prosumer is assigned to his/her chosen APCG. To determine most suitable APCG for an agro-prosumer, rate of engagement to a specific APCG is calculated using equation 5. For example, if the agroprosumers' rate is higher for APCG1 than other APCGs, than the chosen APCG1 is seen as the most favorable APCG for that prosumer's engagement. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where P i is the i th prosumer, APCG Fr is the r th temporary APCG, count of (APCG Fr ) shows the total number of times the prosumer is selected to r th temporary APCG during the evaluation period and ns is the number of seasons. The next section discusses agro-prosumer engagement to the permanent APCG based on the previously-described method. Agro-prosumers engagement to the permanent APCG after the valuation period. Agro-prosumer engagement to the most suitable APCG is analyzed in this step. The overall performance of prospect agro-prosumers overall performance is assessed at the end of the evaluation period. Figure <ns0:ref type='figure'>8</ns0:ref> is a flowchart showing this process. As discussed in the previous section, the Stability Index, based on an agro-prosumer's performance index, is calculated throughout the evaluation period. Additionally, agro-prosumers' rate of staying in temporary APCGs during the entire evaluation period is assessed. Equation 6 is utilized to identify the combined value of the agro-prosumer being allocated to the permanent APCG. 
The APCG which shows the highest joined index is chosen as that prosumer's final permanent APCG.</ns0:p></ns0:div> <ns0:div><ns0:head>6</ns0:head><ns0:p>&#119868;&#119875;&#119903;(&#119875; &#119894; &#8712; &#119860;&#119875;&#119862;&#119866; &#119865;&#119903; ) = &#119868; &#119901;&#119894; &#215; &#119877;&#119886;&#119905;&#119890;(&#119875; &#119894; &#8712; &#119860;&#119875;&#119862;&#119866; &#119865;&#119903; )</ns0:p></ns0:div> <ns0:div><ns0:head>Framework 3: Goal Management Framework</ns0:head><ns0:p>The input for the framework includes diverse goals for agro-prosumer community groups. The solution framework consists of a goal management component. The outcome of the goal management phase is an optimized set of overall goals for the community-based, harvest-sharing network. The processes involved in goal management are shown as an overview of the framework in Figure <ns0:ref type='figure'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Goal management</ns0:head><ns0:p>The goal management stage is responsible to attain ideal goals structure out of overall goals. The purpose involves solving diverse conflicting goals in the APCG to obtain best solution in terms of goals priority. The feature of MCGP <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> and an approach utilised in smart grid goal management <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> is referred to design best possible solution for conflicting goals. To achieve this, each and every identified objective is attached with a rank based on their priority. High rank objectives are treated as goals to work out first, and therefore attempts are made to find a solution which is close to the pre-ranking set of goals. Goal programming minimises the deviation between the theoretical goals and realistic achievements. These deviation can be both positive and negative, thus an objective function is used to minimise the deviations based on the relative importance of the goals. Various areas has utilised goal programming model benefits such as environment, energy, smart grid, academic and health planning <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, and shows success in solving diverse conflicting goals. In this framework, we adapt MCGP techniques for our framework. Figure <ns0:ref type='figure' target='#fig_10'>10</ns0:ref> presents the algorithm for the goal programming model, where the parameters and equations are explained in the following section. The model has six parts: Income and Incentive objective (I3): The 'income and incentive objective' focus is to earn income and incentive from selling surplus production of APCGs to vegetable/fruit buyers and trading carbon tokens with industries.</ns0:p><ns0:p>V.</ns0:p><ns0:p>Maintenance cost reduction objective (M4): This goal refers to reducing the cost of APCGs maintenance over time. For example, 'maintenance cost' may represent the one time cost to build APCG platform and maintaining the database and transaction records etc. Cost related to collection and distribution of products/vegetation from members, to stores etc.</ns0:p><ns0:p>Additionally providing benefits to the members may require a payment gateway which may incur cost. 
VI.</ns0:p><ns0:p>Stable APCG objective (S5): The increase in the number of active APCG members, that is, those who dynamically participate in the production-sharing or carbon-sharing network, is a 'stability objective'.</ns0:p></ns0:div> <ns0:div><ns0:head>Part 2: Summary of variables</ns0:head><ns0:p>In order to use MCGP all variables and their deviations are identified. For APCG the idea is to identify variables and summarize their deviations to achieve ideal set of goals. The production amount and carbon tokens generated by each group will be counted as variables and maximizing/minimizing the value is considered as deviation.</ns0:p></ns0:div> <ns0:div><ns0:head>Part 3: objective classification</ns0:head><ns0:p>The objectives are classified as definite and flexible constraints based on the previous objectives (part 1). At this point, the 'definite goals' are outlined as mandatory requirement on the variables, whereas the 'flexible goals' are outlined as the objectives nice to have but not necessary <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> . The classification of goals are as follows: I. Definite goals: Maximum carbon capture objective (C). For example, the APCG's base is environmental sustainability. Thus, ecological methods must be used for APCG production.</ns0:p><ns0:p>II. Flexible goals: Goals such as local food security (F1), extended community and customer demand objective (L2), income &amp; incentive objective (I3), maintenance cost objective (M4), and stability of APCG (S5). Refinement of these goals helps in achieving the ideal goal set, which would benefit APCG. The variables summaries is defined as: maximum C1, minimum F1, minimum L2, minimum I3, maximum M4, and minimum S5; these are termed 'expected values' in the goal programming model.</ns0:p></ns0:div> <ns0:div><ns0:head>Part 4: Objective ranking</ns0:head><ns0:p>To make sure important goals met first, the priorities of the goals have been assigned. This step discusses ranking out the goals by assigning a weight (or rank) to each goal. As mentioned earlier, goals can be mutually exclusive; i.e. one goal may be achievable at the expense of another. This makes it critically important to assign weights to the goals, so that least important goals are only met after the important ones. Keeping local network food security (F1) as priority, total goal set can be determined as 4!, thus in total 24 structures will be formed such as F1L2I3M4S5, F1L2M4S5I3&#8230; F1S5M4I3L2. Part 5: Goal equation formation Mathematical relations are developed in this section for the definite and flexible goals. Equations are as follows-I. Carbon capture Objective (C): Organic farming methods should be used for APCG produce to increase the carbon token value. II. Food security local demand objective (GC1): Satisfying food security of APCG should be focused. 
Thus, the purpose of this goal is to minimise the negative deviation from the quantity of surplus production of each APCG.</ns0:p><ns0:p>Let A pi E i be the extra production produced by i th APCG, k 0 and l 0 be negative and positive variance respectively, and t be the number of APCGs; then the equation for food security local demand objective (F1) would be:</ns0:p><ns0:formula xml:id='formula_2'>&#119860; &#119901;&#119894; &#215; &#119864; &#119894; &#8805; 0;&#8704;&#119894; &#8804; &#119905; 7 &#119860; &#119901;&#119894; &#215; &#119864; &#119894; + &#119896; 0 -&#119897; 0 = 0;&#8704;&#119894; &#8804; &#119905;</ns0:formula><ns0:p>Considering 4 APCG groups for this framework, 4 equations will be formed (m=4) for each group;</ns0:p><ns0:formula xml:id='formula_3'>&#8230; &#119873; &#119901;1 &#215; &#119864; 1 + &#119896; 1 -&#119897; 1 = 0; &#119873; &#119901;4 &#215; &#119864; 4 + &#119896; 4 -&#119897; 4 = 0;</ns0:formula></ns0:div> <ns0:div><ns0:head>III. Local community demand objective (L2):</ns0:head><ns0:p>The purpose of L2 is to minimise the negative variance of the total surplus production of all APCG. Assuming requirement from external supermarket is R. And positive and negative variance be s and q, respectively; then the equation will be formed as Income &amp; Incentive objective (I3): Obtaining higher income is another requirement of the framework. The minimum income expectation of the ith APCG be Ii, and positive and negative variance be q1 and s1 respectively; then the equation for this objective will be minimizing negative variance</ns0:p><ns0:formula xml:id='formula_4'>&#8721; &#119898; &#119894; = 1 &#119864; &#119894; &#215; &#119860; &#119901;&#119894; &#8805; &#119877;</ns0:formula><ns0:formula xml:id='formula_5'>&#8721; &#119899; &#119894; = 1 &#119868; &#119894; &#215; &#119864; &#119894; &#215; &#119860; &#119901;&#119894; &#8805; &#119868; 9 &#8721; &#119899; &#119894; = 1 &#119868; &#119894; &#215; &#119864; &#119894; &#215; &#119860; &#119901;&#119894; + &#119902;1 -&#119904;1 = &#119868; V.</ns0:formula><ns0:p>Maintenance cost objective (M4): Let say the maintenance cost allowances be M, and the positive and negative variance be q2 and s2, respectively; equation for the maintenance cost objective (GC4) is obtained with Equation <ns0:ref type='formula'>5</ns0:ref>.6, where Ci is the coefficient, represents the cost rate of ith APCG.</ns0:p><ns0:formula xml:id='formula_6'>&#8721; &#119899; &#119894; = 1 &#119862; &#119894; &#215; &#119864; &#119894; &#215; &#119860; &#119901;&#119894; &#8804; &#119872; 10 &#8721; &#119899; &#119894; = 1 &#119862; &#119894; &#215; &#119864; &#119894; &#215; &#119860; &#119901;&#119894; + &#119902;2 -&#119904;2 = &#119872; VI.</ns0:formula><ns0:p>Sustainability objective (GC5): Let P be the minimum number of prosumers who are participating in APCG, and positive and negative variance be q3 and s3, respectively; then, the formula for the sustainability objective (G5) would be:</ns0:p><ns0:formula xml:id='formula_7'>&#8721; &#119899; &#119894; = 1 &#119860; &#119901;&#119894; &#8805; &#119875; 11 &#8721; &#119899; &#119894; = 1 &#119860; &#119901;&#119894; + &#119902; 3 -&#119904; 3 = &#119875;</ns0:formula></ns0:div> <ns0:div><ns0:head>Part 6 Development of objective functions</ns0:head><ns0:p>Finally the objective function of each goal is formulated and, best possible solution is formed by minimizing the deviations from each goal. The objective functions here are the [(k1, k2, k3, k4), q, q 1 , s 2 , q 3 ]. 
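A minimal, single-priority sketch of this deviation-minimisation step is given below. It keeps only the demand (L2), income (I3) and maintenance-cost (M4) goals, so the k and q3 deviation terms for food security and stability are omitted, and every coefficient, target and capacity is an assumed value rather than a figure from the simulation; the full pre-emptive priority structure is handled by the partitioning procedure described next.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x1..x4 = surplus supplied by APCG1..4, then the deviation pairs
# (q, s) for demand, (q1, s1) for income and (q2, s2) for maintenance cost.
n_apcg = 4
income_rate = np.array([2.0, 2.2, 1.8, 2.5])   # assumed income per unit for each APCG
cost_rate   = np.array([0.4, 0.5, 0.3, 0.6])   # assumed handling cost per unit
R, I, M = 1500.0, 3000.0, 700.0                # assumed demand, income and cost targets

# Each goal is written as: achieved value + negative deviation - positive deviation = target
A_eq = np.zeros((3, n_apcg + 6))
A_eq[0, :n_apcg] = 1.0;          A_eq[0, 4:6]  = [1.0, -1.0]   # demand goal (L2)
A_eq[1, :n_apcg] = income_rate;  A_eq[1, 6:8]  = [1.0, -1.0]   # income goal (I3)
A_eq[2, :n_apcg] = cost_rate;    A_eq[2, 8:10] = [1.0, -1.0]   # maintenance goal (M4)
b_eq = [R, I, M]

# Minimise the unwanted deviations: under-supply q, income shortfall q1, cost overrun s2
c = np.zeros(n_apcg + 6)
c[4], c[6], c[9] = 1.0, 1.0, 1.0

bounds = [(0, 500)] * n_apcg + [(0, None)] * 6   # assumed per-APCG capacity of 500 units
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n_apcg], res.fun)                   # supplied quantities and total penalised deviation
```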
Partitioning algorithm is used to solve this linear goal programming problem.</ns0:p></ns0:div> <ns0:div><ns0:head>Goal programming solution</ns0:head><ns0:p>As discussed previously, 24 priority goal structure sets are identified along with different ranking order. The partitioning algorithm is utilized as a solution here, in order to solve the linear goal programming problem <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. The solution working principle implies on the definition of priority structures which implies that higher-order goals must be optimised before lower-order goals are even considered. The solution procedure is shown in Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> which consists of solving a series of linear programming sub-problems by using the solution of the higher-priority problem solved prior to the lower-priority problem. All the sub-problems assigned to a higher priority goals are solved first using the partitioning algorithm. The ideal tableau for this sub-problem is then examined for alternative ideal solutions. If none exists, then the present solution is ideal for the original problem with respect to all the priorities. The algorithm then substitutes the values of the parameters for the flexible goals of the lower priorities to calculate their satisfaction levels, and the problem is solved. However, if alternative ideal solutions do exist, the next set of flexible goal and their objective function terms are added to the problem. This brings the algorithm to the next sub-problem in the series, and the optimisation resumes. The algorithm continues in this manner until no alternative ideal exists for one of the subproblems or until all priorities have been included in the optimisation <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b7'>8]</ns0:ref>.</ns0:p><ns0:p>Goal management problem provides the best solution by comparing the achievable set of goals when compared to the predetermined goals. Additionally the identification of the necessary alterations to parameters are explained well in order to achieve all the goals in different priority structures.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>In this section, simulation parameters are illustrated for the verification of the frameworks.</ns0:p><ns0:p>1) Framework 1 APCG definition and prerequisites a) Simulation: As shown in Table <ns0:ref type='table'>2</ns0:ref>, the key parameters for the verification are the prosumer production dataset. This framework is proposed using one type of crop only: lemons. It is challenging to obtain a dataset for lemon yields because prosumer community group data is not publicly available. Therefore, prosumer production profiles are generated using minimum and maximum lemon production and consumption. In the sub-section below, we discuss the generation of prosumer profile data.</ns0:p><ns0:p>In this section, prosumer profiles are generated using the Australian standard production and consumption pattern (as shown in Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p><ns0:p>Country/region: In order to generate prosumer profiles, production parameters are analyzed particularly for the State of Victoria, Australia. For this study, prosumers residing in Victoria are used only to generate a sample data set. Therefore, Victorian suburban postcodes are randomly generated for prosumers. The average residential block of land is utilized to generate land sizes across Victoria. 
For each postcode, latitude and longitude values are determined in order to build prosumer community groups that are in close proximity.</ns0:p><ns0:p>Vegetation/fruit: Lemon trees generally produce the first crop after three years, and reach maturity when they are about five years old. Hence, the age of lemon trees and the variety are considered when estimating the minimum and maximum number of lemons produced during harvest season, and assessing the amount of carbon absorption. For this study, we consider three of the most common varieties: Eureka, Meyer and Lisbon.</ns0:p><ns0:p>Farming method: Organic and inorganic methods affect the production by 10-30%. Organic methods that involve composting, no tilling and no chemical fertilizers can reduce the quantity produced by 20-30%. Thus, this input is also considered when generating the dataset.</ns0:p><ns0:p>Lemon Consumption Rate: For prosumers, it is important to estimate their family consumption and calculate the surplus production that can be shared with the community or market. To do so, the per capita consumption of lemons is estimated and average family size is determined.</ns0:p><ns0:p>Finally, prosumer consumption is calculated and averaged out to obtain the lowest production and highest production rates.</ns0:p><ns0:p>Lemon Production Rate: As a lemon tree ages, its yield increases. When it reaches maturity after five years or so, it can produce an average of ~1500 lemons. The total amount produced also depends on whether organic or inorganic farming methods have been used. Therefore, the farming method used and the age of the lemon tree are combined to estimate the average production for a season or a whole year. Finally, the estimated average production amount is Firstly, the agro-prosumer profiles are collected and the dataset is prepared and checked for data quality. For instance, the production and consumption of agro-prosumers are analyzed and if the maximum production share is less than 50 for a season, this profile is discarded. For this framework, 300 prosumer profiles were obtained as a sample, of which five were discarded as their HYC was less than 50.</ns0:p><ns0:p>Next, the dataset consisting of prosumer profiles is partitioned according to suburb or municipal boundaries, and irrelevant profiles are removed. Of the 300 prosumer profiles, 87 prosumers belong to 'G-206 clusters' and 200 prosumers belong to 'G-207 clusters'. The remaining eight profiles are kept in a small extra cluster as outliers.</ns0:p><ns0:p>The resulting clusters, G-206 and G-207, are obtained after removing the outliers. These clusters are further partitioned into different prosumer groups based on their production rate using the hierarchical clustering method described in section 3.5. For G-206, hierarchical clustering resulted in four clusters. Figure <ns0:ref type='figure' target='#fig_10'>13</ns0:ref> illustrates the number of prosumers allocated to G-206 clusters where c1, c2, c3 and c4 denote four cluster groups produced by the hierarchical method. The same hierarchical clustering is done for the G-207 cluster, which resulted in eight clusters: c1 to c8 (Figure <ns0:ref type='figure' target='#fig_10'>14</ns0:ref>). However, as shown in Figures <ns0:ref type='figure' target='#fig_10'>13 and 14</ns0:ref>, some clusters have a very large number of prosumers; for instance, there are more than 30 agro-prosumers in c3 of G-206, and nearly 60 in c1 of G-207. 
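For readers who want to reproduce this style of verification, a small profile-generation sketch is shown below. It follows the inputs listed above (number of trees, tree age, a mature-tree yield of roughly 1,500 lemons, an organic-farming yield reduction, household consumption, a summer/winter split, and discarding profiles below the 50-unit cut-off), but every numeric range, the 60-lemon per-capita consumption, the 60/40 seasonal split and the two stand-in postcodes are illustrative assumptions rather than the exact distributions used for the data set. Profiles generated this way can then be grouped by postcode and clustered with Ward-linkage hierarchical clustering as in the verification above.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_profile():
    """One synthetic agro-prosumer profile; all numeric ranges are assumptions."""
    n_trees  = rng.integers(1, 11)                         # 1-10 lemon trees
    tree_age = rng.integers(1, 101, size=n_trees)          # tree ages in years
    per_tree = np.where(tree_age >= 5, 1500,               # mature tree: ~1500 lemons
               np.where(tree_age >= 3, 200, 0))            # young tree: first small crops
    organic  = rng.random() < 0.5                          # farming method
    total    = per_tree.sum() * (0.75 if organic else 1.0) # assume ~25% lower organic yield
    family   = rng.integers(1, 6)                          # household size
    surplus  = max(total - family * 60, 0)                 # assumed per-capita consumption of 60
    return {"postcode": int(rng.choice([3206, 3207])),     # stand-ins for the G-206/G-207 regions
            "summer": 0.6 * surplus, "winter": 0.4 * surplus}

profiles = [synthetic_profile() for _ in range(300)]
profiles = [p for p in profiles if max(p["summer"], p["winter"]) >= 50]  # quality cut-off
print(len(profiles), profiles[0])
```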
APCGs need to have a reasonable number of members in each cluster: small clusters can cause inefficiency or overheads, and large clusters can overproduce and cause storage problems or damage (such as infections) to the produce. Hence, in this scenario, the optimization of the clusters by splitting the large clusters is done in order to ensure an appropriate number of members.</ns0:p><ns0:p>In addition, Figures <ns0:ref type='figure' target='#fig_10'>13 and 14</ns0:ref> show clusters which are too small where the number of agroprosumers is less than or little more than ten. For example, cluster c2 in Figure <ns0:ref type='figure' target='#fig_10'>13</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We optimize the originally obtained agro-prosumer clusters into an optimal number of APCGs in order to reach the maximum and minimum number of members expected in each APCG, and the minimum amount of production from each APCG. For G-206, we divide the large clusters into two APCGs by splitting the production quantity further down (we assume 10 prosumers min. and 40 prosumers max.) in each APCG, and each APCG collectively produces quantity (at least---). These finalized clusters are illustrated below in Figure <ns0:ref type='figure' target='#fig_10'>15</ns0:ref> for G-206 clusters. Similarly finalized clusters are produce for G-207. Tables <ns0:ref type='table'>4 and 5</ns0:ref> illustrate the numerical distribution of prosumers into APCGs for G-206 and G-207 respectively. Using the distribution, similar patterns can be used to define and characterize the APCGs. Next, the pre-condition step is used to characterize the APCGs' entry requirements. Table <ns0:ref type='table'>6</ns0:ref> combines the average production and summarizes the pre-condition criteria for different APCGs during a season. The pre-condition criteria are provided to any interested prosumers to give them a better understanding of the entry requirements for a community-based, produce-sharing network.</ns0:p><ns0:p>2) Framework 2 Agro-prosumer recruitment framework a) Simulation: For verification and validation of the agro-prosumer recruitment framework, the solution framework is simulated using MATLAB and Excel. The setting here is a basic set-up for the examination of the proposed framework. To verify the proposed algorithm, 50 agroprosumers production profiles were generated, assuming that these 50 agro-prosumers have shown interest in joining APCGs. For dataset generation, production behavior along with consumption patterns from framework 1 are used. Data is obtained for summer and winter seasons for four APCGs that are defined and characterized for framework 1. Four seasons are used for the evaluation period: two summers and two winters. Thus, a prosumer is evaluated over a two-year period.</ns0:p><ns0:p>The simulation parameters for new agro-prosumer framework are listed in Table <ns0:ref type='table'>7</ns0:ref>.</ns0:p><ns0:p>Eligible agro-prosumers are identified during the evaluation conducted after each season of the evaluation period. Only those agro-prosumers who satisfy the 'eligible production threshold' in the first season can proceed to the next season. Also, eligible agro-prosumers choose their preferred APCG. The assumption here is that registered users cannot change their selection of preferred APCG until the end of the evaluation period; thus, the preferred APCG remains fixed for four seasons. 
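To show how the four-season evaluation feeds the final allocation, the sketch below applies a simplified reading of equations 4-6: the stability index is taken as the mean performance score over the evaluation period, the engagement rate as the fraction of seasons spent in each temporary APCG, and their product selects the permanent APCG. The per-season scores and temporary assignments are invented for illustration.

```python
from collections import Counter

def allocate_permanent_apcg(seasonal_scores, seasonal_apcgs):
    """Combine the stability index (mean performance score, equation 4) with the
    engagement rate of each temporary APCG (equation 5) and return the APCG with
    the highest combined value (equation 6)."""
    ns = len(seasonal_scores)
    stability = sum(seasonal_scores) / ns
    combined = {apcg: stability * (count / ns)
                for apcg, count in Counter(seasonal_apcgs).items()}
    return max(combined, key=combined.get), combined

# a four-season evaluation period (two summers, two winters) for one prospect prosumer
scores = [3, 2, 3, 1]                            # per-season performance indices
apcgs  = ["APCG1", "APCG1", "APCG2", "APCG1"]    # temporary APCG assigned each season
best, detail = allocate_permanent_apcg(scores, apcgs)
print(best, detail)                              # APCG1 wins on the combined index
```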
However, the eligible agro-prosumers readiness' in meeting the preferred APCG's pre-condition criteria may be irregular over the seasons during the evaluation period. To solve this issue, as we mentioned that the registered agro-prosumer is required to meet the lower threshold value of the preferred APCG to be able to meet the evaluation criteria. Additionally, to determine the extent to which a registered agro-prosumer meets the pre-condition criteria of the chosen APCG, four performance indicator groups are introduced with values: '3', '2', '1' and '0' indicating 'total success', 'medium success', 'low success' and 'failure', respectively. In this simulation, the prospect prosumers' capability in meeting their chosen APCG's precondition is assessed at first. Figures <ns0:ref type='figure' target='#fig_10'>16, 17</ns0:ref>, 18 and 19 show the percentage of prosumers who are allocated to different performance indices over the four seasons (or two years) for different APCGs, i.e., APCG1, APCG2, APCG3 and APCG4. Result shows APCG 1 and 2 shows prosumers easily satisfying the pre-requisites when compared to APCG 4 which shows variation due to high entry pre-requirements.</ns0:p><ns0:p>3) Framework 3 Goal management a) Simulation: The solution is developed using LINGO, and is discussed in the following subsection. Table <ns0:ref type='table'>8</ns0:ref> shows some of the parameters for the goal programming problem that are obtained based on the available data; some parameters are assumed based using the Australian conditions, as real data could not be accessed or found. Here, we take the four APCGs defined by APCG definition and prerequisites framework. To ease the calculations, local food security demand objective is chosen top priority and keep it the same for all the possible solution structures. Thus reducing total possible solutions to 4! i.e. 24 structures. The different priority structures are formed, where the position of the characters ('F1', 'L2,' 'I3,' 'M4' and 'S5'] shows the priority order of the different goals. LINGO-32 is used to program the algorithm. For instance, I3 on priority sets the objective function for I3 to 0, but increases objective function for L2 to 35564.50. When L2 is set on priority M4 successfully met but I3 increases to 11650. When setting L2 on priority increases the I3 to 11651 and M4 to 84446. Setting M4 achieve just for M4 but does not met for L2 and I3. Same applies for S5. So, putting I3 on top achieves the most except for S5. Hence, making S5 the next priority will help to achieve all desired goals. Putting L2, I3 and M4 objective function together on same priority help achieve the best. Therefore, the negotiated priority set of goals are CF1L2I3M4S5 which is illustrated in Table <ns0:ref type='table'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In order to build a seamless Agro-Prosumer Community Group structure, three key frameworks have been proposed in this paper to build a sustainable network for production sharing network. An APCG definition and prerequisites framework has been proposed to categorize the agroprosumer profiles into feasible APCGs, while defining the pre-condition criteria for each APCG. These pre-condition criteria defined for each APCG can be utilized when recruiting new agroprosumers, i.e., the new agro-prosumers may be required to fulfil the upper and lower thresholds defined for an APCG in order to be accepted as members. 
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A recruitment framework is presented where, an agro-prosumer is assessed throughout the evaluation period, where his/her likelihood of meeting the APCG's pre-condition criteria and his stability is estimated, and a decision is made regarding membership of an appropriate APCG. Finally, a goal management framework presents an approach that determines the multiple conflicting goals within the community-based production-sharing network, prioritizes the goals based on their relative importance, and negotiates the goals to obtain the optimized set of goals for a community-based, produce-sharing network. The proposed approach for goal management assists in deciding the best priority structure. Simulation results for all three frameworks have been provided to verify the proposed framework. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>3 &#119877;&#119886;&#119905;&#119890;(</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>&#119875; &#119894;&#119895; &#8712; &#119862; &#119901; ) { 100%:&#119894;&#119891; &#119864; &#119894;&#119895; &#8805; &#119871; &#119901; &#119864; &#119894;&#119895; &#119871; &#119901; :&#119894;&#119891;&#119864; &#119894;&#119895; &lt; &#119871; &#119901; } PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>5 &#119877;&#119886;&#119905;&#119890;(</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>&#119875; &#119894; &#8712; &#119860;&#119875;&#119862;&#119866; &#119865;&#119903; ) = &#119888;&#119900;&#119906;&#119899;&#119905; &#119900;&#119891;(&#119860;&#119875;&#119862;&#119866; &#119865;&#119903; ) &#119899;&#119904; PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Part 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021) APCGs goal recognition APCGs diverse goals are identified in this phase. These objectives are explained below. I. Carbon content objective (C1): The 'carbon-capture objective' refers to the use of organic farming methods to maximize carbon capture, which will increase the carbon content which can be traded with external companies. More carbon capture will result in more carbon sequestration and less emission. II. Food security within the network (F1): The goal is secure the vegetable/fruit demand of local members within the APCGs. Realistically, some members within an APCG may struggle producing sufficient quantity to meet their own consumption needs. Hence, food security of APCG members have been targeted. III. Providing local food access to wider community (L2): With growing local food, APCGs can make locally grown vegetables available to the extended community such as external customers or supermarkets, greengrocers, and external consumers who are not registered with an APCG. 
IV.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>1 &#119864;</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>&#119894; &#215; &#119860; &#119901;&#119894; + &#119902; -&#119904; = &#119877; PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)Manuscript to be reviewedComputer ScienceIV.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021) Manuscript to be reviewed Computer Science assessed and consumption is calculated to obtain the LYC and HYC. The LYC and HYC show the maximum contribution for the season that can be expected from a prosumer. After determining the production-sharing rate, we randomly generate 200 production profiles (shown in Figure 12), which are then used to verify the proposed framework for APCG definition and pre-condition characteristics. b) Verification process: For this verification, R software and programming language have been used. The following parameters are used for simulating the APCG definition and the prerequisites framework.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>offers only 11 agro-prosumers and c8 in Figure 14 has only eight agro-prosumers. If the APCG fails to supply an adequate amount of produce to the buyers or market, it might not enjoy good value or strong relationships in the long term and may become unsustainable. Therefore, in this scenario, adjacent prosumer clusters are merged in order to meet the amount of production required of members. For this data set, we reduce the number of clusters, merging the neighbors into one cluster. These finalized clusters constitute the APCGs. PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>The observations and results obtained by solving the goal problem in LINGO is presented in next section. b) Verification: The solution predicts the division of the objective function according to the process priority level and the sequential solution of the resulting mixed integer linear programming model. The solution obtained at each priority level is used as a constraint at the lower level. The general examples discussed here are intended to illustrate the model's applicability to the problem of practical dimensions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 1 Theoretical</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59316:1:1:CHECK 23 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear Editors and Reviewers, We appreciate your time and suggestions. We have made following changes. Please find related line number. Reviewer’s comments- - Equation number 3 is missing in the paper. - There are only few references in the paper. - In recruitment framework, authors mentioned that the recruitment of prosumers assumes that they cannot change the selection of preferred APCG. There is no discussion and hence it remains unclear how fair this assumption is. - Also, the verification and data generation process described in framework 2 is not very clear. Authors are advised to add more details in this section. -Equation number has been revised. -Few references has been added. -Regarding change in selection of preferred APCG, it is already discussed that the prosumer’s APCG will be locked temporary during evaluation period. And depending on their SI they will be allocated to permanent APCG (Line number 303, 315). Additionally, season one performance will be analysed with threshold amount, if the prosumers production is failed during season one, his/her evaluation period will be extended (line number 285). Finally prosumer can accept permanent APCG which is evaluated using SI generated for each season (line number 307). - More production data details has been added (line number 218-221). Editor’s review- The benefit of using real-time production profiles is justifiable. However, the associated cost and complexity should also be discussed and compared with those off-line versions. Some alternative methods in addition to the hierarchical clustering method may be compared and commented. The expressions and equations throughout the paper should be re-formulated and standardised. For example, try to use '(4)' instead of 'Equation 4'. The right-hand side brace in line 263 is not needed. Cost and complexity has been added (line number 185-189). Alternative methods had been worked out and added in the manuscript (Line 101-105) Equation has been changed to 1,2, etc. Looking forward to hearing from you soon. Kind regards, Pratima Jain (au.pratima@gmail.com) "
Here is a paper. Please give your review comments after reading it.
268
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Speech emotion recognition (SER) is a challenging issue because it is not clear which features are effective for classification. Emotionally related features are always extracted from speech signals for emotional classification. Handcrafted features are mainly used for emotional identification from audio signals. However, these features are not sufficient to correctly identify the emotional state of the speaker. The advantages of a deep convolutional neural network (DCNN) are investigated in the proposed work. A pretrained framework is used to extract the features from speech emotion databases. In this work, we adopt the feature selection (FS) approach to find the discriminative and most important features for SER. Many algorithms are used for the emotion classification problem. We use the random forest (RF), decision tree (DT), support vector machine (SVM), multilayer perceptron classifier (MLP), and k-nearest neighbors (KNN) to classify seven emotions. All experiments are performed by utilizing four different publicly accessible databases. Our method obtains accuracies of 92.02%, 88.77%, 93.61%, and 77.23% for Emo-DB, SAVEE, RAVDESS, and IEMOCAP, respectively, for speaker-dependent (SD) recognition with the feature selection method. Furthermore, compared to current handcrafted feature-based SER methods, the proposed method shows the best results for speaker-independent SER. For EMO-DB, all classifiers attain an accuracy of more than 80% with or without the feature selection technique.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Speech emotion recognition (SER) is a challenging issue because it is not clear which features are effective for classification. Emotionally related features are always extracted from speech signals for emotional classification. Handcrafted features are mainly used for emotional identification from audio signals. However, these features are not sufficient to correctly identify the emotional state of the speaker. The advantages of a deep convolutional neural network (DCNN) are investigated in the proposed work. A pretrained framework is used to extract the features from speech emotion databases. In this work, we adopt the feature selection (FS) approach to find the discriminative and most important features for SER. Many algorithms are used for the emotion classification problem. We use the random forest (RF), decision tree (DT), support vector machine (SVM), multilayer perceptron classifier (MLP), and k-nearest neighbors (KNN) to classify seven emotions. All experiments are performed by utilizing four different publicly accessible databases. Our method obtains accuracies of 92.02%, 88.77%, 93.61%, and 77.23% for Emo-DB, SAVEE, RAVDESS, and IEMOCAP, respectively, for speaker-dependent (SD) recognition with the feature selection method. Furthermore, compared to current handcrafted feature-based SER methods, the proposed method shows the best results for speaker-independent SER.</ns0:p><ns0:p>For EMO-DB, all classifiers attain an accuracy of more than 80\% with or without the feature selection technique.</ns0:p><ns0:p>emotional database, (ii) performing useful feature extraction, and (iii) using deep learning algorithms to design accurate classifiers. However, emotional feature extraction is a significant problem in an SER framework. 
In prior studies, many researchers have suggested significant features of speech, such as energy, intensity, pitch, standard deviation, cepstrum coefficients, Mel-frequency cepstrum coefficients (MFCCs), zero-crossing rate (ZCR), formant frequency, filter bank energy (FBR), linear prediction cepstrum coefficients (LPCCs), modulation spectral features (MSFs) and Mel-spectrograms. In <ns0:ref type='bibr' target='#b62'>(Sezgin et al. (2012)</ns0:ref>), several distinguishing acoustic features were used to identify emotions: spectral, qualitative, continuous, and Teager energy operator-based (TEO) features. Thus, many researchers have suggested that the feature set comprises more speech emotion information <ns0:ref type='bibr' target='#b56'>(Rayaluru et al. (2019)</ns0:ref>). However, combining feature sets complicates the learning process and enhances the possibility of overfitting. In the last five years, researchers have presented many classification algorithms, such as the hidden Markov model (HMM) <ns0:ref type='bibr' target='#b43'>(Mao et al. (2019)</ns0:ref>), support vector machine (SVM) <ns0:ref type='bibr' target='#b38'>(Kurpukdee et al. (2017)</ns0:ref>), deep belief network (DBN) <ns0:ref type='bibr' target='#b63'>(Shi (2018)</ns0:ref>), K-nearest neighbors (KNN) <ns0:ref type='bibr' target='#b81'>(Zheng et al. (2020)</ns0:ref>) and bidirectional long short-term memory networks (BiLSTMs) <ns0:ref type='bibr' target='#b48'>(Mustaqeem et al. (2020)</ns0:ref>). Some researchers have also suggested different classifiers; in the brain emotional learning model (BEL) <ns0:ref type='bibr' target='#b48'>(Mustaqeem et al. (2020)</ns0:ref>), a multilayer perceptron (MLP) and adaptive neuro-fuzzy inference system are combined for SER. The multikernel Gaussian process (GP) <ns0:ref type='bibr' target='#b11'>(Chen et al. (2016b)</ns0:ref>) is another proposed classification strategy with two related notions. These provide for learning in the algorithm by combining two functions: the radial basis function (RBF) and the linear kernel function. In <ns0:ref type='bibr' target='#b11'>(Chen et al. (2016b)</ns0:ref>), the proposed system extracted two spectral features and used these two features to train different machine learning models.</ns0:p><ns0:p>The proposed technique estimated that the combined features had high accuracy, above 90 percent on the Spanish emotional database and 80 percent on the Berlin emotional database. Han et al. adopted both utterance-and segment-level features to identify emotions. Some researchers have weighted the advantages and disadvantages of each feature. However, no one has identified which feature is the best feature among feature categories <ns0:ref type='bibr' target='#b21'>(El Ayadi et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b68'>Sun et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Anagnostopoulos et al. (2015)</ns0:ref>). Many deep learning models have been proposed in SER to determine the high-level emotion features of utterances to establish a hierarchical representation of speech. The accuracy of handcrafted features is relatively high, and this feature extraction technique always requires manual labor <ns0:ref type='bibr' target='#b3'>(Anagnostopoulos et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>Chen et al. (2016a</ns0:ref><ns0:ref type='bibr' target='#b8'>Chen et al. ( , 2012))</ns0:ref>). The extraction of handcrafted features usually ignores the high-level features. 
However, the best and most appropriate features that are emotionally powerful must be selected by effective performance for SER.</ns0:p><ns0:p>Therefore, it is more important to select specific speech features that are not affected by country, speaking style of the speaker, culture, or region. Feature selection (FS) is also essential after extraction and is accompanied by an appropriate classifier to recognize emotions from speech. A summary of FS is presented in <ns0:ref type='bibr' target='#b34'>(Kerkeni et al. (2019)</ns0:ref>). Both feature extraction and FS effectively reduce computational complexity, enhance learning effectiveness, and reduce the storage needed. To extract the local features, we use a convolutional neural network (CNN) (AlexNet). The CNN automatically extracts the appropriate local features from the augmented input spectrogram of an audio speech signal. When using CNNs for the SER system, the spectrogram is frequently used as the CNN input to obtain high-level features. In recent years, numerous studies have been presented, such as <ns0:ref type='bibr' target='#b0'>(Abdel-Hamid et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>Krizhevsky et al. (2017)</ns0:ref>). The authors used a CNN model for feature extraction of audio speech signals. Recently, deep learning models such as AlexNet <ns0:ref type='bibr' target='#b41'>(Li et al. (2021)</ns0:ref>), VGG <ns0:ref type='bibr' target='#b64'>(Simonyan and Zisserman (2015)</ns0:ref>), and ResNet <ns0:ref type='bibr' target='#b29'>(He et al. (2015)</ns0:ref>) have been used extensively to perform different classification tasks. Additionally, these deep learning models regularly perform much better than shallow CNNs. The main reason is that deep CNNs extract mid-level features from the input data using multilevel convolutional and pooling layers.</ns0:p><ns0:p>The main contributions of this paper are as follows: 1). In the proposed study, AlexNet is used to extract features for a speech emotion recognition system. 2). A feature selection approach is used to enhance the accuracy of SER. 3). The proposed approach performs better than existing handcrafted and Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In this study, five different machine learning algorithms are used for emotion recognition tasks. There are two main parts of SER. One part is based on distinguishing feature extraction from audio signals. The second part is based on selecting a classifier that classifies emotional classes from speech utterances.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Speech Emotion Recognition Using Machine Learning Approaches</ns0:head><ns0:p>Researchers have used different machine learning classifiers to identify emotional classes from speech: SVM <ns0:ref type='bibr' target='#b62'>(Sezgin et al. (2012)</ns0:ref>), random forest (RF) <ns0:ref type='bibr' target='#b51'>(Noroozi et al. (2017)</ns0:ref>), Gaussian mixture models (GMMs) <ns0:ref type='bibr' target='#b52'>(Patel et al. (2017)</ns0:ref>), HMMs <ns0:ref type='bibr' target='#b43'>(Mao et al. (2019)</ns0:ref>), CNNs <ns0:ref type='bibr' target='#b14'>(Christy et al. (2020)</ns0:ref>), k-nearest neighbors (KNN) <ns0:ref type='bibr' target='#b33'>(Kapoor and Thakur (2021)</ns0:ref>) and MLP. These algorithms have been commonly used to identify emotions.</ns0:p><ns0:p>Emotions are categorized using two approaches: categorical and dimensional approaches. 
Emotions are classified into small groups in the categorical approach. Ekman <ns0:ref type='bibr' target='#b20'>(Ekman (1992)</ns0:ref>) proposed six basic emotions: anger, happiness, sadness, fear, surprise, and disgust. In the second category, emotions are defined by axes with a combination of several dimensions <ns0:ref type='bibr' target='#b16'>(Costanzi et al. (2019)</ns0:ref>). Different researchers have described emotions relative to one or more dimensions. Pleasure-arousal-dominance (PAD) is a three-dimensional emotional state model proposed by <ns0:ref type='bibr' target='#b45'>(Mehrabian (1996)</ns0:ref>). Different features are essential in identifying speech emotions from voice. Spectral features are significant and widely used to classify emotions. A decision tree was used to identify emotions from the CASIA Chinese emotion corpus in <ns0:ref type='bibr' target='#b69'>(Tao et al. (2008)</ns0:ref>) and achieved 89.6% accuracy. AB <ns0:ref type='bibr'>Kandali et al. introduced</ns0:ref> an approach to classify emotion-founded MFCCs as the main features and applied a GMM as a classifier <ns0:ref type='bibr' target='#b32'>(Kandali et al. (2009)</ns0:ref>). <ns0:ref type='bibr'>Milton, A. et al. presented</ns0:ref> a three-stage traditional SVM classifying different Berlin emotional datasets <ns0:ref type='bibr' target='#b47'>(Milton et al. (2013)</ns0:ref>). VB Waghmare et al. adopted spectral features (MFCCs) as the main feature and classified emotions from the Marathi speech dataset <ns0:ref type='bibr' target='#b72'>(Waghmare et al. (2014)</ns0:ref>). <ns0:ref type='bibr'>Demircan, S. et al. extracted</ns0:ref> MFCC features from the Berlin EmoDB database. They used the KNN algorithm to recognize speech emotions <ns0:ref type='bibr' target='#b18'>(Demircan and Kahramanli (2014)</ns0:ref>). The Berlin emotional speech database (EMO-DB) was used in the experiment, and the accuracy obtained was between 90% and 99.5%.</ns0:p><ns0:p>Hossain et al. proposed a cloud-based collaborative media system that uses emotions from speech signals and uses standard features such as MFCCs <ns0:ref type='bibr' target='#b31'>(Hossain.M. Shamim (2014)</ns0:ref>). Paralinguistic features and prosodic features were utilized to detect emotions from speech in <ns0:ref type='bibr' target='#b1'>(Alonso et al. (2015)</ns0:ref>). SVM, a radial basis function neural network (RBFNN), and an autoassociative neural network (AANN) were used to recognize emotions after combining two features, MFCCs and the residual phase (RP), from a music database <ns0:ref type='bibr' target='#b49'>(Nalini and Palanivel (2016)</ns0:ref>). SVMs and DBNs were examined utilizing the Chinese academic database <ns0:ref type='bibr' target='#b79'>(Zhang et al. (2017)</ns0:ref>). The accuracy using DBNs was 94.5%, and the accuracy of the SVM was approximately 85%. In <ns0:ref type='bibr'>(C.K. et al. (2017)</ns0:ref>), particle swarm optimization-based features and high-order statistical features were utilized. <ns0:ref type='bibr'>Chourasia et al. implemented</ns0:ref> an SVM and HMM to classify speech emotions after extracting the spectral features from speech signals <ns0:ref type='bibr' target='#b12'>(Chourasia et al. (2021)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Speech Emotion Recognition Using Deep Learning Approaches</ns0:head><ns0:p>Low-level handcrafted features are very useful in distinguishing speech emotions. 
With many successful deep neural network (DNN) applications, many experts have started to target in-depth emotional feature learning. <ns0:ref type='bibr'>Schmidt et al. used</ns0:ref> an approach based on linear regression and deep belief networks to identify musical emotions <ns0:ref type='bibr' target='#b61'>(Schmidt and Kim (2011)</ns0:ref>). They used the MoodSwings Lite music database and obtained a 5.41% error rate. Duc Le et al. implemented hybrid classifiers, which were a set of DBNs and HMMs, and attained good results on FAU Aibo <ns0:ref type='bibr' target='#b39'>(Le and Provost (2013)</ns0:ref>). Deng et al. presented a transfer learning feature method for speech emotion recognition based on a sparse autoencoder. Several databases were used, including the eNTERFACE and EMO-DB databases <ns0:ref type='bibr' target='#b19'>(Deng et al. (2013)</ns0:ref>). In <ns0:ref type='bibr' target='#b53'>(Poon-Feng et al. (2014)</ns0:ref>), a generalized discriminant analysis method (Gerda) was presented with several Boltzmann machines to analyze and classify emotions from speech and improve the previous reported baseline by traditional approaches. Erik M. Schmidt et al. proposed a regression-based DBN to recognize music emotions and a model based on three hidden layers to learn emotional features <ns0:ref type='bibr' target='#b28'>(Han et al. (2014)</ns0:ref>).</ns0:p><ns0:p>Trentin et al. proposed a probabilistic echo-state network-based emotion recognition framework that obtained an accuracy of 96.69% using the WaSep database <ns0:ref type='bibr' target='#b70'>(Trentin et al. (2015)</ns0:ref>). More recent work introduced deep retinal CNNs (DRCNNs) in <ns0:ref type='bibr' target='#b50'>(Niu et al. (2017)</ns0:ref>), which showed good performance in recognizing emotions from speech signals. The presented approach obtained the highest accuracy, 99.25%, in the IEMOCAP database. In <ns0:ref type='bibr' target='#b23'>(Fayek et al. (2017)</ns0:ref>), the authors suggested deep learning approaches. A The experiment was performed on the IEMOCAP database with four emotions. Two different databases were used to extract prosodic and spectral features with an ensemble softmax regression approach <ns0:ref type='bibr' target='#b67'>(Sun and Wen (2017)</ns0:ref>). For the identification of emotional groups, experiments were performed on the two different datasets. A CNN was used in <ns0:ref type='bibr' target='#b23'>(Fayek et al. (2017)</ns0:ref>) to classify four emotions from the IEMOCAP database: happy, neutral, angry, and sad. In <ns0:ref type='bibr' target='#b75'>(Xia and Liu (2017)</ns0:ref>), multitasking learning was used to obtain activation and valence data for speech emotion detection using the DBN model. IEMOCAP was used in the experiment to identify the four emotions. However, high computational costs and a large amount of data are required for deep learning techniques. The majority of current speech emotional</ns0:p></ns0:div> <ns0:div><ns0:head>4/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_7'>2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science with large-scale parameters. A pretrained deep learning model is used based on the above studies. In <ns0:ref type='bibr' target='#b4'>(Badshah et al. (2017)</ns0:ref>), a pretrained DCNN model was introduced for speech emotion recognition. The outcomes were improved with seven emotional states. In <ns0:ref type='bibr' target='#b4'>(Badshah et al. 
(2017)</ns0:ref>), the authors suggested a DCNN accompanied by discriminant temporal pyramid matching. DNNs were used to divide emotional probabilities into segments <ns0:ref type='bibr' target='#b24'>(Gu et al. (2018)</ns0:ref>), which were utilized to create utterance features; these probabilities were fed to the classifier. The IEMOCAP database was used in the experiment, and the obtained accuracy was 54.3%. In <ns0:ref type='bibr' target='#b80'>(Zhao et al. (2018)</ns0:ref>), the suggested approach used integrated attention with a fully convolutional network (FCN) to automatically learn the optimal spatiotemporal representations of signals from the IEMOCAP database. The hybrid architecture proposed in <ns0:ref type='bibr' target='#b22'>(Etienne et al. (2018)</ns0:ref>)</ns0:p><ns0:p>included a data augmentation technique. In <ns0:ref type='bibr' target='#b73'>(Wang and Guan (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b78'>Zhang et al. (2018)</ns0:ref>), the fully connected layer (FC7) of AlexNet was used for the extraction process. The results were evaluated on four different databases. In <ns0:ref type='bibr' target='#b26'>(Guo et al. (2018)</ns0:ref>), an approach for SER that combined phase and amplitude information utilizing a CNN was investigated. In <ns0:ref type='bibr' target='#b10'>(Chen et al. (2018)</ns0:ref>), a three-dimensional convolutional recurrent neural network including an attention mechanism (ACRNN) was introduced. The identification of emotion was evaluated using the Emo-DB and IEMOCAP databases. The attention process was used to develop a dilated CNN and BiLSTM in <ns0:ref type='bibr' target='#b46'>(Meng et al. (2019)</ns0:ref>). To identify speech emotion, 3D log-Mel spectrograms were examined for global contextual statistics and local correlations. The OpenSMILE package was used to extract features in <ns0:ref type='bibr'>( &#214;zseven (2019)</ns0:ref>). The accuracy obtained with the Emo-DB database was 84%, and it was 72% with the SAVEE database. Pretrained networks have many benefits, including the ability to reduce the training time and improve accuracy. Kernel extreme learning machine (KELM) features were introduced in <ns0:ref type='bibr' target='#b25'>(Guo et al. (2019)</ns0:ref>). An adversarial data augmentation network was presented in <ns0:ref type='bibr' target='#b76'>(Yi and Mak (2019)</ns0:ref>) to create simulated samples to resolve the data scarcity problem.</ns0:p><ns0:p>Energy and pitch were extracted from each audio segment in <ns0:ref type='bibr' target='#b71'>(Ververidis and Kotropoulos (2005)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science (2020)). An implementation of real-time voice emotion identification using AlexNet was described in <ns0:ref type='bibr' target='#b40'>(Lech et al. (2020)</ns0:ref>). When trained on the Berlin Emotional Speech (EMO-DB) database, the presented method obtained an average accuracy of 82%.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PROPOSED METHOD</ns0:head><ns0:p>This section describes the proposed pretrained CNN (AlexNet) algorithm for the SER framework. We fine-tune the pretrained model <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al. (2017)</ns0:ref>) on the created image-like Mel-spectrogram segments. We do not train our own deep CNN framework owing to the limited emotional audio dataset. 
</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Creation of the Audio Input</ns0:head><ns0:p>In the proposed method, the Mel-spectrogram segment is generated from the original speech signal. We </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Emotion Recognition Using AlexNet</ns0:head><ns0:p>In the proposed method, CL4 of the pretrained model is used for feature extraction. The CFS feature selection approach is used to select the most discriminative features. The CFS approach selects only very highly correlated features with output class labels. The five different classification models are used to test the accuracy of the feature subsets.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Feature Extraction</ns0:head><ns0:p>In </ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Input Layer</ns0:head><ns0:p>This layer of the pretrained model is a fixed-size input layer. We resample the Mel spectrogram of the signal to a fixed size 227 &#215; 227 &#215; 3.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2'>Convolutional Layer (CL)</ns0:head><ns0:p>The convolutional layer is composed of convolutional filters. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.3'>Pooling Layer (PL)</ns0:head><ns0:p>After the CLs, a pooling layer is used. The goal of the pooling layer is to subsample the feature groups.</ns0:p><ns0:p>The feature groups are obtained from the previous CLs to create a single data convolutional feature group from the local areas. Average pooling and max-pooling are the two basic pooling operations. The max-pooling layer employs maximum filter activation across different points in a quantified frame to produce a modified resolution type of CL activation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.4'>Fully Connected Layers (FCLs)</ns0:head><ns0:p>Fully connected layers incorporate the characteristics acquired from the PL and create a feature vector for classification. The output of the CLs and PLs is given to the fully connected layers. There are three fully connected layers in AlexNet: FC6, FC7, and FC8. A 4096-dimensional feature map is generated by FC6 and FC7, while FC8 generates 1000-dimensional feature groups.</ns0:p><ns0:p>Feature maps can be created using FCLs. These are universal approximations, but fully connected layers do not work fully in recognizing and generalizing the original image pixels. CL4 extracts relevant features from the original pixel values by preserving the spatial correlations inside the image. Consequently, in the experimental setup, features are extracted from the CL4 employed for SER. A total of 64,896 features are obtained from CL4. Certain features are followed by an FS method and pass through a classification model for identification.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Feature Selection</ns0:head><ns0:p>The discriminative and related features for the model are determined by feature selection. FS approaches are used with several models to minimize the training time and enhance the ability to generalize by decreasing overfitting. The main goal of feature selection is to remove insignificant and redundant features.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Correlation-Based Measure</ns0:head><ns0:p>We can identify an excellent feature if it is related to the class features and is not redundant with respect to any other class features. For this reason, we use entropy-based information theory. 
The equation of entropy-based information theory is defined as:</ns0:p><ns0:p>F(E) = -Σ_j S(e_j) log2(S(e_j)). (1)</ns0:p><ns0:p>The entropy of E after examining the values of G is defined in the equation below:</ns0:p><ns0:formula xml:id='formula_0'>F(E/G) = -Σ_k S(g_k) Σ_j S(e_j/g_k) log2(S(e_j/g_k)),<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where S(e_j) denotes the probability of each value of E, and S(e_j/g_k) denotes the probability of E when the values of G are given. The amount by which the entropy of E decreases reflects the additional information about E provided by G, which is known as information gain:</ns0:p><ns0:formula xml:id='formula_1'>I(E/G) = F(E) - F(E/G).<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>If I(E/G) > I(H/G), then feature G is more closely correlated to feature E than to feature H. A further metric, symmetrical uncertainty, indicates the correlation between features and is defined by the equation below:</ns0:p><ns0:formula xml:id='formula_2'>SU(E, G) = 2[I(E/G) / (F(E) + F(G))].<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>CFS evaluates a subset of features and selects only highly correlated discriminative attributes. CFS ranks the features by applying a heuristic correlation evaluation function. It estimates the correlation within the features. CFS drops unrelated features that have limited similarity with the class label. The CFS merit of a candidate subset S_k is as follows:</ns0:p><ns0:formula xml:id='formula_3'>FS = max_{S_k} (r_cf1 + r_cf2 + ... + r_cfk) / sqrt(k + 2(r_f1f2 + ... + r_fifj + ... + r_fkf(k-1))),<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where k represents the total number of features in the subset, r_cfi represents the feature-class correlation, and r_fifj represents the correlation between features. The selected features are fed into the classification algorithms. CFS usually deletes (backward selection) or adds (forward selection) one feature at a time.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.6'>Classification Methods</ns0:head><ns0:p>The discriminative features provide input to the classifiers for emotion classification. In the proposed method, five different classifiers, KNN, RF, decision tree, MLP, and SVM, are used to evaluate the performance of speech emotion recognition.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.7'>Support Vector Machine (SVM)</ns0:head><ns0:p>SVMs are used for binary classification and regression. They construct an optimal separating hyperplane in a higher-dimensional space with a maximum class margin. SVMs identify the support vectors v_j, weights wf_j, and bias b to categorize the input information. The polynomial kernel used for classification is:</ns0:p><ns0:formula xml:id='formula_4'>sk(v, v_j) = (ρ (v · v_j) + k)^z,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where k is a constant, z is the degree of the polynomial, and ρ > 0. The decision function is:</ns0:p><ns0:formula xml:id='formula_5'>f(v) = Σ_{j=1}^{n} wf_j sk(v_j, v) + b,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where sk represents the kernel function, v is the input, v_j is a support vector, wf_j is its weight, and b is the bias. In our study, we utilize the polynomial kernel to translate the data into a higher-dimensional space.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.8'>k-Nearest Neighbors (KNN)</ns0:head><ns0:p>This classification algorithm keeps all training data elements. 
It identifies the N most similar examples using a similarity measure and assigns the target emotion class accordingly. In the proposed study, we fixed N = 10 for emotional classification. The KNN method finds the ten closest neighbors using the Euclidean distance, and emotional identification is performed using a majority vote.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.9'>Random Forest (RF)</ns0:head><ns0:p>An RF is an ensemble learning classifier for classification and regression. It builds a collection of decision trees and aggregates the predictions of the individual trees. Each tree is trained on a random sample of the data drawn with replacement, a process called bagging, and each split considers a random subset of the features.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.10'>Multilayer Perceptron (MLP)</ns0:head><ns0:p>MLPs are feedforward neural networks consisting of multiple computational layers. They are trained with a supervised backpropagation method and are widely used for classification problems. The MLP classification model consists of three layers: the input layer, the hidden layers, and the output layer. The number of input neurons is determined by the number of features. The size of the hidden layer depends on the number of emotions in the database and on the feature dimensions after the feature selection approach. The number of output neurons is equal to the number of emotions in the database. The sigmoid activation function utilized in this study is represented as follows:</ns0:p><ns0:formula xml:id='formula_6'>p_i = 1 / (1 + e^(-q_i)),<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where p_i represents the output state of neuron i and q_i represents its total weighted input. When using the Emo-DB database, there is only one hidden layer in the MLP, and it has 232 neurons. When using the SAVEE database, there is only one hidden layer in the MLP, and it comprises 90 neurons. The MLP contains a single hidden layer with 140 neurons for the IEMOCAP database.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EXPERIMENTS</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Datasets</ns0:head><ns0:p>This experimental study uses four emotional speech databases, all of which are publicly available.</ns0:p><ns0:p>&#8226; Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): RAVDESS is an audio and video database consisting of eight acted emotional categories: calm, neutral, angry, surprise, fear, happy, sad, and disgust, and these emotions are recorded only in North American English. RAVDESS was recorded by 12 male and 12 female professional actors.</ns0:p><ns0:p>&#8226; Surrey Audio-Visual Expressed Emotion (SAVEE): The SAVEE database contains 480 emotional utterances. The SAVEE database was recorded in British English by four male professional actors with seven emotion categories: sadness, neutral, frustration, happiness, disgust, anger, and surprise.</ns0:p><ns0:p>&#8226; Berlin Emotional Speech Database (Emo-DB): The Emo-DB dataset contains 535 utterances with seven emotion categories: neutral, fear, boredom, disgust, sad, angry, and joy. 
The Emo-DB emotional dataset was recorded in German by five male and five female native-speaker actors.</ns0:p><ns0:p>&#8226; Interactive Emotional Dyadic Motion Capture (IEMOCAP): The IEMOCAP multispeaker database contains approximately 12 hours of audio and video data with seven emotional states, surprise, happiness, sadness, anger, fear, excitement, and frustration, as well as neutral and other states. The IEMOCAP database was recorded by five male and five female professional actors. In this work, we use four (neutral, angry, sadness, and happiness) class labels.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Experimental Setup</ns0:head><ns0:p>All the experiments are completed in version 3.9.0 of the Python language framework. Numerous API libraries are used to train the five distinct models. The framework uses Ubuntu 20.04. The key objective is to implement an input data augmentation and feature selection approach for the five different models.</ns0:p><ns0:p>The feature extraction technique is also involved in the proposed method.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1'>Anaconda</ns0:head><ns0:p>Anaconda is the best data processing and scientific computing platform for Python. It already includes numerous data science and machine learning libraries. Anaconda also includes many popular visualization libraries, such as matplotlib. It also provides the ability to build a different environment with a few unique libraries to carry out the task.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.2'>Keras</ns0:head><ns0:p>The implementation of our model for all four datasets was completed from scratch using Keras. It makes it extremely simple for the user to add and remove layers and activate and utilize the max-pooling layer in the network. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>EXPERIMENTAL RESULTS AND ANALYSIS</ns0:head></ns0:div> <ns0:div><ns0:head n='5.0.1'>Speaker-Dependent (SD) Experiments</ns0:head><ns0:p>The performance of the proposed SER system is assessed using benchmark databases for the SD experiments. We use ten-fold cross-validation in our studies. All databases are divided randomly into ten equal complementary subsets with a dividing ratio of 80:20 to train and test the model. The experimental results illustrate a significant accuracy improvement by using data resampling and the FS approach. We consider the standard deviation and average weighted recall to evaluate the performance and stability of the SD experiments using the FS approach. The SVM classifier reached 93.61% and 96.02% accuracy for RAVDESS and Emo-DB, respectively, while the obtained accuracies were 88.77% and 77.23% for SAVEE and IEMOCAP, respectively, through the SVM. The MLP classifier obtained 95.80% and 89.12% accuracies with the Emo-DB and IEMOCAB databases, respectively.</ns0:p><ns0:p>The KNN classifier obtained the highest accuracy, 92.45% and 88.34%, with the Emo-DB and RAVDEES datasets. The RF classifier reported the highest accuracy, 93.51%, on the Emo-DB dataset and 86.79% accuracy on the SAVEE dataset with the feature selection approach. The results of the confusion matrix were used to evaluate the identification accuracy of the individual emotional labels.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> shows that the SVM obtained better recognition accuracy than the other classification models with the FS method. 
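For reference, the speaker-dependent protocol just described (ten-fold cross-validation over the five classifiers, scored as weighted recall) can be outlined with scikit-learn as below; X and y are assumed to already hold the selected CL4 features and emotion labels, and the hyperparameters are illustrative rather than the authors' exact settings.

```python
# Sketch of the ten-fold cross-validation over the five classifiers named in the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "SVM": SVC(kernel="poly", degree=3, C=1.0),                 # polynomial kernel, Section 3.7
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=10, metric="euclidean"),
    "MLP": MLPClassifier(hidden_layer_sizes=(232,), activation="logistic", max_iter=500),
    "DT": DecisionTreeClassifier(random_state=0),
}

def evaluate(X, y):
    for name, clf in classifiers.items():
        scores = cross_val_score(make_pipeline(StandardScaler(), clf),
                                 X, y, cv=10, scoring="recall_weighted")
        print(f"{name}: {scores.mean():.2%} +/- {scores.std():.2%}")
```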
As shown in Figure <ns0:ref type='figure' target='#fig_13'>5</ns0:ref>, the SVM recognized 'frustration' and 'neutral' with the highest accuracies, 88.33% and 91.66%, with the SAVEE dataset. As shown in Figure <ns0:ref type='figure' target='#fig_14'>6</ns0:ref>, the RAVDESS dataset contains eight emotions, including 'anger', 'calm', 'fear', and 'neutral', which are listed with accuracies of 96.32%, 97.65%, 95.54%, and 99.98%, respectively. The IEMOCAP database identified 'anger' with the highest accuracy of 93.23%, while 'happy,' 'sad,' and 'neutral' were recognized with the highest accuracies of 83.41%, 91.45%, and 89.65% with the MLP classifier, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.0.2'>Speaker-Independent (SI) Experiments</ns0:head><ns0:p>We adopted the single-speaker-out (SSO) method for the SI experiments. One annotator was used for testing, and all other annotators were used for training. In the proposed approach, the IEMOCAP dataset Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed the feature extraction approach with data resampling and the FS method. The FS and data resampling approach improved the accuracy, according to the preliminary results.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:p>We report the average weighted recall and standard deviation to evaluate the SI experiment's performance and stability utilizing the FS method. The SVM obtained the highest accuracies, 90.78%, 84.00%, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:p>. Confusion matrix obtained by the SVM on the IEMOCAP database for the SI experiment were 78.90% and 85.73%, respectively. The RAVDESS database contains eight emotion categories, three of which, 'calm', 'fear', and 'anger,' were identified with accuracies of 94.78%, 91.35%, and 84.60%, respectively, by the MLP. In contrast, the other five emotions were identified with less than 90.00% accuracy, as represented in Figure <ns0:ref type='figure' target='#fig_16'>8</ns0:ref>. The MLP achieved an average accuracy with the SAVEE database of 75.38%. With the SAVEE database, 'anger,' 'neutral,' and 'sad' were recognized with accuracies of 94.22%, 90.66%, and 85.33%, respectively, by the MLP classifier. IEMOCAP achieved an average accuracy of 84.00% with the SVM, while the MLP achieved an average accuracy of 80.23%. Figure <ns0:ref type='figure'>9</ns0:ref> shows that the average accuracy achieved by the SVM with the IEMOCAP database is 84.00%.</ns0:p><ns0:p>Four publicly available databases are used to compare the proposed method. As illustrated in Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>, </ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>In this research, the primary emphasis was on learning discriminative and important features from advanced emotional speech databases. Therefore, the main objective of the present research was advanced feature extraction using AlexNet. 
The proposed CFS approach explored the predictability of every feature.</ns0:p><ns0:p>The results showed the superior performance of the proposed strategy with four datasets in both SD and SI experiments.</ns0:p><ns0:p>To analyze the classification performance of each emotional group, we display the results in the form of confusion matrices. The main benefit of applying the FS method is to reduce the abundance of features by selecting the most discriminative features and eliminating the poor features. We noticed that the pretrained AlexNet framework is very successful for feature extraction techniques that can be trained with a small number of labeled datasets. The performance in the experimental studies empowers us to explore the efficacy and impact of gender on speech signals. The proposed model is also useful for multilanguage databases for emotion classification.</ns0:p><ns0:p>In future studies, we will perform testing and training techniques using different language databases, which should be a useful evaluation of our suggested technique. We will test the proposed approach in the cloud and in an edge computing environment. We would like to evaluate different deep architectures to enhance the system's performance when using spontaneous databases.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>deep-learning methods for SD and SI experiments. The rest of the paper is organized as follows: Part 2 reviews the previous work in SER related to this paper's current study. A detailed description of the emotional dataset used in the presented work and the proposed method for FS and the classifier are discussed in Part 3. The results are discussed in Part 4. Part 5 contains the conclusion and outlines future work. 2/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The structure of our proposed model for audio emotion recognition</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The general architecture of AlexNet</ns0:figDesc><ns0:graphic coords='5,141.73,374.47,413.57,137.86' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Furthermore, computer vision experiments<ns0:ref type='bibr' target='#b57'>(Ren et al. (2016)</ns0:ref>;<ns0:ref type='bibr' target='#b7'>Campos et al. (2017)</ns0:ref>) have depicted that fine-tuning the pretrained CNNs on target data is acceptable to relieve the issue of data insufficiency.AlexNet is a model pretrained on the extensive ImageNet dataset, containing a wide range of different labeled classes, and uses a shorter training time. AlexNet (Krizhevsky et al. (2017)) comprises five convolution layers, three max-pooling layers, and three fully connected layers. In the proposed work, we extract the low-level features from the fourth convolutional layer (CL4). The architecture of our proposed model is displayed in Figure 1. Our model comprises four processes: (a) development of the audio input data, (b) low-level feature extraction using AlexNet, (c) feature selection, and (d) classification. Below, we explain all four steps of our model in detail.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>create three channels of the segment from the original 1D audio speech dataset. 
Then, the generated segments are converted into fixed-size 227 × 227 × 3 inputs for the proposed model. Following (Zhang et al. (2018)), 64 Mel-filter banks are used to create the log Mel-spectrogram, and a 25 ms window with a 10 ms overlap is applied to each frame. Then, we divide the log Mel spectrogram into fixed segments by using a 64-frame context window. Finally, after extracting the static segment, we calculate the regression coefficients of the first and second order along the time axis, thereby generating the delta and double-delta coefficients of the static Mel spectrogram segment. Consequently, 64 × 64 × 3 Mel-spectrogram segments with three channels can be generated as the inputs of AlexNet, analogous to a color RGB image. Therefore, we resize the original 64 × 64 × 3 spectrogram to the new size 227 × 227 × 3. In this case, we can create four (middle, side, left, and right) segments of the Mel spectrogram, as shown in Figure 2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>In this study, feature extraction is performed using a pretrained model. The original weights of the model remain fixed, and existing layers are used to extract the features. The pretrained model has a deep structure that contains extra filters for every layer and stacked CLs. It also includes convolutional layers, max-pooling layers, momentum stochastic gradient descent, activation functions, data augmentation, and dropout. AlexNet uses a rectified linear unit (ReLU) activation function. The layers of the network are explained below.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Convolutional filters are used to obtain many local features in the input data from local regions to form various feature groups. AlexNet contains five CLs, three of which are followed by max-pooling layers. CL1 includes 96 kernels with a size of 11 × 11 × 3, zero padding, and a stride of 4 pixels. CL2 contains 256 kernels, each of which is 5 × 5 × 48 in size and includes a 1-pixel stride and a padding value of 2. CL3 contains 384 kernels of size 3 × 3 × 256. CL4 contains 384 kernels of size 3 × 3 × 192. For the output value of each CL, the ReLU function is used, which speeds up the training process.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>SU balances the information gain bias toward features with more values by normalizing its value to the range [0,1]. SU analyzes a pair of features symmetrically. Entropy-based techniques require nominal features; they can be used to evaluate the correlations between continuous features if these features are discretized properly. We use the correlation feature-based approach (CFS)<ns0:ref type='bibr' target='#b74'>Wosiak and Zakrzewska (2018)</ns0:ref> in the proposed work based on the previously described techniques. It evaluates a subset of features and selects only highly correlated discriminative attributes.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>
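Given the layer description above (CL4 is the fourth convolution, whose flattened activation yields the 64,896 features reported in Section 3.3.4), a minimal sketch of using a pretrained AlexNet as a fixed CL4 feature extractor follows. The paper builds its models in Keras; torchvision is used here only because it ships a pretrained AlexNet, and its channel widths differ slightly from the original Krizhevsky et al. network, so the flattened size will not be exactly 64,896. This is an assumed illustration, not the authors' implementation.

```python
# Sketch: pretrained AlexNet as a fixed feature extractor up to the fourth convolution.
import torch
from torchvision import models

alexnet = models.alexnet(pretrained=True)   # newer torchvision prefers the weights= argument
alexnet.eval()

# features[0..9] covers conv1..conv4 with their ReLU/max-pooling layers;
# index 9 is the ReLU immediately after the fourth convolution ("CL4").
cl4_extractor = torch.nn.Sequential(*list(alexnet.features.children())[:10])

def extract_cl4_features(batch):
    """batch: float tensor of shape (N, 3, 227, 227) built from the Mel-spectrogram
    segments (ImageNet normalization omitted here for brevity)."""
    with torch.no_grad():
        fmap = cl4_extractor(batch)           # (N, C, H, W) activation map
    return fmap.flatten(start_dim=1).numpy()  # one feature vector per segment
```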
reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>comparison, one hidden layer and 285 neurons are present in the RAVDESS dataset. The MLP is a two-level architecture; thus, identification requires two levels: training and testing. The weight values are set throughout the training phase to match them to the particular output class.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Librosa (McFee et al. (2015)) is a basic Python library used for this research. Librosa is used to examine the audio signal recordings. The four (side, middle, left, and right) segments of the Mel spectrogram were obtained through Librosa. 10/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Confusion matrix obtained by the SVM on the Emo-DB database for the SD experiment</ns0:figDesc><ns0:graphic coords='15,141.73,351.02,372.21,264.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Confusion matrix obtained by the SVM on the SAVEE database for the SD experiment</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Confusion matrix obtained by the SVM on the RAVDESS database for the SD experiment</ns0:figDesc><ns0:graphic coords='16,141.73,349.24,372.23,273.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Confusion matrix obtained by the MLP on the IEMOCAP database for the SD experiment</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Confusion matrix obtained by the SVM on the RAVDESS database for the SI experiment</ns0:figDesc><ns0:graphic coords='17,141.73,351.02,372.23,262.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Confusion matrix obtained by the MLP on the RAVDESS database for the SI experiment</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>the developed system outperformed<ns0:ref type='bibr' target='#b26'>(Guo et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b10'>Chen et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b46'>Meng et al. (2019)</ns0:ref>;&#214;zseven (2019);<ns0:ref type='bibr' target='#b6'>Bhavan et al. (2019)</ns0:ref>) on the Emo-DB dataset for the SD experiments. The OpenSMILE package was used to extract features in( &#214;zseven (2019)). The accuracies obtained with the SAVEE and Emo-DB databases were 72% and 84%, respectively. In comparison to<ns0:ref type='bibr' target='#b10'>(Chen et al. (2018)</ns0:ref>; Meng et al. (2019); Satt et al. (2017); Zhao et al. (2018)), the proposed method performed well on the IEMOCAP database. The models in (Chen et al. (2018); Meng et al. (2019); Etienne et al. (2018)) are computationally complex and require extensive periods of training. In the proposed method, AlexNet is used for the extraction process, and the FS technique is applied. The FS approach reduced the classifier's workload while also improving efficiency. 
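As a point of reference for this feature-selection step, the correlation-based criterion of Section 3.5 (Eqs. (1)-(5)) can be sketched as follows; this is a toy illustration over pre-discretized features with hypothetical helper names, not the authors' implementation, which follows Wosiak and Zakrzewska (2018).

```python
# Toy sketch of entropy, information gain, symmetrical uncertainty, and the CFS merit.
import numpy as np

def entropy(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(e, g):
    total = 0.0
    for value in np.unique(g):
        mask = (g == value)
        total += mask.mean() * entropy(e[mask])
    return total

def symmetrical_uncertainty(e, g):
    gain = entropy(e) - conditional_entropy(e, g)       # information gain, Eq. (3)
    denom = entropy(e) + entropy(g)
    return 2.0 * gain / denom if denom > 0 else 0.0     # Eq. (4)

def cfs_merit(features, labels, subset):
    # Eq. (5): feature-class correlation against feature-feature redundancy
    # for a candidate subset given as a list of column indices.
    k = len(subset)
    rcf = np.mean([symmetrical_uncertainty(labels, features[:, j]) for j in subset])
    rff = np.mean([symmetrical_uncertainty(features[:, a], features[:, b])
                   for i, a in enumerate(subset) for b in subset[i + 1:]]) if k > 1 else 0.0
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)
```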
When using the RAVDESS database, the suggested technique outperforms (Zeng et al. (2019); Bhavan et al. (2019)) in terms of accuracy. Table 9 illustrates that the suggested approach outperforms (Meng et al. (2019); Sun and Wen (2017); Haider et al. (2020); Yi and Mak (2019); Guo et al. (2019); Badshah et al. (2017); Mustaqeem et al. (2020)) for SI experiments using the Emo-DB database. The authors extracted low-level descriptor feature emotion identification and obtained accuracies with the Emo-DB database of 82.40%, 76.90%, and 83.74%, respectively, in (Sun and Wen (2017); Haider et al. (2020); Yi and Mak (2019)). Different deep learning methods were used for SER with the Emo-DB database in (Meng et al. (2019); Guo et al. (2019); Badshah et al. (2017); Mustaqeem et al. (2020)). In comparison to other speech emotion databases, the SAVEE database is relatively small. The purpose of using a pretrained approach is that it can be trained effectively with limited data. In comparison to<ns0:ref type='bibr' target='#b67'>(Sun and Wen (2017)</ns0:ref>; Haider et al. (2020)), the suggested technique provides better accuracy with the SAVEE database. When using the IEMOCAP 17/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021) Manuscript to be reviewed Computer Science database, the proposed methodology outperforms (Yi and Mak (2019); Guo et al. (2019); Xia and Liu (2017); Daneshfar et al. (2020); Mustaqeem et al. (2020); Meng et al. (2019)). The classification results of the proposed scheme show a significant improvement over current methods. With the RAVDESS database, the proposed approach achieved 73.50 percent accuracy.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Nomenclature</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ACRNN</ns0:cell><ns0:cell>Attention Convolutional Recur-</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>K-Nearest Neighbors</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>rent Neural Network</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>BEL</ns0:cell><ns0:cell>Brain Emotional Learning</ns0:cell><ns0:cell>LPCC</ns0:cell><ns0:cell>Linear Predictive Cepstral Co-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>efficients</ns0:cell></ns0:row><ns0:row><ns0:cell>BiLSTM</ns0:cell><ns0:cell>Bidirectional Long Short-Term</ns0:cell><ns0:cell>MFCC</ns0:cell><ns0:cell>Mel Frequency Cepstral Coeffi-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Memory</ns0:cell><ns0:cell /><ns0:cell>cients</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell cols='2'>Convolutional Neural Network MLP</ns0:cell><ns0:cell>Multilayer Perceptron</ns0:cell></ns0:row><ns0:row><ns0:cell>CL</ns0:cell><ns0:cell>Convolutional Layer</ns0:cell><ns0:cell>MSF</ns0:cell><ns0:cell>Modulation Spectral Features</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell cols='2'>Convolutional Neural Network PAD</ns0:cell><ns0:cell>Pleasure-Arousal-Dominance</ns0:cell></ns0:row><ns0:row><ns0:cell>CFS</ns0:cell><ns0:cell>Correlation-Based Feature Se-</ns0:cell><ns0:cell>PL</ns0:cell><ns0:cell>Pooling Layer</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>lection</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DBN</ns0:cell><ns0:cell>Deep Belief Network</ns0:cell><ns0:cell>RBFNN</ns0:cell><ns0:cell>Radial Basis Function Neural</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell 
/><ns0:cell>Network</ns0:cell></ns0:row><ns0:row><ns0:cell>DCNN</ns0:cell><ns0:cell>Deep Convolutional Neural Net-</ns0:cell><ns0:cell>RBF</ns0:cell><ns0:cell>Radial Basis Function</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>work</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DNN</ns0:cell><ns0:cell>Deep Neural Network</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>Random Forest</ns0:cell></ns0:row><ns0:row><ns0:cell>DRCNN</ns0:cell><ns0:cell>Deep Retinal CNNs</ns0:cell><ns0:cell>RP</ns0:cell><ns0:cell>Residual Phase</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>Decision Tree</ns0:cell><ns0:cell>RNN</ns0:cell><ns0:cell>Recurrent Neural Network</ns0:cell></ns0:row><ns0:row><ns0:cell>FS</ns0:cell><ns0:cell>Feature Selection</ns0:cell><ns0:cell>SAVEE</ns0:cell><ns0:cell>Surrey Audio-Visual Expressed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Emotion</ns0:cell></ns0:row><ns0:row><ns0:cell>FCL</ns0:cell><ns0:cell>Fully Connected Layer</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Speaker-Dependent</ns0:cell></ns0:row><ns0:row><ns0:cell>FBR</ns0:cell><ns0:cell>Filter Bank Energy</ns0:cell><ns0:cell>SI</ns0:cell><ns0:cell>Speaker-Independent</ns0:cell></ns0:row><ns0:row><ns0:cell>GMM</ns0:cell><ns0:cell>Gaussian Mixture Model</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>Support Vector Machine</ns0:cell></ns0:row><ns0:row><ns0:cell>GP</ns0:cell><ns0:cell>Gaussian Process</ns0:cell><ns0:cell>SER</ns0:cell><ns0:cell>Speech Emotion Recognition</ns0:cell></ns0:row><ns0:row><ns0:cell>HMM</ns0:cell><ns0:cell>Hidden Markov Model</ns0:cell><ns0:cell>TEO</ns0:cell><ns0:cell>Teager Energy Operator</ns0:cell></ns0:row><ns0:row><ns0:cell>KELM</ns0:cell><ns0:cell>Kernel Extreme Learning Ma-</ns0:cell><ns0:cell>ZCR</ns0:cell><ns0:cell>Zero-Crossing Rate</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>chine</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>databases have a small amount of data. 
Deep learning model approaches are insufficient for training</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Detailed description of the datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Datasets</ns0:cell><ns0:cell>Speakers</ns0:cell><ns0:cell cols='2'>Emotions</ns0:cell><ns0:cell /><ns0:cell>Languages</ns0:cell><ns0:cell>Size</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>24 Actors (12</ns0:cell><ns0:cell cols='3'>eight emotions (</ns0:cell><ns0:cell>North American</ns0:cell><ns0:cell>7356 files (total</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male, 12 female)</ns0:cell><ns0:cell cols='3'>calm, neutral, an-</ns0:cell><ns0:cell>English</ns0:cell><ns0:cell>size: 24.8 GB).</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>gry, happy, fear,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>surprise, sad, dis-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>gust )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>4 (male)</ns0:cell><ns0:cell cols='3'>seven emotions</ns0:cell><ns0:cell>British English</ns0:cell><ns0:cell>480 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>(sadness, neutral,</ns0:cell><ns0:cell /><ns0:cell>(120 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>frustration, hap-</ns0:cell><ns0:cell /><ns0:cell>per speaker)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>piness, disgust</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>,anger, surprise)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>10 (5 male, 5 fe-</ns0:cell><ns0:cell cols='3'>seven emotions</ns0:cell><ns0:cell>German</ns0:cell><ns0:cell>535 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male)</ns0:cell><ns0:cell cols='2'>(neutral,</ns0:cell><ns0:cell>fear,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>boredom, disgust,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>sad, angry, joy)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>IEMOCAP</ns0:cell><ns0:cell>10 (5 male, 5 fe-</ns0:cell><ns0:cell>nine</ns0:cell><ns0:cell cols='2'>emotions</ns0:cell><ns0:cell>English</ns0:cell><ns0:cell>12 hours of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male)</ns0:cell><ns0:cell cols='2'>(surprise,</ns0:cell><ns0:cell>hap-</ns0:cell><ns0:cell /><ns0:cell>recordings</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>piness, sadness,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>anger, fear, ex-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>citement, neutral,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>frustration and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>others)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Categories of emotional speech databases, their features, and some examples of each category.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Simulated</ns0:cell><ns0:cell>Semi 
Natural</ns0:cell></ns0:row><ns0:row><ns0:cell>Description</ns0:cell><ns0:cell>generated by trained and expe-</ns0:cell><ns0:cell>created by having individuals</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>rienced actors delivering the</ns0:cell><ns0:cell>read a script with a different</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>same sentence with different</ns0:cell><ns0:cell>emotions</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>degrees of emotion</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Single emotion at a time</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Widely used</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell>Copyrights and privacy pro-</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>tection</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Includes contextual informa-</ns0:cell><ns0:cell>no</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>tion</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Includes situational informa-</ns0:cell><ns0:cell>no</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>tion</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emotions that are separate</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell>and distinct</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Numerous emotions</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Simple to model</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell>Numerous emotions</ns0:cell><ns0:cell>yes</ns0:cell><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Examples</ns0:cell><ns0:cell>EMO-DB, RAVDESS</ns0:cell><ns0:cell>IEMOCAP , SAVEE</ns0:cell></ns0:row></ns0:table><ns0:note>et al. (2013); Daneshfar et al. (2020)). They also needed fewer training data and could deal directly 190 with dynamic variables. Two different acoustic paralinguistic feature sets were used in (Haider et al. 191 6/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table 4 gives the results achieved by five different classifiers utilizing the features extracted from CL4 of the model. The SVM achieved 92.11%, 87.65%, 82.98%, and 79.66% accuracies for the Emo-DB, RAVDESS, SAVEE and IEMODB databases, respectively. The proposed method reported the highest accuracy of 86.56% on the Emo-DB database with KNN. The MLP classifier obtained 86.75% accuracy for the IEMOCAP database. In contrast, the SVM reported 79.66% accuracy for the IEMOCAP database. The MLP classifier reported the highest accuracy, 91.51%, on the Emo-DB database. The RF attained 82.47% accuracy on the Emo-DB database, while DT achieved 80.53% accuracy on Emo-DB. Standard deviation and weighted average recall of the SD experiments without FS Table 4 represents the results of the FS approach. The proposed FS technique selected 460 distinguishing features out of a total of 64,896 features for the Emo-DB dataset. 
The FS method obtained 170,465,277 feature maps for the SAVEE, RAVDESS, and IEMOCAP datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>87.65&#177;1.79</ns0:cell><ns0:cell>78.65&#177;4.94</ns0:cell><ns0:cell>78.15&#177;3.39</ns0:cell><ns0:cell>80.67&#177;2.89</ns0:cell><ns0:cell>76.28&#177;3.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>82.98&#177;4.87</ns0:cell><ns0:cell>78.38&#177;4.10</ns0:cell><ns0:cell>79.81&#177;4.05</ns0:cell><ns0:cell>81.13&#177;3.63</ns0:cell><ns0:cell>69.15&#177;2.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>92.11&#177;2.29</ns0:cell><ns0:cell>82.47&#177;3.52</ns0:cell><ns0:cell>86.56&#177;2.78</ns0:cell><ns0:cell>91.51&#177;2.09</ns0:cell><ns0:cell>80.53&#177;4.72</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>79.66&#177;4.44</ns0:cell><ns0:cell>80.93&#177;3.75</ns0:cell><ns0:cell>74.33&#177;3.37</ns0:cell><ns0:cell>86.75&#177;3.64</ns0:cell><ns0:cell>67.25&#177;2.33</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SD experiments with FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>93.61&#177;1.32</ns0:cell><ns0:cell>85.21&#177;3.55</ns0:cell><ns0:cell>88.34&#177;2.67</ns0:cell><ns0:cell>84.50&#177;2.23</ns0:cell><ns0:cell>78.45&#177;2.67</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>88.77&#177;2.45</ns0:cell><ns0:cell>86.79&#177;2.96</ns0:cell><ns0:cell>83.45&#177;3.21</ns0:cell><ns0:cell>85.45&#177;3.12</ns0:cell><ns0:cell>75.68&#177;3.82</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>96.02&#177;1.07</ns0:cell><ns0:cell>93.51&#177;2.21</ns0:cell><ns0:cell>92.45&#177;2.45</ns0:cell><ns0:cell>95.80&#177;2.34</ns0:cell><ns0:cell>79.13&#177;4.01</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>77.23&#177;2.66</ns0:cell><ns0:cell>86.23&#177;2.54</ns0:cell><ns0:cell>82.78&#177;2.17</ns0:cell><ns0:cell>89.12&#177;2.57</ns0:cell><ns0:cell>72.32&#177;1.72</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>11/21</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SI experiment results without FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>75.34&#177;2.58</ns0:cell><ns0:cell>65.78&#177;2.32</ns0:cell><ns0:cell>69.12&#177;2.20</ns0:cell><ns0:cell>71.01&#177;2.84</ns0:cell><ns0:cell>67.41&#177;2.37</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>63.02&#177;3.21</ns0:cell><ns0:cell>59.66&#177;3.79</ns0:cell><ns0:cell>71.81&#177;3.81</ns0:cell><ns0:cell>65.18&#177;2.05</ns0:cell><ns0:cell>59.55&#177;2.23</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>87.65&#177;2.56</ns0:cell><ns0:cell>79.45&#177;2.11</ns0:cell><ns0:cell>75.30&#177;2.19</ns0:cell><ns0:cell>88.32&#177;2.67</ns0:cell><ns0:cell>76.27&#177;2.35</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>61.85&#177;3.20</ns0:cell><ns0:cell>60.11&#177;4.20</ns0:cell><ns0:cell>55.47&#177;2.96</ns0:cell><ns0:cell>63.18&#177;1.62</ns0:cell><ns0:cell>54.69&#177;3.72</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SI experiment results with FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>80.94&#177;2.17</ns0:cell><ns0:cell>76.82&#177;2.16</ns0:cell><ns0:cell>75.57&#177;3.29</ns0:cell><ns0:cell>82.75&#177;2.10</ns0:cell><ns0:cell>76.18&#177;1.33</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>70.06&#177;3.33</ns0:cell><ns0:cell>65.55&#177;2.42</ns0:cell><ns0:cell>60.58&#177;3.84</ns0:cell><ns0:cell>75.38&#177;2.74</ns0:cell><ns0:cell>63.69&#177;2.22</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>90.78&#177;2.45</ns0:cell><ns0:cell>85.73&#177;2.58</ns0:cell><ns0:cell>81.32&#177;2.12</ns0:cell><ns0:cell>92.65&#177;3.09</ns0:cell><ns0:cell>78.21&#177;3.47</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>84.00&#177;2.76</ns0:cell><ns0:cell>78.08&#177;2.65</ns0:cell><ns0:cell>76.44&#177;3.88</ns0:cell><ns0:cell>80.23&#177;2.77</ns0:cell><ns0:cell>75.78&#177;2.25</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of the SD experiments with existing methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>Reference</ns0:cell><ns0:cell>Feature</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS (Bhavan et al. 
(2019))</ns0:cell><ns0:cell>Spectral Centroids, MFCC and</ns0:cell><ns0:cell>75.69</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MFCC derivatives</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.79</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>88.77</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>( &#214;zseven (2019))</ns0:cell><ns0:cell>OpenSmile Features</ns0:cell><ns0:cell>72.39</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.79</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>88.77</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Guo et al. (2018))</ns0:cell><ns0:cell>Amplitude spectrogram and phase in-</ns0:cell><ns0:cell>91.78</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>formation</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Chen et al. (2018))</ns0:cell><ns0:cell>3-D ACRNN</ns0:cell><ns0:cell>82.82</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN + BiLSTM</ns0:cell><ns0:cell>90.78</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>( &#214;zseven (2019))</ns0:cell><ns0:cell>OpenSMILE features</ns0:cell><ns0:cell>84.62</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Bhavan et al. (2019))</ns0:cell><ns0:cell>Spectral Centroids, MFCC and</ns0:cell><ns0:cell>92.45</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MFCC derivatives</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>95.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>96.02</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Satt et al. (2017))</ns0:cell><ns0:cell>3 Convolution Layers + LSTM</ns0:cell><ns0:cell>68.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Chen et al. (2018))</ns0:cell><ns0:cell>3-D ACRNN</ns0:cell><ns0:cell>64.74</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Zhao et al. (2018))</ns0:cell><ns0:cell>Attention-BLSTM-FCN</ns0:cell><ns0:cell>64.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Etienne et al. (2018))</ns0:cell><ns0:cell>CNN+LSTM</ns0:cell><ns0:cell>64.50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN + BiLSTM</ns0:cell><ns0:cell>74.96</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>89.12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.23</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>12/21</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of SI experiments with existing methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>Reference</ns0:cell><ns0:cell>Feature</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>82.75</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>80.94</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>(Sun and Wen (2017))</ns0:cell><ns0:cell>Ensemble soft-MarginSoftmax (EM-</ns0:cell><ns0:cell>51.50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Softmax)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>(Haider et al. (2020))</ns0:cell><ns0:cell>eGeMAPs and emobase</ns0:cell><ns0:cell>42.40</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>75.38</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>70.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Badshah et al. (2017))</ns0:cell><ns0:cell>DCNN + DTPM</ns0:cell><ns0:cell>87.31</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Sun and Wen (2017))</ns0:cell><ns0:cell>Ensemble soft-MarginSoftmax (EM-</ns0:cell><ns0:cell>82.40</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Softmax)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Yi and Mak (2019))</ns0:cell><ns0:cell>OpenSmile Features + ADAN</ns0:cell><ns0:cell>83.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Guo et al. (2019))</ns0:cell><ns0:cell>Statistical Features and Empirical</ns0:cell><ns0:cell>84.49</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Features+ KELM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN+ BiLSTM</ns0:cell><ns0:cell>85.39</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Haider et al. (2020))</ns0:cell><ns0:cell>eGeMAPs and emobase</ns0:cell><ns0:cell>76.90</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Lech et al. (2020))</ns0:cell><ns0:cell>AlexNet</ns0:cell><ns0:cell>82.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Mustaqeem et al. (2020))</ns0:cell><ns0:cell>Radial Basis Function Network(</ns0:cell><ns0:cell>85.57</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RBFN) + Deep BiLSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>92.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>90.78</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Xia and Liu (2017))</ns0:cell><ns0:cell>SP + CNN</ns0:cell><ns0:cell>64.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Chen et al. 
(2018))</ns0:cell><ns0:cell>Dilated CNN+ BiLSTM</ns0:cell><ns0:cell>69.32</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Guo et al. (2019)</ns0:cell><ns0:cell>Statistical Features and Empirical</ns0:cell><ns0:cell>57.10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Features+ KELM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Yi and Mak (2019))</ns0:cell><ns0:cell>OpenSmile Features + ADAN</ns0:cell><ns0:cell>65.01</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Daneshfar et al. (2020))</ns0:cell><ns0:cell>IS10 + DBN</ns0:cell><ns0:cell>64.50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Mustaqeem et al. (2020))</ns0:cell><ns0:cell>Radial Basis Function Network(</ns0:cell><ns0:cell>72.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RBFN) + Deep BiLSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>89.12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.23</ns0:cell></ns0:row></ns0:table><ns0:note>13/21PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:1:0:NEW 14 Sep 2021)</ns0:note></ns0:figure> </ns0:body> "
" Original Article Title: “Effect on Emotion Classification of Feature Selection Approach Using Convolutional Neural Network” Dear Editor: Thank you very much for allowing a resubmission of our manuscript “Effect on Emotion Classification of Feature Selection Approach Using Convolutional Neural Network”. We are very happy to have received a positive evaluation, and we would like to express our appreciation to you and all Reviewers for the thoughtful comments and helpful suggestions. Reviewers raised several concerns, which we have carefully considered and made every effort to address. We fundamentally agree with all the comments made by the Reviewers, and we have incorporated corresponding revisions into the manuscript. Our detailed, point-by-point responses to the editorial and reviewer comments are given below, whereas the corresponding revisions are marked in colored text in the manuscript file. Specifically, blue text indicates changes made in response to the suggestions of Reviewers. Additionally, we have carefully revised the manuscript to ensure that the text is optimally phrased and free from typographical and grammatical errors. We believe that our manuscript has been considerably improved as a result of these revisions, and hope that our revised manuscript “Effect on Emotion Classification of Feature Selection Approach Using Convolutional Neural Network” is acceptable for publication in the Peerj Computer Science. We would like to thank you once again for your consideration of our work and for inviting us to submit the revised manuscript. We look forward to hearing from you. Best regards Hsien-Tsung Chang Chang Chang Gung University Department of Computer Science and Information Engineering Taoyuan, Taiwan E-mail: smallpig@widelab.org Reviewer 1 Basic reporting (Concern#1) The paper is very well written however at several places poor sentence structure makes the paper difficult to read and create ambiguities. Professional proof reading is required before publication. Author response: We changed the sentence structure of the manuscript.  Author action: We updated the manuscript by changing the sentence structure of the manuscript. We also send it to AJE for English proofreading after our revision. Following is the certification. Thanks. Experimental design (Concern#2) The contribution of the paper is the field is appreciable. Author response: Thanks Author action:  Thanks a lot. (Concern#3) In their proposed approach the authors have used the AlexNet for feature extractions. This network architecture is too old and has been superseded by several newer CNN based feed forward networks. Particularly the use of 11X11 convolutions in the AlexNet  has been mostly discontinued and 3x3 kernels are used in most current networks. Author response: Thanks a lot for the suggestion. Author action:  To the best of our knowledge, this is the first time we used an augmented Log-Mel spectrogram with AlexNet to extract the local level features. Additionally, it is also important to mention here that previous studies did not utilize a feature selection approach with pre-trained models and augmentation approach. we used the Correlation-Based feature selection approach with pertained approach (AlexNet) for both simulated and semi-natural databases. According to the existing literature (\cite{8967041, s20216008, 8873581}) most of the current studies used 11*11 convolutional in the Alexnet. (Concern#4) The authors have used the transfer learning approach by using a pretrained AlexNet. 
However, they have not clearly mentioned the dataset used for pretraining of the Alexnet. Author response: Thanks for the concern. Author action ­­­­­­­­:  We try to revise our manuscript by mentioned the dataset used for pretraining of the Alexnet. (page#5 Line#180) (Concern#5) The authors do not specified the details of deep learning python libraries (e.g. Tensorflow/Keras or Pytorch) etc. in section 4.2. Author response: Thanks a lot for the kind comments. We have included deep learning python libraries in section 4.2. Author action ­­­­­­­­:  We updated the manuscript by including the python libraries in the manuscript.(Page#10 Line#307) (Concern#6) In sub-section 5.0.1 the the number of neurons, activation functions and loss functions used to train the MLP classifier have not been specified. Similarly, hyperparameters used for the other methods are also not mentioned. Author response: Thank you for pointing this out. Author action:  We updated the manuscript by including details of all the machine learning algorithms used as a classifier. (Page#8 Line#260) Validity of the findings (Concern#7) The findings have been evaluated by using the well-known metrics in the field and conclusion is well stated. Author response: Thanks Author action:  Thanks a lot. Reviewer: Mohammad Ali Humayun Basic reporting (Concern#1) Language needs to be improved including the following particular issues: Line 71: ‘have been founded’ should be ‘have been found’ Line 77-78: 'Therefore' opening the first two consecutive sentences Line 109: ‘Emotions are categorized into two approaches’ should be ‘Emotions are categorized using two approaches’ Author response: We changed the sentence structure of the manuscript.  Author action: We updated the manuscript by changing the sentence structure of the manuscript. We also send it to AJE for English proofreading after our revision. Thanks. Following is the certification. (Concern#2) Section 2.1 and literature review in general presents various kinds of techniques and their results without a chronological sequence. Author response: Thanks a lot for the suggestion. Author action: We updated the section 2.1 and 2.2 with a chronological sequence. Experimental design (Concern#3) The method proposed in this paper extracts Alexnet based deep learning features from spectrogram for speech and then uses correlation based feature selection before feeding multiple shallow classifiers for emotion recognition. However the manuscript should address following two main issues: 1st issue: A similar method is proposed by ‘Melissa N., et al 2017 and 2020’. They treat the speech spectrogram as an RGB image and uses Alexnet for feature extraction to classify emotions without the ‘feature selection step though. The manuscript has not referred to these papers at all. The authors must highlight how the proposed method is different or better from the mentioned work. Author response: Thanks a lot for the suggestion. Author action: We updated the manuscript by referred both papers and also compared the results with our proposed approach. (Page#8 Line#260 and Page#13 Table.9) 2nd issue: The experimental setup for feature selection from speech features is not explained in detail. Both sections 3.4 and 3.5 provide the generic foundation of feature selection techniques without any explanation of implementation in the proposed setup. Author response: Thanks a lot for the suggestion. 
Author action: Beacause, proposed approach uses a pre-trained model to learn deep segment-level acoustic representations such as mid, side, left, right from image-like Mel-spectrograms. The pre-trained model is fine-tuned using target emotion audio corpora from a previously trained Learning algorithm. We used CFS(\cite{Wosiak2018}) approach for selecting the features. The feature selection technique's advantage is reducing the number of features by choosing the most discriminative features and discarding the remaining less useful features. By doing so, the workload of the classifiers was dramatically reduced. Validity of the findings (Concern#4) The results section presents confusion matrices for different datasets without explaining their significance. The manuscript should justify the classification metrics analyzed in results in terms of their significance e.g. what are the consequences of classifying a particular emotion as another and what the confusion matrices aim to highlight. Author response: Thanks a lot for the suggestion. Author action: We updated the manuscript by adding table 3. Table 3 represents the properties of the simulated and semi-natural databases. In our paper, we use two simulated and two Sami-natural databases. (Page#6 Table.3) (Concern#5) Tables 4-7 present accuracies for different classifiers without mentioning ‘accuracy’ in the table tile or any axis Author response: Thanks a lot for the suggestion. Author action: We updated the manuscript by mentioning ' Standard deviation and weighted average recall' in the table title. (Page#10-11 Table.4-7) (Concern#6) Line 398 in conclusion suggests that authors aim to test the results in cloud and edge computing environments. What is the rationale for such tests? Author response: Thanks for the concern. Author action: Because using edge technology, the weights of the deep network parameters can easily be stored for fast processing (/cite{8304394}). We also want to investigate other deep architectures to improve the system's performance using the eNTERFACE database and emotion in the wild challenge databases like BAUM-1s.  "
Here is a paper. Please give your review comments after reading it.
269
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Speech emotion recognition (SER) is a challenging issue because it is not clear which features are effective for classification. Emotionally related features are always extracted from speech signals for emotional classification. Handcrafted features are mainly used for emotional identification from audio signals. However, these features are not sufficient to correctly identify the emotional state of the speaker. The advantages of a deep convolutional neural network (DCNN) are investigated in the proposed work. A pretrained framework is used to extract the features from speech emotion databases. In this work, we adopt the feature selection (FS) approach to find the discriminative and most important features for SER. Many algorithms are used for the emotion classification problem. We use the random forest (RF), decision tree (DT), support vector machine (SVM), multilayer perceptron classifier (MLP), and k-nearest neighbors (KNN) to classify seven emotions. All experiments are performed by utilizing four different publicly accessible databases. Our method obtains accuracies of 92.02%, 88.77%, 93.61%, and 77.23% for Emo-DB, SAVEE, RAVDESS, and IEMOCAP, respectively, for speaker-dependent (SD) recognition with the feature selection method. Furthermore, compared to current handcrafted feature-based SER methods, the proposed method shows the best results for speaker-independent SER. For EMO-DB, all classifiers attain an accuracy of more than 80% with or without the feature selection technique.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Speech emotion recognition (SER) is a challenging issue because it is not clear which features are effective for classification. Emotionally related features are always extracted from speech signals for emotional classification. Handcrafted features are mainly used for emotional identification from audio signals. However, these features are not sufficient to correctly identify the emotional state of the speaker. The advantages of a deep convolutional neural network (DCNN) are investigated in the proposed work. A pretrained framework is used to extract the features from speech emotion databases. In this work, we adopt the feature selection (FS) approach to find the discriminative and most important features for SER. Many algorithms are used for the emotion classification problem. We use the random forest (RF), decision tree (DT), support vector machine (SVM), multilayer perceptron classifier (MLP), and k-nearest neighbors (KNN) to classify seven emotions. All experiments are performed by utilizing four different publicly accessible databases. Our method obtains accuracies of 92.02%, 88.77%, 93.61%, and 77.23% for Emo-DB, SAVEE, RAVDESS, and IEMOCAP, respectively, for speaker-dependent (SD) recognition with the feature selection method. Furthermore, compared to current handcrafted feature-based SER methods, the proposed method shows the best results for speaker-independent SER.</ns0:p><ns0:p>For EMO-DB, all classifiers attain an accuracy of more than 80% with or without the feature selection technique.</ns0:p><ns0:p>Three main issues should be addressed to obtain a successful SER framework: (i) selecting an excellent emotional database, (ii) performing useful feature extraction, and (iii) using deep learning algorithms to design accurate classifiers. However, emotional feature extraction is a significant problem in an SER framework. 
In prior studies, many researchers have suggested significant features of speech, such as energy, intensity, pitch, standard deviation, cepstrum coefficients, Mel-frequency cepstrum coefficients (MFCCs), zero-crossing rate (ZCR), formant frequency, filter bank energy (FBR), linear prediction cepstrum coefficients (LPCCs), modulation spectral features (MSFs) and Mel-spectrograms. In <ns0:ref type='bibr' target='#b63'>(Sezgin et al. (2012)</ns0:ref>), several distinguishing acoustic features were used to identify emotions: spectral, qualitative, continuous, and Teager energy operator-based (TEO) features. Thus, many researchers have suggested that the feature set comprises more speech emotion information <ns0:ref type='bibr' target='#b58'>(Rayaluru et al. (2019)</ns0:ref>). However, combining feature sets complicates the learning process and enhances the possibility of overfitting. In the last five years, researchers have presented many classification algorithms, such as the hidden Markov model (HMM) <ns0:ref type='bibr' target='#b44'>(Mao et al. (2019)</ns0:ref>), support vector machine (SVM) <ns0:ref type='bibr' target='#b39'>(Kurpukdee et al. (2017)</ns0:ref>), deep belief network (DBN) <ns0:ref type='bibr' target='#b64'>(Shi (2018)</ns0:ref>), K-nearest neighbors (KNN) <ns0:ref type='bibr' target='#b84'>(Zheng et al. (2020)</ns0:ref>) and bidirectional long short-term memory networks (BiLSTMs) <ns0:ref type='bibr' target='#b49'>(Mustaqeem et al. (2020)</ns0:ref>). Some researchers have also suggested different classifiers; in the brain emotional learning model (BEL) <ns0:ref type='bibr' target='#b49'>(Mustaqeem et al. (2020)</ns0:ref>), a multilayer perceptron (MLP) and adaptive neuro-fuzzy inference system are combined for SER. The multikernel Gaussian process (GP) <ns0:ref type='bibr' target='#b13'>(Chen et al. (2016b)</ns0:ref>) is another proposed classification strategy with two related notions. These provide for learning in the algorithm by combining two functions: the radial basis function (RBF) and the linear kernel function. In <ns0:ref type='bibr' target='#b13'>(Chen et al. (2016b)</ns0:ref>), the proposed system extracted two spectral features and used these two features to train different machine learning models.</ns0:p><ns0:p>The proposed technique estimated that the combined features had high accuracy, above 90 percent on the Spanish emotional database and 80 percent on the Berlin emotional database. Han et al. adopted both utterance-and segment-level features to identify emotions. Some researchers have weighted the advantages and disadvantages of each feature. However, no one has identified which feature is the best feature among feature categories <ns0:ref type='bibr' target='#b22'>(El Ayadi et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b69'>Sun et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Anagnostopoulos et al. (2015)</ns0:ref>). Many deep learning models have been proposed in SER to determine the high-level emotion features of utterances to establish a hierarchical representation of speech. The accuracy of handcrafted features is relatively high, and this feature extraction technique always requires manual labor <ns0:ref type='bibr' target='#b3'>(Anagnostopoulos et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b11'>Chen et al. (2016a</ns0:ref><ns0:ref type='bibr' target='#b10'>Chen et al. ( , 2012))</ns0:ref>). The extraction of handcrafted features usually ignores the high-level features. 
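For readers who want a concrete picture of such handcrafted descriptors, the short sketch below pools MFCCs, zero-crossing rate, frame energy, and a log-Mel spectrogram into one utterance-level vector. It is only an illustration: librosa and every parameter value here are assumptions, not the feature set used in this paper.

```python
# Illustrative only: a few of the handcrafted features named above, computed
# with librosa (assumed library) and summarized per utterance.
import numpy as np
import librosa

def handcrafted_features(wav_path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)        # MFCCs
    zcr = librosa.feature.zero_crossing_rate(y=y)                 # zero-crossing rate
    energy = librosa.feature.rms(y=y)                             # frame energy
    logmel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    stats = lambda m: np.concatenate([m.mean(axis=1), m.std(axis=1)])
    # utterance-level statistics over the frame-level descriptors
    return np.concatenate([stats(mfcc), stats(zcr), stats(energy), stats(logmel)])
```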
However, the best and most appropriate features that are emotionally powerful must be selected by effective performance for SER.</ns0:p><ns0:p>Therefore, it is more important to select specific speech features that are not affected by country, speaking style of the speaker, culture, or region. Feature selection (FS) is also essential after extraction and is accompanied by an appropriate classifier to recognize emotions from speech. A summary of FS is presented in <ns0:ref type='bibr' target='#b36'>(Kerkeni et al. (2019)</ns0:ref>). Both feature extraction and FS effectively reduce computational complexity, enhance learning effectiveness, and reduce the storage needed. To extract the local features, we use a convolutional neural network (CNN) (AlexNet). The CNN automatically extracts the appropriate local features from the augmented input spectrogram of an audio speech signal. When using CNNs for the SER system, the spectrogram is frequently used as the CNN input to obtain high-level features. In recent years, numerous studies have been presented, such as <ns0:ref type='bibr' target='#b0'>(Abdel-Hamid et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Krizhevsky et al. (2017)</ns0:ref>). The authors used a CNN model for feature extraction of audio speech signals. Recently, deep learning models such as AlexNet <ns0:ref type='bibr' target='#b42'>(Li et al. (2021)</ns0:ref>), VGG <ns0:ref type='bibr' target='#b65'>(Simonyan and Zisserman (2015)</ns0:ref>), and ResNet <ns0:ref type='bibr' target='#b31'>(He et al. (2015)</ns0:ref>) have been used extensively to perform different classification tasks. Additionally, these deep learning models regularly perform much better than shallow CNNs. The main reason is that deep CNNs extract mid-level features from the input data using multilevel convolutional and pooling layers.</ns0:p><ns0:p>The detailed abbreviations and definitions used in the paper are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The main contributions of this paper are as follows: 1). In the proposed study, AlexNet is used to extract features for a speech emotion recognition system. 2). A feature selection approach is used to enhance the accuracy of SER. 3). The proposed approach performs better than existing handcrafted and deep-learning methods for SD and SI experiments. </ns0:p></ns0:div> <ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In this study, five different machine learning algorithms are used for emotion recognition tasks. There are two main parts of SER. One part is based on distinguishing feature extraction from audio signals. The second part is based on selecting a classifier that classifies emotional classes from speech utterances.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Speech Emotion Recognition Using Machine Learning Approaches</ns0:head><ns0:p>Researchers have used different machine learning classifiers to identify emotional classes from speech: SVM <ns0:ref type='bibr' target='#b63'>(Sezgin et al. (2012)</ns0:ref>), random forest (RF) <ns0:ref type='bibr' target='#b52'>(Noroozi et al. (2017)</ns0:ref>), Gaussian mixture models (GMMs) <ns0:ref type='bibr' target='#b54'>(Patel et al. (2017)</ns0:ref>), HMMs <ns0:ref type='bibr' target='#b44'>(Mao et al. (2019)</ns0:ref>), CNNs <ns0:ref type='bibr' target='#b15'>(Christy et al. (2020)</ns0:ref>), k-nearest neighbors (KNN) <ns0:ref type='bibr' target='#b35'>(Kapoor and Thakur (2021)</ns0:ref>) and MLP. 
These algorithms have been commonly used to identify emotions.</ns0:p><ns0:p>Emotions are categorized using two approaches: categorical and dimensional approaches. Emotions are classified into small groups in the categorical approach. Ekman <ns0:ref type='bibr' target='#b21'>(Ekman (1992)</ns0:ref>) proposed six basic emotions: anger, happiness, sadness, fear, surprise, and disgust. In the second category, emotions are defined by axes with a combination of several dimensions <ns0:ref type='bibr' target='#b17'>(Costanzi et al. (2019)</ns0:ref>). Different researchers have described emotions relative to one or more dimensions. Pleasure-arousal-dominance (PAD) is a three-dimensional emotional state model proposed by <ns0:ref type='bibr' target='#b46'>(Mehrabian (1996)</ns0:ref>). Different features are essential in identifying speech emotions from voice. Spectral features are significant and widely used to classify emotions. A decision tree was used to identify emotions from the CASIA Chinese emotion corpus in <ns0:ref type='bibr' target='#b70'>(Tao et al. (2008)</ns0:ref>) and achieved 89.6% accuracy. AB <ns0:ref type='bibr'>Kandali et al. introduced</ns0:ref> an approach to classify emotion-founded MFCCs as the main features and applied a GMM as a classifier <ns0:ref type='bibr' target='#b34'>(Kandali et al. (2009)</ns0:ref>). <ns0:ref type='bibr'>Milton, A. et al. presented</ns0:ref> a three-stage traditional SVM classifying different Berlin emotional datasets <ns0:ref type='bibr' target='#b48'>(Milton et al. (2013)</ns0:ref>). VB Waghmare et al. adopted spectral features (MFCCs) as the main feature and classified emotions from the Marathi speech dataset <ns0:ref type='bibr' target='#b73'>(Waghmare et al. (2014)</ns0:ref>). <ns0:ref type='bibr'>Demircan, S. et al. extracted</ns0:ref> MFCC features from the Berlin EmoDB database. They used the KNN algorithm to recognize speech emotions <ns0:ref type='bibr' target='#b19'>(Demircan and Kahramanli (2014)</ns0:ref>). The Berlin emotional speech database (EMO-DB) was used in the experiment, and the accuracy obtained was between 90% and 99.5%.</ns0:p><ns0:p>Hossain et al. proposed a cloud-based collaborative media system that uses emotions from speech signals and uses standard features such as MFCCs <ns0:ref type='bibr' target='#b33'>(Hossain.M. Shamim (2014)</ns0:ref>). Paralinguistic features and prosodic features were utilized to detect emotions from speech in <ns0:ref type='bibr' target='#b1'>(Alonso et al. (2015)</ns0:ref>). SVM, a radial basis function neural network (RBFNN), and an autoassociative neural network (AANN) were used to recognize emotions after combining two features, MFCCs and the residual phase (RP), from a music database <ns0:ref type='bibr' target='#b50'>(Nalini and Palanivel (2016)</ns0:ref>). SVMs and DBNs were examined utilizing the Chinese academic database <ns0:ref type='bibr' target='#b82'>(Zhang et al. (2017)</ns0:ref>). The accuracy using DBNs was 94.5%, and the accuracy of the SVM was approximately 85%. In <ns0:ref type='bibr'>(C.K. et al. (2017)</ns0:ref>), particle swarm optimization-based features and high-order statistical features were utilized. <ns0:ref type='bibr'>Chourasia et al. implemented</ns0:ref> an SVM and HMM to classify speech emotions after extracting the spectral features from speech signals <ns0:ref type='bibr' target='#b14'>(Chourasia et al. 
(2021)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Speech Emotion Recognition Using Deep Learning Approaches</ns0:head><ns0:p>Low-level handcrafted features are very useful in distinguishing speech emotions. With many successful deep neural network (DNN) applications, many experts have started to target in-depth emotional feature learning. <ns0:ref type='bibr'>Schmidt et al. used</ns0:ref> an approach based on linear regression and deep belief networks to identify musical emotions <ns0:ref type='bibr' target='#b62'>(Schmidt and Kim (2011)</ns0:ref>). They used the MoodSwings Lite music database and obtained a 5.41% error rate. Duc Le et al. implemented hybrid classifiers, which were a set of DBNs and HMMs, and attained good results on FAU Aibo <ns0:ref type='bibr' target='#b40'>(Le and Provost (2013)</ns0:ref>). Deng et al. presented a transfer learning feature method for speech emotion recognition based on a sparse autoencoder. Several databases were used, including the eNTERFACE and EMO-DB databases <ns0:ref type='bibr' target='#b20'>(Deng et al. (2013)</ns0:ref>). In <ns0:ref type='bibr' target='#b55'>(Poon-Feng et al. (2014)</ns0:ref>), a generalized discriminant analysis method (Gerda) was presented with several Boltzmann machines to analyze and classify emotions from speech and improve the previous reported baseline by traditional approaches. Erik M. Schmidt et al. proposed a regression-based DBN to recognize music emotions and a model based on three hidden layers to learn emotional features <ns0:ref type='bibr' target='#b30'>(Han et al. (2014)</ns0:ref>).</ns0:p><ns0:p>Trentin et al. proposed a probabilistic echo-state network-based emotion recognition framework that obtained an accuracy of 96.69% using the WaSep database <ns0:ref type='bibr' target='#b71'>(Trentin et al. (2015)</ns0:ref>). More recent work introduced deep retinal CNNs (DRCNNs) in <ns0:ref type='bibr' target='#b51'>(Niu et al. (2017)</ns0:ref>), which showed good performance in recognizing emotions from speech signals. The presented approach obtained the highest accuracy, 99.25%, in the IEMOCAP database. In <ns0:ref type='bibr' target='#b24'>(Fayek et al. (2017)</ns0:ref>), the authors suggested deep learning approaches. A speech signal spectrogram was used as an input. The signal may be represented in terms of time and frequency. The spectrogram is a fundamental and efficient way to describe emotional speech impulses in the time-frequency domain. It has been used with particular effectiveness for voice and speaker recognition and word recognition <ns0:ref type='bibr' target='#b66'>(Stolar et al. (2017)</ns0:ref>). In <ns0:ref type='bibr' target='#b66'>(Stolar et al. (2017)</ns0:ref>), the existing approach used ALEXNet-SVM, experiments were performed on the EMO-DB database with seven emotions. Satt A and S. Rozenberg et al. suggested another efficient convolutional LSTM approach for emotion classification.</ns0:p><ns0:p>The introduced model learned spatial patterns and spatial spectrogram patterns representing information on the emotional states <ns0:ref type='bibr' target='#b61'>(Satt et al. (2017)</ns0:ref>). The experiment was performed on the IEMOCAP database with four emotions. Two different databases were used to extract prosodic and spectral features with an ensemble softmax regression approach <ns0:ref type='bibr' target='#b68'>(Sun and Wen (2017)</ns0:ref>). For the identification of emotional (2017)) to classify four emotions from the IEMOCAP database: happy, neutral, angry, and sad. 
In <ns0:ref type='bibr' target='#b76'>(Xia and Liu (2017)</ns0:ref>), multitasking learning was used to obtain activation and valence data for speech emotion detection using the DBN model. IEMOCAP was used in the experiment to identify the four emotions.</ns0:p><ns0:p>However, high computational costs and a large amount of data are required for deep learning techniques.</ns0:p><ns0:p>The majority of current speech emotional databases have a small amount of data. Deep learning model approaches are insufficient for training with large-scale parameters. A pretrained deep learning model is used based on the above studies. In <ns0:ref type='bibr' target='#b4'>(Badshah et al. (2017)</ns0:ref>), a pretrained DCNN model was introduced for speech emotion recognition. The outcomes were improved with seven emotional states. In <ns0:ref type='bibr' target='#b4'>(Badshah et al. (2017)</ns0:ref>), The authors suggested a DCNN accompanied by a discriminant temporal pyramid matching with four different databases. In the suggested approach, the authors used six emotional classes for BAUM-1s, eNTERFACE05, RML databases and used seven emotions for the Emo-DB databases. DNNs were used to divide emotional probabilities into segments <ns0:ref type='bibr' target='#b25'>(Gu et al. (2018)</ns0:ref>), which were utilized to create utterance features; these probabilities were fed to the classifier. The IEMOCAP database was used in the experiment, and the obtained accuracy was 54.3%. In <ns0:ref type='bibr' target='#b83'>(Zhao et al. (2018)</ns0:ref>), the suggested approach used integrated attention with a fully convolutional network (FCN) to automatically learn the optimal spatiotemporal representations of signals from the IEMOCAP database. The hybrid architecture proposed in <ns0:ref type='bibr' target='#b23'>(Etienne et al. (2018)</ns0:ref>) included a data augmentation technique. In <ns0:ref type='bibr' target='#b74'>(Wang and Guan (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b80'>Zhang et al. (2018)</ns0:ref>), the fully connected layer (FC7) of AlexNet was used for the extraction process. The results were evaluated on four different databases with six emotional states. In <ns0:ref type='bibr' target='#b27'>(Guo et al. (2018)</ns0:ref>), an approach for SER that combined phase and amplitude information utilizing a CNN was investigated. In Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b29'>(Haider et al. (2020)</ns0:ref>). An implementation of real-time voice emotion identification using AlexNet was described in <ns0:ref type='bibr' target='#b41'>(Lech et al. (2020)</ns0:ref>). When trained on the Berlin Emotional Speech (EMO-DB) database with six emotional classes, the presented method obtained an average accuracy of 82%. According to existing research <ns0:ref type='bibr'>(Stolar et</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PROPOSED METHOD</ns0:head><ns0:p>This section describes the proposed pretrained CNN (AlexNet) algorithm for the SER framework. We </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Creation of the Audio Input</ns0:head><ns0:p>In the proposed method, the Mel-spectrogram segment is generated from the original speech signal. We </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Emotion Recognition Using AlexNet</ns0:head><ns0:p>In the proposed method, CL4 of the pretrained model is used for feature extraction. 
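A minimal sketch of this step is given below. It assumes torchvision's ImageNet-pretrained AlexNet purely because that implementation is readily available; the paper's own implementation uses Keras, and torchvision's channel widths differ slightly from the original AlexNet, whose fourth convolutional layer yields 13 × 13 × 384 = 64,896 activations for a 227 × 227 input. librosa, OpenCV, and all parameter values are likewise assumptions for illustration.

```python
# Sketch only: log-Mel "image" of an utterance fed to a pretrained AlexNet,
# keeping the activations of the fourth convolutional layer as features.
import numpy as np
import librosa
import cv2
import torch
import torchvision

def logmel_image(wav_path, sr=16000, size=227):
    y, _ = librosa.load(wav_path, sr=sr)
    logmel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128))
    img = cv2.resize(logmel.astype(np.float32), (size, size))     # 227 x 227 input (Section 3.3.1)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)      # simple [0, 1] scaling
    return np.repeat(img[None, :, :], 3, axis=0)                  # replicate to 3 channels

# torchvision's AlexNet (older versions: pretrained=True); its layer widths
# differ slightly from the original AlexNet described in the paper.
alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
up_to_conv4 = torch.nn.Sequential(*list(alexnet.features.children())[:9])

def cl4_features(wav_path):
    x = torch.from_numpy(logmel_image(wav_path)).unsqueeze(0)     # (1, 3, 227, 227)
    with torch.no_grad():
        fmap = up_to_conv4(x)
    return fmap.flatten().numpy()                                 # one feature vector per utterance
```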
The CFS feature selection approach is used to select the most discriminative features. The CFS approach selects only very highly correlated features with output class labels. The five different classification models are used to test the accuracy of the feature subsets.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Feature Extraction</ns0:head><ns0:p>In </ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.1'>Input Layer</ns0:head><ns0:p>This layer of the pretrained model is a fixed-size input layer. We resample the Mel spectrogram of the signal to a fixed size 227 &#215; 227 &#215; 3.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.2'>Convolutional Layer (CL)</ns0:head><ns0:p>The convolutional layer is composed of convolutional filters. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.3'>Pooling Layer (PL)</ns0:head><ns0:p>After the CLs, a pooling layer is used. The goal of the pooling layer is to subsample the feature groups.</ns0:p><ns0:p>The feature groups are obtained from the previous CLs to create a single data convolutional feature group from the local areas. Average pooling and max-pooling are the two basic pooling operations. The max-pooling layer employs maximum filter activation across different points in a quantified frame to produce a modified resolution type of CL activation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.4'>Fully Connected Layers (FCLs)</ns0:head><ns0:p>Fully connected layers incorporate the characteristics acquired from the PL and create a feature vector for classification. The output of the CLs and PLs is given to the fully connected layers. There are three fully connected layers in AlexNet: FC6, FC7, and FC8. A 4096-dimensional feature map is generated by FC6 and FC7, while FC8 generates 1000-dimensional feature groups.</ns0:p><ns0:p>Feature maps can be created using FCLs. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Feature Selection</ns0:head><ns0:p>The discriminative and related features for the model are determined by feature selection. FS approaches are used with several models to minimize the training time and enhance the ability to generalize by decreasing overfitting. The main goal of feature selection is to remove insignificant and redundant features.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Correlation-Based Measure</ns0:head><ns0:p>We can identify an excellent feature if it is related to the class features and is not redundant with respect to any other class features. For this reason, we use entropy-based information theory. The equation of entropy-based information theory is defined as:</ns0:p><ns0:p>F(E) = &#8722;&#931;S(e j )log2(S(e j )).</ns0:p><ns0:p>(1)</ns0:p><ns0:p>The entropy of E after examining the values of G is defined in the equation below:</ns0:p><ns0:formula xml:id='formula_0'>F(E/G) = &#8722;&#931;S(g k )&#931;S(e j /g k )log2(S(e j /g k ))<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>S(e j ) denotes the probability for all values of E, whereas S(e j /g k ) denotes the probabilities of E when the values of G are specified. The percentage by which the entropy of E decreases reflects the irrelevant information about E given by G, which is known as information gain. The equation of information gain is given below:</ns0:p><ns0:formula xml:id='formula_1'>I(E/G) = (F(E) &#8722; F(E/G)).</ns0:formula><ns0:p>(3)</ns0:p><ns0:p>If I(E/G) &#191; I(H/G), then we can conclude that feature G is much more closely correlated to feature E than to feature H. 
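The quantities in Eqs. (1)-(3) can be written down directly; the sketch below does so for discretized feature values (CFS-style selection typically bins continuous features first). It is a numerical illustration, not the paper's implementation.

```python
# Illustrative computation of entropy F(E), conditional entropy F(E|G),
# and information gain I(E|G) from Eqs. (1)-(3), on discrete-valued arrays.
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))                        # Eq. (1)

def conditional_entropy(e, g):
    e, g = np.asarray(e), np.asarray(g)
    return sum((g == gk).mean() * entropy(e[g == gk])     # Eq. (2)
               for gk in np.unique(g))

def information_gain(e, g):
    return entropy(e) - conditional_entropy(e, g)         # Eq. (3)
```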
We possess one more metric, symmetrical uncertainty, which indicates the correlation between features, defined by the equation below: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>SU(E, G) = 2[I(E/G)/F(E) + F(G)]. (<ns0:label>4</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We use the correlation feature-based approach (CFS) <ns0:ref type='bibr' target='#b75'>(Wosiak and Zakrzewska (2018)</ns0:ref>) in the proposed work based on the previously described techniques. It evaluates a subset of features and selects only highly correlated discriminative attributes. CFS ranks the features by applying a heuristic correlation evaluation function. It estimates the correlation within the features. CFS drops unrelated features that have limited similarity with the class label. The CFS equation is as follows:</ns0:p><ns0:formula xml:id='formula_3'>FS = max Sk r c f 1 + r c f 2 + r c f 3 + .... + r c f k k + 2(r f 1 f 2 + .... + r f i f j + .... + r f k f k&#8722;1 ) ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where k represents the total number of features, r c f i represents the classification correlation of the features, and r f i f j represents the correlation between features. The extracted features are fed into classification algorithms. CFS usually deletes (backward selection) or adds (forward selection) one feature at a time.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>(b) gives the most discriminative number of selected features.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.6'>Classification Methods</ns0:head><ns0:p>The discriminative features provide input to the classifiers for emotion classification. In the proposed method, five different classifiers, KNN, RF, decision tree, MLP, and SVM, are used to evaluate the performance of speech emotion recognition.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.7'>Support Vector Machine (SVM)</ns0:head><ns0:p>SVMs are used for binary regression and classification. They create an optimal higher-dimensional space with a maximum class margin. SVMs identify the support vectors v j , weights w f j , and bias b to categorize the input information. For classification of the data, the following expression is used:</ns0:p><ns0:formula xml:id='formula_4'>sk(v, v j ) = (&#961;v e v j + k) z .<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In the above equations, k is a constant value, and b represents the degree of the polynomial. For a polynomial &#961; &#191; zero:</ns0:p><ns0:formula xml:id='formula_5'>v = (&#931; n i=0 w f j sk(v j , v) + b.<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>In the above equation, sk represents the kernel function, v is the input, vj is the support vector, wfj is the weight, and b is the bias. In our study, we utilize the polynomial kernel to translate the data into a higher-dimensional space.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.8'>k-Nearest Neighbors (KNN)</ns0:head><ns0:p>This classification algorithm keeps all data elements. It identifies the most comparable N examples and employs the target class emotions for all data examples based on similarity measures. In the proposed study, we fixed N = 10 for emotional classification. The KNN method finds the ten closest neighbors using the Euclidean distance, and emotional identification is performed using a majority vote.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.9'>Random Forest (RF)</ns0:head><ns0:p>An RF is a classification and regression ensemble learning classifier. 
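To connect Eq. (4) and Eq. (5) to code, the sketch below reuses the entropy helpers from the previous example, with symmetrical uncertainty standing in for both the feature-class and feature-feature correlations; the subset search itself (forward or backward selection, as noted above) is omitted. Again, this is an illustration rather than the implementation used in the paper.

```python
# Symmetrical uncertainty (Eq. 4) and the CFS merit of a feature subset (Eq. 5),
# assuming discretized feature columns and the helpers defined earlier.
import numpy as np
from itertools import combinations

def symmetrical_uncertainty(e, g):
    return 2.0 * information_gain(e, g) / (entropy(e) + entropy(g))

def cfs_merit(subset, features, labels):
    k = len(subset)
    r_cf = np.mean([symmetrical_uncertainty(labels, features[:, j]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([symmetrical_uncertainty(features[:, i], features[:, j])
                    for i, j in combinations(subset, 2)])
    # k * mean(r_cf) / sqrt(k + 2 * sum of pairwise r_ff), as in Eq. (5)
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```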
It creates a class of decision trees and a meaningful indicator of the individual trees for data training. The RF replaces each tree in the database at random, resulting in unique trees, in a process called bagging. The RF splits classification networks based on an arbitrary subset of characteristics per tree.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.10'>Multilayer Perceptron (MLP)</ns0:head><ns0:p>MLPs are neural networks that are widely employed in feedforward processes. They consist of multiple computational levels. Identification issues may be solved using MLPs. They use a supervised backpropagation method for classifying occurrences. The MLP classification model consists of three layers: the input layer, the hidden layers, and the output layer. The input layer contains neurons that are directly proportional to the features. The degree of the hidden layers depends on the overall degree of the emotions in the database. It features dimensions after the feature selection approach. The number of output neurons in the database is equivalent to the number of emotions. The sigmoid activation function utilized in this study is represented as follows: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_6'>p i = 1 1 + e &#8722; qi (<ns0:label>8</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the above equation, the state is represented by pi, whereas the entire weighted input is represented by qi. When using the Emo-DB database, there is only one hidden layer in the MLP. It has 232 neurons.</ns0:p><ns0:p>When using the SAVEE database, there is only one hidden layer in the MLP, and it comprises 90 neurons.</ns0:p><ns0:p>The MLP contains a single hidden layer, and 140 neurons are present in the IEMOCAP database. In </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EXPERIMENTS</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Datasets</ns0:head><ns0:p>This experimental study contains four emotional speech databases, and these databases are publicly available, represented in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>(a). </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Experimental Setup</ns0:head><ns0:p>All the experiments are completed in version 3.9.0 of the Python language framework. Numerous API libraries are used to train the five distinct models. The framework uses Ubuntu 20.04. The key objective is to implement an input data augmentation and feature selection approach for the five different models.</ns0:p><ns0:p>The feature extraction technique is also involved in the proposed method. The lightweight and most straightforward model presented in the proposed study has excellent accuracy. In addition, low-cost complexity can monitor real-time speech emotion recognition systems and show the ability for real-time applications.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1'>Anaconda</ns0:head><ns0:p>Anaconda is the best data processing and scientific computing platform for Python. It already includes numerous data science and machine learning libraries. Anaconda also includes many popular visualization libraries, such as matplotlib. It also provides the ability to build a different environment with a few unique libraries to carry out the task.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.2'>Keras</ns0:head><ns0:p>The implementation of our model for all four datasets was completed from scratch using Keras. It makes it extremely simple for the user to add and remove layers and activate and utilize the max-pooling layer in the network. 
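A compact sketch of this classification stage is shown below with scikit-learn (an assumption: the paper names the classifiers and a few settings, such as the polynomial SVM kernel, N = 10 for KNN, and a 232-unit sigmoid hidden layer for Emo-DB, but not the library or the remaining hyperparameters, so the unspecified values here are placeholders). Ten-fold cross-validation mirrors the speaker-dependent protocol described in the next section.

```python
# Sketch of the classification stage on CFS-selected CL4 features; scikit-learn
# and all unspecified hyperparameters are assumptions, not the paper's exact setup.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "SVM": SVC(kernel="poly"),                                   # polynomial kernel (Section 3.7)
    "RF":  RandomForestClassifier(),
    "KNN": KNeighborsClassifier(n_neighbors=10),                 # N = 10 (Section 3.8)
    "MLP": MLPClassifier(hidden_layer_sizes=(232,),              # Emo-DB setting, sigmoid units
                         activation="logistic"),
    "DT":  DecisionTreeClassifier(),
}

def speaker_dependent_eval(X_selected, y):
    # X_selected: CFS-selected features, y: emotion labels; ten-fold cross-validation
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X_selected, y, cv=10)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```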
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>5 EXPERIMENTAL RESULTS AND ANALYSIS <ns0:ref type='bibr' target='#b9'>(Chau and Phung (2013)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Speaker-Dependent (SD) Experiments</ns0:head><ns0:p>The performance of the proposed SER system is assessed using benchmark databases for the SD experiments. We use ten-fold cross-validation in our studies. All databases are divided randomly into ten equal complementary subsets with a dividing ratio of 80:20 to train and test the model. The experimental results illustrate a significant accuracy improvement by using data resampling and the FS approach. We consider the standard deviation and average weighted recall to evaluate the performance and stability of the SD experiments using the FS approach. The SVM classifier reached 93.61% and 96.02% accuracy for RAVDESS and Emo-DB, respectively, while the obtained accuracies were 88.77% and 77.23% for SAVEE and IEMOCAP, respectively, through the SVM. The MLP classifier obtained 95.80% and 89.12% accuracies with the Emo-DB and IEMOCAB databases, respectively.</ns0:p><ns0:p>The KNN classifier obtained the highest accuracy, 92.45% and 88.34%, with the Emo-DB and RAVDEES datasets. The RF classifier reported the highest accuracy, 93.51%, on the Emo-DB dataset and 86.79% accuracy on the SAVEE dataset with the feature selection approach. Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> shows that the SVM obtained better recognition accuracy than the other classification models with the FS method. A confusion matrix is an approach for describing the accuracy of the classification technique. For instance, if the data contains an imbalanced amount of samples in every group or more than two groups, the accuracy the individual emotional labels. The Emo-DB database contains seven emotional categories, three of which, 'sad', 'disgust', and 'neutral,' were identified with accuracies of 98.88%, 98.78%, and 97.45%, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science and 'neutral' with the highest accuracies, 97.78% and 92.45%, with the SAVEE dataset. As shown in Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref>, the RAVDESS dataset contains eight emotions, including 'anger', 'calm', 'fear', and 'neutral', which are listed with accuracies of 96.32%, 97.65%, 95.54%, and 99.98%, respectively. The IEMOCAP database identified 'anger' with the highest accuracy of 93.23%, while 'happy,' 'sad,' and 'neutral' Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='5.2'>Speaker-Independent (SI) Experiments</ns0:head><ns0:p>We adopted the single-speaker-out (SSO) method for the SI experiments. One annotator was used for testing, and all other annotators were used for training. In the proposed approach, the IEMOCAP dataset Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed <ns0:ref type='table' target='#tab_9'>6</ns0:ref> shows that the SVM obtained better recognition accuracy than the other classification models without the FS method. Table <ns0:ref type='table' target='#tab_10'>7</ns0:ref> represents the outcomes for the SI experiments with the feature extraction approach with data resampling and the FS method. 
The FS and data resampling approach improved the accuracy, according to the preliminary results.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>We report the average weighted recall and standard deviation to evaluate the SI experiment's performance and stability utilizing the FS method. The SVM obtained the highest accuracies, 90.78%, 84.00%, <ns0:ref type='figure' target='#fig_18'>9</ns0:ref> shows that the average accuracy achieved by the SVM with the IEMOCAP database is 84.00%.</ns0:p><ns0:p>Four publicly available databases are used to compare the proposed method. As illustrated in Table <ns0:ref type='table' target='#tab_11'>8</ns0:ref>, <ns0:ref type='formula'>2020</ns0:ref>)). In comparison to other speech emotion databases, the SAVEE database is relatively small. The purpose of using a pretrained approach is that it can be trained effectively with limited data. In comparison to <ns0:ref type='bibr' target='#b68'>(Sun and Wen (2017)</ns0:ref> of the proposed scheme show a significant improvement over current methods. With the RAVDESS database, the proposed approach achieved 73.50 percent accuracy. Our approach allowed us to identify multiple emotional states with Multiple languages with a higher classification accuracy while using a smaller model size and lower computational costs. In addition, our approach included a simple design and user-friendly operating characteristics, which can make it suitable for implementations such as monitoring people's behavior.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>In this research, the primary emphasis was on learning discriminative and important features from advanced emotional speech databases. Therefore, the main objective of the present research was advanced feature extraction using AlexNet. The proposed CFS approach explored the predictability of every feature.</ns0:p><ns0:p>The results showed the superior performance of the proposed strategy with four datasets in both SD and SI experiments.</ns0:p><ns0:p>To analyze the classification performance of each emotional group, we display the results in the form of confusion matrices. The main benefit of applying the FS method is to reduce the abundance of features by selecting the most discriminative features and eliminating the poor features. We noticed that the pretrained AlexNet framework is very successful for feature extraction techniques that can be trained with a small number of labeled datasets. The performance in the experimental studies empowers us to explore the efficacy and impact of gender on speech signals. The proposed model is also useful for multilanguage databases for emotion classification.</ns0:p><ns0:p>In future studies, we will perform testing and training techniques using different language databases, which should be a useful evaluation of our suggested technique. We will test the proposed approach in the cloud and in an edge computing environment. We would like to evaluate different deep architectures to enhance the system's performance when using spontaneous databases.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>The rest of the paper is organized as follows: Part 2 reviews the previous work in SER related to this paper's current study. A detailed description of the emotional dataset used in the presented work and the proposed method for FS and the classifier are discussed in Part 3. The results are discussed in Part 4. Part 2/23 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021) Manuscript to be reviewed Computer Science 5 contains the conclusion and outlines future work.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The structure of our proposed model for audio emotion recognition</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The general architecture of AlexNet, The parameters of the convolutional layer are represented by the 'Conv(kernel size)-[stride size]-[number of channels]'. The parameters of the max-pooling layer are indicated as 'Maxpool-[kernel size]-[stride size]'.</ns0:figDesc><ns0:graphic coords='5,141.73,374.47,413.57,144.96' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b12'>Chen et al. (2018)</ns0:ref>), a three-dimensional convolutional recurrent neural network including an attention mechanism (ACRNN) was introduced. The identification of emotion was evaluated using the Emo-DB and IEMOCAP databases. The attention process was used to develop a dilated CNN and BiLSTM in5/23 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>fine-tune the pretrained model<ns0:ref type='bibr' target='#b38'>(Krizhevsky et al. (2017)</ns0:ref>) on the created image-like Mel-spectrogram segments. We do not train our own deep CNN framework owing to the limited emotional audio dataset.Furthermore, computer vision experiments (Ren et al. (2016); Campos et al. (2017)) have depicted that fine-tuning the pretrained CNNs on target data is acceptable to relieve the issue of data insufficiency. AlexNet is a model pretrained on the extensive ImageNet dataset, containing a wide range of different labeled classes, and uses a shorter training time. AlexNet (Krizhevsky et al. (2017); Stolar et al. (2017); Lech et al. (2020)) comprises five convolution layers, three max-pooling layers, and three fully connected layers. In the proposed work, we extract the low-level features from the fourth convolutional layer (CL4). The architecture of our proposed model is displayed in Figure 1. Our model comprises four processes: (a) development of the audio input data, (b) low-level feature extraction using AlexNet, (c) feature selection, and (d) classification. Below, we explain all four steps of our model in detail.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>create three channels of the segment from the original 1D audio speech dataset. Then, the generated segments are converted into fixed-size 227 &#215; 227 &#215; 3 inputs for the proposed model. Following (Zhang et al. (2018)), 64 Mel-filter banks are used to create the log Mel-spectrogram, and each frame is multiplied by a 25 ms window size with a 10 ms overlap. Then, we divide the log Mel spectrogram into fixed segments by using a 64-frame context window. Finally, after extracting the static segment, we calculate the regression coefficients of the first and second order around the time axis, thereby generating the delta and double-delta coefficients of the static Mel spectrogram segment. Consequently, three channels with 64 &#215; 64 &#215; 3 Mel-spectrogram segments can be generated as the inputs of AlexNet, and these channels are identical to the color RGB image. 
Therefore, we resize the original 64 &#215; 64 &#215; 3 spectrogram to the new size 227 &#215; 227 &#215; 3. In this case, we can create four (middle, side, left, and right) segments of the Mel spectrogram, as shown in Figure 2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>this study, feature extraction is performed using a pretrained model. The original weight of the model remains fixed, and existing layers are used to extract the features. The pretrained model has a deep structure that contains extra filters for every layer and stacked CLs. It also includes convolutional layers, max-pooling layers, momentum stochastic gradient descent, activation functions, data augmentation, and dropout. AlexNet uses a rectified linear unit (ReLU) activation function. The layers of the network are explained below.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Convolutional filters are used to obtain many local features in the input data from local regions to form various feature groups. AlexNet contains five CLs, in which three layers follow the max-pooling layer. CL1 includes 96 kernels with a size of 11 &#215; 11 &#215; 3, zero padding, and a stride of 4 pixels. CL2 contains 256 kernels, each of which is 5 &#215; 5 &#215; 48 in size and includes a 1-pixel stride and a padding value of 2. The CL3 contains 384 kernels of size 3 &#215; 3 &#215; 256. CL4 contains 384 kernels of size 3 &#215; 3 &#215; 192. For the output value of each CL, the ReLU function is used, which speeds up the training process. 8/23 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>comparison, one hidden layer and 285 neurons are present in the RAVDESS dataset. The MLP is a two-level architecture; thus, identification requires two levels: training and testing. The weight values are set throughout the training phase to match them to the particular output class.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Librosa (McFee et al. (2015)) is a basic Python library used for this research. Librosa is used to examine the audio signal recordings. The four (side, middle, left, and right) segments of the Mel spectrogram were obtained through Librosa. 11/23 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>of the classification alone may be deceptive. Thus, calculating a confusion matrix provides a clearer understanding of what our classification model gets right and what kinds of mistakes it makes. It is common used in related researches(Zhang et al. (2018); Chen et al. (2018); Zhang et al. (2019)). The row means the actual emotion classes in the confusion matrix, while the column indicates the predicted emotion classes. The results of the confusion matrix are used to evaluate the identification accuracy of</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
Confusion matrix obtained by the SVM on the Emo-DB database for the SD experiment</ns0:figDesc><ns0:graphic coords='16,141.73,63.78,372.21,264.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Confusion matrix obtained by the SVM on the SAVEE database for the SD experiment</ns0:figDesc><ns0:graphic coords='16,141.73,351.02,372.21,264.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Confusion matrix obtained by the SVM on the RAVDESS database for the SD experiment</ns0:figDesc><ns0:graphic coords='17,141.73,63.78,372.23,262.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Confusion matrix obtained by the MLP on the IEMOCAP database for the SD experiment</ns0:figDesc><ns0:graphic coords='17,141.73,349.24,372.23,273.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Confusion matrix obtained by the SVM on the RAVDESS database for the SI experiment</ns0:figDesc><ns0:graphic coords='18,141.73,351.02,372.23,262.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Confusion matrix obtained by the MLP on the RAVDESS database for the SI experiment</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Confusion matrix obtained by the SVM on the IEMOCAP database for the SI experiment</ns0:figDesc><ns0:graphic coords='19,141.73,63.78,372.23,273.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>the developed system outperformed<ns0:ref type='bibr' target='#b27'>(Guo et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b12'>Chen et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b47'>Meng et al. (2019)</ns0:ref>;&#214;zseven (2019);<ns0:ref type='bibr' target='#b7'>Bhavan et al. (2019)</ns0:ref>) on the Emo-DB dataset for the SD experiments. The OpenSMILE package was used to extract features in( &#214;zseven (2019)). The accuracies obtained with the SAVEE and Emo-DB databases were 72% and 84%, respectively. In comparison to<ns0:ref type='bibr' target='#b12'>(Chen et al. (2018)</ns0:ref>; Meng et al. (2019); Satt et al. (2017); Zhao et al. (2018)), the proposed method performed well on the IEMOCAP database. The models in (Chen et al. (2018); Meng et al. (2019); Etienne et al. (2018)) are computationally complex and require extensive periods of training. In the proposed method, AlexNet is used for the extraction process, and the FS technique is applied. The FS approach reduced the classifier's workload while also improving 18/23 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021) Manuscript to be reviewed Computer Science efficiency. When using the RAVDESS database, the suggested technique outperforms (Zeng et al. (2019); Bhavan et al. (2019)) in terms of accuracy. Table 9 illustrates that the suggested approach outperforms (Meng et al. (2019); Sun and Wen (2017); Haider et al. (2020); Yi and Mak (2019); Guo et al. (2019); Badshah et al. (2017); Mustaqeem et al. (2020)) for SI experiments using the Emo-DB database. 
The authors extracted low-level descriptor feature emotion identification and obtained accuracies with the Emo-DB database of 82.40%, 76.90%, and 83.74%, respectively, in (Sun and Wen (2017); Haider et al. (2020); Yi and Mak (2019)). Different deep learning methods were used for SER with the Emo-DB database in (Meng et al. (2019); Guo et al. (2019); Badshah et al. (2017); Mustaqeem et al. (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>; Haider et al. (2020)), the suggested technique provides better accuracy with the SAVEE database. When using the IEMOCAP database, the proposed methodology outperforms (Yi and Mak (2019); Guo et al. (2019); Xia and Liu (2017); Daneshfar et al. (2020); Mustaqeem et al. (2020); Meng et al. (2019)). The classification results</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Nomenclature</ns0:figDesc><ns0:table><ns0:row><ns0:cell>4/23</ns0:cell></ns0:row></ns0:table><ns0:note>groups, experiments were performed on the two different datasets. A CNN was used in(Fayek et al. </ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>(a) Alexnet Layers Architecture and (b) Number of selected features after CFS</ns0:figDesc><ns0:table><ns0:row><ns0:cell>(a) Layer Type</ns0:cell><ns0:cell>Size</ns0:cell><ns0:cell cols='2'>Kernels Size</ns0:cell><ns0:cell>Number of Features</ns0:cell></ns0:row><ns0:row><ns0:cell>Image input</ns0:cell><ns0:cell>227&#215;227&#215;3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>150,528</ns0:cell></ns0:row><ns0:row><ns0:cell>Convolution Layer#1</ns0:cell><ns0:cell>11&#215;11&#215;3</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell /><ns0:cell>253,440</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Channel normalization</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Pooling</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Convolution Layer#2</ns0:cell><ns0:cell>5&#215;5&#215;48</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell /><ns0:cell>186,624</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Convolution Layer#3</ns0:cell><ns0:cell>3&#215;3&#215;256</ns0:cell><ns0:cell>384</ns0:cell><ns0:cell /><ns0:cell>64,896</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Channel normalization</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>pooling</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Convolution Layer#4</ns0:cell><ns0:cell>3&#215;3&#215;192</ns0:cell><ns0:cell>384</ns0:cell><ns0:cell /><ns0:cell>64,896</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Convolution Layer#5</ns0:cell><ns0:cell>3&#215;3&#215;192</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell /><ns0:cell>43,264</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Pooling</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fully Connected Layer</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>4096</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation 
Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dropout</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fully Connected Layer</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>4096</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dropout</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fully Connected Layer</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1000</ns0:cell></ns0:row><ns0:row><ns0:cell>(b) Database</ns0:cell><ns0:cell cols='2'>Number of extracted features</ns0:cell><ns0:cell cols='2'>No. of best features using CFS</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>64,896</ns0:cell><ns0:cell /><ns0:cell>458</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>64,896</ns0:cell><ns0:cell /><ns0:cell>150</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMOCAP</ns0:cell><ns0:cell>64,896</ns0:cell><ns0:cell /><ns0:cell>445</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>64,896</ns0:cell><ns0:cell /><ns0:cell>267</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>2019)) to create</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>simulated samples to resolve the data scarcity problem. Energy and pitch were extracted from each</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>audio segment in (Ververidis and Kotropoulos (2005); Rao et al. (2013); Daneshfar et al. (2020)). They</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>also needed fewer training data and could deal directly with dynamic variables. Two different acoustic</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>paralinguistic feature sets were used in</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note><ns0:ref type='bibr' target='#b47'>(Meng et al. (2019)</ns0:ref>). To identify speech emotion, 3D log-Mel spectrograms were examined for global contextual statistics and local correlations. The OpenSMILE package was used to extract features in( &#214;zseven (2019)). The accuracy obtained with the Emo-DB database was 84%, and it was 72% with the SAVEE database. Pretrained networks have many benefits, including the ability to reduce the training time and improve accuracy. Kernel extreme learning machine (KELM) features were introduced in<ns0:ref type='bibr' target='#b26'>(Guo et al. (2019)</ns0:ref>). 
An adversarial data augmentation network was presented in(Yi and Mak (</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>(a) Detailed description of the datasets, (b) Categories of emotional speech databases, their features, and some examples of each category</ns0:figDesc><ns0:table><ns0:row><ns0:cell>(a) Datasets</ns0:cell><ns0:cell>Speakers</ns0:cell><ns0:cell /><ns0:cell cols='2'>Emotions</ns0:cell><ns0:cell /><ns0:cell>Languages</ns0:cell><ns0:cell>Size</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell cols='2'>24 Actors (12</ns0:cell><ns0:cell cols='3'>eight emotions (</ns0:cell><ns0:cell>North American</ns0:cell><ns0:cell>7356 files (total</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male, 12 female)</ns0:cell><ns0:cell /><ns0:cell cols='3'>calm, neutral, an-</ns0:cell><ns0:cell>English</ns0:cell><ns0:cell>size: 24.8 GB).</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>gry, happy, fear,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>surprise, sad, dis-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>gust )</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>4 (male)</ns0:cell><ns0:cell /><ns0:cell cols='3'>seven emotions</ns0:cell><ns0:cell>British English</ns0:cell><ns0:cell>480 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>(sadness, neutral,</ns0:cell><ns0:cell>(120 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>frustration, hap-</ns0:cell><ns0:cell>per speaker)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>piness, disgust</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>,anger, surprise)</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell cols='2'>10 (5 male, 5 fe-</ns0:cell><ns0:cell cols='3'>seven emotions</ns0:cell><ns0:cell>German</ns0:cell><ns0:cell>535 utterances</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male)</ns0:cell><ns0:cell /><ns0:cell cols='2'>(neutral,</ns0:cell><ns0:cell>fear,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>boredom, disgust,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>sad, angry, joy)</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMOCAP</ns0:cell><ns0:cell cols='2'>10 (5 male, 5 fe-</ns0:cell><ns0:cell>nine</ns0:cell><ns0:cell cols='2'>emotions</ns0:cell><ns0:cell>English</ns0:cell><ns0:cell>12 hours of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>male)</ns0:cell><ns0:cell /><ns0:cell cols='2'>(surprise,</ns0:cell><ns0:cell>hap-</ns0:cell><ns0:cell>recordings</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>piness, sadness,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>anger, fear, ex-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>citement, neutral,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>frustration and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>others)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(b)</ns0:cell><ns0:cell /><ns0:cell cols='2'>Simulated</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Semi 
Natural</ns0:cell></ns0:row><ns0:row><ns0:cell>Description</ns0:cell><ns0:cell /><ns0:cell cols='5'>generated by trained and expe-</ns0:cell><ns0:cell>created by having individuals</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='5'>rienced actors delivering the</ns0:cell><ns0:cell>read a script with a different</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='5'>same sentence with different</ns0:cell><ns0:cell>emotions</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>degrees of emotion</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Single emotion at a time</ns0:cell><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Widely used</ns0:cell><ns0:cell /><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Copyrights and privacy protection</ns0:cell><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Includes contextual information</ns0:cell><ns0:cell>no</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Includes situational information</ns0:cell><ns0:cell>no</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Emotions that are separate and dis-</ns0:cell><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell>tinct</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Numerous emotions</ns0:cell><ns0:cell /><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Simple to model</ns0:cell><ns0:cell /><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>no</ns0:cell></ns0:row><ns0:row><ns0:cell>Numerous emotions</ns0:cell><ns0:cell /><ns0:cell cols='2'>yes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Examples</ns0:cell><ns0:cell /><ns0:cell cols='5'>EMO-DB,SAVEE, RAVDESS IEMOCAP</ns0:cell></ns0:row></ns0:table><ns0:note>7/23PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>These are universal approximations, but fully connected layers do not work fully in recognizing and generalizing the original image pixels. CL4 extracts relevant features from the original pixel values by preserving the spatial correlations inside the image. Consequently, in the experimental setup, features are extracted from the CL4 employed for SER. A total of 64,896 features are obtained from CL4. Certain features are followed by a FS method and pass through a classification model for identification. Table2(a) represents a detailed layers architecture of proposed model. AlexNet required 227x227 size RGB images as input. Each convolution filter yields a stack of the feature map. The learning approach starts with an initial learning rate of 0.001 and gradually decreases with a drop rate of 0.1. By using 96 filters of 11x11x3, CL1 creates an array of activation maps. 
As a consequence, CL4 generates 384 activation maps (3x3x192 filters).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>&#8226;</ns0:head><ns0:label /><ns0:figDesc>Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): RAVDESS is an audio and video database consisting of eight acted emotional categories: calm, neutral, angry, surprise, fear, happy, sad, and disgust, and these emotions are recorded only in North American English. RAVDESS was recorded by 12 male and 12 female professional actors.</ns0:figDesc><ns0:table /><ns0:note>&#8226; Surrey Audio-Visual Expressed Emotion (SAVEE): The SAVEE database contains 480 emotional utterances. The SAVEE database was recorded in British English by four male professional actors with seven emotion categories: sadness, neutral, frustration, happiness, disgust, anger, and surprise.&#8226; Berlin Emotional Speech Database (Emo-DB): The Emo-DB dataset contains 535 utterances with seven emotion categories: neutral, fear, boredom, disgust, sad, angry, and joy. The Emo-DB emotional dataset was recorded in German by five male and five female native-speaker actors.&#8226; Interactive Emotional Dyadic Motion Capture (IEMOCAP): The IEMOCAP multispeaker database contains approximately 12 hours of audio and video data with seven emotional states, surprise, happiness, sadness, anger, fear, excitement, and frustration, as well as neutral and other states. The IEMOCAP database was recorded by five male and five female professional actors. In this work, we use four (neutral, angry, sadness, and happiness) class labels. Table3(b) illustrates the features of databases, which are used in a proposed method.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table 4 gives the results achieved by five different classifiers utilizing the features extracted from CL4 of the model. The SVM achieved 92.11%, 87.65%, 82.98%, and 79.66% accuracies for the Emo-DB, RAVDESS, SAVEE and IEMODB databases, respectively. The proposed method reported the highest accuracy of 86.56% on the Emo-DB database with KNN. The MLP classifier obtained 86.75% accuracy for the IEMOCAP database. In contrast, the SVM reported 79.66% accuracy for the IEMOCAP database. The MLP classifier reported the highest accuracy, 91.51%, on the Emo-DB database. The RF attained 82.47% accuracy on the Emo-DB database, while DT achieved 80.53% accuracy on Emo-DB. Standard deviation and weighted average recall of the SD experiments without FS Table 5 represents the results of the FS approach. The proposed FS technique selected 458 distinguishing features out of a total of 64,896 features for the Emo-DB dataset. 
The FS method obtained 150,445,267 feature maps for the SAVEE, RAVDESS, and IEMOCAP datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>87.65&#177;1.79</ns0:cell><ns0:cell>78.65&#177;4.94</ns0:cell><ns0:cell>78.15&#177;3.39</ns0:cell><ns0:cell>80.67&#177;2.89</ns0:cell><ns0:cell>76.28&#177;3.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>82.98&#177;4.87</ns0:cell><ns0:cell>78.38&#177;4.10</ns0:cell><ns0:cell>79.81&#177;4.05</ns0:cell><ns0:cell>81.13&#177;3.63</ns0:cell><ns0:cell>69.15&#177;2.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>92.11&#177;2.29</ns0:cell><ns0:cell>82.47&#177;3.52</ns0:cell><ns0:cell>86.56&#177;2.78</ns0:cell><ns0:cell>91.51&#177;2.09</ns0:cell><ns0:cell>80.53&#177;4.72</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>79.66&#177;4.44</ns0:cell><ns0:cell>80.93&#177;3.75</ns0:cell><ns0:cell>74.33&#177;3.37</ns0:cell><ns0:cell>86.75&#177;3.64</ns0:cell><ns0:cell>67.25&#177;2.33</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SD experiments with FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>93.61&#177;1.32</ns0:cell><ns0:cell>85.21&#177;3.55</ns0:cell><ns0:cell>88.34&#177;2.67</ns0:cell><ns0:cell>84.50&#177;2.23</ns0:cell><ns0:cell>78.45&#177;2.67</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>88.77&#177;2.45</ns0:cell><ns0:cell>86.79&#177;2.96</ns0:cell><ns0:cell>83.45&#177;3.21</ns0:cell><ns0:cell>85.45&#177;3.12</ns0:cell><ns0:cell>75.68&#177;3.82</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>96.02&#177;1.07</ns0:cell><ns0:cell>93.51&#177;2.21</ns0:cell><ns0:cell>92.45&#177;2.45</ns0:cell><ns0:cell>95.80&#177;2.34</ns0:cell><ns0:cell>79.13&#177;4.01</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>77.23&#177;2.66</ns0:cell><ns0:cell>86.23&#177;2.54</ns0:cell><ns0:cell>82.78&#177;2.17</ns0:cell><ns0:cell>89.12&#177;2.57</ns0:cell><ns0:cell>72.32&#177;1.72</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>12/23</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SI experiment results without FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>75.34&#177;2.58</ns0:cell><ns0:cell>65.78&#177;2.32</ns0:cell><ns0:cell>69.12&#177;2.20</ns0:cell><ns0:cell>71.01&#177;2.84</ns0:cell><ns0:cell>67.41&#177;2.37</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>63.02&#177;3.21</ns0:cell><ns0:cell>59.66&#177;3.79</ns0:cell><ns0:cell>71.81&#177;3.81</ns0:cell><ns0:cell>65.18&#177;2.05</ns0:cell><ns0:cell>59.55&#177;2.23</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>87.65&#177;2.56</ns0:cell><ns0:cell>79.45&#177;2.11</ns0:cell><ns0:cell>75.30&#177;2.19</ns0:cell><ns0:cell>88.32&#177;2.67</ns0:cell><ns0:cell>76.27&#177;2.35</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>61.85&#177;3.20</ns0:cell><ns0:cell>60.11&#177;4.20</ns0:cell><ns0:cell>55.47&#177;2.96</ns0:cell><ns0:cell>63.18&#177;1.62</ns0:cell><ns0:cell>54.69&#177;3.72</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Standard deviation and weighted average recall of the SI experiment results with FS</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>DT</ns0:cell></ns0:row><ns0:row><ns0:cell>RAVDESS</ns0:cell><ns0:cell>80.94&#177;2.17</ns0:cell><ns0:cell>76.82&#177;2.16</ns0:cell><ns0:cell>75.57&#177;3.29</ns0:cell><ns0:cell>82.75&#177;2.10</ns0:cell><ns0:cell>76.18&#177;1.33</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>70.06&#177;3.33</ns0:cell><ns0:cell>65.55&#177;2.42</ns0:cell><ns0:cell>60.58&#177;3.84</ns0:cell><ns0:cell>75.38&#177;2.74</ns0:cell><ns0:cell>63.69&#177;2.22</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>90.78&#177;2.45</ns0:cell><ns0:cell>85.73&#177;2.58</ns0:cell><ns0:cell>81.32&#177;2.12</ns0:cell><ns0:cell>92.65&#177;3.09</ns0:cell><ns0:cell>78.21&#177;3.47</ns0:cell></ns0:row><ns0:row><ns0:cell>IEMODB</ns0:cell><ns0:cell>84.00&#177;2.76</ns0:cell><ns0:cell>78.08&#177;2.65</ns0:cell><ns0:cell>76.44&#177;3.88</ns0:cell><ns0:cell>80.23&#177;2.77</ns0:cell><ns0:cell>75.78&#177;2.25</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of the SD experiments with existing methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>Reference</ns0:cell><ns0:cell>Feature</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS (Bhavan et al. 
(2019))</ns0:cell><ns0:cell>Spectral Centroids, MFCC and</ns0:cell><ns0:cell>75.69</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MFCC derivatives</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.79</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>88.77</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>( &#214;zseven (2019))</ns0:cell><ns0:cell>OpenSmile Features</ns0:cell><ns0:cell>72.39</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.79</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>88.77</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Guo et al. (2018))</ns0:cell><ns0:cell>Amplitude spectrogram and phase in-</ns0:cell><ns0:cell>91.78</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>formation</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Chen et al. (2018))</ns0:cell><ns0:cell>3-D ACRNN</ns0:cell><ns0:cell>82.82</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN + BiLSTM</ns0:cell><ns0:cell>90.78</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>( &#214;zseven (2019))</ns0:cell><ns0:cell>OpenSMILE features</ns0:cell><ns0:cell>84.62</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Bhavan et al. (2019))</ns0:cell><ns0:cell>Spectral Centroids, MFCC and</ns0:cell><ns0:cell>92.45</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MFCC derivatives</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>95.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>96.02</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Satt et al. (2017))</ns0:cell><ns0:cell>3 Convolution Layers + LSTM</ns0:cell><ns0:cell>68.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Chen et al. (2018))</ns0:cell><ns0:cell>3-D ACRNN</ns0:cell><ns0:cell>64.74</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Zhao et al. (2018))</ns0:cell><ns0:cell>Attention-BLSTM-FCN</ns0:cell><ns0:cell>64.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Etienne et al. (2018))</ns0:cell><ns0:cell>CNN+LSTM</ns0:cell><ns0:cell>64.50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN + BiLSTM</ns0:cell><ns0:cell>74.96</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>89.12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.23</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>13/23</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of SI experiments with existing methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Database</ns0:cell><ns0:cell>Reference</ns0:cell><ns0:cell>Feature</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>82.75</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RAVDESS Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>80.94</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>(Sun and Wen (2017))</ns0:cell><ns0:cell>Ensemble soft-MarginSoftmax (EM-</ns0:cell><ns0:cell>51.50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Softmax)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>(Haider et al. (2020))</ns0:cell><ns0:cell>eGeMAPs and emobase</ns0:cell><ns0:cell>42.40</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>75.38</ns0:cell></ns0:row><ns0:row><ns0:cell>SAVEE</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>70.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Badshah et al. (2017))</ns0:cell><ns0:cell>DCNN + DTPM</ns0:cell><ns0:cell>87.31</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Sun and Wen (2017))</ns0:cell><ns0:cell>Ensemble soft-MarginSoftmax (EM-</ns0:cell><ns0:cell>82.40</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Softmax)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Yi and Mak (2019))</ns0:cell><ns0:cell>OpenSmile Features + ADAN</ns0:cell><ns0:cell>83.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Guo et al. (2019))</ns0:cell><ns0:cell>Statistical Features and Empirical</ns0:cell><ns0:cell>84.49</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Features+ KELM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Meng et al. (2019))</ns0:cell><ns0:cell>Dilated CNN+ BiLSTM</ns0:cell><ns0:cell>85.39</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Haider et al. (2020))</ns0:cell><ns0:cell>eGeMAPs and emobase</ns0:cell><ns0:cell>76.90</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Lech et al. (2020))</ns0:cell><ns0:cell>AlexNet</ns0:cell><ns0:cell>82.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>(Mustaqeem et al. (2020))</ns0:cell><ns0:cell>Radial Basis Function Network(</ns0:cell><ns0:cell>85.57</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RBFN) + Deep BiLSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>92.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Emo-DB</ns0:cell><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+SVM</ns0:cell><ns0:cell>90.78</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Xia and Liu (2017))</ns0:cell><ns0:cell>SP + CNN</ns0:cell><ns0:cell>64.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Chen et al. 
(2018))</ns0:cell><ns0:cell>Dilated CNN+ BiLSTM</ns0:cell><ns0:cell>69.32</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Guo et al. (2019)</ns0:cell><ns0:cell>Statistical Features and Empirical</ns0:cell><ns0:cell>57.10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Features+ KELM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Yi and Mak (2019))</ns0:cell><ns0:cell>OpenSmile Features + ADAN</ns0:cell><ns0:cell>65.01</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Daneshfar et al. (2020))</ns0:cell><ns0:cell>IS10 + DBN</ns0:cell><ns0:cell>64.50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP (Mustaqeem et al. (2020))</ns0:cell><ns0:cell>Radial Basis Function Network(</ns0:cell><ns0:cell>72.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>RBFN) + Deep BiLSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+MLP</ns0:cell><ns0:cell>89.12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IEMOCAP Proposed Approach</ns0:cell><ns0:cell>AlexNet+FS+RF</ns0:cell><ns0:cell>86.23</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>14/23</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64126:2:0:NEW 6 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_16'><ns0:head /><ns0:label /><ns0:figDesc>80.94%, and 70.06%, for the Emo-DB, IEMOCAP, RAVDESS, and SAVEE databases, respectively, followed by the FS method in the SI experiments. However, the MLP achieved the highest accuracies, 92.65%, 80.23%, 82.75%, and 75.38%, for the Emo-DB, IEMOCAP, RAVDESS, and SAVEE databases, respectively, followed by the FS method in the SI experiments. The confusion matrices of the results obtained for the SI experiments are shown in Figs.7-9to analyze the individual emotional groups' identification accuracies. The average accuracies achieved with the IEMOCAP and Emo-DB databases were 78.90% and 85.73%, respectively. The RAVDESS database contains eight emotion categories, three of which, 'calm', 'fear', and 'anger,' were identified with accuracies of 94.78%, 91.35%, and 84.60%, respectively, by the MLP. In contrast, the other five emotions were identified with less than 90.00% accuracy, as represented in Figure8. The MLP achieved an average accuracy with the SAVEE database of 75.38%. With the SAVEE database, 'anger,' 'neutral,' and 'sad' were recognized with accuracies of 94.22%, 90.66%, and 85.33%, respectively, by the MLP classifier. IEMOCAP achieved an average accuracy of 84.00% with the SVM, while the MLP achieved an average accuracy of 80.23%. Figure</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
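To make the feature-extraction pipeline of the paper above easier to follow, the sketch below walks through the described steps: a 64-band log-Mel spectrogram with a 25 ms window and 10 ms hop, 64-frame segments stacked with their delta and double-delta coefficients, resizing from 64x64x3 to 227x227x3, and extraction of the fourth convolutional layer (CL4) activations from a pretrained AlexNet. It is not the authors' code: the paper implements its models in Keras, whereas this self-contained sketch relies on librosa, OpenCV, and torchvision's pretrained AlexNet, and the simple non-overlapping segmentation here stands in for the paper's left/middle/right/side segments.

# Sketch of the feature-extraction pipeline: 64-band log-Mel segments with
# delta and double-delta channels, resized to 227x227x3 and passed through a
# pretrained AlexNet up to the fourth convolutional layer (CL4).
# librosa, OpenCV (cv2) and torchvision are assumptions of this sketch.
import cv2
import librosa
import numpy as np
import torch
import torchvision

def melspec_segments(wav_path, n_mels=64, seg_frames=64):
    """Return a list of 227x227x3 (static, delta, double-delta) segments."""
    y, sr = librosa.load(wav_path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels,
                                         win_length=int(0.025 * sr),  # 25 ms window
                                         hop_length=int(0.010 * sr))  # 10 ms hop
    logmel = librosa.power_to_db(mel)
    d1 = librosa.feature.delta(logmel)           # first-order regression coefficients
    d2 = librosa.feature.delta(logmel, order=2)  # second-order (double delta)
    segments = []
    # Simple non-overlapping 64-frame segments; the paper instead takes
    # left/middle/right/side segments of the spectrogram.
    for start in range(0, logmel.shape[1] - seg_frames + 1, seg_frames):
        seg = np.stack([c[:, start:start + seg_frames] for c in (logmel, d1, d2)],
                       axis=-1)                   # 64 x 64 x 3
        segments.append(cv2.resize(seg, (227, 227)))
    return segments

# Pretrained AlexNet; conv layer 4 sits at index 8 of the feature extractor.
alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
conv4 = alexnet.features[:9]

def cl4_features(segment):
    x = torch.from_numpy(segment).float().permute(2, 0, 1).unsqueeze(0)  # NCHW
    with torch.no_grad():
        return conv4(x).flatten().numpy()

Note that torchvision's AlexNet variant uses 256 filters in its fourth convolutional layer, so cl4_features returns 13x13x256 = 43,264 values per segment, whereas the original AlexNet configuration described in Table 2(a) has 384 filters and yields the 64,896 features reported in the paper.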
"Original Article Title: “Effect on Speech Emotion Classification of a Feature Selection Approach Using a Convolutional Neural Network” Dear Editor: Thank you very much for giving us a chance to revise our manuscript entitled Effect on Speech Emotion Classification of a Feature Selection Approach Using a Convolutional Neural Network. We are very happy to have received a positive evaluation, and we would like to express our appreciation to you and all reviewers for the thoughtful comments and helpful suggestions. There are still three suggestions, which we have carefully considered and made every effort to address. We fundamentally agree with all the suggestions made by the reviewer and editor, and we have incorporated corresponding revisions into the manuscript. Our detailed, point-bypoint responses to the editorial and reviewer comments are given below, whereas the corresponding revisions are marked in colored text in the manuscript file. Specifically, colored text indicates changes made in response to the suggestions of reviewers. Additionally, we have carefully revised the manuscript to ensure that the text is optimally phrased and free from typographical and grammatical errors. We believe that our manuscript has been considerably improved as a result of these revisions, and hope that our revised manuscript entitled Effect on Speech Emotion Classification of a Feature Selection Approach Using a Convolutional Neural Network is acceptable for publication in the Peerj Computer Science. We would like to thank you once again for your consideration of our work and for inviting us to submit the revised manuscript. We look forward to hearing from you. Best regards Hsien-Tsung Chang Chang Chang Gung University Department of Computer Science and Information Engineering Taoyuan, Taiwan E-mail: smallpig@widelab.org Reviewer 1 Experimental design (Concern#1) The authors now cited Melissa N. Stolar; Margaret Lech 2017 and 2020 which follow a similar approach to the proposed method however the similarities and differences in the proposed approach with these references in particular and other approaches, in general, must be highlighted`. Author response: We thank the reviewer for the suggestion. Author action: We updated the manuscript by highlighted the references with differences and similarities in the manuscript. In existing literature (\cite{8085174, 4563453, 8270472, 10.3389/fcomp.2020.00014, BHAVAN2019104886, OZSEVEN2019320, Guo2018}), most of the studies used simulated databases with few emotional states. On the other hand, in the proposed study, we utilized eight emotional states for RAVDESS, seven emotional states for SAVEE, six emotional states for Emo-DB, and four emotional states for the IEMOCAP database. Therefore, our results are state-of-the-art for both simulated and seminatural databases (page#7 Line#199). In \cite {7883728}, a pre-trained DCNN model was introduced for speech emotion recognition. The outcomes were improved with seven emotional states (Emo-DB) (simulated) (page#5 Line#170). In \cite{4563453} the fully connected layer (FC7) of Alex Net was used for the extraction process. The results were evaluated on four different databases with six emotions (page#5 Line#180). In (\cite{8270472}), the existing approach used ALEXNet-SVM, experiments were performed on the EMO-DB database (simulated) with seven emotions (page#4 Line#156). An implementation of real-time voice emotion identification using AlexNet was described in (\cite{10.3389/fcomp.2020.00014}). 
When trained on the Berlin Emotional Speech (EMO-DB) (simulated) database with six emotional classes, the presented method obtained an average accuracy of 82\% (Page#7 Line#197,). Pretrained networks have many benefits, including the ability to reduce training time and improve accuracy. They also need fewer training data and deal directly with dynamic variables. Our model results are based on multiple languages with multiple emotional states. (Concern#2) The authors have explained in the response letter their rationale and strategy for feature selection from speech. However, it will be helpful for replication, etc. to add the implementation details in the manuscript itself about the pipeline for extracting spectrogram features and then selecting the most suitable. Author response: We deeply appreciate the reviewer for his very insightful and constructive comments Author action: We agree with the reviewer’s assessment. Accordingly, throughout the manuscript, we have updated the manuscript by adding the detailed pipeline for extracting spectrogram features and also add the table of the number of most discriminative features after applied the CFS approach (page#7, Table#3(a),(b)). Figure 2 illustrates the pipeline for extracting features from CL4 (page#7 Table 3), the learning approach starts with an initial learning rate of 0.001 and gradually decreases with a drop rate of 0.1. By using 96 filters of 11x11x3, CL1 creates an array of activation maps. Consequently, CL4 generates 384 activation maps (3x3x192 filters). (page#9 Line271). We also updated figure 2 with captions (page#4, figure#2). Because we used CFS(\cite{Wosiak2018}) approach for selecting the features, equation 5 represented a feature selection approach, where k represents the total number of features, $r_{cfi}$ represents the classification correlation of the features, and $r_{fifj}$ represents the correlation between features. The extracted features are fed into classification algorithms. CFS usually deletes (backward selection) or adds (forward selection) one feature at a time. Table 3(b) gives the most discriminative number of selected features. (page#9 Line#283). Validity of the findings (Concern#3) As a response to the justification for classification metrics used in the results section e.g. the confusion matrix, the authors have referred to Table 3 which compares the nature of speech datasets. The suggestion here was to explain the reasons for using certain classification metrics/confusion matrices, what they indicate and what are the implications of the obtained results in real-world scenarios. Author response: We thank the reviewer for pointing out our mistake. Here, we apologize that we make a grave mistake. Author action: A confusion matrix is an approach for describing the accuracy of the classification technique. For instance, if the data contains an imbalanced amount of samples in every group or more than two groups, the accuracy of the classification alone may be deceptive. Thus, calculating a confusion matrix provides a clearer understanding of what our classification model gets right and what kinds of mistakes it makes. We could noticed that confusion matrix were widely used for classification metrics in related researches(\cite{7956190,8421023,8873581}), we also cite those papers to make reader clarify why we used this metric. The row in the confusion matrix means the actual emotion classes in the confusion matrix, while the column indicates the predicted emotion classes. 
The results of the confusion matrix are used to evaluate the identification accuracy of the individual emotional labels. e.g, The Emo-DB database contains seven emotional categories, three of which, 'sad', 'disgust', and 'neutral,' were identified with accuracies of 98.88\%, 98.78\%, and 97.45\%, respectively, by the SVM illustrated in Figure 3. As shown in Figure 4, the SVM recognized 'frustration' and 'neutral' with the highest accuracies, 97.78\% and 92.45\%, with the SAVEE dataset. (page#12 Line#381). The lightweight and most straightforward model presented in the proposed study has excellent accuracy. In addition, low-cost complexity can monitor real-time speech emotion recognition systems and show the ability for real-time applications (page#11 Line#340). Also, Our approach allowed us to identify multiple emotional states with Multiple languages with a higher classification accuracy while using a smaller model size and lower computational costs. In addition, our approach included a simple design and user-friendly operating characteristics, which can make it suitable for implementations such as monitoring people’s behavior. (page#19 line#450). "
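As a small illustration of the CFS criterion discussed in the response to Concern #2 (Eq. (5) of the paper), the following sketch computes the merit of a candidate feature subset and runs a greedy forward selection. It is illustrative only and not the authors' implementation; the correlation inputs (for example, symmetrical uncertainty values) and the greedy strategy are assumptions.

# Illustrative sketch of the CFS merit in Eq. (5): average feature-class
# correlation over the average feature-feature correlation of a subset S_k.
import numpy as np

def cfs_merit(subset, feat_class_corr, feat_feat_corr):
    """subset: indices of candidate features; feat_class_corr: 1-D array of r_cf;
    feat_feat_corr: symmetric 2-D array of r_ff (e.g. symmetrical uncertainty)."""
    k = len(subset)
    r_cf = feat_class_corr[subset].sum()
    sub = feat_feat_corr[np.ix_(subset, subset)]
    r_ff = (sub.sum() - np.trace(sub)) / 2.0   # sum over pairs i < j
    return r_cf / np.sqrt(k + 2.0 * r_ff)

def forward_select(feat_class_corr, feat_feat_corr, n_select):
    """Greedy forward selection: repeatedly add the feature that maximizes the merit
    (intended for illustration on a small feature set)."""
    selected, remaining = [], list(range(len(feat_class_corr)))
    for _ in range(n_select):
        best = max(remaining, key=lambda f: cfs_merit(selected + [f],
                                                      feat_class_corr, feat_feat_corr))
        selected.append(best)
        remaining.remove(best)
    return selected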
Here is a paper. Please give your review comments after reading it.
270
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Image memorability is a very hard problem in Image Processing due to its subjective nature. But due to the introduction of Deep Learning and the large availability of data and GPUs, great strides have been made in predicting the memorability of an image. In this paper, we propose a novel deep learning architecture called ResMem-Net that is a hybrid of LSTM and CNN that uses information from the hidden layers of the CNN to compute the memorability score of an image. The intermediate layers are important for predicting the output because they contain information about the intrinsic properties of the image. The proposed architecture automatically learns visual emotions and saliency, shown by the heatmaps generated using the GradRAM technique. We have also used the heatmaps and results to analyze and answer one of the most important questions in image memorability: 'What makes an image memorable?'. The model is trained and evaluated using the publicly available Large-scale Image Memorability dataset (LaMem) from MIT. The results show that the model achieves a rank correlation of 0.679 and a mean squared error of 0.011, which is better than the current state-of-the-art models and is close to human consistency (p=0.68). The proposed architecture also has a significantly low number of parameters compared to the state-of-the-art architecture, making it memory efficient and suitable for production.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Data is core essential component to almost every media platform in this digital era, starting from the television to social networks. Every media platform relies on content to engage its users. It provides a compulsion for these platforms to understand the exponentially growing data to serve the right content to their users. Since most of these platforms rely on visual data, concepts such as popularity, emotions, interestingness, aesthetics, and, most importantly, memorability are very crucial in increasing viewership <ns0:ref type='bibr' target='#b0'>(Kong et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Celikkale, Erdem &amp; Erdem, 2015)</ns0:ref>. In this paper, image memorability is the concept taken into consideration which is one of the most underexplored deep learning applications.</ns0:p><ns0:p>Human beings normally rely on visual memories to remember things and also will be able to identify and discriminate objects in real life. Human cognition to properly remember and forget visual data is crucial as it affects every form of our engagement with the external world <ns0:ref type='bibr' target='#b2'>(Bainbridge, Dilks &amp; Oliva, 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Schurgin, 2018)</ns0:ref>. However, not all humans remember the same visual information in a common manner <ns0:ref type='bibr' target='#b4'>(Gretz &amp; Huff, 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Rust, Mehrpour., 2020)</ns0:ref>. It is a long-standing question that neuroscientists have asked for years, and research is still underway to explain how exactly the cognitive processes in the brain encode and store certain information to retrieve that information when required properly. 
The human brain can encode intrinsic information about objects, events, words, and images after a single exposure to visual data <ns0:ref type='bibr' target='#b6'>(Alves et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Fukuda, Vogel., 2019)</ns0:ref>. Image memorability is generally measured as the probability that a person will be able to identify a repeated photograph when he or she is presented with a stream of images <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. By definition, image memorability is a subjective measure that approximately quantifies how a person can remember an image <ns0:ref type='bibr'>(Isola et al., 2011b)</ns0:ref>. Cognitive psychologists have shown that more memorable images leave a larger trace of the brain's long-term memory <ns0:ref type='bibr' target='#b10'>(Broers &amp; Busch , 2021)</ns0:ref>. However, the memorability of a certain image can slightly vary from person to person and depends on the person's context and previous experiences <ns0:ref type='bibr' target='#b11'>(Bainbridge, 2020)</ns0:ref>. But this slight variation is fine because this allows us to make approximate predictions using computational methods.</ns0:p><ns0:p>Researchers have shown that, even though there exist slight variations, humans show a level of consistency when remembering the same kind of images with a very similar probability irrespective of the time delay <ns0:ref type='bibr' target='#b12'>(Sommer et al., 2021)</ns0:ref>. This research has led to the inference that it is possible to measure an individual's probability of remembering an image. To measure the probability that a person will remember an image, the person is presented with a stream of images. This process is called Visual Memorability Game <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. The stream of images contained two kinds of images, targets, and fillers. The annotator is shown images one by one, where the image is displayed for 2.4 seconds. Between each target image multiple filler images are shown unbeknownst to the annotator. On a random manner, previously shown target images are repeatedly shown now and then. When each image is being shown, the annotator is asked to press a key in the keyboard if that annotator feels that the target image is being repeated in the stream. Based on this, then the percentage of times the annotator has correctly identified repeated target images will be checked and is annotated as the memorability of the target image from one annotator. The same set of images are shown to multiple more people in the same manner. So, the approximate memorability scores will be obtained for the same image from multiple people. As mentioned earlier, since the memorability of the image is only going to slightly vary for most people, these approximate measures are taken as the ground truth memorability score. The slight difference between most deep learning datasets and image memorability datasets is that, for each image, we'll have multiple annotations, that is, multiple memorability scores, which is fine because most of them aren't going to vary that much. As mentioned earlier, unlike other properties of images such as photo composition or image quality, image memorability cannot be objectively defined and hence might slightly vary from person to person. However, generally, humans agree with each other on certain common factors that tend to make an image more memorable despite this large variability. 
Factors like color harmony and object interestingness are generally agreed upon by people as factors that improve image memorability <ns0:ref type='bibr'>(Khosla et al., 2015a)</ns0:ref>.</ns0:p><ns0:p>Few methods have been proposed <ns0:ref type='bibr' target='#b14'>(Perera, Tal, &amp; Zelnik, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Fajtl et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Squalli-Houssaini et al., 2018)</ns0:ref> to predict the memorability of an image using deep learning methods. Those methods either used handcrafted features or ensemble models to predict the memorability score. Ensemble models are hard to train, computationally expensive and are prone to overfitting <ns0:ref type='bibr'>(Canchumuni, Alexandre &amp; Pacheco, 2019)</ns0:ref> . The overfit models normally do not perform well on different kinds of images that are not in the training set, while computationally expensive models are not suitable for deployment to real world on web servers or computers with low memory GPUs and real-world deep learning systems are heavily reliant on computers with GPUs. Methods that use handcrafted features along with machine learning models are not accurate on different kinds of images because is it extremely hard to handcraft a comprehensive amount of features that can span a wide distribution. The idea of using data-driven strategies to predict image memorability was first introduced by <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. The Visual Memorability Game was used to prepare the images in the Isola et al. dataset and annotate their respective memorability score. The game was run on Amazon Mechanical Turk, where users were presented with a stream of images with some images repeating on a random basis. The users were asked to press a key when they believe that the image displayed was already seen before. In the Isola et al. dataset, they have collected 2222 images along with the annotated memorability scores. Since memorability can vary slightly from person to person, each image was shown to 78 participants on an average, when the annotators played the Visual Memorability Game. Each image being tagged more than once accounts for the slight variation in memorability among people. This also means that when the deep learning model is being trained, during each epoch, the model will be given the same image as input multiple times but with a slightly varying ground truth. When they analyzed the images and their memorability scores together, they have understood that the memorability of an image is highly related to a certain object and scene semantics such as 'Labelled Object Counts,' 'Labelled Object Areas' and 'Object Label Presences.' Also, when each image was segregated into scene categories, it was inferred that much of what contributes to the image's memorability score was from both the object and scene semantics. They have followed up on their work to understand the human-understandable visual attributes to understand memorability as a cognitive process. They have developed a deep learning model that can predict scene category of an image to with another deep learning model that predicts image memorability to understand and identify a compact set of image properties that affect image memorability <ns0:ref type='bibr' target='#b19'>(Lu et al., 2020)</ns0:ref>. 
A new dataset, Large-scale Image Memorability dataset (LaMem) which is publicly available, is a novel and diverse dataset with 60,000 images, each tagged with memorability score similar to the dataset by Isola et al. The authors <ns0:ref type='bibr'>(Khosla et al., 2015a)</ns0:ref> have used Convolutional Neural Networks (MemNet) to fine-tune deep features that outperform all other features by a large margin. The analysis made by the author on the responses of high-level Convolutional Neural Networks (CNN) layers shows which objects are positive. A new computational model based on an attention mechanism to predict image memorability based on deep learning was proposed. In this paper, the authors have shown that emotional bias affects the performance of the proposed algorithm due to the deep learning framework arousing negative pictures than positive or neutral pictures <ns0:ref type='bibr' target='#b20'>(Baveye et al., 2016)</ns0:ref>. Squalli-Houssaini et al. presented a hybrid CNN with Suport Vector Regression (SVR) model trained on the LaMem dataset. The model achieved an average rank correlation of 0.64 across the validation sets. Based on the predictions, the correlation between interestingness and memorability was analyzed. The predictions were compared using the Flickr Interestingness API and the results showed that memorability did not correlate much with interestingness (Squalli-Houssaini et al., 2018). Visual attention has a huge effect on image memorability <ns0:ref type='bibr' target='#b15'>(Fajtl, J et al., 2018)</ns0:ref> . However, very little work has been done on taking advantage of visual attention to predict image memorability. Mancas and Meur proposed a model that uses a new set of attention-driven features by identifying the link between image saliency and image memorability. The model achieved a 2% increase in performance from the existing models. It was also inferred that images with highly localized regions are more memorable than those with specific regions of interest (Mancas M &amp; Le Meur, 2013). A novel deep learning architecture was proposed that took advantage of the visual attention mechanism to predict image memorability by <ns0:ref type='bibr' target='#b15'>(Fajtl, J et al., 2018)</ns0:ref>. The architecture made use of a hybrid of Feedforward CNN architecture and attention mechanism to build a model that can help build attention maps and, in turn, predict memorability scores. The model attained excellent results, but the biggest downside was overfitting and lack of the provision to use transfer learning swiftly. The model also contains a large number of parameters making it hard for real-time production. Another model that used visual attention mechanism was proposed by <ns0:ref type='bibr' target='#b22'>(Zhu et al., 2020)</ns0:ref>. The architecture is a multi-task learning network that was trained on LaMem dataset and AADB dataset <ns0:ref type='bibr' target='#b0'>(Kong et al., 2016)</ns0:ref> to predict both the memorability score and aesthetic score of an image, hence it was also trained using two datasets at the same time, one for image memorability and the other for image aesthetics. The model used a pixelwise contextual attention mechanism to generate feature maps. Even though this model was able to use transfer learning, the attention mechanism used is computationally expensive, especially if the number of channels in the intermediate layers is high. 
This model for the memorability task achieved a rank correlation of only 0.660, which is a much lower score than the ones achieved other existing models. A Hidden Markov Model (HMM) produced using Variational Hierarchical Expectation Maximization was proposed by <ns0:ref type='bibr' target='#b23'>(Ellahi et al., 2020)</ns0:ref>. A new dataset with 625 images was tagged by 49 subjects. During the data annotation session, an eye-gaze camera setup was used to track the eye-gaze of each subject when they were presented with a stream of images. The goal of this setup was to analyze how much eye gaze contributed to image memorability. The model achieved an accuracy of only 61.48% when the ground truth eye gaze and predicted eye gaze were compared. A novel multiple instance-based deep CNN for image memorability prediction was proposed that shows the performance levels that are close to human performance on the LaMem dataset. The model shows EMNet, automatically learns various object semantics and visual emotions using multiple instance learning frameworks to properly understand the emotional cues that contribute extensively to the memorability score of an image <ns0:ref type='bibr' target='#b24'>(Basavaraju, &amp; Sur, 2019)</ns0:ref>. The main problem with the previously proposed state of the art models is that they are computationally intensive. Some of the previously proposed models are not suitable for production purposes. Most of the previously proposed models constitute several pre-processing stages and use multiple CNNs in a parallel manner to provide results. The issues that accompany these strategies are over-fitting, high computational complexity and high memory requirements. To solve these issues, there is a need for an approach that results in a smaller number of parameters and a model that contains layers that can prevent overfitting. Therefore, in this work, the proposed Residual Memory Net (ResMem-Net) is a novel deep learning architecture that contains fewer parameters than previous models, making it computationally less expensive and hence is also faster during both training and inference. ResMem-Net also uses 1x1 convolution layers and Global Average Pooling (GAP) layers, which also helps to reduce the chances of overfitting. In this model, a hybrid Convolutional Neural Networks and Long Short-Term Memory Networks (LSTM) to build a deep neural network architecture that uses a memory-driven technique to predict the memorability of images. ResMem-Net achieves results that are very close to human performance on the LaMem dataset. Transfer learning is also taken advantage of during the training process, which has helped ResMem-Net to generalize better. The publicly available LaMem dataset is used to train the model, consisting of 60,000 images, with each image being labeled with a memorability score. The dataset consists of images from a diverse group of object-centric and scene-centric pictures. The images in the dataset are also tested to evoke varying kinds of emotions. The dataset was tagged using the Visual Memorability Game using the Amazon Mechanical Turk platform as mentioned before in this paper. Previous works have shown astonishing results that have helped to achieve good results on LaMem and Isola et al. dataset. This architecture has brought it very close to human performance with a rank correlation of 0.679 on the LaMem dataset, that can significantly impact applications of image memorability prediction in e-commerce platforms and photo-sharing platforms. 
Other fine-tuning strategies such as transfer learning and one cycle learning policy have also been employed to achieve a state-of-the-art results on this dataset. Finally, heatmaps have been generated using Gradient Regression Activation Map (GradRAM) technique <ns0:ref type='bibr' target='#b25'>(Selvaraju et al., 2017)</ns0:ref>, which allows us to visualize the portions of the image that causes the image to be memorable. Even though this paper focuses on the results of the LaMem dataset, the key contribution of this paper is the novel ResMem-Net Neural Network architecture which can be used for any other classification or regression task in which the intermediate features of the CNN might be useful.</ns0:p><ns0:p>In the upcoming section, the proposed model and the datasets used are explained in detail. The Manuscript to be reviewed Computer Science update rule are also discussed. Then the results of the model are compared in detail with existing works and a qualitative analysis done to understand memorability is also discussed. Finally, the potential future enhancements is discussed in the conclusion.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>This section deals with the proposed Neural Network architecture, the dataset used, and the evaluation of the proposed model's performance. Further, the results obtained from an extensive set of experiments are compared with previous state-of-the-art results. It shows the superiority of the proposed architecture; for every problem solved by deep learning, four core entities have to be defined before the results are obtained. They are the dataset, the neural network architecture, the loss function and the training procedure.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Hybrid CNN for the prediction of memorability scores</ns0:head><ns0:p>This section provides a detailed explanation of the ResMem-Net. A visual depiction is given in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The figure shows that there are two distinct portions in the entire architecture. At the top of ResMem-Net, ResNet-50 <ns0:ref type='bibr' target='#b26'>(He et al., 2015)</ns0:ref> is used as the backbone, state of the art deep learning architecture for many applications. ResNet-50 is a 50-layer deep neural network that contains convolution kernels at each layer. The main innovation in ResNet-50 is the skip connection which helps to avoid vanishing gradients in very deep neural networks. The input image is given to ResNet-50, and the size of the image used in our experiment is 224x224 px. One of the core features of the proposed architecture is that the CNN part of the architecture is fully convolutional, and due to the use of Adaptive Average Polling layers, the input image can be higher or lesser than 224x224 px size.</ns0:p><ns0:p>At the bottom of ResMem-Net, a Long Short-Term Memory (LSTM) unit is responsible for predicting the output, the memorability score. LSTM is an enhancement to Recurrent Neural Networks (RNN). RNNs are generally used for sequential data such as text-based data or timeseries data. However, in RNN, there are no memory units to resolve any long-term dependencies <ns0:ref type='bibr' target='#b27'>(Cho et al., 2014)</ns0:ref>. Several variants of LSTM were analyzed, and it showed that the standard LSTM model with forget gate gave the best results on a wide variety of tasks <ns0:ref type='bibr'>(Greff et al., 2019)</ns0:ref>. 
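As a concrete illustration of this design, the sketch below shows one way to tap the hidden-layer activations of a pretrained ResNet-50 with forward hooks. This is a minimal PyTorch-style sketch and not the authors' implementation; the choice of tapping the four residual stages (layer1 to layer4) of torchvision's ResNet-50 is an assumption made for clarity.

# Minimal sketch (assumed setup, not the authors' code): capture the hidden-layer
# activations of a pretrained ResNet-50 backbone using forward hooks.
import torch
import torchvision

backbone = torchvision.models.resnet50(pretrained=True)
hidden_activations = []  # filled in forward order on every pass

def save_activation(module, inputs, output):
    # Store the (N, C, H, W) feature map produced by this residual stage.
    hidden_activations.append(output)

# The four residual stages are tapped here purely for illustration.
for stage in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
    stage.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)  # dummy input batch
hidden_activations.clear()
_ = backbone(x)                  # the hooks populate hidden_activations
print([tuple(a.shape) for a in hidden_activations])
# expected: [(1, 256, 56, 56), (1, 512, 28, 28), (1, 1024, 14, 14), (1, 2048, 7, 7)]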
In an LSTM unit, a 'cell state' is computed that can retain information from previous input sequences. LSTM units accept sequential data as inputs, and in this architecture, the inputs to the LSTM unit are the activations of the hidden layers of the ResNet-50 model, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>. Since the input sequences sent to the LSTM unit have to be of the same size, Global Average Pooling (GAP) is used to shrink the activations of the hidden layers to a size of (C x 1 x 1), where C is the number of channels. Global Average Pooling is very much like a densely connected layer in a Neural Network because it performs a linear transformation on a set of feature maps. This ensures that the size of the output activations at each layer, and hence the size of the input image, does not need to be fixed. As studied in <ns0:ref type='bibr' target='#b29'>(Hsiao et al., 2019)</ns0:ref>, Global Average Pooling also does not have any parameter to optimize, thus avoiding overfitting and reducing computational needs. GAP layers can be thought of as an entity that enforces the feature maps (outputs of intermediate layers) to be confidence maps of various intrinsic features of the input image. Hence, GAP also acts as a structural regularizer without requiring any hyperparameters. Also, global average pooling aggregates the spatial information, so the pooled features are robust to spatial changes in the feature maps. Further, a convolution operation is done on the output of the GAP layers to obtain a 128-channel output, which can be flattened to obtain a 128-dimensional vector. The main reason behind passing the hidden layer activations to the LSTM unit is to ensure that the cell state vector can remember the important information from the hidden layers. When the final layer's activation is passed to the LSTM unit, the important information of the previous layers along with the final layer's activation is obtained, and all that information is used to compute the memorability score. The LSTM layer's output is an n-dimensional vector, passed to a linear fully connected layer that gives a scalar output, which is the memorability score of the image. This strategy allows the model to use more than just the final layer's activations, which is what is generally done in the previous works discussed.</ns0:p></ns0:div> <ns0:div><ns0:head>Mathematical Formulation of the model</ns0:head><ns0:p>The input image is a tensor of size (3, 224, 224), denoted by A^0. The output of the L-th identity block is denoted by A^L, as shown in equations (<ns0:ref type='formula'>1</ns0:ref>) and (2). At each L-th identity block, the output of the identity block is calculated by:</ns0:p><ns0:formula xml:id='formula_0'>Z^L = W^L \otimes A^{L-1} (1), \quad A^L = \mathrm{relu}(Z^L), \quad \mathrm{relu}(a) = \max(0, a) (2)</ns0:formula><ns0:p>where \otimes denotes the convolution operation, Z^L is the output of the L-th identity block and A^L is the output of the activation function with Z^L as its input. For all L, A^L is passed through a Global Average Pooling layer, which converts a (C, W, H) tensor to a (C, 1, 1) tensor by taking the average of each channel in the activation matrix A^L. At the LSTM layer, the initial cell state is denoted by C_0, and h_0 denotes the initial activation. Before the hidden layer activations are passed to the LSTM, C_0 and h_0 are initialized as random vectors using the 'He' initialization strategy. 
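To make the data flow just described concrete, the following sketch pools each tapped activation with Global Average Pooling, projects it to a 128-channel vector with a 1x1 convolution, and feeds the resulting sequence step by step to an LSTM cell whose last hidden state is mapped to the memorability score by a linear layer. It continues the illustrative snippet above and is an assumed reconstruction, not the published implementation; in particular, the random initialization shown here only stands in for the 'He' initialization described in the paper.

# Illustrative head (assumed reconstruction): GAP + 1x1 conv + LSTM + linear layer.
import torch
import torch.nn as nn

class MemorabilityHead(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), hidden_size=128):
        super().__init__()
        self.hidden_size = hidden_size
        # One 1x1 convolution per tapped stage, mapping C channels to 128.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, hidden_size, kernel_size=1) for c in in_channels]
        )
        self.lstm = nn.LSTMCell(input_size=hidden_size, hidden_size=hidden_size)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, activations):
        n = activations[0].shape[0]
        device = activations[0].device
        # Stand-in for He initialization of the initial hidden and cell states.
        h = torch.randn(n, self.hidden_size, device=device) * (2.0 / self.hidden_size) ** 0.5
        c = torch.randn(n, self.hidden_size, device=device) * (2.0 / self.hidden_size) ** 0.5
        for act, proj in zip(activations, self.proj):
            pooled = act.mean(dim=(2, 3), keepdim=True)  # GAP: (N, C, H, W) -> (N, C, 1, 1)
            step = proj(pooled).flatten(1)               # 1x1 conv, flatten -> (N, 128)
            h, c = self.lstm(step, (h, c))
        return self.fc(h).squeeze(1)                     # one memorability score per image

head = MemorabilityHead()
score = head(hidden_activations)  # activations captured by the hooks above
print(score.shape)                # torch.Size([1])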
The LSTM unit consists of three important gates that form the crux of the model: the update gate, the forget gate, and the output gate.</ns0:p><ns0:formula xml:id='formula_1'>Update Gate: G_u = \mathrm{sigmoid}(W_{uc} c^{&lt;t-1&gt;} + W_{ux} x^{&lt;t&gt;} + b_u) (3), \quad Forget Gate: G_f = \mathrm{sigmoid}(W_{fc} c^{&lt;t-1&gt;} + W_{fx} x^{&lt;t&gt;} + b_f) (4)</ns0:formula><ns0:formula xml:id='formula_2'>Output Gate: G_o = \mathrm{sigmoid}(W_{oc} c^{&lt;t-1&gt;} + W_{ox} x^{&lt;t&gt;} + b_o) (5)</ns0:formula><ns0:formula xml:id='formula_3'>Hidden cell state: h^{&lt;t&gt;} = G_u * \tilde{h}^{&lt;t&gt;} + G_f * h^{&lt;t-1&gt;} (6), \quad LSTM output: c^{&lt;t&gt;} = G_o * h^{&lt;t&gt;} (7)</ns0:formula><ns0:p>The outputs G_u, G_f, G_o, h&lt;t&gt;, and c&lt;t&gt; can be calculated using the formulas given in equations (3), (4), (5), (6) and (7), respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>The Loss Function</ns0:head><ns0:p>The scores of the images in both of the mentioned datasets are continuous-valued outputs, making this entire task a regression task. To understand how well the model predicts memorability, loss functions are used, which approximate the divergence between the target distribution and the predicted distribution. Generally, for regression tasks, the L2 loss function, also known as the Mean Squared Error (MSE), is used; it is the loss function for the proposed model and the formula is given in equation (8).</ns0:p><ns0:formula xml:id='formula_4'>MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum (\theta)^2<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where \hat{y}_i represents the predicted value, y_i represents the ground-truth value of the i-th image in the dataset, \lambda represents the weight decay and \theta represents the weights. The second term is added to the loss function to prevent the model from overfitting. This regularization procedure is known as L2 regularization, which multiplies a weight decay (hyperparameter) by the sum of the squares of all the weights used in the Neural Network. The weight decay prevents the weights from becoming too large, which ultimately prevents the model from overfitting.</ns0:p></ns0:div> <ns0:div><ns0:head>Pseudocode for ResMem-Net</ns0:head><ns0:p>The pseudocode for the forward pass of ResMem-Net is given below. Initially, the information is passed through each layer in the backbone by passing the previous layer's output to the next layer. Each time an output from a layer is obtained, the outputs are passed to a global average pooling layer, which works as depicted in the function called globalAveragePooling. The outputs from the globalAveragePooling method are passed to the LSTM_CELL at each iteration. 
After the final iteration, the memorability score can be retrieved from the LSTM_CELL.</ns0:p><ns0:p>Procedure mem (images):</ns0:p><ns0:formula xml:id='formula_6'>Cache = [] A[0] = images[0]</ns0:formula><ns0:p>For i=1 to n_layers: In figure <ns0:ref type='figure'>3</ns0:ref>, the pipeline used during this research is depicted. The process starts with data collection and processing and then proceeds with the model development phase. In the model development phase, the model's architecture is initially defined, modified to our task and finally programmed. Then the training phase is done with the given datasets, and finally, hyperparameter tuning is done, where various batch sizes, learning rates and residual models are tried to find the optimal settings. Then to analyze the results, GradRAM technique is used to visualize the activation maps to understand how the model predicts the results.</ns0:p><ns0:formula xml:id='formula_7'>A[i] = W[i] ( ) a[i-1] + b[i] &#61636; A[i] = relu(A[i]) A[i] = A[i] + A[i-1] Cache[i] = A[i]</ns0:formula></ns0:div> <ns0:div><ns0:head>Dataset Used</ns0:head><ns0:p>In this paper, two publicly available datasets are used: the LaMem dataset and dataset from (Isola et al. LaMem is currently the largest publicly image memorability dataset that contains 60,000 annotated images. Images were taken from MIR Flickr, AVA dataset, Affective images dataset, MIT 1003 dataset, SUN dataset, image popularity dataset and Pascal dataset. The dataset is very diverse as it includes both object-centric and scene-centric images that capture a wide variety of emotions. The dataset from Isola et al. contains 2222 images from the SUN dataset. Both datasets were annotated using the Visual Memorability Game. Amazon Mechanical Turk was used to allow users to view the images and play the game which helped annotate the images.</ns0:p><ns0:p>Both the datasets were collected with human consistency in mind, i.e., the authors ran human consistency tests to understand how consistent the users are able to detect repetition of images. The consistency was measured using Spearman's rank correlation, and the rank correlation for LaMem and Isola et al. are 0.68 and 0.75 respectively. The human consistency was calculated by inviting a new set of participants to play the Visual Memorability Game. The participants were split into two halves and were asked to independently play the game for the images in the datasets. Then, the human consistency was measured by how similar the second half the participants' memorability scores were to the memorability scores obtained from the first half of the participants. This analysis show that humans are generally consistent when it comes to remembering or forgetting images. Also, for both the datasets, the authors of the datasets have themselves provided the dataset splits along with the dataset. Those files contain both the ground truth values of each image and information about whether they belong to the training or validation sets. In the LaMem dataset, 45,000 images are given for training, while 10,000 images for validation.</ns0:p></ns0:div> <ns0:div><ns0:head>Optimization</ns0:head><ns0:p>The loss function is actually differentiable and is also a function of the parameters of the Neural Network. The gradient of the loss function concerning the weights can guide us through a path to allow us to identify the right set of parameters that yield a low loss using gradient descent-based methods. 
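Before turning to the optimizer, a minimal sketch of the regularized loss in equation (8) is given below; it is only an illustration, and the weight-decay value used is a placeholder. In practice the same L2 penalty is often applied through the optimizer's weight_decay argument instead of being added to the loss explicitly.

# Minimal sketch of the loss in equation (8): MSE plus an L2 penalty over the weights.
import torch

def memorability_loss(predicted, target, model, weight_decay=1e-4):
    mse = torch.mean((predicted - target) ** 2)
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return mse + weight_decay * l2

# Example usage with the illustrative head defined earlier:
# loss = memorability_loss(score, ground_truth_scores, head)
# loss.backward()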
In our experiments, a slightly modified version of the ADAM optimizer is used, which is a combination of Stochastic Gradient Descent with Momentum and RMSprop, with a scaled cost-function term added to the gradient (Yi, Ahn, &amp; Ji, 2020). The loss surface of Neural Networks is very uneven due to the presence of too many local minima and saddle points. This modified version of ADAM uses exponentially weighted moving averages. Initially, the momentum values are computed using equations (9), (10) and (11):</ns0:p><ns0:formula xml:id='formula_8'>H(\theta) = \lambda J(\theta) + \frac{\partial J(\theta)}{\partial \theta}<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where J is the cost function, \lambda is a scaling constant (hyperparameter) and \theta denotes the weights.</ns0:p><ns0:formula xml:id='formula_9'>m_i = \alpha m_{i-1} + (1 - \alpha) H(\theta) (10), \quad v_i = \beta v_{i-1} + (1 - \beta) H(\theta)^2 (11)</ns0:formula><ns0:p>where \alpha and \beta are scaling constants, m_i and v_i are the first and second momentum terms, and m_0 and v_0 are initialized to 0.</ns0:p><ns0:p>After the momentum values are calculated, the update rule for the weights is given by equation (12),</ns0:p><ns0:formula xml:id='formula_10'>\theta_{i+1} = \theta_i - \eta \frac{\hat{m}_i}{\sqrt{\hat{v}_i} + \epsilon}<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>where \theta_i is the current weight, \theta_{i+1} is the updated weight, \eta is the learning rate, \hat{m}_i = m_i / (1 - \alpha), \hat{v}_i = v_i / (1 - \beta), and \epsilon is a constant to avoid division by zero (usually 10^{-6}).</ns0:p><ns0:p>Adding the scaled cost function to the gradient of the cost function with respect to the weights ensures that the loss landscape is much smoother and the optimization can converge at a good minimum. This helps because, even if the gradient of the cost function with respect to the weights is very small, adding the scaled version of the cost function ensures that the weights keep changing, so the model does not get stuck in local minima or saddle points.</ns0:p></ns0:div> <ns0:div><ns0:head>Learning Rate and One Cycle Learning Policy</ns0:head><ns0:p>The learning rate is one of the most important hyperparameters in deep learning, as it decides how quickly the loss moves towards a minimum on the loss function's surface. The learning rate can decide whether a model converges or diverges over time. If a high learning rate is used throughout the training process, the loss of the model may diverge over time, but if it is set to a low value, then the model may take too much time to converge. To solve this issue, the learning rate is generally reduced over time using a decaying function. But decaying functions can lead to the model's parameters being stuck in saddle points or local minima, which can lead to the model not learning new parameters in the consecutive epochs. To avoid these issues, <ns0:ref type='bibr' target='#b31'>(Smith, 2018)</ns0:ref> proposed a method called One Cycle Learning. In the one cycle learning policy, for each epoch, the learning rate is varied between a lower bound and an upper bound. The lower bound's value is usually set at 1/5th or 1/10th of the upper bound.</ns0:p><ns0:p>In one cycle learning, each epoch is split into 2 steps of equal length. 
All deep learning models are trained using mini-batches, so if the dataset has 100 batches, the first 50 batches are included in step 1 and the rest are included in step 2. During the start of each epoch, the learning rate is set to the lower bound's value and at the end of each mini-batch, the learning rate is slowly increased to ensure that the learning rate reaches the upper bound by the end of step 1. In step 2, the training proceeds with the upper bound as the learning rate and then the learning rate is slowly decayed after each mini-batch, to ensure that by the end of step 2, the learning rate is back to lower bound. This is then repeated for each epoch. Varying the learning rate between a high and a low value allows the model to escape the local minima or saddle points. The higher learning rate allows the model to escape local minima and saddle points during training, while the lower learning rate ensures that the training leads to parameters that ensure a lower loss in the loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning</ns0:head><ns0:p>Transfer learning is training a model on a large dataset and retraining the same model on a different dataset with lesser data. Intuitively, the learned features from larger datasets are used to help improve accuracy on datasets with smaller data points. In our work, a pre-trained ResNet-50 that is trained on the ImageNet dataset is used, which contains 3.2 million images, with each image categorized in one among the 1000 categories.</ns0:p><ns0:p>In our work, the semantic features learned through ImageNet will allow the model to be quickly trained and perform better on identifying the memorability of images in the validation dataset. The feature maps in the pretrained ResNet-50 will contain feature maps for objects, scenes and other visual cues that aren't present in the images present in the datasets that are used to train the model. This is so because, the pretrained ResNet-50 was trained on a dataset with diverse set of images. So, after careful retraining, many of these feature maps in the pretrained model will be retained. This will allow the re-trained model to identify the objects and scenes not present in the LaMem and Isola et al. datasets, which can drastically improve real-time deployment performance. Empirical evidence for the above explanation is given in <ns0:ref type='bibr' target='#b32'>(Rusu et al, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments and Results</ns0:head><ns0:p>In this section, the evaluation criteria, training settings and outcome of the experiments are discussed.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Metric</ns0:head><ns0:p>The L2 loss function is generally a good metric to find how well the proposed model performs, but here the Rank Correlation method is also used to evaluate the proposed model. The Spearman Rank Correlation (&#961;) is computed between the predicted score and target score, is used to find the consistency between the predicted scores and target score from the dataset. The value of &#961; ranges from -1 to 1. If the rank correlation is extremely close to 1 or -1, then it means that there is a strong positive or negative agreement respectively between the predicted value and ground truth, while a rank correlation of 0 represents that there is complete disagreement. 
The rank correlation between the predicted and target memorability scores is given by equation (<ns0:ref type='formula' target='#formula_12'>13</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_12'>\rho = 1 - \frac{6 \sum_{i=1}^{n} (r_i - s_i)^2}{n^3 - n}<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>where r_i is the ground truth, s_i is the predicted value from the model, and n is the number of images in the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Training Settings and Results</ns0:head><ns0:p>The batch size was set at 24 throughout the training process and the images were resized to a size of 224x224. Since transfer learning is employed, while the backbone's (ResNet-50) parameters were frozen, the upper bound and lower bound for the learning rate were set at 0.01 and 0.001 respectively. After 10 epochs, the backbone's parameters were unfrozen and the upper bound and lower bound for the learning rate were set at 0.001 and 0.0001 for 15 epochs. For the rest of the epochs, the lower bound and upper bound for the learning rate were set at 0.0001 and 0.00001 respectively. The training process consisted of a total of 40 epochs. For regularization, a value of 0.0001 was set as the L2 weight decay. The model was trained on an Nvidia Quadro P5000 GPU, which has 16GB of GPU memory and 2560 CUDA cores. To ensure stable training, normalization and dropout layers were used. The authors of the LaMem dataset have given 5 training set splits because each image has multiple annotations. Hence, 5 different models were trained, one for each split, and the results were averaged across the models while testing. For cross-validation purposes, the authors of the LaMem dataset divided the dataset into five sets, where each set contains 45,000 images for training, 10,000 images for testing and 3,741 images for validation purposes. After training the model using the above settings, ResMem-Net obtained a rank correlation of 0.679 on the LaMem dataset and 0.673 on the Isola et al. dataset, as mentioned in table 1 and table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this section, the experimental outcomes are compared with existing models on two datasets, namely LaMem and the Isola et al. dataset. Then, a comparison of the number of parameters present in the existing models and the proposed model is made to establish why the proposed model has lower memory requirements. Finally, the results of a qualitative analysis using the GradRAM method are presented to understand which regions of the image lead to higher memorability scores, to answer the question, 'What makes an image more memorable?'.</ns0:p><ns0:p>The Spearman Rank Correlation metric has been used to evaluate the models and the consistency of the results. Since each image has been annotated by multiple subjects, the rank correlation metric is better suited to compare how consistently the models predict memorability scores. The five models discussed in the introduction are considered and the average of the results is compared with the previous works. To ensure that the comparison of the results is fair, as mentioned in the results section, five models were trained on the five sets and the results were averaged. 
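For completeness, the rank correlation used in these comparisons can be computed directly from predicted and ground-truth scores. The snippet below is only an illustration of the evaluation metric, using scipy.stats.spearmanr, and is not the authors' evaluation script.

# Illustrative evaluation sketch: Spearman rank correlation and MSE between
# predicted and ground-truth memorability scores.
import numpy as np
from scipy.stats import spearmanr

def evaluate(predicted_scores, ground_truth_scores):
    predicted = np.asarray(predicted_scores, dtype=float)
    target = np.asarray(ground_truth_scores, dtype=float)
    rho, _ = spearmanr(predicted, target)
    mse = float(np.mean((predicted - target) ** 2))
    return rho, mse

# Example: rho, mse = evaluate(model_outputs, validation_labels)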
The models used for comparison were also trained in the same way by the respective authors, hence that ensures that the differences between the results of the previous models is due to the models only, not due to any other reasons. The reason behind the superior performance can be attributed to the use of LSTM unit, modified optimization function, pretrained ResNet-50 backbone and the use of cyclic learning rates. Tables <ns0:ref type='table'>3 and 4</ns0:ref> depict the predicted scores on various sets of images on the dataset. In both the tables, the images are arranged in descending order of the predicted memorability scores. For example, the 'Top 10' row depicts the average of the top 10 highest predicted memorability scores that are predicted by various network architectures and finally, the average of the ground truth of the same images is also given in the same row. The results are based on average over the 5-fold cross-validation tests as provided by the creators of the datasets.</ns0:p><ns0:p>From both Tables <ns0:ref type='table'>3 and 4</ns0:ref>, it can be inferred that, on average, ResMem-Net performs better than previously proposed models on both Since the results are from the validation set, it is clear that the model did not overfit but rather learned features that contribute to the memorability scores of the image. The validation set also encompasses a wide variety of landscapes and events, which also leads us to believe that the model performs well on different kinds of images.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Complexity Analysis</ns0:head><ns0:p>This section deals with the comparison of the computational complexity of various previous models and ResMem-Net. It has been already established that ResMem-Net is quite minimal compared to other previously proposed models in network parameter size. </ns0:p></ns0:div> <ns0:div><ns0:head>Qualitative analysis of the results</ns0:head><ns0:p>In this section, the inferences and patterns that were identified after visually analyzing the results of ResMem-Net are discussed. To aid us with this process, GradRAM technique was used to understand which part of the images are focused by ResMem-Net or in other words, which part of the image gives larger activations. GradRAM is an extension of Class Activation Maps (CAM), which uses the gradient obtained during backpropagation process, to generate heatmaps. The heatmaps generated shed light on which part of the image enhances the image's memorability. This shows how the hidden layers in the ResNet-50 backbone outputs feature maps and it is this information that has is being used by the LSTM unit to make predictions. Based on the heatmaps and careful manual analysis of the results using randomly selected images for different categories from the Isola et al. dataset, the following inferences are made:</ns0:p><ns0:p>The object in the image contributes more to the memorability score than the scene in which the object is placed. In almost every heatmap, it is observable that the portion of the image containing the main object provides higher activations compared to the rest of the image. Also, images with no objects are predicted to be less memorable compared to images containing objects (both living and non-living). Also, images containing a single central object is seen to be more memorable than images with multiple objects. 
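The heatmaps referred to in this analysis can be produced with a Grad-CAM-style procedure (Selvaraju et al., 2017). The sketch below is a generic illustration of that technique applied to a scalar regression output and is not the exact GradRAM code used in this work; the choice of the last tapped activation is an assumption.

# Generic Grad-CAM-style heatmap sketch for a scalar memorability output.
import torch
import torch.nn.functional as F

def gradcam_heatmap(feature_map, score):
    # feature_map: (1, C, H, W) activation that is part of the autograd graph;
    # score: scalar model output computed from that activation.
    grads = torch.autograd.grad(score, feature_map, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * feature_map).sum(dim=1))  # (1, H, W)
    cam = cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
    return cam

# Usage with the earlier illustrative snippets:
# heatmap = gradcam_heatmap(hidden_activations[-1], score.sum())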
The average rank correlation of the predicted memorability of the images with a single central object is 0.69, while the average rank correlation of the images without a central object is 0.36. Also, the presence of humans in the image contributes to a better memorability score. If the human in the image is clearly visible, then the memorability averages at 0.68, while if the image does not contain any human or object, then the memorability averages at 0.31.</ns0:p><ns0:p>Using a model pretrained on object classification datasets provides better results and trains faster than using a model pretrained on scene classification datasets <ns0:ref type='bibr' target='#b35'>(Jing et al., 2016)</ns0:ref>. This can be attributed to the fact that memorability scores are directly related to the presence of objects. Thus, a model whose weights contain information about objects takes less time to converge to a minimum. Also, image aesthetics does not have much to do with image memorability. Some images containing content related to violence are not aesthetically good, but the memorability score of the image is high.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Capturing memorable pictures is a bit challenging as it requires an enormous amount of creativity. However, just like any other phenomenon in nature, humans' capability to remember certain images follows a pattern. This paper introduces ResMem-Net, a novel neural network architecture that combines a pretrained deep learning model (ResNet-50) and an LSTM unit. The model was trained using the One Cycle Learning Policy, which allows the use of cyclic learning rates during training. ResMem-Net has provided close-to-human performance on predicting the memorability of an image using the LaMem dataset, which is the largest publicly available dataset for image memorability. The rank correlation of ResMem-Net is 0.679, which is extremely close to the human accuracy of 0.68. This result is a 6.09% increase over the performance of MemNet, a 35% increase over CNN-MTLES, a 2% increase over MCDRNet and a 1.2% increase over EMNet. Based on the qualitative analysis executed using the GradRAM method, it was inferred that the object plays a bigger role in enhancing the memorability of the image. A pre-trained model that consists of weights from an object classification dataset converges more quickly than a model pre-trained on scene classification. These results were observed manually by looking through the highly rated images and lowly rated images. Heatmaps generated using the GradRAM method were also used to analyze and obtain the above inferences. The limitation of the current work is that even though the model contains a much smaller number of parameters than other state-of-the-art models, ResMem-Net is still not deployable to mobile-based GPUs. To solve this issue, further research can be done to use mobile compute-efficient architectures like MobileNetV3 or EfficientNet, which are also pre-trained on the ImageNet dataset. Further research can also be done to improve the accuracy of the model by replacing ResNet-50 with more recent architectures like ResNeXt. The LSTM unit can also be replaced with more recent architectures like the Transformer architecture or bidirectional RNNs. 
A more generic suggestion is to spend time to develop larger datasets for image memorability prediction because with larger datasets, neural networks can generalize better.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>use of transfer learning, optimization function, evaluation metrics, loss function and weight PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:1:1:NEW 26 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Update Gate -Decides what information should be remembered and what information should be thrown away 2. Forget Gate -To decide which information is worth storing 3. Output Gate -The output of the LSTM unit Update Gate :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>For i = 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>x, h t-1 , c t-1 ):it = sigmoid ( W xi * x + W hi * h t-1 + W ci * c t-1 + b i ) ft = sigmoid ( W xf * x + W hf * h t-1 + W cf * c t-1 + b f ) ct = ft* c t-1 + it * tanh(W hc * h t-1 + W xc * x + b c ) ot = sigmoid( W xo * x + W ho * h t-1 + W co * ct + b o ) ht =ot * tanh(ct) return ht, ct Procedure globalAveragePooling(tensor): c, h, w = dimensions(tensor) for i in range(c): Avg = (1/h) * (&#931;tensor[i]) tensor[i] = Avg return tensor</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>m i , v i -first, second momentum m 0 , v 0 -initial momentum (set to 0) PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:1:1:NEW 26 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Global Average Pooling is very much like Densely connected Layers in Neural Networks because it performs a linear transformation on a set of</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:1:1:NEW 26 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>represents the results of various models on the LaMem dataset and table 2 represents results of various models on the Isola et al. dataset. Table1and figure4show that ResMem-Net attains a rank correlation of 0.679, a 6.09% increase from MemNet, a 35% increase from CNN-MTLES, a 2% increase from MCDRNet and a 1.2% increase from EMNet. The human-level Manuscript to be reviewed Computer Science accuracy on LaMem is 0.68, and ResMem-Net has brought us extremely close to human accuracy with a difference of just 0.001. From table 2 and figure5, it can be inferred that ResMem-Net attains a rank correlation of 0.673, which is a 10.33% higher from MemNet, 5.48% increase from MCDRNet, 1.4% increase from EMNet and a 45.67% increase from SVR. The authors have not provided the human accuracy for this dataset. Hence, it is not possible to tell how close ResMem-Net is to human accuracy for the Isola et al. dataset, but it is clear that ResMem-Net has outperformed all other previous works.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:1:1:NEW 26 Aug 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Isola et al. dataset and LaMem dataset. For the top 10, EMNet predicts a memorability of 91.89% and 82.43% on the LaMem and Isola et al. datasets, respectively, while ResMem-Net predicts an average memorability of 93.82% and 82.61% on the same LaMem and Isola et al. datasets, respectively. 
MCDR-Net obtained an average memorability 93.15% and 81.75%, while MemNet has obtained 91.7% and 80.16% on the LaMem and Isola et al. datasets respectively for the top<ns0:ref type='bibr' target='#b9'>10</ns0:ref>. When compared to the ground truth, which is 100%, these scores clearly state that ResMem-Net is more consistent with the images with high memorability. On the other hand, when for the 'Bottom 10' images, EMNet predicts an average memorability of 48.41% and 27.42%, while for MCDRNet it is 50.94% and 26.52% on the LaMem and Isola et al. datasets, respectively. In comparison, ResMem-Net predicts an average memorability of 47.9% and 27.42%, respectively. Again, when comparing those results to the ground truth values, which is 33.57% and 5.69% for LaMem and Isola et al. datasets respectively, ResMem-Net provides similar results to EMNet. It would also be unfair to completely ignore the results of traditional machine learning algorithms on image memorability. Despite many empirical results that depict the superiority of deep learning algorithms on computer vision tasks, certain studies have shown that the use of hand-crafted features when ensembled with machine learning algorithms such as SVR or Random Forests can, in fact, provide better results. Of course, concerns regarding the generalization of the models on new data have been raised, which are the very papers that propose the non-deep learning-based strategies themselves. However, in table 5 and figure6, it is very clear that the proposed ResMem-Net quite easily outperforms traditional machine learning strategies.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table 6 shows the number of parameters or weights present in ResMem-Net, MemNet, MCDRNet and EMNet. It is clear from the table that ResMem-Net has a significantly much lesser number of parameters than the previously proposed network architectures. CNN's are composed of convolution operations, which are very much compute intensive. So, the lesser the number of parameters, the faster the model takes to provide outputs. The bigger advantage of ResMem-Net is that it has a significantly lesser number of weights and still provides better accuracy on both LaMem and Isola et al. datasets. The time taken for ResMem-Net to process an image of size 512x512px is approximately 0.024s on Nvidia Quadro P5000 GPU. It should also be noted that having too many parameters can cause overfitting and hence ResMem-Net is less prone to overfitting because it has a significantly lower number of weight parameters.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:1:1:NEW 26 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"ResMem-Net: Memory based deep CNN for image memorability estimation Manuscript ID: 61139 Manuscript Type: Article We would like to thank the Editor and reviewers for their helpful comments and suggestions. We have updated the manuscript to address all the comments supplied by the reviewers. We feel that by incorporating these suggestions, the quality of the paper has improved substantially. This material below addresses each issue raised by the editors and the reviewers. *Reviewer -1* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment 1: This paper discusses the image memorability using Deep CNN and the work presented in this paper reads well and is logically structured in a clear narrative. The paper has a descriptive title, abstract, and keywords. Still, I have the following comments and observations that need to be accommodated or addressed in this paper: It is better to state the main motivations and contributions of this paper more clearly in the introduction. The novelty of the proposed work should be better emphasized. Response: The motivations and contributions have been elaborated in the introduction section. Comment 2: In One Cycle Learning, the use of the words Cycle, iterations, and epoch seems ambiguous and not properly explained. Response: The introductory part for One Cycle Learning has been rewritten and is present in line numbers 454 to 476. Comment 3: It is also not clearly mentioned in what way One Cycle Learning leads to better model fitting. Response: The reason behind why One Cycle Learning leads to better model fitting is added to line numbers 472 to 476. Comment 4: In addition, the authors should include the details about the hyper-parameter settings. Response: The hyper parameter settings and hardware used have been added. The content is present in line numbers 516 to 524. Comment 5: The evaluation metrics mention that the rank correlation of -1 represents complete disagreement, which I don't think is the case. The statement can be rechecked and rewrite accordingly. Response: The statement has been rechecked and rewritten. The changes are present in line numbers 507 to 509. *Reviewer -2* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment 1: The English language in the paper has to be polished. Response: The language and grammar mistakes have been checked and rewritten. Comment 2: What are the limitations of the existing works that motivated the current work? Response: The limitations of the existing works have been mentioned in line numbers 204 to 210. Comment 3: Discuss the drawbacks of the current work in the conclusion. Response: The drawbacks of the current work and future enhancement has been added to the conclusion. Comment 4: There must be maximum two paragraphs in conclusion section. The first paragraph is for briefly discussing the entire paper and the second paragraph is for discussing some future works. Response: The potential future works have been added to the conclusion part and is present in line numbers 657 to 665. Comment 5: Short form must be defined as properly when used for the first time. Response: The short forms have been checked and have been defined and spelled out when used for the first time. Comment 6: Never use I, we, you, our, etc. in a research article. Use one “Tense” to write the entire paper. “Present Tense” is preferable. 
Response: Those first person words were checked and removed. The tense was also changed wherever appropriate. Comment 7: All the figures, tables, equations and references must be cited in the text. Response: All the tables, equations and references has now been cited in the text. Comment 8: References must be cited sequentially starting from 1, then 2, then 3 and so on. Response: The references have now been sequentially cited in the text. Comment 9: The discussion is very important in the research paper. Nevertheless, this section is short and should be presented completely. Response: More information regarding the datasets and the annotation process has been added. Recent papers and more details regarding the existing work has been added. *Reviewer -3* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment 1: Paper is well written. Authors should add a little background of the study and limitations of the existing works and clearly explain the contributions at the end of the introduction. Response: The limitations of existing works have been elaborated in the introduction and the contributions are explained. Comment 2: Spell out each acronym the first time used in the Abstract as well as the body of the paper. Response: The acronyms have been checked and have been spelled out the first time it was used in both the abstract and the body. Comment 3: The objectives of this paper need to be polished. Contribution list should be polished at the end of the introduction section and last paragraph of the introduction should be the organization of the paper. Response: The objectives of the paper and the organization of the paper have been added. Comment 4: The authors should clearly mention all parameters used to evaluate the performance of the model. Response: More details about the evaluation methodologies has been added in line numbers 529 to 532. Comment 5: The procedures and analysis of the data are seen to be unclear. Response: More information about the dataset has been added to the introduction and “Dataset Used” sections. More information about the qualitative analysis has been added as well. Comment 6: Authors should clearly describe splitting criteria. Response: The splitting criteria have been added in lines 408 to 412. Comment 7: Should discuss more about the dataset. Response: More information about how the datasets were annotated and splitting criteria has been added in the Introduction, Dataset Used and Conclusion sections. Comment 8: Major contribution was not clearly mentioned in the conclusion part. Response: Major contribution has been added with more minor details in line numbers 643 to 646 in the conclusion. Comment 9: The discussion is very important in research paper. Nevertheless, this section is short and should be presented completely. Response: More information regarding the datasets and the annotation process has been added. Recent papers and more details regarding the existing work has been added. Comment 10: Please improve the overall readability of the paper. Response: The paper has been proofread and grammar mistakes have been rectified. *Reviewer -4* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment1: The literature review should mention more recently published papers, i.e, models published in the year 2020. 
Response: Recent papers have been added to the literature review and are present in line numbers in 183-198. Comment2: No mathematical equations (or) pseudocode have been given for the optimization function. Also, the difference between ADAM optimizer and the optimization function used for the proposed architecture needs clear explanation. The line 'It adds the cost function the cost function to ensure that our learning path is much smoother and can converge at a good minimum' isn't sufficiently clear. Response: The mathematical equations for the optimization function have been added. The difference between the modified ADAM optimizer has been elaborated and the reason why the optimization function works better than ADAM optimizer has been elaborated too. Comment3: It has been mentioned in the paper that the model has been trained using the Mean Squared Error (L2) loss function. However, there is no comparison of the final L2 scores of the proposed model and the final L2 scores of the previously proposed models. Response: Since the previous works haven’t included the final L2 scores on their papers and since L2 scores don’t measure consistency across splits, the L2 loss comparison couldn’t be done in this paper. However, the final L2 score of the proposed model has been mentioned in the paper and it is 0.011. Comment4: In what way does Global Average Pooling improve generalizability? It is mentioned in the manuscript that the Global Average Pooling helps prevent over fitting, but the reasoning behind that statement requires explanation. Response: A more detailed reasoning for why global average pooling helps overfitting and how it helps the model has been added to line numbers 284 to 288. Comment5: Approximate runtime for processing an image using the model during inference should be provided in computational complexity analysis. Response: The approximate runtime for processing an image using the model has been added to the Computational complexity analysis section. Comment6: Include comparison table of proposed model to traditional non-Neural Network based models. Response: Figure 7 and Table 5 include comparisons of the proposed model to the traditional machine learning models proposed previously. Comment7: In results and discussion section, provide information about why the proposed model is able to provide better results than the other models. Response: The information has been added to the results section. Comment8: Explain why Spearman's Rank correlation is used to compare the models. The model has been trained with L2 loss, so why isn't it used to evaluate the proposed model. Response: The reason behind why the spearman rank correlation is used to compare the models has been mentioned in the results and evaluation metric sections. Comment9: During the qualitative analysis, is the model's accuracy evaluated with images outside of the mentioned datasets? Response: The images were randomly chosen taken from the Isola et al. dataset. This has been added to the qualitative analysis section. Comment10: The inferences made in the qualitative analysis section should be backed with more statistics. Response: Average memorability scores for a couple of categories have been added to the qualitative analysis. Comment11: In the Dataset Used section, information regarding how the data was collected and tagged, and the distribution of the memorability scores needs to be discussed. 
Response: Information about how the data was collected, annotated and the splitting criteria for the validation set has been added to the dataset. Comment12: The meaning of the variables used in the equations should explained after each equation is mentioned. Response: The equations have been checked and now all the variables have been explained. Comment13: The conclusion is not covering the drawbacks of the current manuscript. Response: The drawbacks of the current models have been discussed in conclusion *Reviewer -5* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment1: This paper addresses the interesting question of what makes an image memorable and uses state-of-the-art computational models in its quest to better understand why certain images are remembered better than others. The paper introduces a novel neural network architecture which can be used to better approximate, relative to other models, the memorability of images. I enjoyed this paper and think it has the ability to contribute a useful model to the literature. In addition to proposing a novel model, the authors utilize a clean approach consisting of publicly available datasets to provide support for the utility of their neural network design and its applicability to other researchers. I do, however, have a few concerns about the paper in its current form. Below I have listed these specific concerns, as well as suggestions as to how, in my opinion, the paper could be improved. I have organized these concerns relating to both theoretical considerations and content, as well as writing and style. I would recommend revising your writing with respect to memorability, so that it is a bit cleaner and easier for the reader to follow. In lines 91-92 you describe “measure an individual's probability of remembering an image”, but then later you discuss that memorability is a property of the image. Try to be clearer about whether the memorability you are focused on in this work is a property of the image or of a person. The phrasing in line 95 implies that memorability could be tied to the person, rather than the image. Response: Thank you for the suggestion. The phrasing and language did seem slightly ambiguous. So, those lines have been modified and in some places elaborated in line numbers 86, 89, 91 to 93 and 98 to 115. Comment2: In line 200: “The main innovation in ResNet”, is this your innovation or the work of the creators of ResNet? If so, please cite. Response: The paper that introduced ResNet has been cited. Comment3: Pertaining to lines 115-116, in the previous paragraphs you discussed how memorability could vary between people, provide a bit more detail about how this memorability score here was annotated. Response: The visual memorability game was used to annotate the images and this has been added to those lines. The previous manuscript did not elaborate the visual memorability game, but now it has been elaborated in lines 98 to 115. Comment4: In line 174, what is the “human performance” you are referring to here? Can you describe where this comes from? And then later in line 410, is this the same human accuracy referenced here? If not, please describe. Response: The methodology used to calculate the human performance (human consistency) has been added to the “Dataset Used” section in line numbers 402 to 407. 
Comment5: Pertaining to lines 373-375, can you explain more about how real-time deployment performance can drastically improve for objects and scenes that are not in the new datasets? Response: A better explanation has been added under transfer learning section. Comment6: In line 107, what type of deployment do you mean here? Are you referring to the application of it? Please include citations to support the various claims made in lines: ⁃ Line 89-90 ⁃ Line 104-109 ⁃ Line 134 (and remove the phrase “which was already mentioned“) ⁃ Line 487 - 493 A few instances of repeated words or phrasing ⁃ Line 399-400 “cross-validation purposes” ⁃ Line 497-499 “humans’ capability…” Re-word/Re-phrase to make comprehension easier for the reader ⁃ Line 41 “a hardest problem” ⁃ Line 89 & 334 “prove” ⁃ Line 110-111 rephrase “was given” ⁃ Line 122-123 “greedy algorithm” -- I am not sure what you mean by this ⁃ Line 159-160 and 165-166 both statements “propose” the new model. Can you remove one of these to make this clearer? Response: The word ‘Deployment’ in line 107 here refers to deployment to web servers or as a software to computers with low GPU capabilities. I’ve rephrased it again to improve clarity. The citations have been added to the various claims in above mentioned line numbers. The above-mentioned suggestions to rephrase or re-word the content has also been done. Comment7: The methods and approach you described are technically sound and fits with the general scope of this journal. In line 45 in the Abstract, you note the “information from the hidden layers” — could you speak to what type of information is stored in each layer? Can you connect this to the qualitative section later in the paper where you discuss contributions to memorability such as the presence of an object within the scene and the presence of a human, etc. Response: The “information from the hidden layers” part has been explained in the abstract and is also connected to the qualitative section later in the paper. Comment8: Can you provide data and/or a quantifiable measure to support the claims in lines 481-485. For example, instead of stating that manually you observed a difference with respect to the presence of a human or not, you could quantify the average memorability for images containing a human vs without a human. Response: The statistics have been added to that section. Comment 9: In line 436, please report the ground-truth values here like you did in line 432 for the Top 10 Response: The ground truth values have been added. Comment 10: The findings and data reported are appropriate, as well as the figures to support findings. Though I would recommend adding to the description of, Figures 5, 6, & 7, to state where the reported performance comes from (e.g., averaged across, etc.). Additionally, the axis labels in Figure 9 are nearly impossible to read. Though not extremely important, it would be helpful to provide a figure with higher resolution so that they can be read. Conclusions are well stated and impact of the novel model is described. When testing with the other models, did you use the same 5-set cross-validation approaches across all models? Please provide more details to confirm the similarity in the approaches so that it is easier to compare the various models and to be confident that differences in consistency of the predicted scores are due to the models themselves, not just the other processes/steps. Response: All the models used the same 5-set cross validation sets provided by the authors themselves. 
This detail has been added to the results section. The description for the tables have been modified. Comment 11: In line 111, is this the dataset that you refer to later as the Isola dataset? If so, please label it here on the first time you introduce it. If not, what makes this different? In line 328, the same comment -- is this the Isola dataset? Response: Yes, the dataset referred in both the places is the Isola et al. dataset. It is has been labelled now and cited too. Comment 12: In the supplementary material you provide the link to the LaMem dataset, however, can you also add how to access the Isola dataset? Response: The link to the dataset is: http://people.csail.mit.edu/phillipi/Image%20memorability/cvpr_memorability_data.zip The link has been added to the supplementary material too. Comment 13: Minor writing suggestions 1) A few instances of unnecessary capitalization throughout, for example: Line 84 “Image Memorability” Line 440 “Computer Vision tasks” Line 491 “Image Aesthetics” Response: These changes have been made. Comment 14: Line 202, the use of ResNet-50 is inconsistent, please use the same phrasing throughout so it is easier for the reader to follow Response: The consistency of the use of ResNet-50 has been checked throughout the paper and have been rectified. Comment 15: Line 276 “it’s” should be “its” Response: This has been rectified. Comment 16: Line 418, the word “close” appears twice Response: The repetition has been checked. Comment 17: Line 161 “ResMet-Net” should be “ResMem-Net” Response: The spelling has been modified. Comment 18: 508 “per-trained” model should be “pre-trained model” Response: The spelling mistake has been rectified. We once again would like to thank the reviewers for their constructive comments that helped to improve the quality of our work. We hope that our response is acceptable for the queries raised by the reviewers. Thanking You. Sincerely, Authors "
Here is a paper. Please give your review comments after reading it.
271
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Image memorability is a very hard problem in Image Processing due to its subjective nature. But due to the introduction of Deep Learning and the large availability of data and GPUs, great strides have been made in predicting the memorability of an image. In this paper, we propose a novel deep learning architecture called ResMem-Net that is a hybrid of LSTM and CNN that uses information from the hidden layers of the CNN to compute the memorability score of an image. The intermediate layers are important for predicting the output because they contain information about the intrinsic properties of the image. The proposed architecture automatically learns visual emotions and saliency, shown by the heatmaps generated using the GradRAM technique. We have also used the heatmaps and results to analyze and answer one of the most important questions in image memorability: 'What makes an image memorable?'. The model is trained and evaluated using the publicly available Large-scale Image Memorability dataset (LaMem) from MIT. The results show that the model achieves a rank correlation of 0.679 and a mean squared error of 0.011, which is better than the current state-of-the-art models and is close to human consistency (p=0.68). The proposed architecture also has a significantly low number of parameters compared to the state-of-the-art architecture, making it memory efficient and suitable for production.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Data is core essential component to almost every media platform in this digital era, starting from the television to social networks. Every media platform relies on content to engage their users. It provides a compulsion for these platforms to understand the exponentially growing data to serve the right content to their users. Since most of these platforms rely on visual data, concepts such as popularity, emotions, interestingness, aesthetics, and, most importantly, memorability are very crucial in increasing viewership <ns0:ref type='bibr' target='#b0'>(Kong et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Celikkale, Erdem &amp; Erdem, 2015)</ns0:ref>. In this paper, image memorability is the concept taken into consideration which is one of the most underexplored deep learning applications.</ns0:p><ns0:p>Human beings normally rely on visual memories to remember things and also will be able to identify and discriminate objects in real life. Human cognition to properly remember and forget visual data is crucial as it affects every form of our engagement with the external world <ns0:ref type='bibr' target='#b2'>(Bainbridge, Dilks &amp; Oliva, 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Schurgin, 2018)</ns0:ref>. However, not all humans remember the same visual information in a common manner <ns0:ref type='bibr' target='#b4'>(Gretz &amp; Huff, 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Rust, Mehrpour., 2020)</ns0:ref>. It is a long-standing question that neuroscientists have asked for years, and research is still underway to explain how exactly the cognitive processes in the brain encode and store certain information to retrieve that information when required properly. 
The human brain can encode intrinsic information about objects, events, words, and images after a single exposure to visual data <ns0:ref type='bibr' target='#b6'>(Alves et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Fukuda, Vogel., 2019)</ns0:ref>. Image memorability is generally measured as the probability that a person will be able to identify a repeated photograph when he or she is presented with a stream of images <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. By definition, image memorability is a subjective measure that approximately quantifies how a person can remember an image <ns0:ref type='bibr'>(Isola et al., 2011b)</ns0:ref>. Cognitive psychologists have shown that more memorable images leave a larger trace of the brain's long-term memory <ns0:ref type='bibr' target='#b10'>(Broers &amp; Busch , 2021)</ns0:ref>. However, the memorability of a certain image can slightly vary from person to person and depends on the person's context and previous experiences <ns0:ref type='bibr' target='#b11'>(Bainbridge, 2020)</ns0:ref>. But this slight variation is fine because this allows us to make approximate predictions using computational methods.</ns0:p><ns0:p>Researchers have shown that, even though there exist slight variations, humans show a level of consistency when remembering the same kind of images with a very similar probability irrespective of the time delay <ns0:ref type='bibr' target='#b12'>(Sommer et al., 2021)</ns0:ref>. This research has led to the inference that it is possible to measure an individual's probability of remembering an image. To measure the probability that a person will remember an image, the person is presented with a stream of images. This process is called Visual Memorability Game <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. The stream of images contained two kinds of images, targets, and fillers. The annotator is shown images one by one, where the image is displayed for 2.4 seconds. Between each target image multiple filler images are shown unbeknownst to the annotator. On a random manner, previously shown target images are repeatedly shown now and then. When each image is being shown, the annotator is asked to press a key in the keyboard if that annotator feels that the target image is being repeated in the stream. Based on this, then the percentage of times the annotator has correctly identified repeated target images will be checked and is annotated as the memorability of the target image from one annotator. The same set of images are shown to multiple more people in the same manner. So, the approximate memorability scores will be obtained for the same image from multiple people. Image memorability is a reflection of individual viewing the image, but however, the level of memorability of an image is quite similar across individuals most of the time <ns0:ref type='bibr' target='#b12'>(Sommer et al., 2021)</ns0:ref>. So, since the memorability of the image is only going to slightly vary for most people, these approximate measures are taken as the ground truth memorability score. The slight difference between most deep learning datasets and image memorability datasets is that, for each image, we'll have multiple annotations, that is, multiple memorability scores, which is fine because most of them aren't going to vary that much. 
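To make this annotation procedure concrete, the short sketch below shows how a per-image memorability score can be derived from such game responses. It is purely illustrative: the response format and the function name are assumptions made for this example and are not taken from the original annotation pipeline.

from collections import defaultdict

def memorability_scores(responses):
    # responses: iterable of (image_id, detected_repeat) pairs, one per annotator
    # who was shown the repeat of a target image; detected_repeat is True when the
    # key press correctly flagged the repetition.
    hits, shown = defaultdict(int), defaultdict(int)
    for image_id, detected_repeat in responses:
        shown[image_id] += 1
        hits[image_id] += int(detected_repeat)
    # The score is the fraction of annotators who recognised the repeated image.
    return {img: hits[img] / shown[img] for img in shown}

# Example: three annotators saw the repeat of image "a"; two of them caught it.
print(memorability_scores([("a", True), ("a", True), ("a", False)]))  # {'a': 0.666...}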
As mentioned earlier, unlike more objective properties of images such as photo composition or image quality, image memorability cannot be objectively defined and hence might slightly vary from person to person. However, generally, humans agree with each other on certain common factors that tend to make an image more memorable despite this large variability. Factors like color harmony and object interestingness are generally agreed upon by people as factors that improve image memorability <ns0:ref type='bibr'>(Khosla et al., 2015a)</ns0:ref>. Few methods have been proposed <ns0:ref type='bibr' target='#b14'>(Perera, Tal, &amp; Zelnik, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Fajtl et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Squalli-Houssaini et al., 2018)</ns0:ref> to predict the memorability of an image using deep learning methods. Those methods either used handcrafted features or ensemble models to predict the memorability score. Ensemble models are hard to train, computationally expensive and are prone to overfitting <ns0:ref type='bibr'>(Canchumuni, Alexandre &amp; Pacheco, 2019)</ns0:ref> . The overfit models normally do not perform well on different kinds of images that are not in the training set, while computationally expensive models are not suitable for deployment to real world on web servers or computers with low memory GPUs and real-world deep learning systems are heavily reliant on computers with GPUs. Methods that use handcrafted features along with machine learning models are not accurate on different kinds of images because is it extremely hard to handcraft a comprehensive amount of features that can span a wide distribution. The idea of using data-driven strategies to predict image memorability was first introduced by <ns0:ref type='bibr'>(Isola et al., 2011a)</ns0:ref>. The Visual Memorability Game was used to prepare the images in the Isola et al. dataset and annotate their respective memorability score. The game was run on Amazon Mechanical Turk, where users were presented with a stream of images with some images repeating on a random basis. The users were asked to press a key when they believe that the image displayed was already seen before. In the Isola et al. dataset, they have collected 2222 images along with the annotated memorability scores. Since memorability can vary slightly from person to person, each image was shown to 78 participants on an average, when the annotators played the Visual Memorability Game. Each image being tagged more than once accounts for the slight variation in memorability among people. This also means that when the deep learning model is being trained, during each epoch, the model will be given the same image as input multiple times but with a slightly varying ground truth. When they analyzed the images and their memorability scores together, they have understood that the memorability of an image is highly related to a certain object and scene semantics such as 'Labelled Object Counts,' 'Labelled Object Areas' and 'Object Label Presences.' Also, when each image was segregated into scene categories, it was inferred that much of what contributes to the image's memorability score was from both the object and scene semantics. They have followed up on their work to understand the human-understandable visual attributes to understand memorability as a cognitive process. 
They have developed a deep learning model that can predict scene category of an image to with another deep learning model that predicts image memorability to understand and identify a compact set of image properties that affect image memorability <ns0:ref type='bibr' target='#b18'>(Lu et al., 2020)</ns0:ref>. A new dataset, Large-scale Image Memorability dataset (LaMem) which is publicly available, is a novel and diverse dataset with 60,000 images, each tagged with memorability score similar to the dataset by Isola et al. The authors <ns0:ref type='bibr'>(Khosla et al., 2015a)</ns0:ref> have used Convolutional Neural Networks (MemNet) to fine-tune deep features that outperform all other features by a large margin. The analysis made by the author on the responses of high-level Convolutional Neural Networks (CNN) layers shows which objects are positive. A new computational model based on an attention mechanism to predict image memorability based on deep learning was proposed. In this paper, the authors have shown that emotional bias affects the performance of the proposed algorithm due to the deep learning framework arousing negative pictures than positive or neutral pictures <ns0:ref type='bibr' target='#b19'>(Baveye et al., 2016)</ns0:ref>. Squalli-Houssaini et al. presented a hybrid CNN with Support Vector Regression (SVR) model trained on the LaMem dataset. The model achieved an average rank correlation of 0.64 across the validation sets. Based on the predictions, the correlation between interestingness and memorability was analyzed. The predictions were compared using the Flickr Interestingness API and the results showed that memorability did not correlate much with interestingness (Squalli-Houssaini et al., 2018). Visual attention has a huge effect on image memorability <ns0:ref type='bibr' target='#b15'>(Fajtl, J et al., 2018)</ns0:ref> . However, very little work has been done on taking advantage of visual attention to predict image memorability. Mancas and Meur proposed a model that uses a new set of attention-driven features by identifying the link between image saliency and image memorability. The model achieved a 2% increase in performance from the existing models. It was also inferred that images with highly localized regions are more memorable than those with specific regions of interest (Mancas M &amp; Le Meur, 2013). A novel deep learning architecture was proposed that took advantage of the visual attention mechanism to predict image memorability by <ns0:ref type='bibr' target='#b15'>(Fajtl, J et al., 2018)</ns0:ref>. The architecture made use of a hybrid of Feedforward CNN architecture and attention mechanism to build a model that can help build attention maps and, in turn, predict memorability scores. The model attained excellent results, but the biggest downside was overfitting and lack of the provision to use transfer learning swiftly. The model also contains a large number of parameters making it hard for real-time production. Another model that used visual attention mechanism was proposed by <ns0:ref type='bibr' target='#b21'>(Zhu et al., 2020)</ns0:ref>. The architecture is a multi-task learning network that was trained on LaMem dataset and AADB dataset <ns0:ref type='bibr' target='#b0'>(Kong et al., 2016)</ns0:ref> to predict both the memorability score and aesthetic score of an image, hence it was also trained using two datasets at the same time, one for image memorability and the other for image aesthetics. 
The model used a pixelwise contextual attention mechanism to generate feature maps. Even though this model was able to use transfer learning, the attention mechanism used is computationally expensive, especially if the number of channels in the intermediate layers is high. This model for the memorability task achieved a rank correlation of only 0.660, which is a much lower score than the ones achieved other existing models. An ensemble model that predicts video memorability was proposed by <ns0:ref type='bibr' target='#b22'>(Zhao T et al., 2021)</ns0:ref>. The model was trained on the MediaEval2020 dataset and is an ensemble of models that extract audio, video, image and text features from the input to predict video memorability. The features of the audio were extracted using a pretrained VGG model, while the image and video features were extracted using a ResNet-152. These features were then passed onto other machine learning models to get the memorability score. It was found that Bayesian Ridge Regressor worked best for processing audio features while a Support Vector Regressor worked best for processing image and video features. The text features for the tagged human annotated captions were obtained using GloVe word embeddings. The model achieved a rank correlation of 0.370 for short term memorability and 0.289 for long term memorability on the validation set of the dataset. A multi-modal fusion-based model trained on the MediaEval2019 dataset for video memorability prediction was proposed by (Leyva R and Sanchez V, 2021). This model takes advantage of motion estimation techniques and combines it with text, audio and image features. To estimate motion and obtain its feature vectors, two 3DResNets were used. The image features were extracted using ResNet-56 and ResNet-152, while the text features were obtained using a combination of CNN and Gated Recurrent Unit (GRU). The feature vectors from text, image, and motion estimation are then processed through late fusion and then a Bayesian Ridge Regressor predicts the memorability score. On the validation set, the model obtained a rank correlation of 0.5577 for short term memorability and 0.3443 for long term memorability. A Hidden Markov Model (HMM) produced using Variational Hierarchical Expectation Maximization was proposed by <ns0:ref type='bibr' target='#b24'>(Ellahi et al., 2020)</ns0:ref>. A new dataset with 625 images was tagged by 49 subjects. During the data annotation session, an eye-gaze camera setup was used to track the eye-gaze of each subject when they were presented with a stream of images. The goal of this setup was to analyze how much eye gaze contributed to image memorability. The model achieved an accuracy of only 61.48% when the ground truth eye gaze and predicted eye gaze were compared. A novel multiple instance-based deep CNN for image memorability prediction was proposed that shows the performance levels that are close to human performance on the LaMem dataset. The model shows EMNet, automatically learns various object semantics and visual emotions using multiple instance learning frameworks to properly understand the emotional cues that contribute extensively to the memorability score of an image <ns0:ref type='bibr' target='#b25'>(Basavaraju, &amp; Sur, 2019)</ns0:ref>. The main problem with the previously proposed state of the art models is that they are computationally intensive. Some of the previously proposed models are not suitable for production purposes. 
Most of the previously proposed models constitute several pre-processing stages and use multiple CNNs in a parallel manner to provide results. The issues that accompany these strategies are over-fitting, high computational complexity and high memory requirements. To solve these issues, there is a need for an approach that results in a smaller number of parameters and a model that contains layers that can prevent overfitting. Therefore, to solve the above-mentioned issues, in this work, the proposed Residual Memory Net (ResMem-Net) is a novel deep learning architecture that contains fewer parameters than previous models, making it computationally less expensive and hence is also faster during both training and inference. ResMem-Net also uses 1x1 convolution layers and Global Average Pooling (GAP) layers, which also helps to reduce the chances of overfitting. In this model, a hybrid of Convolutional Neural Networks and Long Short-Term Memory Networks (LSTM) is used to build a deep neural network architecture that uses a memory-driven technique to predict the memorability of images. ResMem-Net achieves results that are very close to human performance on the LaMem dataset. Transfer learning is also taken advantage of during the training process, which has helped ResMem-Net to generalize better. The publicly available LaMem dataset is used to train the model, consisting of 60,000 images, with each image being labeled with a memorability score. The proposed architecture has given close to human performance with a rank correlation of 0.679 on the LaMem dataset. Finally, heatmaps have been generated using Gradient Regression Activation Map (GradRAM) technique <ns0:ref type='bibr' target='#b26'>(Selvaraju et al., 2017)</ns0:ref>, which is used the visualize and analyze the portions of the image that causes the image to be memorable. Even though this paper focuses on the results of the LaMem and Isola et al. dataset, the key contribution of this paper is the novel ResMem-Net Neural Network architecture which can be used for any other classification or regression task in which the intermediate features of the CNN might be useful.</ns0:p><ns0:p>In the materials &amp; methods section, the proposed architecture, the novelty of the architecture, loss function, and the datasets used are explained in detail. The use of transfer learning, optimization function, evaluation metrics, loss function and weight update rule are also discussed in the materials &amp; methods sections. In the experiments and results section, the experimental setup, hyperparameters used, training settings the results of the model are discussed. The results are compared in detail with existing works and a qualitative analysis done to understand memorability is also discussed. Finally, in the conclusions section, the proposed work and results are summarized and then the potential future enhancements are discussed..</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>This section deals with the proposed Neural Network architecture, the dataset used, and the evaluation of the proposed model's performance. Further, the results obtained from an extensive set of experiments are compared with previous state-of-the-art results. It shows the superiority of the proposed architecture; for every problem solved by deep learning, four core entities have to be defined before the results are obtained. 
They are the dataset, the neural network architecture, the loss function and the training procedure.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Hybrid CNN for the prediction of memorability scores</ns0:head><ns0:p>This section provides a detailed explanation of the ResMem-Net. A visual depiction is given in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The figure shows that there are two distinct portions in the entire architecture. At the top of ResMem-Net, ResNet-50 <ns0:ref type='bibr' target='#b27'>(He et al., 2015)</ns0:ref> is used as the backbone, state of the art deep learning architecture for many applications. ResNet-50 is a 50-layer deep neural network that contains convolution kernels at each layer. The main innovation in ResNet-50 is the skip connection which helps to avoid vanishing gradients in very deep neural networks. The skip connection is present at every convolutional kernel present in the ResNet-50 model. The skip connection adds the input of the convolutional kernel to the output, hence allowing the model to propagate information to the next layer even if the output of the convolutional kernel is too small in terms of numerical value. This is how ResNet-50 and other variants of ResNet are not prone to vanishing gradient problem <ns0:ref type='bibr' target='#b27'>(He et al., 2015)</ns0:ref>. Since there are going to be 50 convolutional kernels in ResNet-50, it is a Deep CNN. The input image is given to ResNet-50, and the size of the image used in our experiment is 224x224 px. One of the core features of the proposed architecture is that the CNN part of the architecture is fully convolutional, and due to the use of Adaptive Average Polling layers, the model isn't constrained to the size of the input image hence the input image can be higher or lesser than 224x224 px size.</ns0:p><ns0:p>At the bottom of ResMem-Net, a Long Short-Term Memory (LSTM) unit is responsible for predicting the output, the memorability score. LSTM is an enhancement to Recurrent Neural Networks (RNN). RNNs are generally used for sequential data such as text-based data or timeseries data. However, in RNN, there are no memory units to resolve any long-term dependencies <ns0:ref type='bibr' target='#b28'>(Cho et al., 2014)</ns0:ref>. Several variants of LSTM were analyzed, and it showed that the standard LSTM model with forget gate gave the best results on a wide variety of tasks <ns0:ref type='bibr'>(Greff et al., 2019)</ns0:ref>. In an LSTM unit, a 'cell state' is computed that can retain information from previous input sequences. The cell state is computed using 'forget gate' and 'output gate' as demonstrated in Figure <ns0:ref type='figure'>2</ns0:ref>. These gates determine which information from previous layers should be removed from the cell state vector and which information should be retained. LSTM units accept sequential data as inputs, and in this architecture, the input to the LSTM unit are the activations of the hidden layers of the ResNet-50 model, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>. As the input sequences being sent to the LSTM unit must be of the same size, Global Average Pooling (GAP) is used to shrink the activations of the hidden layers to a size of (Cx1x1) where C is the number of channels. Global Average Pooling is very much like Densely connected Layers in Neural Networks because it performs a linear transformation on a set of feature maps. 
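As a concrete illustration of this pooling step (a minimal sketch, assuming a PyTorch implementation, since the paper does not name a framework), GAP simply averages each channel over its spatial dimensions and therefore introduces no trainable parameters:

import torch
import torch.nn as nn

activation = torch.randn(1, 512, 28, 28)              # (batch, channels C, height, width) feature map
gap = nn.AdaptiveAvgPool2d(output_size=1)             # output is (batch, C, 1, 1) regardless of H and W
pooled = gap(activation)
by_hand = activation.mean(dim=(2, 3), keepdim=True)   # the same average written out explicitly
assert torch.allclose(pooled, by_hand)
print(pooled.shape)                                   # torch.Size([1, 512, 1, 1])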
Because the pooled output no longer depends on the spatial size of the activation, there is no need to worry about the size of the output activations at each layer or about the size of the input image. As studied in <ns0:ref type='bibr' target='#b30'>(Hsiao et al., 2019)</ns0:ref>, Global Average Pooling also does not have any parameters to optimize, thus avoiding overfitting and reducing computational needs. GAP layers can be thought of as an operation that enforces the feature maps (the outputs of intermediate layers) to act as confidence maps of the intrinsic features of the input image. Hence, GAP also acts as a structural regularizer without requiring any hyperparameters. Moreover, because global average pooling aggregates the spatial information of each channel, it is robust to spatial changes in the feature maps. Further, a 1x1 convolution is applied to the output of the GAP layers to obtain a 128-channel output, which is flattened into a vector of length 128. The main reason for passing the hidden layer activations to the LSTM unit is that the cell state vector can remember and retain the important information from the previous hidden layers. When the final layer's activation is passed to the LSTM unit, the important information of the previous layers is combined with the final layer's activation, and all of that information is used to compute the memorability score. The LSTM layer's output is an n-dimensional vector, which is passed to a linear fully connected layer that gives a scalar output: the memorability score of the image. This strategy avoids relying on the final layer's activations alone, which is what the previously discussed works generally do.</ns0:p></ns0:div> <ns0:div><ns0:head>Mathematical Formulation of the model</ns0:head><ns0:p>The input image is a tensor of size (3, 224, 224), denoted by $A^{0}$. The output of the L-th identity block is denoted by $A^{L}$, as shown in equations (1) and (2). At each L-th identity block, the output is calculated as:</ns0:p><ns0:formula xml:id='formula_0'>$Z^{L} = W^{L} \circledast A^{L-1}$ (1)
$A^{L} = \mathrm{relu}(Z^{L}), \qquad \mathrm{relu}(a) = \max(0, a)$ (2)</ns0:formula><ns0:p>where $Z^{L}$ is the pre-activation output of the L-th identity block and $A^{L}$ is the output of the activation function applied to $Z^{L}$. For every L, $A^{L}$ is passed through a Global Average Pooling layer, which converts a (C, W, H) tensor into a (C, 1, 1) tensor by taking the average of each channel of the activation $A^{L}$. At the LSTM layer, the initial cell state is denoted by $C_{0}$ and the initial activation by $h_{0}$. Before the hidden layer activations are passed to the LSTM, $C_{0}$ and $h_{0}$ are initialized as random vectors using the 'He' initialization strategy to help avoid the exploding gradient problem <ns0:ref type='bibr' target='#b27'>(He et al., 2015)</ns0:ref>.
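A compact sketch of the forward pass just formulated is given below. It assumes a PyTorch implementation (the paper does not name a framework) and, for brevity, taps only the outputs of the four ResNet-50 stages rather than every hidden layer; the 1x1 projections to 128 channels, the 'He'-initialized LSTM states and the final linear layer follow the description above, but the exact tap points are our assumption.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResMemNetSketch(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        backbone = resnet50(weights=None)    # the pretrained ImageNet weights would be loaded in practice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        self.gap = nn.AdaptiveAvgPool2d(1)
        # One 1x1 convolution per tapped activation so that every LSTM input token has 128 channels.
        self.proj = nn.ModuleList([nn.Conv2d(c, hidden_size, kernel_size=1) for c in (256, 512, 1024, 2048)])
        self.lstm = nn.LSTM(input_size=hidden_size, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)
        self.hidden_size = hidden_size

    def forward(self, x):
        b = x.size(0)
        x = self.stem(x)
        tokens = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            tokens.append(proj(self.gap(x)).flatten(1))          # (b, 128) summary of this hidden layer
        seq = torch.stack(tokens, dim=1)                          # (b, number_of_tapped_layers, 128)
        h0 = nn.init.kaiming_normal_(torch.empty(1, b, self.hidden_size, device=x.device))  # 'He'-initialised states
        c0 = nn.init.kaiming_normal_(torch.empty(1, b, self.hidden_size, device=x.device))
        out, _ = self.lstm(seq, (h0, c0))
        return self.head(out[:, -1])                              # scalar memorability score per image

scores = ResMemNetSketch()(torch.randn(2, 3, 224, 224))
print(scores.shape)   # torch.Size([2, 1])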
The LSTM unit consists of three important gates that form the crux of the model:</ns0:p><ns0:formula xml:id='formula_1'>Update gate: $G_{u} = \mathrm{sigmoid}(W_{uc}\, c^{\langle t-1 \rangle} + W_{ux}\, x^{\langle t \rangle} + b_{u})$ (3)
Forget gate: $G_{f} = \mathrm{sigmoid}(W_{fc}\, c^{\langle t-1 \rangle} + W_{fx}\, x^{\langle t \rangle} + b_{f})$ (4)</ns0:formula><ns0:p>Output gate:</ns0:p><ns0:formula xml:id='formula_2'>$G_{o} = \mathrm{sigmoid}(W_{oc}\, c^{\langle t-1 \rangle} + W_{ox}\, x^{\langle t \rangle} + b_{o})$ (5)</ns0:formula><ns0:p>Hidden cell state:</ns0:p><ns0:formula xml:id='formula_3'>$h^{\langle t \rangle} = G_{u} * \tilde{h}^{\langle t \rangle} + G_{f} * h^{\langle t-1 \rangle}$ (6)
LSTM output: $c^{\langle t \rangle} = G_{o} * h^{\langle t \rangle}$ (7)</ns0:formula><ns0:p>Here $x^{\langle t \rangle}$ is the t-th input of the sequence, $\tilde{h}^{\langle t \rangle}$ is the candidate state computed from the current input, and * denotes element-wise multiplication. The outputs of $G_{u}$, $G_{f}$, $G_{o}$, $h^{\langle t \rangle}$ and $c^{\langle t \rangle}$ are calculated using equations (3), (4), (5), (6) and (7), respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>The Loss Function</ns0:head><ns0:p>The scores of the images in both datasets are continuous-valued, making the task a regression task. To measure how well the model predicts memorability, a loss function is used that approximates the divergence between the target distribution and the predicted distribution. For regression tasks, the L2 loss function, also known as the Mean Squared Error (MSE), is generally used; it is the loss function of the proposed model and its formula is given in equation (8).</ns0:p><ns0:formula xml:id='formula_4'>$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_{i} - \tilde{y}_{i})^{2} + \lambda \sum \theta^{2}$ (8)</ns0:formula><ns0:p>where $\tilde{y}_{i}$ represents the predicted value, $y_{i}$ represents the ground-truth value of the i-th image in the dataset, $\lambda$ is the weight decay and $\theta$ denotes the weights. The second term is added to the loss function to prevent the model from overfitting. This regularization procedure is known as L2 regularization: it multiplies a weight decay (a hyperparameter) with the summation of the squared weights of the Neural Network. The weight decay prevents the weights from becoming too large, which ultimately prevents the model from overfitting.</ns0:p></ns0:div> <ns0:div><ns0:head>Pseudocode for ResMem-Net</ns0:head><ns0:p>The pseudocode for the forward pass of ResMem-Net is given below. The information is passed through each layer of the backbone by feeding the previous layer's output to the next layer. Each time an output from a layer is obtained, it is passed to a global average pooling layer, which works as depicted in the function globalAveragePooling. The outputs of globalAveragePooling are passed to the LSTM_CELL at each iteration. After the final iteration, the memorability score is retrieved from the LSTM_CELL.</ns0:p><ns0:formula xml:id='formula_5'>Procedure mem (images):
  Cache = []
  A[0] = images[0]
  h, c = He_initialize(), He_initialize()
  For i = 1 to n_layers:
    A[i] = W[i] ⊛ A[i-1] + b[i]
    A[i] = relu(A[i])
    A[i] = A[i] + A[i-1]
    Cache[i] = A[i]
    g = globalAveragePooling(A[i])
    h, c = LSTM_CELL(g, h, c)
  score = Linear(h)
  Return score</ns0:formula><ns0:p>In figure <ns0:ref type='figure'>3</ns0:ref>, the pipeline used during this research is depicted.
The process starts with data collection and processing and then proceeds with the model development phase. In the model development phase, the model's architecture is first defined, adapted to the task at hand and finally implemented. The training phase is then carried out on the given datasets, and finally hyperparameter tuning is done, where various batch sizes, learning rates and residual models are tried to find the optimal settings. To analyze the results, the GradRAM technique is used to visualize the activation maps and understand how the model arrives at its predictions.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Used</ns0:head><ns0:p>In this paper, two publicly available datasets are used: the LaMem dataset and the dataset from Isola et al. LaMem is currently the largest publicly available image memorability dataset, containing 60,000 annotated images. The images were taken from MIR Flickr, the AVA dataset, the Affective images dataset, the MIT 1003 dataset, the SUN dataset, the image popularity dataset and the Pascal dataset. The dataset is very diverse as it includes both object-centric and scene-centric images that capture a wide variety of emotions. The dataset from Isola et al. contains 2222 images from the SUN dataset. Both datasets were annotated using the Visual Memorability Game. Amazon Mechanical Turk was used to let users view the images and play the game, which yielded the annotations. Both datasets were collected with human consistency in mind, i.e., the authors ran human consistency tests to understand how consistently users are able to detect the repetition of images. The consistency was measured using Spearman's rank correlation, and the rank correlations for LaMem and Isola et al. are 0.68 and 0.75 respectively. The human consistency was calculated by inviting a new set of participants to play the Visual Memorability Game. The participants were split into two halves and asked to play the game independently for the images in the datasets. The human consistency was then measured by how similar the memorability scores of the second half of the participants were to the memorability scores obtained from the first half. This analysis shows that humans are generally consistent when it comes to remembering or forgetting images. Also, for both datasets, the authors have themselves provided the dataset splits along with the data. Those files contain both the ground-truth values of each image and information about whether the image belongs to the training or validation set. In the LaMem dataset, 45,000 images are given for training and 10,000 images for validation.</ns0:p></ns0:div> <ns0:div><ns0:head>Optimization</ns0:head><ns0:p>The loss function is differentiable and is a function of the parameters of the Neural Network. The gradient of the loss function with respect to the weights guides the search for a set of parameters that yields a low loss using gradient descent-based methods. In the experiments, a slightly modified ADAM optimizer is used, which combines Stochastic Gradient Descent with Momentum and RMSprop, with the cost function added to the gradient (Yi, Ahn, &amp; Ji, 2020). The loss surface of Neural Networks is very uneven due to the presence of many local minima and saddle points. This modified version of ADAM uses exponentially weighted moving averages; a brief sketch of the resulting update step is given below, and the exact equations follow.
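The sketch below is a minimal NumPy implementation of the update step just described; it uses our own variable names and simply mirrors the gradient-plus-scaled-cost modification and the bias-corrected momenta that are made precise in equations (9) to (12) below.

import numpy as np

def modified_adam_step(theta, grad, cost, m, v, lr=1e-3,
                       lam=0.01, alpha=0.9, beta=0.999, eps=1e-6):
    # theta: weights, grad: dJ/dtheta, cost: J(theta); m, v: running momenta.
    h = lam * cost + grad                                # gradient augmented with the scaled cost, eq. (9)
    m = alpha * m + (1 - alpha) * h                      # first momentum, eq. (10)
    v = beta * v + (1 - beta) * h ** 2                   # second momentum, eq. (11)
    m_hat, v_hat = m / (1 - alpha), v / (1 - beta)       # bias-corrected momenta
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # weight update, eq. (12)
    return theta, m, v

theta, m, v = np.ones(3), np.zeros(3), np.zeros(3)
theta, m, v = modified_adam_step(theta, grad=np.array([0.1, -0.2, 0.05]), cost=0.5, m=m, v=v)
print(theta)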
Formally, the momentum values are computed using equations (9), (10) and (11):</ns0:p><ns0:formula xml:id='formula_7'>$H(\theta) = \lambda\, J(\theta) + \frac{\partial J(\theta)}{\partial \theta}$ (9)
$m_{i} = \alpha\, m_{i-1} + (1-\alpha)\, H(\theta_{i})$ (10)
$v_{i} = \beta\, v_{i-1} + (1-\beta)\, \big(H(\theta_{i})\big)^{2}$ (11)</ns0:formula><ns0:p>where J is the cost function, $\lambda$ a scaling constant (hyperparameter), $\theta$ the weights, $\alpha$ and $\beta$ scaling constants, and $m_{i}$, $v_{i}$ the first and second momenta, with the initial momenta $m_{0}$ and $v_{0}$ set to 0.</ns0:p><ns0:p>After the momentum values are calculated, the weights are updated using equation (12),</ns0:p><ns0:formula xml:id='formula_8'>$\theta_{i+1} = \theta_{i} - \eta\, \frac{\hat{m}_{i}}{\sqrt{\hat{v}_{i}} + \epsilon}$ (12)</ns0:formula><ns0:p>where $\theta_{i}$ is the current weight, $\theta_{i+1}$ the updated weight, $\eta$ the learning rate,</ns0:p><ns0:formula xml:id='formula_9'>$\hat{m}_{i} = \frac{m_{i}}{1-\alpha}, \qquad \hat{v}_{i} = \frac{v_{i}}{1-\beta}$</ns0:formula><ns0:p>and $\epsilon$ a small constant to avoid division by zero (usually $10^{-6}$).</ns0:p><ns0:p>Adding the scaled cost function to the gradient of the weights with respect to the cost function makes the loss landscape smoother and helps the optimizer converge to a good minimum. Even if the gradient of the cost function with respect to the weights is very small, adding the scaled cost function ensures that the weights keep changing, so the model does not get stuck in local minima or saddle points.</ns0:p></ns0:div> <ns0:div><ns0:head>Learning Rate and One Cycle Learning Policy</ns0:head><ns0:p>The learning rate is one of the most important hyperparameters in deep learning as it decides how quickly the loss moves towards a minimum on the loss surface. Following the one-cycle policy of <ns0:ref type='bibr' target='#b32'>(Smith, 2018)</ns0:ref>, the learning rate is cycled between a lower bound and an upper bound within every epoch, rising during the first half of the epoch and decaying back to the lower bound during the second half. Varying the learning rate in this way helps the model escape saddle points and local minima while still settling into a low-loss region.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer learning</ns0:head><ns0:p>In this work, the semantic features learned on ImageNet allow the model to be trained quickly and to perform better at identifying the memorability of images in the validation dataset. The feature maps of the pretrained ResNet-50 contain responses for objects, scenes and other visual cues that are not present in the images of the datasets used to train the model, because the pretrained ResNet-50 was trained on a very diverse set of images. After careful retraining, many of these feature maps of the pretrained model are retained. This allows the retrained model to recognize objects and scenes that are not present in the LaMem and Isola et al. datasets, which can drastically improve real-time deployment performance. Empirical evidence for this explanation is given in <ns0:ref type='bibr' target='#b33'>(Rusu et al, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments and Results</ns0:head><ns0:p>In this section, the evaluation criteria and the outcome of the experiments are discussed, together with the training settings, i.e., the hyperparameters, the hardware used, and the training and validation splits of the dataset.
Finally, the outcome of the training process and the reasons behind the superior results are explained.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Metric</ns0:head><ns0:p>The L2 loss is a useful measure of how well the proposed model fits the data, but the Spearman Rank Correlation ($\rho$) between the predicted scores and the target scores is also used to evaluate the proposed model, since it captures how consistent the predicted ranking is with the ground truth. The value of $\rho$ ranges from -1 to 1. If the rank correlation is close to 1 or -1, there is a strong positive or negative agreement, respectively, between the predicted values and the ground truth, while a rank correlation of 0 indicates no monotonic agreement at all. The rank correlation between the predicted and target memorability scores is given by equation (<ns0:ref type='formula' target='#formula_10'>13</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_10'>$\rho = 1 - \frac{6\sum_{i=1}^{n}(r_{i} - s_{i})^{2}}{n^{3} - n}$ (13)</ns0:formula><ns0:p>where $r_{i}$ and $s_{i}$ are the ranks of the ground-truth and predicted scores of the i-th image, and n is the number of images in the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Training Settings and Results</ns0:head><ns0:p>The batch size was set to 24 throughout the training process and the images were resized to 224x224. Since transfer learning is employed, while the backbone's (ResNet-50) parameters were frozen, the upper and lower bounds of the learning rate were set to 0.01 and 0.001 respectively. After 10 epochs, the backbone's parameters were unfrozen and the upper and lower bounds of the learning rate were set to 0.001 and 0.0001 for 15 epochs. For the remaining epochs, the upper and lower bounds of the learning rate were set to 0.0001 and 0.00001 respectively. The training process consisted of a total of 40 epochs. For regularization, the L2 weight decay was set to 0.0001. The model was trained on an Nvidia Quadro P5000 GPU, which has 16GB of GPU memory and 2560 CUDA cores.</ns0:p><ns0:p>To ensure stable training, normalization and dropout layers were used. The authors of the LaMem dataset provide 5 training set splits because each image has multiple annotations. Hence, 5 different models were trained, one for each split, and the results were averaged across the models during testing. For cross-validation purposes, the authors of the LaMem dataset divided the data into five sets, where each set contains 45,000 images for training, 10,000 images for testing and 3,741 images for validation. After training the model using the above settings, ResMem-Net obtained an average rank correlation of 0.679 on the LaMem dataset and 0.673 on the Isola et al. dataset, as reported in table <ns0:ref type='table' target='#tab_1'>1 and table 2</ns0:ref>. These results indicate that the use of a hybrid of a pretrained CNN and an LSTM has contributed to the increase in the accuracy of the model. Also, the model contains only a ResNet-50 backbone and an LSTM unit, making it computationally inexpensive.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this section, the experimental outcomes are compared with existing models on two datasets, namely LaMem and the Isola et al. dataset; the protocol used to obtain the reported correlations is sketched below, before the detailed comparisons.
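The following minimal sketch illustrates that protocol (five splits, one model per split, and the mean Spearman rank correlation reported); the arrays here are dummy stand-ins for the per-split predictions and ground-truth scores, not values from the actual experiments.

import numpy as np
from scipy.stats import spearmanr

def average_rank_correlation(per_split_predictions, per_split_targets):
    # One (predictions, targets) pair per split; the reported figure is the mean rho.
    rhos = []
    for preds, targets in zip(per_split_predictions, per_split_targets):
        rho, _ = spearmanr(targets, preds)
        rhos.append(rho)
    return float(np.mean(rhos))

# Dummy example with two splits of five images each.
preds = [np.array([0.9, 0.4, 0.7, 0.2, 0.8]), np.array([0.5, 0.6, 0.3, 0.9, 0.1])]
gts   = [np.array([0.95, 0.5, 0.6, 0.3, 0.7]), np.array([0.55, 0.5, 0.4, 0.8, 0.2])]
print(average_rank_correlation(preds, gts))   # 0.95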
Next, the number of parameters in the existing models and in the proposed model are compared to establish why the proposed model has lower memory requirements and is better suited for deployment to servers and other production settings. Finally, the results of the qualitative analysis performed with the GradRAM method are presented to understand which regions of an image lead to higher memorability scores and to answer the question, 'What makes an image more memorable?'.</ns0:p><ns0:p>The Spearman Rank Correlation metric has been used to evaluate the models and the consistency of the results. Since each image has been annotated by multiple subjects, the rank correlation metric is better suited than the L2 loss to compare how consistently the models predict memorability scores. The five models discussed in the introduction are considered, and the average of the results is compared with the previous works. To ensure a fair comparison, as mentioned in the results section, five models were trained on the five sets and the results were averaged. The models used for comparison were also trained in the same way by their respective authors, which ensures that the differences between the results are due to the models only and not to any other factors.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> and Figure 4 show that ResMem-Net attains a rank correlation of 0.679 on LaMem, while Table 2 and Figure 5 show that it attains a rank correlation of 0.673 on the Isola et al. dataset, a 10.33% increase over MemNet, a 5.48% increase over MCDRNet, a 1.4% increase over EMNet and a 45.67% increase over SVR. The authors have not provided the human accuracy for this dataset. Hence, it is not possible to tell how close ResMem-Net is to human accuracy for the Isola et al. dataset, but it is clear that ResMem-Net has outperformed all other previous works.</ns0:p><ns0:p>The reason behind the superior performance can be attributed to the use of the LSTM unit, the modified optimization function, the pretrained ResNet-50 backbone and the use of cyclic learning rates. Tables <ns0:ref type='table'>3 and 4</ns0:ref> depict the predicted scores on various sets of images of the datasets. In both tables, the images are arranged in descending order of the predicted memorability scores. For example, the 'Top 10' row depicts the average of the top 10 highest memorability scores predicted by the various network architectures and, finally, the average of the ground truth of the same images is also given in the same row. The results are averages over the 5-fold cross-validation tests as provided by the creators of the datasets.</ns0:p><ns0:p>From both Tables <ns0:ref type='table'>3 and 4</ns0:ref>, it can be inferred that, on average, ResMem-Net performs better than the previously proposed models on both the LaMem and the Isola et al. datasets.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Complexity Analysis</ns0:head><ns0:p>This section compares the computational complexity of various previous models and ResMem-Net. It has already been established that ResMem-Net is quite small compared to other previously proposed models in terms of the number of network parameters. </ns0:p></ns0:div> <ns0:div><ns0:head>Qualitative analysis of the results</ns0:head><ns0:p>In this section, the inferences and patterns identified after visually analyzing the results of ResMem-Net are discussed. To aid this process, the GradRAM technique was used to understand which parts of the images ResMem-Net focuses on, or in other words, which parts of the image give larger activations; a minimal sketch of this heat-map procedure is given below.
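The sketch below is a minimal Grad-CAM-style implementation of such heat-map generation for a regression output. It is a simplified stand-in for the GradRAM procedure, assuming PyTorch and visualizing the last convolutional stage of the backbone; the model and target layer used in the usage example are illustrative, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

def memorability_heatmap(model, target_layer, image):
    feats, grads = {}, {}
    fwd = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(image.unsqueeze(0))                    # scalar memorability prediction
    model.zero_grad()
    score.sum().backward()                               # gradient of the score w.r.t. the feature maps
    fwd.remove(); bwd.remove()
    weights = grads['g'].mean(dim=(2, 3), keepdim=True)            # channel importance (GAP of the gradients)
    cam = F.relu((weights * feats['a']).sum(dim=1, keepdim=True))  # weighted sum of the feature maps
    cam = F.interpolate(cam, size=image.shape[1:], mode='bilinear', align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalised heat map

# Illustrative usage with a generic backbone and a scalar regression head.
net = resnet50(weights=None)
net.fc = nn.Linear(net.fc.in_features, 1)
heatmap = memorability_heatmap(net, net.layer4, torch.randn(3, 224, 224))
print(heatmap.shape)    # torch.Size([1, 1, 224, 224])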
As noted above, GradRAM is an extension of Class Activation Maps (CAM) that uses the gradients obtained during the backpropagation process to generate heatmaps.</ns0:p><ns0:p>The generated heatmaps shed light on which parts of the image enhance its memorability. They also show how the hidden layers of the ResNet-50 backbone output feature maps, and it is this information that is used by the LSTM unit to make its predictions. Based on the heatmaps and a careful manual analysis of the results using randomly selected images from different categories of the Isola et al. dataset, the following inferences are made:</ns0:p><ns0:p>The object in the image contributes more to the memorability score than the scene in which the object is placed. In almost every heatmap, it is observable that the portion of the image containing the main object produces higher activations than the rest of the image. Images with no objects are predicted to be less memorable than images containing objects (both living and non-living), and images containing a single central object are seen to be more memorable than images with multiple objects. The average rank correlation of the predicted memorability of images with a single central object is 0.69, while the average for images without a central object is 0.36. The presence of humans in the image also contributes to a better memorability score: if a human is clearly visible, the memorability averages 0.68, while if the image contains no human or object, the memorability averages 0.31.</ns0:p><ns0:p>Using a model pretrained on object classification datasets provides better results and trains faster than using a model pretrained on scene classification datasets <ns0:ref type='bibr' target='#b36'>(Jing et al., 2016)</ns0:ref>. This can be attributed to the fact that memorability scores are directly related to the presence of objects; thus, a model whose weights contain information about objects takes less time to converge to a minimum (Best N, Ott J &amp; Linstead E J, 2020). Image aesthetics, on the other hand, does not have much to do with image memorability: a few images containing content related to violence are not aesthetically pleasing, yet their memorability scores are high, with an average memorability of 0.61.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Capturing memorable pictures is a challenging task, as it requires an enormous amount of creativity. However, just like many other phenomena in nature, humans' tendency to remember certain images more than others follows a pattern. This paper introduces ResMem-Net, a novel neural network architecture that combines a pretrained deep learning model (ResNet-50) and an LSTM unit. The model was trained using the One Cycle Learning Policy, which allows the use of cyclic learning rates during training. ResMem-Net provides close-to-human performance in predicting the memorability of an image on the LaMem dataset, which is the largest publicly available dataset for image memorability. The rank correlation of ResMem-Net is 0.679, which is extremely close to the human consistency of 0.68. This result is a 6.09% increase over MemNet, a 35% increase over CNN-MTLES, a 2% increase over MCDRNet and a 1.2% increase over EMNet. Based on the qualitative analysis carried out with the GradRAM method, it was inferred that the object plays a bigger role in enhancing the memorability of the image.
A pre-trained model that consists of weights from an object classification dataset converges quickly than a model pre-trained on scene classification. These results were observed manually by looking through the highly rated images and lowly rated images. Heatmaps generated using the GradRAM method was also used to analyze and obtain the above inferences. The limitation of the current work is that even though the model contains much lesser number of parameters than other state-of-the-art models, ResMem-Net is still not deployable to mobile based GPUs. To solve this issue, further research can be done to use mobile compute efficient architectures like MobileNetV3 or EfficientNet, which are also pre-trained on the ImageNet dataset. Further research can also be done to improve the accuracy of the model by replacing ResNet-50 with more recent architectures like ResNext. The LSTM unit can also be replaced with more recent architectures like the Transformer architecture or BiDirectional RNNs. A more generic suggestion is to spend time to develop larger datasets for image memorability prediction because with larger datasets, neural networks can generalize better.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Update Gate -Decides what information should be remembered and what information should be thrown away 2. Forget Gate -To decide which information is worth storing 3. Output Gate -The output of the LSTM unit Update Gate :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>For</ns0:head><ns0:label /><ns0:figDesc>x, h t-1 , c t-1 ):it = sigmoid ( W xi * x + W hi * h t-1 + W ci * c t-1 + b i ) ft = sigmoid ( W xf * x + W hf * h t-1 + W cf * c t-1 + b f ) ct = ft* c t-1 + it * tanh(W hc * h t-1 + W xc * x + b c ) ot =sigmoid( W xo * x + W ho * h t-1 + W co * ct + b o ) ht = ot * tanh(ct) return ht, ct Procedure globalAveragePooling(tensor): c, h, w = dimensions(tensor) for i in range(c): Avg = (1/h) * (&#931;tensor[i]) tensor[i] = Avg return tensor</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>rates can decide whether a model converges or diverges over time. If a high learning rate is used throughout the training process, loss of the model may diverge over some time, but if it is set to a low value, then the model may take too much time to converge. To solve this issue, generally the learning rate is reduced over time using a decaying function. But decaying functions can lead to the model's parameters to be stuck in saddle points or in local minima, which can lead to the model not learning new parameters in the consecutive epochs. To avoid these issues,<ns0:ref type='bibr' target='#b32'>(Smith, 2018)</ns0:ref> has proposed a method called One Cycle Learning. In one cycle learning policy, for each epoch, the learning rate is varied between a lower bound and upper bound. The lower bound's value is usually set at 1/5th or 1/10th of the upper bound.In one cycle learning, each epoch is split into 2 steps of equal length. All deep learning models are trained using mini-batches, so if the dataset has 100 batches, the first 50 batches are included in step 1 and the rest are included in step 2. 
During the start of each epoch, the learning rate is set to the lower bound's value and at the end of each mini-batch, the learning rate is slowly increased to ensure that the learning rate reaches the upper bound by the end of step 1. In step 2, the training proceeds with the upper bound as the learning rate and then the learning rate is slowly decayed after each mini-batch, to ensure that by the end of step 2, the learning rate is back to lower bound. This is then repeated for each epoch. Varying the learning rate between a high and a low value allows the model to escape the local minima or saddle points. The higher learning rate allows the model to escape local minima and saddle points during training, while the lower learning rate ensures that the training leads to parameters that ensure a lower loss in the loss function. Transfer learning is training a model on a large dataset and retraining the same model on a different dataset with lesser data. Intuitively, the learned features from larger datasets are used to help improve accuracy on datasets with smaller data points. In our work, a pre-trained ResNet-50 that is trained on the ImageNet dataset is used, which contains 3.2 million images, with each image categorized in one among the 1000 categories.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Transfer learning</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>represents the results of various models on the LaMem dataset and table 2 represents results of various models on the Isola et al. dataset. Table1and figure4show that ResMem-Net attains a rank correlation of 0.679, a 6.09% increase from MemNet, a 35% increase from CNN-MTLES, a 2% increase from MCDRNet and a 1.2% increase from EMNet. The human-level accuracy on LaMem is 0.68, and ResMem-Net has brought us extremely close to human accuracy with a difference in rank correlation of just 0.001. From table 2 and figure5, it can be inferred that ResMem-Net attains a rank correlation of 0.673, which is a 10.33% higher from MemNet, 5.48% increase from MCDRNet, 1.4% increase from</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Isola et al. dataset and LaMem dataset. For the top 10, EMNet predicts a memorability of 91.89% and 82.43% on the LaMem and Isola et al. datasets, respectively, while ResMem-Net predicts an average memorability of 93.82% and 82.61% on the same LaMem and Isola et al. datasets, respectively. MCDR-Net obtained an average memorability 93.15% and 81.75%, while MemNet has obtained 91.7% and 80.16% on the LaMem and Isola et al. datasets respectively for the top<ns0:ref type='bibr' target='#b9'>10</ns0:ref>. When compared to the ground truth, which is 100%, these scores clearly state that ResMem-Net is more consistent with the images with high memorability. On the other hand, when for the 'Bottom 10' images, EMNet predicts an average memorability of 48.41% and 27.42%, while for MCDRNet it is 50.94% and 26.52% on the LaMem and Isola et al. datasets, respectively. In comparison, ResMem-Net predicts an average memorability of 47.9% and 27.42%, respectively. Again, when comparing those results to the ground truth values, which is 33.57% and 5.69% for LaMem and Isola et al. 
datasets respectively, ResMem-Net provides similar results to EMNet. It would also be unfair to completely ignore the results of traditional machine learning algorithms on image memorability. Despite many empirical results that depict the superiority of deep learning algorithms on computer vision tasks, certain studies have shown that the use of hand-crafted features when ensembled with machine learning algorithms such as SVR or Random Forests can, in fact, provide better results. Of course, concerns regarding the generalization of the models on new data have been raised, which are the very papers that propose the non-deep learning-based strategies themselves. However, in table 5 and figure6, it is very clear that the proposed ResMem-Net quite easily outperforms traditional machine learning strategies. Since the results are from the validation set, it is clear that the model did not overfit but rather learned features that contribute to the memorability scores of the image. The validation set also encompasses a wide variety of landscapes and events, which also leads us to believe that the model performs well on different kinds of images.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Table6shows the number of parameters or weights present in ResMem-Net, MemNet, MCDRNet and EMNet. It is clear from the table that ResMem-Net has a significantly much lesser number of parameters than the previously proposed network architectures. CNN's are composed of convolution operations, which are very much compute intensive. So, the lesser the number of parameters, the faster the model takes to provide outputs. The bigger advantage of ResMem-Net is that it has a significantly lesser number of weights and still provides better accuracy on both LaMem and Isola et al. datasets. The time taken for ResMem-Net to process an image of size 512x512px is approximately 0.024s on Nvidia Quadro P5000 GPU. It should also be noted that having too many parameters can cause overfitting and hence ResMem-Net is less prone to overfitting because it has a significantly lower number of weight parameters.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61139:2:0:NEW 23 Sep 2021)</ns0:note> </ns0:body> "
"ResMem-Net: Memory based deep CNN for image memorability estimation Manuscript ID: Manuscript Type: Article We would like to thank the Editor and reviewers for their helpful comments and suggestions. We have updated the manuscript to address all the comments supplied by the reviewers. We feel that by incorporating these suggestions, the quality of the paper has improved substantially. This material below addresses each issue raised by the editors and the reviewers. *Reviewer -2* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment 1: This research work 'ResMem-Net: Memory based deep CNN for image memorability estimation' is proposes. The research domain addressed in this research paper is novel, and the results are satisfactory. But it needs the below improvements: 1) Reduce unnecessary text from the paper. Response: Redundant information and repeated content have been removed from the paper. Comment 2: The authors should discuss more on deep CNN. Response: More information has been included in the Materials and Methods section. Comment 3: The Experiment discussion should be written more in-depth, precise, and concrete, such as what questions were resolved? How can the proposed method solve these problems? The most recent works should be discussed in the related work section. Response: The introduction to Experiment and Results section and the Discussion section are polished to reflect the objectives of the experiment done and the questions that have been answered are present in the introduction of the Experiments section and Discussion section. An additional statistic and citation has been added to the Qualitative analysis section. More recent works have been added to the related works section. Comment 4: The objectives of this paper need to be polished. The contribution list should be polished at the end of the introduction section, and the last paragraph of the introduction should be the organization of the paper. Response: The objectives and contribution list has been polished. Also, redundant, and repeated information have been removed. The last paragraph of the introduction has now been polished to better reflects the overall organization of the paper. Comment 5: Reduce the size of the introduction. Response: Redundant information has been removed from the introduction. Comment 6: Contributions at the end of the introduction section should be polished. Response: The contribution of the paper is now polished and repeated information has been removed. Comment 7: Relevant literature reviews of the latest similar research studies on the topic at hand must be discussed. Response: Papers published in 2021 have been included in literature review. Comment 8: The quality of the figures is not good. Response: The graphs and flowchart has been replaced with higher resolution pictures. Comment 9: Don’t use “Its,” ‘I, your’ and any informal text in the paper. Response: Those words have been removed from the paper. Comment 10: There are some grammar and typo errors. Response: Grammar and typo errors have been rechecked and edited. *Reviewer -5* Thank you so much for your positive feedback. We have improved the manuscript according to the reviewer’s comments and suggestions. Comment 1: I appreciate the authors' diligence and am satisfied with how they have addressed a majority of my concerns and incorporated my feedback and suggestions to improve the manuscript. 
I do have two minor concerns (described below) that I think could be addressed to further improve the manuscript. First, while I see how the authors addressed my concerns and provided additional details to help clarify their intention with the use of the term 'image memorability', I still have a remaining question as to whether they are trying to portray that image memorability is a property of the image or more a reflection of the individual who is viewing the image? I feel like the authors could expand this around lines 112 and 113, perhaps by adding a bit more to the sentence in 114 and 115 to help make this clearer. Response: A sentence has been added to say clearly whether image memorability is a property of the image or a reflection of the individual. Comment 2: I would ask the authors to provide a citation/support for the claims made in line 854. Additionally, it would be helpful if they could add some statistics pertaining to the findings of violence and image aesthetics relating to memory, lines 854 - 856. It would be helpful if they followed the same format that they had used previously when reporting about the impact of the presence of humans on memorability. Response: I’m not precisely sure which lines are being meant here because the Conclusion part itself ends at line number 763. Based on the suggested changes, I’m assuming that the changes are suggested for the final lines in the “Qualitative analysis of the results” section. Citation has been added to support the claim about convergence to minima. The average memorability score for violence related images have been added. Comment 3: Line 220, “Suport” should be “Support” Response: The change has been made. Comment 4: Line 492-493, missing closing parenthesis “)” Response: All opening and closing parenthesis have been rechecked to confirm that there are no mismatches We once again would like to thank the reviewers for their constructive comments that helped to improve the quality of our work. We hope that our response is acceptable for the queries raised by the reviewers. Thanking You. Sincerely, Authors "
Here is a paper. Please give your review comments after reading it.
272
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The way developers implement their algorithms and how these implementations behave on modern CPUs are governed by the design and organization of these. The vectorization units (SIMD) are among the few CPUs' parts that can and must be explicitly controlled. In the HPC community, the x86 CPUs and their vectorization instruction sets were de-facto the standard for decades. Each new release of an instruction set was usually a doubling of the vector length coupled with new operations. Each generation was pushing for adapting and improving previous implementations. The release of the ARM scalable vector extension (SVE) changed things radically for several reasons. First, we expect ARM processors to equip many supercomputers in the next years. Second, SVE's interface is different in several aspects from the x86 extensions as it provides different instructions, uses a predicate to control most operations, and has a vector size that is only known at execution time. Therefore, using SVE opens new challenges on how to adapt algorithms including the ones that are already well-optimized on x86. In this paper, we port a hybrid sort based on the well-known Quicksort and Bitonic-sort algorithms. We use a Bitonic sort to process small partitions/arrays and a vectorized partitioning implementation to divide the partitions. We explain how we use the predicates and how we manage the non-static vector size. We also explain how we efficiently implement the sorting kernels. Our approach only needs an array of O(log N) for the recursive calls in the partitioning phase, both in the sequential and in the parallel case. We test the performance of our approach on a modern ARMv8.2 (A64FX) CPU and assess the different layers of our implementation by sorting/partitioning integers, double floating-point numbers, and key/value pairs of integers. Our results show that our approach is faster than the GNU C++ sort algorithm by a speedup factor of 4 on average.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Data-processing algorithms (like sorting) are of this kind and require a significant programming effort to be vectorized efficiently. Also, the possibility of creating a fully vectorized implementation, with no scalar sections and with few data transformations, is only possible and efficient if the instruction set extension (IS) provides the needed operations. This is why new ISs together with their new operations make it possible to invent approaches that were not feasible previously, at the cost of reprogramming.</ns0:p><ns0:p>Vectorizing a code can be described as solving a puzzle, where the board is the target algorithm and the pieces are the size of the vector and the instructions. However, the paradigm changes with SVE [6, <ns0:ref type='bibr' target='#b0'>7,</ns0:ref><ns0:ref type='bibr' target='#b1'>8]</ns0:ref> because the size of the vector is unknown at compile time. This can have a significant impact on the transformation from scalar to vectorial. As an example, consider that a developer wants to work on a fixed number of values, which could be linked to the problem to solve, e.g. a 16 &#215; 16 matrix-matrix product, or based on other references, e.g. the size of the L1 cache. When the size of the vector is known at development time, a block of data can be mapped to the corresponding number of vectors and working on the vectors can be done with static/known number of operations. 
With a variable size, on the other hand, it is required either to implement different kernels for each of the possible sizes (as if they were different ISs) or to find a generic way to vectorize the kernel, which can be a tedious task. We can expect SVE to be upgraded less often than the x86 ISs because there will be no need to release a new IS even when new CPU generations support larger vectors.</ns0:p><ns0:p>In the current paper, we focus on the adaptation of a sorting strategy and its efficient implementation for ARM CPUs with SVE. Our implementation is generic and works for any size equal to a power of two. The contributions of this study are:</ns0:p><ns0:p>&#8226; Describe how we port our AVX-SORT algorithm <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref> to SVE;</ns0:p><ns0:p>&#8226; Define a new Bitonic-sort variant using SVE and describe how the runtime vector size impacts the implementation 1 ;</ns0:p><ns0:p>&#8226; Implement an efficient Quicksort variant using OpenMP <ns0:ref type='bibr' target='#b3'>[10]</ns0:ref> tasks.</ns0:p><ns0:p>All in all, we show how we can obtain a fast and vectorized in-place sorting implementation. 2 The paper is organized as follows: we first give background information related to vectorization and sorting in Section 2. Then, in Section 3, we describe our strategies for sorting small arrays, for partitioning, and for our parallel sort. Finally, the performance study is detailed in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>BACKGROUND</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Sorting algorithms</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1.1'>Quicksort (QS) overview</ns0:head><ns0:p>QS <ns0:ref type='bibr' target='#b4'>[11]</ns0:ref> is a sorting algorithm that follows a divide-and-conquer strategy: the input array is recursively partitioned until the partitions hold a single element. The partitioning algorithm moves the values lower than a pivot to the beginning of the array, and the greater values to the end, with a linear complexity. The worst-case complexity of QS is O(n 2 ), but in practice it has an average complexity of O(n log n). The complexity is tied to the choice of the pivot, which must be close to the median to ensure a low complexity. Nevertheless, QS is a very popular sorting algorithm thanks to its simplicity in terms of implementation and its speed in practice. An example of a QS execution is provided in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p><ns0:p>To parallelize the QS and other divide-and-conquer approaches, it is common to create a task for each recursive call followed by a wait statement. For instance, a thread partitions the array in two, and then creates two tasks (one for each partition). To ensure coherency, the thread waits for the completion of the tasks before continuing. We refer to this parallel strategy as the QS-par; a minimal sketch of it is given below.</ns0:p>
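The sketch is ours and only illustrative; the actual parallel version described in Section 3.5 replaces the plain task statements with per-thread task lists and work stealing, and stops creating tasks below a size threshold.

```cpp
#include <algorithm>

// Illustrative QS-par: scalar partitioning around the middle value (as in
// Figure 1), then one OpenMP task per sub-partition and a taskwait so that
// both are sorted before returning.
void qs_par(int* array, long left, long right) {
    if (left >= right) {
        return; // partition of size 0 or 1: nothing to do
    }
    const int pivot = array[(left + right) / 2]; // pivot = value in the middle
    long i = left, j = right;
    while (i <= j) { // scalar partitioning loop
        while (array[i] < pivot) ++i;
        while (array[j] > pivot) --j;
        if (i <= j) {
            std::swap(array[i], array[j]);
            ++i;
            --j;
        }
    }
    // One task per sub-partition (a real implementation would stop creating
    // tasks once the partitions are small enough).
    #pragma omp task firstprivate(array, left, j)
    qs_par(array, left, j);
    #pragma omp task firstprivate(array, i, right)
    qs_par(array, i, right);
    #pragma omp taskwait // wait for both sub-partitions
}

void qs_par_sort(int* array, long size) {
    #pragma omp parallel
    {
        #pragma omp master
        qs_par(array, 0, size - 1); // one thread starts the recursion
    }
}
```

Starting the recursion from a single thread inside a parallel region lets the other threads of the team execute the generated tasks.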
</ns0:div> <ns0:div><ns0:head n='2.1.2'>GNU std::sort implementation (STL)</ns0:head><ns0:p>The standard requires a worst-case complexity of O(n log n) <ns0:ref type='bibr'>[12]</ns0:ref> (it was an average complexity until year 2003 <ns0:ref type='bibr' target='#b5'>[13]</ns0:ref>) that a pure QS implementation cannot guarantee. Consequently, the QS algorithm cannot be used alone as a standard C++ sort. As a result, the current STL implementation relies on 3 different algorithms 3 . This 3-part hybrid sorting algorithm first applies an Introsort <ns0:ref type='bibr' target='#b6'>[14]</ns0:ref> to a maximum depth of 2 &#215; log 2 n to obtain small partitions, which are then sorted using an insertion sort. The Introsort is itself a 2-part hybrid composed of a Quicksort and a Heapsort.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.3'>Bitonic sorting network</ns0:head><ns0:p>In computer science, a sorting network is an abstract description of how the values to sort are compared and exchanged. A network is defined for a given number of values. It is possible to represent a sorting network graphically: horizontal lines represent the input values, and vertical connections between those lines represent compare-and-exchange units. The literature provides various examples of sorting networks, and our approach relies on the Bitonic sort <ns0:ref type='bibr' target='#b7'>[15]</ns0:ref>. This network is straightforward to implement and its algorithmic complexity is O(n log(n) 2 ) for any input. This algorithm has demonstrated good performance on parallel computers <ns0:ref type='bibr' target='#b8'>[16]</ns0:ref> and GPUs <ns0:ref type='bibr' target='#b9'>[17]</ns0:ref>. We provide a Bitonic sorting network to sort 16 values in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a). We use the terms symmetric and stair exchanges to refer to the red and orange stages, respectively.</ns0:p><ns0:p>A symmetric stage is always followed by stair stages from half size to size two. The Bitonic sort does not maintain the original order of the values and thus is not stable. We can implement a sorting network by hard-coding the connections between the lines only if we know the size of the input array; in this case, we simply translate the picture into an algorithm. However, for a dynamic array size, the implementation has to be flexible by relying on formulas that define when the lines cross <ns0:ref type='bibr' target='#b10'>[18]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Vectorization</ns0:head><ns0:p>The word vectorization refers to the capability of a CPU to apply a single operation/instruction to a vector of values instead of a single/scalar value <ns0:ref type='bibr' target='#b11'>[19]</ns0:ref>. Thanks to this feature, the peak performance of single cores has continued to increase despite the stagnation of the clock frequency since the mid-2000s. In the meanwhile, the length of the SIMD registers (i.e., the size of the vectors) has continuously increased, which increases the performance of the chips accordingly. In the current study, the term vector has no relation to an expandable vector data structure, such as std::vector, but refers to the data type managed by the CPU. The size of the vectors is variable and depends on both the instruction set and the type of the vector's elements, and corresponds to the size of the registers in the chip.</ns0:p><ns0:p>The SIMD instructions can be called in the assembly language or using intrinsic functions, which are small functions that are intended to be replaced with a single assembly instruction by the compiler. There is usually a one-to-one mapping between intrinsics and assembly instructions, but this is not always true, as some intrinsics are converted into several instructions. Moreover, the compiler is free to use different instructions as long as they give the same results.</ns0:p>
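As a concrete example of such intrinsics, the following sketch (ours, not taken from the paper) shows what Figure 3 depicts: an element-wise summation of two arrays of floats with SVE, where a predicate both drives the loop (including the tail when the size is not a multiple of the vector length) and zeroes the inactive lanes of the result.

```cpp
#include <arm_sve.h>
#include <cstdint>

// Sketch in the spirit of Figure 3: out[i] = a[i] + b[i] for i in [0, n).
void add_arrays_sve(const float* a, const float* b, float* out, int64_t n) {
    const int64_t step = (int64_t)svcntw(); // number of 32-bit lanes, known only at run time
    for (int64_t i = 0; i < n; i += step) {
        const svbool_t pg = svwhilelt_b32_s64(i, n);    // active lanes: i + lane < n
        const svfloat32_t va = svld1_f32(pg, &a[i]);    // load only active lanes
        const svfloat32_t vb = svld1_f32(pg, &b[i]);
        const svfloat32_t vc = svadd_f32_z(pg, va, vb); // result is 0 where pg is false
        svst1_f32(pg, &out[i], vc);                     // store only active lanes
    }
}
```

Compiled for SVE (e.g. with -march=armv8-a+sve), the same binary runs unchanged on CPUs whose registers are anywhere between 128 and 2048 bits wide.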
</ns0:div> <ns0:div><ns0:head n='2.2.1'>SVE instruction set</ns0:head><ns0:p>SVE is a feature of ARMv8 processors. The size of the vector is not fixed at compile time (the specification limits the size to 2048 bits and ensures that it is a multiple of 128 bits) such that a binary that includes SVE instructions can be executed on any ARMv8 CPU that supports SVE, no matter the size of its registers. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> illustrates the difference between a scalar summation and a vector summation with a vector size of 256 bits; it also shows a summation controlled by a predicate vector t f , where the resulting vector contains 0 at the positions where t f is false.</ns0:p><ns0:p>SVE provides most classic operations that also exist in x86 vectorization extensions, such as loading a contiguous block of values from the main memory and transforming it into a SIMD-vector (load), filling a SIMD-vector with a value (set), moving a SIMD-vector back into memory (store), and basic arithmetic operations. SVE also provides advanced operations like gather, scatter, indexed accesses, permutations, comparisons, and conversions. It is also possible to get the maximum or the minimum of a vector, or element-wise between two vectors.</ns0:p><ns0:p>Another significant difference with other ISs is the use of predicate vectors, i.e. Boolean vectors (svbool t) that allow controlling the instructions more finely by selecting the affected elements. Also, while in AVX-512 the value returned by a test/comparison (vpcmpd/vcmppd) is a mask (integer), in SVE the result is an svbool t.</ns0:p><ns0:p>A minor difference, but one that impacts our implementation, is that SVE does not support a store-some as it exists in AVX-512 (vpcompressps/vcompresspd), where selected values of a vector can be stored contiguously in memory. With SVE, it is needed to first compact the values of a vector to put the values to be saved at the beginning of the vector and then perform a store, or to use a scatter. However, both approaches need extra Boolean or index vectors and additional instructions, as illustrated by the sketch below.</ns0:p>
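The following is a minimal sketch (ours, not the authors' code) of a store-some emulated with SVE intrinsics: the elements selected by a predicate are packed at the beginning of the vector with svcompact, svcntp gives how many of them there are, and only these leading lanes are stored.

```cpp
#include <arm_sve.h>
#include <cstdint>

// Store contiguously at 'dest' the elements of 'values' selected by 'keep',
// and return how many elements were written so the caller can advance its cursor.
uint64_t store_some(int32_t* dest, svbool_t keep, svint32_t values) {
    // Pack the selected elements at the beginning of the vector.
    const svint32_t packed = svcompact_s32(keep, values);
    // Count the selected elements (active lanes of the predicate).
    const uint64_t nb = svcntp_b32(svptrue_b32(), keep);
    // Build a predicate covering only the first 'nb' lanes and store them.
    const svbool_t firstLanes = svwhilelt_b32_u64(0u, nb);
    svst1_s32(firstLanes, dest, packed);
    return nb;
}
```

In a partitioning kernel, this pattern would typically be applied twice per loaded vector: once with the result of the comparison against the pivot, and once with its negation for the other side of the array.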
</ns0:div> <ns0:div><ns0:head n='2.3'>Related work on vectorized sorting algorithms</ns0:head><ns0:p>The literature on sorting and vectorized sorting implementations is very large. Therefore, we only cite some studies we consider most related to our work.</ns0:p><ns0:p>Sanders et al. <ns0:ref type='bibr' target='#b12'>[20]</ns0:ref> provide a sorting technique that tries to remove branches and improves the prediction of a scalar sort. The results show that the method provides a speedup by a factor of 2 against the STL (the implementation of the STL was different). This study illustrates the early strategy of adapting sorting algorithms to a given hardware, and also shows the need for low-level optimizations due to the limited instructions available.</ns0:p><ns0:p>Later, Inoue et al. <ns0:ref type='bibr' target='#b13'>[21]</ns0:ref> proposed a parallel sort on top of combosort vectorized with the VMX instruction set of the IBM architecture. Unaligned memory accesses are avoided, and the L2 cache is efficiently managed by using an out-of-core/blocking scheme. The authors show a speedup by a factor of 3 against the GNU C++ STL.</ns0:p><ns0:p>In a different study <ns0:ref type='bibr' target='#b14'>[22]</ns0:ref>, Furtak et al. use a sorting network for small-sized arrays, similar to our own approach. However, instead of dividing the main array into sorted partitions (partitions of increasing contents) and applying a small efficient sort on each of those partitions, the authors perform the opposite: they apply multiple small sorts on sub-parts of the array, and then finish with a complicated merge scheme using extra memory to sort globally all the sub-parts. A very similar approach was later proposed by Chhugani et al. <ns0:ref type='bibr' target='#b15'>[23]</ns0:ref>.</ns0:p><ns0:p>More recently, Gueron et al. provided a new approach for AVX2 <ns0:ref type='bibr' target='#b16'>[24]</ns0:ref>. The authors use a Quicksort variant with a vectorized partitioning function, and an insertion sort once the partitions are small enough (as the STL does). The partition method relies on look-up tables, with a mapping between the result of the comparison of an SIMD-vector against the pivot and the move/permutation that must be applied to the vector. The authors show a speedup by a factor of 4 against the STL, but their approach is not always faster than the Intel IPP library. The proposed method is not suitable for AVX-512 because the lookup tables would occupy too much memory; this issue, and the use of extra memory, can be solved with the new instructions of AVX-512. As a side remark, the authors do not compare their proposal to the standard C++ partition function, even though the partition is the only part of their algorithm that is vectorized.</ns0:p><ns0:p>In our previous work <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref>, we proposed the first hybrid QS/Bitonic algorithm implemented with AVX-512. We described how we can vectorize the partitioning algorithm and create a branch-free/vectorized Bitonic sorting kernel. To do so, we put the values of the input array into SIMD vectors. Then, we sort each vector individually, and finally we exchange values between vectors either during the symmetric or the stair stage. Our method was 8 times faster to sort small arrays and 1.7 times faster to sort large arrays compared to the Intel IPP library. However, our method was sequential and could not simply be converted to SVE when we consider that the vector size is unknown at compile time. In this study, we refer to this approach as the AVX-512-QS.</ns0:p><ns0:p>Hou et al. <ns0:ref type='bibr' target='#b17'>[25]</ns0:ref> designed a framework for the automatic vectorization of parallel sorts on x86-based processors. Using a DSL, their tool generates a SIMD sorting network based on a formula. Their approach shows a significant speedup against the STL, and in particular a speedup of 6.7 in parallel against the sort from Intel TBB on an Intel Knights Corner MIC. The method is of great interest as it avoids programming the core of the sorting kernel by hand. Any modification, such as the use of a new IS, requires upgrading the framework. To the best of our knowledge, they do not support SVE yet.</ns0:p><ns0:p>Yin et al. <ns0:ref type='bibr' target='#b18'>[26]</ns0:ref> described an efficient parallel sort on AVX-512-based multi-core and many-core architectures.
Their approach achieves to sort 1.1 billion floats per second on an Intel KNL (AVX-512).</ns0:p><ns0:p>Their parallel algorithm is similar to the one we use in the current study because they first sort sub-parts of the input array and then merge them by pairs until there is only one result. However, their parallel merging is out-of-place and requires doubling the needed memory, which is not the case for us. Besides, their Bitonic sorting kernel differs from ours, because we follow the Bitonic algorithm without the need for matrix transposition inside the registers.</ns0:p><ns0:p>Watkins et al. <ns0:ref type='bibr' target='#b19'>[27]</ns0:ref> provide an alternative approach to sort based on the merging of multiple vectors.</ns0:p><ns0:p>Their method is 2 times faster than the Intel IPP library and 5 times faster than the C-lib qsort. They can sort 500 million keys per second on an Intel KNL (AVX-512) but they also need to have an external array when merging, which we avoid in our approach.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Related work on vectorized with SVE</ns0:head><ns0:p>Developing optimized kernels with SVE is a recent research topic. We refer to studies that helped better understand this architecture, even if they did not focus on sorting.</ns0:p><ns0:p>Meyer et al. <ns0:ref type='bibr' target='#b20'>[28]</ns0:ref> studied the assembly code generated when implementing lattice quantum chromodynamics (LQCD) kernels. They evaluate if the compiler was capable of generating vectorized assembly from a scalar C code, which was the case. LQCD has also been studied by Alappat et al. <ns0:ref type='bibr' target='#b22'>[29]</ns0:ref> in addition to sparse matrix vector product (SpMV). The authors studied various effects and properties of the A64FX, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and demonstrated that for some kernels it competes with a V100 GPU.</ns0:p><ns0:p>Kodama et al. <ns0:ref type='bibr' target='#b23'>[30]</ns0:ref> tried evaluating the impact on performance when changing the vector size, while using the same hardware. Their objective was oriented to SVE since SVE kernels are vector size independent. At the time of the study, no hardware was supported SVE, hence the authors used an emulator. Additionally, they speculated on how the vector size could be changed, which does not respect the current SVE technology. Nevertheless, they concluded that using a larger vector than the hardware registers could be beneficial, by reducing the number of instructions, but could also be negative by the need of more registers.</ns0:p><ns0:p>Aoki et al. <ns0:ref type='bibr' target='#b24'>[31]</ns0:ref> implemented the H.265 video codec using SVE. Their implementation reduces the number of instructions by half, but no performance results were given.</ns0:p><ns0:p>Wan et al. <ns0:ref type='bibr' target='#b25'>[32]</ns0:ref> implemented level-2 basic linear algebra routines (BLAS). They evaluated their implementation using the Arm emulator ARMIE, which predicted a 17x speedup against Neon, the previous ARM vector ISA.</ns0:p><ns0:p>Domle <ns0:ref type='bibr' target='#b26'>[33]</ns0:ref> evaluated the performance differences for various benchmarks and five different compilers on A64FX. 
The author advised Fujitsu for Fortran codes, GNU for integer-intensive apps, and any clang-based compilers for C/C++, but concluded that there was not a single perfect compiler and that it is advised to test for each application.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>SORTING WITH SVE</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Overview</ns0:head><ns0:p>Our SVE-QS shares similarities with the AVX-512-QS as it is composed of two key steps. First, we partition the data recursively using the sve partition function described in Section 3.3, as in the classical QS. Second, once the partitions are smaller than a given threshold, we sort them with the sve bitonic sort wrapper function from Section 3.2. To sort in parallel, we rely on the classical parallelization scheme for the divide-and-conquer algorithm, but propose several optimizations. This allows an easy parallelization method, which can be fully implemented using OpenMP.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Bitonic-based sort on SVE vectors</ns0:head><ns0:p>In this section, we detail our Bitonic-sort to sort small arrays that have less than 16 times VEC SIZE elements, where VEC SIZE is the size of a SIMD vector. We used this function in our QS implementation to sort partitions that are small enough.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Sorting one vector</ns0:head><ns0:p>We sort a single vector by applying the same operations as the ones shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a). We perform the compare and exchange following the indexes shown in the Bitonic sorting network figure. Thanks to the vectorization, we are able to work on an entire vector without the need of iterating on the values individually. However, we cannot hard-code the indices of the elements that should be compared and exchanged, because we do not know the size of the vector. Therefore, we use a loop-based scheme where we efficiently generate permutation and Boolean vectors to perform the correct comparisons. We use the same pattern for both the symmetric and the stair stages.</ns0:p><ns0:p>In the symmetric stage, the values are first compared by contiguous pairs, e.g. each value at an even index i is compared with the value at i + 1 and each value at en odd index j is compared with the value at j &#8722; 1. Additionally, we see in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a) that the width of comparison doubles at each iteration and that the comparisons are from the sides to the center. In our approach, we use three vectors. First, a Boolean vector that shows the direction of the comparisons, e.i. for each index it tells if it has to be compared with a value at a greater index (and will take the minimum of both) or with a value at a lower index (and will take the maximum of both). Second, we need a shift coefficient vector which gives the step of the comparisons, i.e. it tells for each index the relative position of the index to be compared with. Finally, we need an index vector that contains increasing values from 0 to N-1 (for any index i, vec[i] = i) and that we use to sum with the shift coefficient vector to get a permutation vector. The permutation vector tells for any index which other index it should be compared against.</ns0:p><ns0:p>We give the pseudo-code of our vectorized implementation in Algorithm 1 where the corresponding SVE instructions and possible vector values are written in comments. 
In the beginning, the Boolean vector must contain repeated false and true values because the values are compared by contiguous pairs. To build it, we use the svzip1 instruction which interleaves elements from low halves of two inputs, and pass Manuscript to be reviewed</ns0:p><ns0:p>Computer Science one vector of true and a vector of false as parameters (line 7). Then, at each iteration, the number of false should double and be followed by the same number of true. To do so, we use again the svzip1 instruction, but we pass the Boolean vector as parameters (line 20). The vector of increasing indexes is built with a single SVE instruction (line 5). The shift coefficients vector is built by interleaving 1 and &#8722;1 (line 9).</ns0:p><ns0:p>The permutation index is generated by summing the two vectors (line 12) and uses to permute the input (line 14). So, at each iteration, we use the updated Boolean vector to decide if we add or subtract two times the iteration index (line 22). Also, this algorithm is never used as presented here because each of its iterations must be followed by a stair stage.</ns0:p><ns0:p>Algorithm 1: SVE Bitonic sort for one vector, symmetric stage.</ns0:p><ns0:p>Input: vec: a SVE vector to sort. Output: vec: the vector sorted. We use the same principle in the stair stage, with one vector for the Boolean that shows the direction of the exchange and another to store the relative index for comparison. We sum this last vector with the index vector to get the permutation indices. If we study again to Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a), we observe that the algorithm starts by working on parts of half the size of the previous symmetric stage. Then, at each iteration, the parts are subdivided until they contain two elements. Besides, the width of the exchange is the same for all elements in an iteration and is then divided by two for the next iteration.</ns0:p><ns0:formula xml:id='formula_0'>5 vecIndexes = (i &#8712; [0, N-1] &#8594; i) 6 // svzip1 -[F, T, F, T, ..., F, T] 7 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; i is odd ? False : True) 8 // svneg/svdup -[1, -1, 1, -1, ..., 1, -1] 9 vecIndexesPermOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? -1 : 1)</ns0:formula><ns0:formula xml:id='formula_1'>12 premuteIndexes = (i &#8712; [0, N-1] &#8594; vecIndexes[i] + vecIndexesPermOut[i]) 13 // svtbl -[vec[1], vec[0], vec[3], vec[2], ..., vec[N-1], vec[N-2]] 14 vecPermuted = (i &#8712; [0, N-1] &#8594; vec[premuteIndexes[i]]) 15 // svsel/svmin/svmax -[..., Min(vec[i], vec[i+1]), Max(vec[i], vec[i+1]), ...] 16 vec = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? 17 ---------Max(vec[i], vecPermuted[i]): 18 ---------Min(vec[i], vecPermuted[i])) 19 // svzip1 -[F, F, T, T, F, F, T, T, ...] 20 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i/2]) 21 // svsel/svadd/svsub -[3, 2, 1, 0, 3, 2, 1, 0, ...] 22 vecIndexesPermOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? 23 -----------------vecIndexesPermOut[i]-stepOut*2 : 24 -----------------vecIndexesPermOut[i]+stepOut*2)</ns0:formula><ns0:p>We provide a pseudo-code of our vectorized algorithm in Algorithm 2. To manage the Boolean vector:</ns0:p><ns0:p>we use the svuzp2 instruction that select odd elements from two inputs and concatenate them. In our case, we give a vector that contains a repeated pattern composed of false x times, followed by true x times (x a power of two) to svuzp2 to get a vector with repetitions of size x/2. 
Therefore, we pass the vector of Boolean generated during the symmetric stage to svuzp2 to initialize the new Boolean vector (line 7).</ns0:p><ns0:p>We divide the exchange step by two for all elements (line 23). The permutation (line 15) and exchange (line 17) are similar to what is performed in the symmetric stage.</ns0:p><ns0:p>The complete function to sort a vector is a mix of the symmetric (sve bitonic sort 1v symmetric) and stair (sve bitonic sort 1v stairs) functions; each iteration of the symmetric stage is followed by the inner loop of the stair stage. The corresponding C++ source code of a fully vectorized implementation is given in Appendix A.1.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Sorting more than one vectors</ns0:head><ns0:p>To sort more than one vector, we profit that the same patterns are repeated at different scales; to sort V vectors, we re-use the function that sorts V /2 vectors and so on. We provide an example to sort two vectors in Algorithm 3, where we start by sorting each vector individually using the sve bitonic sort 1v function. Then, we compare and exchange values between both vectors (line 9), and we finish by applying the same stair stage on each vector individually. Our real implementation uses an optimization that consists in a full inlining followed by a merge of the same operations done on different data. For instance, </ns0:p><ns0:formula xml:id='formula_2'>12 premuteIndexes = (i &#8712; [0, N-1] &#8594; vecIndexes[i] + 13 --------(falseTrueVecIn[i] ? -vecIncrement[i] : vecIncrement[i])) 14 // svtbl 15 vecPermuted = (i &#8712; [0, N-1] &#8594; vec[premuteIndexes[i]]) 16 // svsel/svmin/svmax 17 vec = (i &#8712; [0, N-1] &#8594; falseTrueVecIn[i] ? 18 ---------Max(vec[i], vecPermuted[i]): 19 ---------Min(vec[i], vecPermuted[i])) 20 // svuzp2 21 falseTrueVecIn = (i &#8712; [0, N-1] &#8594; falseTrueVecIn[(i*2+1)%N]) 22 // svdiv 23 vecIncrement = (i &#8712; [0, N-1] &#8594; vecIncrement[i] / 2); 24 end 25 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i/2]) 26 end 27 return vec</ns0:formula><ns0:p>instead of two consecutive calls to sve bitonic sort 1v (lines 7 and 8), we inline the functions. But since they are similar but on different data, we merge them into one that works on both vectors at the same time.</ns0:p><ns0:p>In our sorting implementation, we provide the functions to sort up to 16 vectors.</ns0:p><ns0:p>Algorithm 3: SIMD bitonic sort for two vectors of double floating-point values.</ns0:p><ns0:p>Input: vec1 and vec2: two double floating-point SVE vectors to sort. Output: vec1 and vec2: the two vectors sorted with vec1 lower or equal than vec2. 1 function sve bitonic exchange rev(vec1, vec2) </ns0:p><ns0:formula xml:id='formula_3'>2 vec1 copy = (i &#8712; [0, N-1] &#8594; vec1[N-1-i]) 3 vec1 = (i &#8712; [0, N-1] &#8594; Min(vec1[i], vec2[i]) 4 vec2 = (i &#8712; [0, N-1] &#8594; Max(vec1 copy[i], vec2[i])</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.2.3'>Sorting small arrays</ns0:head><ns0:p>Once a partition contains less than 16 SIMD-vector elements, it can be sorted with our SVE-Bitonic functions. We select the appropriate SVE-Bitonic function (the one that matches the size of the array to sort) with a switch statement, in a function interface that we refer to as sve bitonic sort wrapper.</ns0:p><ns0:p>However, the partitions obtained from the QS do not necessarily have a size multiple of the vector's length. 
Therefore, we pad the last vector with an extra value, which is the greatest possible value for the target data type. During the execution of the sort, these last values will be compared but never exchanged and will remain at the end of the last vector.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4'>Optimization by comparing vectors' min/max values or whether vectors are already sorted</ns0:head><ns0:p>There are two main points where we can apply optimization in our implementation. The first one is to avoid exchanging values between vectors if their contents are already in the correct order, i.e. no values will be exchanged between the vectors because their values respect the ordering objective. For instance, in Algorithm 3, we can compare if the greatest value in vector vec2 (SVE instruction svmaxv) is lower than or equal to the lowest value in vector vec1 (SVE instruction svminv). If this is the case, the function Manuscript to be reviewed Computer Science can simply sort each vector individually. The same mechanism can be applied to any number of vectors, and it can be used at function entry or inside the loops to break when it is known that no more values will be exchanged. The second optimization can be applied when we want to sort a single vector by checking if it is already sorted. Similarly to the first optimization, this check can be done at function entry or in the loops, such as at lines 2 and 10, in Algorithm 2. We propose two implementations to test if a vector is sorted and provide the details in Appendix A.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Partitioning with SVE</ns0:head><ns0:p>Our partitioning strategy is based on the AVX-512-partition. In this algorithm, we start by saving the extremities of the input array into two vectors that remain unchanged until the end of the algorithm. By doing so, we free the extremity of the array that can be overwritten. Then, in the core part of the algorithm, we load a vector and compare it to the pivot. The values lower than the pivot are stored on the left side of the array, and the values greater than the pivot are stored on the right side while moving the corresponding cursor indexes. Finally, when there is no more value to load, the two vectors that were loaded at the beginning are compared to the pivot and stored in the array accordingly.</ns0:p><ns0:p>When we implement this algorithm using SVE we obtain a Boolean vector b when we compare a vector to partition with the pivot. We use b to compact the vector and move the values lower or equal than the pivot on the left, and then we generate a secondary Boolean vector to store only as a sub-part of the vector. We manage the values greater than the pivot similarly by using the negate of b.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Sorting key/value pairs</ns0:head><ns0:p>The sorting methods we have described are designed to sort arrays of numbers. However, some applications need to sort key/value pairs. More precisely, the sort is applied on the keys, and the values contain extra information such as pointers to arbitrary data structures, for example. We extend our SVE-Bitonic and SVE-Partition functions by making sure that the same permutations/moves apply to the keys and the values. In the sort kernels, we replace the minimum and maximum statements with a comparison operator that gives us a Boolean vector. We use this vector to transform both the vector of keys and the vector of values. 
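As an illustration (ours, not the authors' code), the following sketch performs a compare-and-exchange between two SVE vectors of 32-bit keys, reusing the predicate produced by the comparison to move the associated values in the same way:

```cpp
#include <arm_sve.h>

// Compare-and-exchange of key/value pairs held in separate vectors. After the
// call, keysA holds the lane-wise smaller keys (with their values in valsA)
// and keysB the larger keys (with their values in valsB).
void exchange_key_value(svint32_t& keysA, svint32_t& valsA,
                        svint32_t& keysB, svint32_t& valsB) {
    const svbool_t all = svptrue_b32();
    // One comparison produces the Boolean vector that drives all selections.
    const svbool_t aIsLower = svcmple_s32(all, keysA, keysB);
    const svint32_t keysMin = svsel_s32(aIsLower, keysA, keysB);
    const svint32_t keysMax = svsel_s32(aIsLower, keysB, keysA);
    // The same predicate is applied to the values so that pairs stay together.
    const svint32_t valsMin = svsel_s32(aIsLower, valsA, valsB);
    const svint32_t valsMax = svsel_s32(aIsLower, valsB, valsA);
    keysA = keysMin; valsA = valsMin;
    keysB = keysMax; valsB = valsMax;
}
```

In the Bitonic kernels, one of the two key vectors (and its vector of values) would additionally be reversed beforehand, for instance with svrev, to reproduce the exchange pattern of the symmetric stage.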
For the partitioning kernel, we already use a comparison operator, therefore, we add extra code to apply the same transformations to the vector of values and the vector of keys.</ns0:p><ns0:p>In terms of high-level data structure, we support two approaches. In the first one, we store the keys and the values in two distinct arrays, which allow us to use contiguous load/store. In the second one, the key/value is stored by pair contiguously in a single array, such that loading/storing requires non-contiguous memory accesses.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Parallel sorting</ns0:head><ns0:p>Our parallel implementation is based on the QS-par that we extend with several optimizations. In the QS-par parallelization strategy, it is possible to avoid having too many tasks or tasks on too small partitions by stopping creating tasks after a given recursive level. This approach allows to fix the number of tasks at the beginning, but could end in an unbalanced configuration (if the tasks have different workload) that is difficult to resolve on the fly. Therefore, in our implementation, we create a task for every partition larger than the L1 cache, as shown in Algorithm 4 line 26. However, we do not rely on the OpenMP task statement because it is impossible to control the data locality. Instead, we use one task list per thread (lines 2, 11 and 33). Each thread uses its list as a stack to store the intervals of the recursive calls and also as a task list where each interval can be processed in a task. In a steady-state, each thread accesses only its list: after each partitioning, a thread puts the interval of the first sub-partition in the list and continues with the second sub-partition (line 35). When the partition is smaller than the L1 cache, the thread executes the sequential SVE-QS. We use a work-stealing strategy when a thread has an empty list such that the thread will try to pick a task in others' lists. The order of access to others' lists is done such that a thread accesses the lists from threads of closer ids to far ids, e.g. a thread of id i will look at i + 1, i &#8722; 1, i + 2, and so on. We refer to this optimized version as the SVE-QS-par.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE STUDY</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Configuration</ns0:head><ns0:p>We assess our method on an ARMv8.2 A64FX -Fujitsu with 48 cores at 1.8GHz and 512-bit SVE, i. shared L2 cache per CMG. For the sequential executions, we pinned the process with taskset -c 0, and for the parallel executions, we use OMP PROC BIND=TRUE. We use the ARM compiler 20.3 (based on LLVM 9.0.1) with the aggressive optimization flag -O3. We compare our sequential implementations against the GNU STL 20200312 from which we use the std::sort and std::partition functions. We also compare against SVE512-Bitonic, which is an implementation that we have obtained by performing a translation of our original AVX-512 into SVE. This implementation works only for 512-bit SVE, but this makes it possible to hard-code all the indices of the compare and exchange of the Bitonic algorithm. Moreover, SVE512-Bitonic does not use any loop, i.e. it can be seen as if we had fully unrolled the loops of the SVE-Bitonic.</ns0:p><ns0:p>We compare our parallel implementation against the Boost 4 1.73.0 from which we use the block in direct sort function. 
The test file used for the following benchmark is available online (https:// gitlab.inria.fr/bramas/arm-sve-sort) and includes the different sorts presented in this study, plus some additional strategies and tests. 5 Our QS uses a 5-values median pivot selection (whereas the STL sort function uses a 3-values median). The arrays to sort are populated with randomly generated values. Our implementation does not include the potential optimizations described in Section 3.2.4 that can be applied when there is a chance that parts or totality of the input array are already sorted. 6 As it is possible to virtually change the size of the SIMD vectors at runtime, we evaluated if using different vector sizes (128 or 256) could increase the performance of our approach. It appears that the performance was always worse, and consequently we decided not to include these results in the current study.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Performance to sort small arrays</ns0:head><ns0:p>We provide in Figure <ns0:ref type='figure' target='#fig_13'>4</ns0:ref> the execution times to sort arrays of 1 element to 16 &#215; VEC SIZE elements. This corresponds to at most 128 double floating-point values, or 256 integer values. We test all the sizes by step 1, such that we include sizes not multiple of the SIMD-vector's length. For more than 20 values, the SVE-Bitonic always delivers better performance than the STL. The speedup is significant and increases with the number of values to reach 5 for 256 integer values. The execution time per item increases every VEC SIZE values because the cost of sorting is not tied to the number of values but to the number of SIMD-vectors to sort, as explained in Section 3.2.3. For example, we have to sort two SIMD-vector of 16 values to process from 17 to 32 integers. Our method reaches a speedup of 3.6 to sort key/value pairs. To sort key/value pairs, we obtain similar performance if we sort pairs of integers stored contiguously or two arrays of integers, one for the keys and one for the values. Comparing our two SVE implementations, SVE-Bitonic appears more efficient than SVE512-bitonic, except for very small number of values. This means that considering a static vector size of 512 bits, with compare-exchange indices hard coded and no loops/branches, does not provide any benefit, and is even slower for more than 70 values. This means that, for our kernels, the CPU manages more easily loops with branches (SVE-bitonic) than a large amount of instructions without branches (SVE512-bitonic). Moreover, the hard-coded exchange indices are stored in memory and should be load to register, which appear to hurt the performance compared to building these indices using several instructions. Sorting double floating-points values or pairs of integers takes similar duration up to 64 values, then with more values it is faster to sort pairs of integers.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Partitioning performance</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref> shows the execution times to partition using our SVE-Partition or the STL's partition function.</ns0:p><ns0:p>Our method provides again a speedup of an average factor of 4 for integers and key/values (with two arrays), and 3 for floating-point values. We see no difference if the data fit in the caches L1/L2 or not, neither in terms of performance nor in the difference between the STL and our implementation. 
However, there is a significant difference between partitioning two arrays of integers (one for the key and the other for the values) or one array of pairs of integers. The only difference between both implementations is that we work with distinct svint32 t vectors in the first one, and with svint32x2 t vector pairs in the second.</ns0:p><ns0:p>But the difference is mainly in the memory accesses during the loads/stores. The partitioning of one array or two arrays of integers appears equivalent, and this can be unexpected because we need more instructions when managing the latter. Indeed, we have to apply the same transformations to the keys and the values, and we have twice memory accesses.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Performance to sort large arrays</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> shows the execution times to sort arrays up to a size of &#8776; 10 9 items. Our SVE-QS is always faster in all configurations. The difference between SVE-QS and the STL sort is stable for size greater than 10 3 values with a speedup of more than 4 to our benefit to sort integers. There is an effect when sorting 64 values (the left-wise points) as the execution time is not the same as the one observed when sorting less than 16 vectors (Figure <ns0:ref type='figure' target='#fig_13'>4</ns0:ref>). The only difference is that here we call the main SVE-QS functions, which call the SVE-Bitonic functions after just one test on the size, whereas in the previous results we call the SVE-Bitonic functions directly. We observe that when sorting key/value pairs, there is again a benefit when using two distinct arrays of scalars compared with a single array of pairs. From the previous results, it is clear that this difference comes from the partitioning for which the difference also exists (Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref>), whereas the difference is negligible in the sorting of arrays smaller than 16 vectors (Figure <ns0:ref type='figure' target='#fig_13'>4</ns0:ref>). However, as the size of the array increases, this difference vanishes, and it becomes even faster to sort Floating-point values than keys/values.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Performance of the parallel version</ns0:head><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> shows the performance for different number of threads of a parallel sort implementation from the boost library (block indirect sort) against our task-based implementation (SVE-QS-par). Our approach is faster in all configurations, but the results show the benefit of using a merge strategy to sort large arrays, as in block indirect sort. Indeed, as the number of threads increases, the SVE-QS-par becomes faster but reaches a limit and using more than 8 threads does not provide any benefit (the four curves for 8, 16, 32, and 48 threads, overlap or are very close for both data types). Moreover, while 1 thread executions show that our SVE-QS-par is faster for both data types, for large arrays the block indirect sort provides very close performance. This illustrates the limit of the divide-and-conquer parallelization strategy to process large arrays since the first steps, i.e. the partitioning steps, are not or poorly parallel. Moreover, using more than 12 threads (the number of threads per CMG), while working in place on the original array implies memory transfers across the memory nodes, which impacts the scalability. 
By contrast, the block indirect sort, which uses a merge kernel at the cost of additional data buffers, shows more stable scalability. Nevertheless, our approach is trivial to implement and delivers high performance when using few threads, which is valuable when an application uses multiple processes per node.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.6'>Comparison with the AVX-512 implementation</ns0:head><ns0:p>The results obtained in our previous study on an Intel Xeon Platinum 8170 Skylake CPU at 2.10GHz <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref> show that our AVX-512-QS sorted at a speed of &#8776; 10^-9 second per element (obtained by T/(N ln N)).</ns0:p><ns0:p>This was almost 10 times faster than the STL (10^-8 second per element). The speedup obtained with SVE in the current study is lower, and this does not come from our new implementation being generic regarding the vector size, because the SVE512-QS is not faster either. The difference does not come from the memory accesses either, because the gap is already significant for small arrays (that fit in the L1 cache), nor from the number of vector registers, which is 32 for both architectures. Profiling the code reveals that the number of cycles per instruction is around 1.7 for both SVE512-QS and SVE-QS in sequential execution, which is not ideal. The L2 cache miss rate is lower than 10%, which indicates that the memory access pattern is adequate. The memory bandwidth is given in Table <ns0:ref type='table' target='#tab_7'>1</ns0:ref>. We observe that the peak of HBM2 (256GB/s) is not reached. Additionally, the table indicates that to sort 4GB of data, our approach reads/writes 252GB of data, but this was already the case for AVX-512-QS. From the hardware specification of the A64FX [34], we can observe that most SIMD SVE instructions have a latency between 4 and 9 cycles. Therefore, we conclude that the difference between our AVX-512 and SVE versions comes from the cost of the SIMD instructions and their pipelining, because the memory accesses appear fine and the difference is already significant for small arrays.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>In this paper, we described new implementations of the Bitonic sorting network and the partition algorithm that have been designed for the SVE instruction set. These two algorithms are used in our Quicksort variant, which makes it possible to have a fully vectorized implementation. Our approach shows superior performance on ARMv8.2 (A64FX) in all configurations against the GNU C++ STL. It provides a speedup of up to 5 when sorting small arrays (fewer than 16 SIMD-vectors), and a speedup above 4 for large arrays. We also demonstrate that our algorithm is less efficient when we fully unroll the loops and use hard-coded exchange indices in the Bitonic stage (by considering that the vector is of size 512 bits). This strategy was efficient when implemented with AVX-512 and executed on Intel Skylake. Our parallel implementation is efficient, but it could be improved when working on large arrays by using a merge on sorted partitions instead of a recursive parallel strategy (at the cost of using external memory buffers).
In addition, we would like to compare the performance obtained with different compilers, because there are many ways to transform and optimize a C++ code with intrinsics into a binary.</ns0:p><ns0:p>Besides, these results are a good example to encourage the community to revisit common problems that have kernels for x86 vectorial extensions but not for SVE yet. Indeed, as ARM-based architectures become available on more HPC platforms, having high-performance libraries in all domains will become critical. Moreover, some algorithms that were not competitive when implemented with the x86 ISA may be easier to vectorize with SVE, thanks to the novelties it provides, and achieve high performance. Finally, the source code of our implementation is publicly available and ready to be used and compared against.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>ACKNOWLEDGMENT</ns0:head><ns0:p>This work used the Isambard 2 UK National Tier-2 HPC Service (http://gw4.ac.uk/isambard/) operated by GW4 and the UK Met Office, and funded by EPSRC (EP/T022078/1). In addition, this work used the Farm-SVE library <ns0:ref type='bibr' target='#b27'>[35]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>A APPENDIX</ns0:head></ns0:div> <ns0:div><ns0:head>A.1 Source code of sorting one vector of integers</ns0:head><ns0:p>In Code 1, we provide the implementation of sorting one vector using the Bitonic sorting network and SVE.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Quicksort example to sort [4, 2, 3, 1, 6] into [1, 2, 3, 4, 6]. The pivot is equal to the value in the middle: the first pivot is 3, then at the second recursion level it is 2 and 6. l is the left index, and r the right index.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(a). The execution of the example goes from left to right as a timeline. The values are moved from left to right, and when they cross an exchange unit they are potentially transferred along the vertical bar.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(b) provides a real example where we print the values along the horizontal lines when sorting 8 values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Bitonic sorting network examples. In red boxes, the exchanges are done from the extremities to the center, and we refer to this as the symmetric stage; in orange boxes, the exchanges are done with a linear progression, and we refer to this as the stair stage. (a) Bitonic sorting network for an input of size 16. All vertical bars/switches exchange values in the same direction. (b) Example of 8 values sorted by a Bitonic sorting network.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Summation example of single-precision floating-point values using: ( ) scalar standard C++ code and ( ) an SVE SIMD-vector of 8 values (considering a vector size of 256 bits). In the resulting vector, each element will be the summation of the values from a and b at the same position. A summation using a predicate vector ( ) is shown.
In this case, the resulting vector contains 0 at the positions where t_f is false.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Algorithm 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Algorithm 1: SVE Bitonic sort for one vector, symmetric stage.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Algorithm 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Algorithm 2: SVE Bitonic sort for one vector, stair stage. The gray lines are copied from the symmetric stage (Algorithm 1). Input: vec, an SVE vector to sort; output: the vector sorted.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Algorithm 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Algorithm 3: SVE Bitonic sort for two vectors: each vector is first sorted with sve_bitonic_sort_1v, values are then compared and exchanged between the reversed first vector and the second one (sve_bitonic_exchange_rev), and finally the stair stage is applied to each vector.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Algorithm 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Algorithm 4: Simplified algorithm of SVE-QS-par. Idle threads try to steal an interval, first from their own list of buckets and then from their neighbors' lists, and sort it. core_par_sort(array, start, end, buckets) sorts sequentially with sve_bitonic_sort when (end-start) &#215; size of element &#8804; size of L1; otherwise it partitions with sve_partition, inserts the second partition into the current thread's bucket, and works directly on the first partition. On the test node, a vector can contain 16 integers or 8 double floating-point values; the node has 32 GB of HBM2 memory arranged in 4 core memory groups (CMGs) with 12 cores and 8 GB each, a 64 KB private L1 cache per core, and an 8 MB L2 cache per CMG.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Execution time divided by n ln(n) to sort from 1 to 16 &#215; VEC SIZE values. The execution time is obtained from the average of 2&#183;10^3 sorts with different values for each size. The speedup of the SVE-Bitonic against the STL is shown above the SVE-Bitonic lines. Key/value integers as a std::pair are plotted with dashed lines, and as two distinct integer arrays (int*[2]) with solid lines.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Execution time divided by n to partition arrays filled with random values with sizes from 64 to &#8776; 10^9 elements. The pivot is selected randomly. The execution time is obtained from the average of 20 executions with different values. The speedup of the SVE-Partition against the STL is shown above the lines. The vertical lines represent the caches relative to the processed data type (&#8722; for the integers and &#8226;&#8722;&#8226; for floating-points and the key/value integers). Key/value integers as a std::pair are plotted with dashed lines, and as two distinct integer arrays (int*[2]) with solid lines.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Execution time divided by n ln(n) to sort in parallel arrays filled with random values with sizes from 512 to &#8776; 10^9 elements. The execution time is obtained from the average of 5 executions with different values. The speedup of the parallel SVE-QS-par against the sequential execution is shown above the lines for 16 and 48 threads. The vertical lines represent the caches relative to the processed data type (&#8722; for the integers and &#8226;&#8722;&#8226; for the floating-points).</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Execution time divided by n ln(n) to sort arrays filled with random values with sizes from 64 to &#8776; 10^9 elements. The execution time is obtained from the average of 5 executions with different values. The speedup of the SVE-QS against the STL is shown above the SVE-QS lines. The vertical lines represent the caches relative to the processed data type (&#8722; for the integers and &#8226;&#8722;&#8226; for floating-points and the integer pairs). Key/value integers as a std::pair are plotted with dashed lines, and as two distinct integer arrays (int*[2]) with solid lines.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table 1. Amount of memory accessed and corresponding bandwidth for SVE-QS and SVE512-QS to sort arrays of integers of size N. The accesses are measured by capturing the calls to SIMD loads/stores.</ns0:figDesc><ns0:table>
N     | Size (GB) | SVE-QS read/write (GB) | SVE-QS bandwidth (GB/s) | SVE512-QS read/write (GB) | SVE512-QS bandwidth (GB/s)
2^6   | 2.56E-07  | 5.12E-07               | 3.76E-01                | 2.75E-06                  | 1.471
2^9   | 2.05E-06  | 1.38E-05               | 1.325                   | 3.88E-05                  | 3.370
2^12  | 1.64E-05  | 2.26E-04               | 2.427                   | 3.96E-04                  | 3.768
2^15  | 1.31E-04  | 2.67E-03               | 3.064                   | 4.19E-03                  | 4.289
2^18  | 1.05E-03  | 2.95E-02               | 3.651                   | 4.05E-02                  | 4.467
2^21  | 8.39E-03  | 3.10E-01               | 4.192                   | 3.98E-01                  | 4.870
2^24  | 6.71E-02  | 2.947                  | 4.420                   | 3.644                     | 4.995
2^27  | 5.37E-01  | 27.726                 | 4.645                   | 33.051                    | 5.087
2^30  | 4.294     | 252.233                | 4.822                   | 299.740                   | 5.257
</ns0:table></ns0:figure> <ns0:note place='foot' n='1'>The current study provides a translation of our AVX-SORT into SVE but also a completely new approach which works when the vector size is unknown at compile time.</ns0:note> <ns0:note place='foot' n='2'>The functions described in the current study are available at https://gitlab.inria.fr/bramas/sve-sort. This repository includes a clean header-only library and a test file that generates the performance study of the current manuscript. The code is under MIT license.</ns0:note> <ns0:note place='foot' n='4'>https://www.boost.org/</ns0:note> <ns0:note place='foot' n='5'>It can be executed on any CPU using the Farm-SVE library (https://gitlab.inria.fr/bramas/farm-sve).</ns0:note> <ns0:note place='foot' n='6'>This implementation is partially implemented in the branch optim of the code repository.</ns0:note> </ns0:body> "
"A fast vectorized sorting implementation based on the ARM scalable vector extension (SVE). Answer to Reviewers Bérenger Bramas September 6, 2021 1 Summary We would like to thank both reviewers for their valuable comments. We have updated the paper according to all suggestions, and we provide a point to point response in the following. We also provide a latex-diff version of the paper that highlights the modifications from the previous version. 2 Reviewer 1 (Anonymous) (1) It is good the author declares this work is to port the previous work in AVX to SVE and cites the paper ”A Novel Hybrid Quicksort Algorithm Vectorized using AVX-512 on Intel Skylake”. There is no cover letter so that the reviewer is not certain whether it is allowable to directly reuse figures 1, 2 and 3. Besides, it is not clear the difference between this paper and the reference paper. We would like to thank the reviewer for this comment. We understand that the figures looked similar, but they were actually not the same. We updated them again in the new manuscript to make them even more different. Fig1 and Fig2 do not use the same numbers, and they use different fonts and colors. Fig3 is now an example based on SVE (and not AVX as it was the case in the previous version). In order to clarify the difference with the other study, we updated the contribution list with a footnote: The current study provides a translation of our AVX-SORT into SVE but also a completely new approach which works when the vector size is unknown at compile time. We look forward to the reviewer’s opinion if this statement is enough. (2) This paper is mainly about SVE, so it is better to not only mention AVX in the related work. We thank the reviewer for this comment. We added a new section about the related work on vectorizing with SVE and refer to 6 studies. (3) The performance figures, especially figure 4 and figure 7, are very hard to read; e.g., there are two markers that are too similar in figure 4, and lines with the same color but different width in figure 7. We split figures 4, 6 and 7. We look forward for the reviewer’s feedback to know if this is now more easy to read. (4) As declared in the paper, there are three contributions. It would be good if the author could detail the last one, ”Implement an efficient Quicksort variant using OpenMP tasks”, because there are only very brief descriptions about this in the paper. We thank the reviewer for this comment, and we added a simplified version of our algorithm (Algorithm 4). (5) What’s the difference between SVE-Bitonic and SVE512-Bitonic? Is SVE512-Bitonic a version of SVE-Bitonic with fixed vector length? If so, the performance is a little confusing as the performance of SVE512-Bitonic will not be worse than SVE-Bitonic, right? We apologize that our text was not clear enough. The original document was stating in 4.1: We also compare against an implementation that we have obtained by performing a translation of our original AVX-512 into SVE. This implementation works only for 512-bit SVE. 1 SVE512-Bitonic works only for a vector size of 512 bits but more importantly, the compare and exchange indices are hard-coded (and several loops are removed because the size of the vector is known). Therefore, we have updated this paragraph to make it clearer. (6) In figure 7, there are several lines of different numbers of threads are overlapped, which means the overheads and benefits of different numbers of threads on the same problem are the same. The reviewer was wondering why this happens. 
We would like to thank the reviewer for pointing out that more details are needed on this part. We added sentences to Section 4.5 Performance of the parallel version, and we would like to provide more information here. The weakness of the parallel divide and conquer comes from the first steps, which are not parallel or only poorly so. For instance, in our implementation (considering that we work on a large array), the sequential execution time can be written as: T_1 = T_first_partitioning + T_second_partitioning + ... (1) T_first_partitioning cannot be parallelized, but with two threads the execution time should be: T_2 = T_first_partitioning + (T_1 - T_first_partitioning)/2. (2) The same holds for larger numbers of threads, and with 4 threads the execution time should be: T_4 = T_first_partitioning + T_second_partitioning/2 + (T_1 - T_first_partitioning - T_second_partitioning)/4. (3) It is clear that if the first partitioning steps represent a large portion of the execution time, the scaling will very quickly drop (the first partitioning is sequential, the second can be done by only two threads, and so on). This is what happens in our case. A classic workaround would be to have each thread sort a part of the array and to merge the sorted partitions at the end. However, parallel merging is tedious and will not be efficient in place. This is why extra memory could be needed to make an efficient parallel merge possible (as with Boost). A second effect is the fact that the A64FX is organized with 4 core memory groups (CMGs) of 12 threads. Therefore, when using more than 12 threads (16 threads in our case) we use more than one CMG, and the memory transfers between CMGs will impact the performance. Therefore, our approach works well up to 8 threads (but remains competitive with more threads) and will be difficult to improve without extra memory allocations. (7) It would be a plus if the author could define how good the improvement is by using SVE in this paper, e.g., comparing it to the theoretical one instead of only to AVX. We would like to apologize, but it is not clear to us what the reviewer meant by "theoretical". However, in 4.6 Comparison with the AVX-512 implementation, we added sentences about the cycles per instruction and the memory bandwidth of our implementation. We hope this answers the reviewer's request. 3 Reviewer 2 (Anonymous) Basic reporting. 1. This paper need to add more information about Scalable Vector Extension in introduction and related part. Especially in section 2.2 and 2.3. It feels like the author is writing a paper about AVX instead of SVE. In section 2.2.1 why use avx512 as an example instead of SVE? We thank the reviewer for this comment. We removed the paragraph on x86 instruction sets and updated Figure 3 to correspond to SVE. SVE related work and publications are not mentioned in section 2.3. We thank the reviewer for this comment. We added a new section about the related work on vectorizing with SVE and refer to 6 studies. Basic reporting. 2. Figure 4 is very confusing and hard to read. Choose different lines and markers will be better. We split Figures 4, 6 and 7. We look forward to the reviewer's feedback on whether this is now easier to read. Experimental design. 1.
In line 365 the author claims that ”This means that considering a static vector size of 512 bits, with compare-exchange indices hard coded and no loops/branches, does not provide any benefit, and is even slower for more than 70 values.” The author needs to provide more information and details to support this assumption, because normally extra branching instruction may need extra cycles. It will be helpful if the author can provide low level explanation, such as assemble code snippets. Also some performance tools. Validity of the findings. ”We also demonstrate that an implementation designed for a fixed size of vectors is less efficient”, I would like to see more evidence to support this conclusion. We thank the reviewer for pointing out that this sentence was not accurate enough. We update this sentence in our conclusion and the end of section 4.2. We attempt to recall the following elements: ˆ SVE-Bitonic: generic regarding the vector size, exchange indices are generated by several instructions, uses loops. ˆ SVE512-Bitonic: works only for vector size of 512bits, exchange indices are hard-coded (need to be loaded), no loops/branches. ˆ Impossible version: uses loops AND hard-coded indices (or it would require to have a large array of indices to select the right column depending on the loop index). ˆ Unrolled SVE-Bitonic: works only for vector size of 512bits, exchange indices are generated by several instructions (as with SVE-Bitonic), no loops/branches (as SVE512-Bitonic). From our results (not included) using unrolled SVE-Bitonic was less efficient than SVE-Bitonic, which means that branches are correctly managed and predicted by the CPU and/or the CPU has difficulty to benefit from thousands of instructions without branches. Also in SVE512-Bitonic the hard-coded indices must be loaded from the memory to be put in the vectorial registers, and this seems to hurt the performance too. For this two reasons, SVE512-Bitonic is less efficient than SVE-Bitonic. But SVE512Bitonic is interesting because it is a direct translation of our AVX512 kernel. This allows us to compare first the AVX512 kernel with SVE512-Bitonic (which rely on the same algorithm and follow the same implementation), and then to compare with SVE-Bitonic, which is specific to SVE and use a different algorithm and a different implementation. We perform a profiling using arm MAP to sort 2097152 values and obtain the following summary: # SVE-QS CPU Metrics: Linux perf event metrics: Cycles per instruction: L2D cache miss: Stalled backend cycles: Stalled frontend cycles: 1.77 4.8% || 26.4% |==| 31.6% |==| # SVE512-QS CPU Metrics: Linux perf event metrics: Cycles per instruction: L2D cache miss: Stalled backend cycles: Stalled frontend cycles: 1.72 6.1% || 30.5% |==| 27.9% |==| We can see that the difference is not significant but SVE512-QS has lower cycles per instruction (no branching) and little higher L2D cache miss (need to load arrays of indices). As a side note, the cycles per instruction is not impressive even when there are no loops. We added a few sentences about this point in 4.6 Comparison with the AVX-512 implementation, and we hope that the text is now clearer. Experimental design. 2. The vector length of A64FX is configurable, I would like to see the results of the same experiments by setting vector length to 256 bits, then we can evaluate performance benefits with different vector length. We would like to thank the reviewer for this great advice. 
We have tested different vector sizes (128, 256, and, as previously, 512) and we put here some results (see Figures 1, 2 and 3 at the end of the current document). It seems that there is no benefit in using vectors of size less than 512. Consequently, we added a sentence in the manuscript but did not put the details (Section 4.1). We look forward to the confirmation from the reviewer that this answers the question. The size of the vector was changed using the following code:

#define PR_SVE_SET_VL 50
#define PR_SVE_GET_VL 51
#include <cstdlib>
#include <unistd.h>
#include <sys/prctl.h>
#include <iostream>
#include <arm_sve.h>

int main(int argc, char** argv){
    if(getenv("VECBITS")){
        int svebitstoset = atoi(getenv("VECBITS"));
        std::cout << "svebitstoset " << svebitstoset << std::endl;
        // prctl expects the vector length in bytes
        int info = prctl(PR_SVE_SET_VL, svebitstoset/8);
        std::cout << "info " << info << std::endl;
    }
    std::cout << "Vec size = " << svcntb()*8 << " bits" << std::endl;
    return 0;
}

Figure 1: Execution time divided by n ln(n) to sort from 1 to 16 × VEC_SIZE values. The execution time is obtained from the average of 2·10^3 sorts with different values for each size. The results are for three vector sizes: 128, 256 and 512. Figure 2: Execution time divided by n to partition arrays filled with random values with sizes from 64 to ≈ 10^9 elements. The pivot is selected randomly. The execution time is obtained from the average of 20 executions with different values. The results are for three vector sizes: 128, 256 and 512. Figure 3: Execution time divided by n ln(n) to sort arrays filled with random values with sizes from 64 to ≈ 10^9 elements. The execution time is obtained from the average of 5 executions with different values. The results are for three vector sizes: 128, 256 and 512. "
Here is a paper. Please give your review comments after reading it.
273
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The way developers implement their algorithms and how these implementations behave on modern CPUs are governed by the design and organization of these. The vectorization units (SIMD) are among the few CPUs' parts that can and must be explicitly controlled. In the HPC community, the x86 CPUs and their vectorization instruction sets were de-facto the standard for decades. Each new release of an instruction set was usually a doubling of the vector length coupled with new operations. Each generation was pushing for adapting and improving previous implementations. The release of the ARM scalable vector extension (SVE) changed things radically for several reasons. First, we expect ARM processors to equip many supercomputers in the next years. Second, SVE's interface is different in several aspects from the x86 extensions as it provides different instructions, uses a predicate to control most operations, and has a vector size that is only known at execution time. Therefore, using SVE opens new challenges on how to adapt algorithms including the ones that are already well-optimized on x86. In this paper, we port a hybrid sort based on the well-known Quicksort and Bitonic-sort algorithms. We use a Bitonic sort to process small partitions/arrays and a vectorized partitioning implementation to divide the partitions. We explain how we use the predicates and how we manage the non-static vector size. We also explain how we efficiently implement the sorting kernels. Our approach only needs an array of O(log N) for the recursive calls in the partitioning phase, both in the sequential and in the parallel case. We test the performance of our approach on a modern ARMv8.2 (A64FX) CPU and assess the different layers of our implementation by sorting/partitioning integers, double floating-point numbers, and key/value pairs of integers. Our results show that our approach is faster than the GNU C++ sort algorithm by a speedup factor of 4 on average.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Data-processing algorithms (like sorting) are of this kind and require a significant programming effort to be vectorized efficiently. Also, the possibility of creating a fully vectorized implementation, with no scalar sections and with few data transformations, is only possible and efficient if the instruction set extension (IS) provides the needed operations. This is why new ISs together with their new operations make it possible to invent approaches that were not feasible previously, at the cost of reprogramming.</ns0:p><ns0:p>Vectorizing a code can be described as solving a puzzle, where the board is the target algorithm and the pieces are the size of the vector and the instructions. However, the paradigm changes with SVE [6, <ns0:ref type='bibr' target='#b0'>7,</ns0:ref><ns0:ref type='bibr' target='#b1'>8]</ns0:ref> because the size of the vector is unknown at compile time. This can have a significant impact on the transformation from scalar to vectorial. As an example, consider that a developer wants to work on a fixed number of values, which could be linked to the problem to solve, e.g. a 16 &#215; 16 matrix-matrix product, or based on other references, e.g. the size of the L1 cache. When the size of the vector is known at development time, a block of data can be mapped to the corresponding number of vectors and working on the vectors can be done with static/known number of operations. 
With a variable size, it is required to either implement different kernels for each of the possible sizes (like if they were different ISs) or by finding a generic way to vectorize the kernel, which could be a tedious task. We could expect SVE to be less upgraded than x86 ISs because there will be no need to release a new IS even when new CPU generations will support larger vectors.</ns0:p><ns0:p>In the current paper, we focus on the adaptation of a sorting strategy and its efficient implementation for the ARM CPUs with SVE. Our implementation is generic and works for any size equal to a power of two. The contributions of this study are:</ns0:p><ns0:p>&#8226; Describe how we port our AVX-SORT algorithm <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref> to SVE;</ns0:p><ns0:p>&#8226; Define a new Bitonic-sort variant using SVE and how runtime vector size impact the implementation 1 ;</ns0:p><ns0:p>&#8226; Implement an efficient Quicksort variant using OpenMP <ns0:ref type='bibr' target='#b3'>[10]</ns0:ref> tasks.</ns0:p><ns0:p>All in all, we show how we can obtain a fast and vectorized in-place sorting implementation. 2 The paper is organized as follows: We first give background information related to vectorization and sorting in Section 2. Then, in Section 3, we describe our strategies for sorting small arrays, for partitioning and for our parallel sort. Finally, the performance study is detailed in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>BACKGROUND</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Sorting algorithms</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1.1'>Quicksort (QS) overview</ns0:head><ns0:p>QS <ns0:ref type='bibr' target='#b4'>[11]</ns0:ref> is a sorting algorithm that followed a divide-and-conquer strategy: the input array is recursively partitioned until the partitions hold a single element. The partitioning algorithm moves the values lower than a pivot at the beginning of the array, and greater values at the end, with a linear complexity. The worst-case complexity of QS is O(n 2 ), but in practice it has an average complexity of O(n log n). The complexity is tied to the pivot, and it must be close to the median to ensure a low complexity. However, it is a very popular sorting algorithm thanks to its simplicity in terms of implementation, and its speed in practice. An example of a QS execution is provided in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p><ns0:p>To parallelize the QS and other divide-and-conquer approaches, it is common to create a task for each recursive call followed by a wait statement. For instance, a thread partitions the array in two, and then creates two tasks (one for each of the partition). To ensure coherency, the thread waits for the completion of the tasks before continuing. We refer to this parallel strategy as the QS-par.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.2'>GNU std::sort implementation (STL)</ns0:head><ns0:p>The standard requires a worst-case complexity of O(n log n) <ns0:ref type='bibr'>[12]</ns0:ref> (it was an average complexity until year 2003 <ns0:ref type='bibr' target='#b5'>[13]</ns0:ref>) that a pure QS implementation cannot guarantee. Consequently, the QS algorithm cannot be used alone as a standard C++ sort. As a result, the current STL implementation relies on 3 different . The pivot is equal to the value in the middle: the first pivot is 3, then at the second recursion level it is 2 and 6. l is the left index, and r the right index.</ns0:p><ns0:p>algorithms 3 . 
This i3-part hybrid sorting algorithm is composed of an Introsort <ns0:ref type='bibr' target='#b6'>[14]</ns0:ref> to a maximum depth of 2 &#215; log 2 n to obtain small partitions. These partitions are then sorted using an insertion sort, which is a 2-part hybrid composed of Quicksort and Heapsort.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.3'>Bitonic sorting network</ns0:head><ns0:p>In computer science, a sorting network is an abstract that describes how the values to sort are compared and exchanged. A network is defined for a given number of values. It is possible to represent graphically a sorting network where horizontal lines represent the input values, and vertical connection between those lines represent compare and exchange units. The literature provides various examples of sorting networks, and our approach relies on the Bitonic sort <ns0:ref type='bibr' target='#b7'>[15]</ns0:ref>. This network is straightforward to implement and its algorithm complexity for any input is of O(n log(n) 2 ). This algorithm demonstrated good performances on parallel computers <ns0:ref type='bibr' target='#b8'>[16]</ns0:ref> and GPUs <ns0:ref type='bibr' target='#b9'>[17]</ns0:ref>. We provide a Bitonic sorting network to sort 16 values in values. We use the terms symmetric and stair exchanges to refer to the red and orange stages, respectively.</ns0:p><ns0:p>A symmetric stage is always followed by stair stages from half size to size two. The Bitonic sort does not maintain the original order of the values and thus is not stable. We can implement a sorting network by hard-coding the connections between the lines only if we know the size of the input array. In this case, we simply translate the picture into an algorithm. However, for a dynamic array size, the implementation has to be flexible by relying on formulas that define when the lines cross <ns0:ref type='bibr' target='#b10'>[18]</ns0:ref>.</ns0:p><ns0:p>3 See the libstdc++ documentation on the sorting algorithm available at https://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-4.4/a01347.html#l05207</ns0:p></ns0:div> <ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='2.2'>Vectorization</ns0:head><ns0:p>The word vectorization defines a CPU capability of applying a single operation/instruction to a vector of values, instead of a single/scalar value <ns0:ref type='bibr' target='#b11'>[19]</ns0:ref>. Thanks to this feature, the peak performance of single cores had continued to increase despite the stagnation of the clock frequency since the mid-2000s. In the meanwhile, the length of the SIMD registers (i.e., the size of the vectors) has continuously increased, which increases the performance of the chips accordingly. In the current study, the term vector has no relation to an expandable vector data structure, such as std::vector, but refers to the data type managed by the CPU in this sense. The size of the vectors is variable and depends on both the instruction set and the type of vector's elements, and corresponds to the size of the registers in the chip.</ns0:p><ns0:p>The SIMD instructions can be called in the assembly language or using intrinsic functions, which are small functions that are intended to be replaced with a single assembly instruction by the compiler. 
There is usually a one-to-one mapping between intrinsics and assembly instructions, but this is not always true, as some intrinsics are converted into several instructions. Moreover, the compiler is free to use different instructions as long as they give the same results.</ns0:p><ns0:p>SVE is a feature of ARMv8 processors. The size of the vector is not fixed at compile time (the specification limits the size to 2048 bits and ensures that it is a multiple of 128 bits), such that a binary that includes SVE instructions can be executed on any ARMv8 CPU that supports SVE, no matter the size of its registers. Figure 3 illustrates the difference between a scalar summation and a vector summation with a vector size of 256 bits.</ns0:p><ns0:p>SVE provides most classic operations that also exist in x86 vectorization extensions, such as loading a contiguous block of values from main memory and transforming it into a SIMD-vector (load), filling a SIMD-vector with a value (set), moving a SIMD-vector back into memory (store), and basic arithmetic operations. SVE also provides advanced operations like gather, scatter, indexed accesses, permutations, comparisons, and conversions. It is also possible to get the maximum or the minimum of a vector, or element-wise between two vectors.</ns0:p><ns0:p>Another significant difference with other ISs is the use of predicate vectors, i.e. Boolean vectors (svbool_t) that allow controlling the instructions more finely by selecting the affected elements, for example. Also, while in AVX-512 the value returned by a test/comparison (vpcmpd/vcmppd) is a mask (integer), in SVE the result is an svbool_t.</ns0:p><ns0:p>A minor difference, but one that impacts our implementation, is that SVE does not support a store-some as it exists in AVX-512 (vpcompressps/vcompresspd), where some values of a vector can be stored contiguously in memory. With SVE, one first has to compact the values of a vector to put the values to be saved at the beginning of the vector and then perform a store, or to use a scatter. However, both approaches need extra Boolean or index vectors and additional instructions.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Related work on vectorized sorting algorithms</ns0:head><ns0:p>The literature on sorting and vectorized sorting implementations is very large. Therefore, we only cite some studies we consider most related to our work.</ns0:p><ns0:p>Sanders et al. <ns0:ref type='bibr' target='#b12'>[20]</ns0:ref> provide a sorting technique that tries to remove branches and improve the branch prediction of a scalar sort. The results show that the method provides a speedup by a factor of 2 against the STL (the implementation of the STL was different). This study illustrates the early strategy of adapting sorting algorithms to a given hardware, and also shows the need for low-level optimizations due to the limited instructions available.</ns0:p><ns0:p>Later, Inoue et al. <ns0:ref type='bibr' target='#b13'>[21]</ns0:ref> propose a parallel sort on top of combosort, vectorized with the VMX instruction set of the IBM architecture.
Unaligned memory access is avoided, and the L2 cache is efficiently managed by using an out-of-core/blocking scheme. The authors show a speedup by a factor of 3 against the GNU C++ STL.</ns0:p><ns0:p>In a different study <ns0:ref type='bibr' target='#b14'>[22]</ns0:ref>, Furtak et al. use a sorting-network for small-sized arrays, similar to our own approach. However, instead of dividing the main array into sorted partitions (partitions of increasing contents), and applying a small efficient sort on each of those partitions, the authors perform the opposite.</ns0:p><ns0:p>They apply multiple small sorts on sub-parts of the array, and then they finish with a complicated merge scheme using extra memory to sort globally all the sub-parts. A very similar approach was later proposed by Chhugani et al. <ns0:ref type='bibr' target='#b15'>[23]</ns0:ref>.</ns0:p><ns0:p>More recently, Gueron et al. provided a new approach for AVX2 <ns0:ref type='bibr' target='#b16'>[24]</ns0:ref>. The authors use a Quicksort variant with a vectorized partitioning function, and an insertion sort once the partitions are small enough (as the STL does). The partition method relies on look-up tables, with a mapping between the comparison's result of an SIMD-vector against the pivot, and the move/permutation that must be applied to the vector.</ns0:p><ns0:p>The authors show a speedup by a factor of 4 against the STL, but their approach is not always faster than the Intel IPP library. The proposed method is not suitable for AVX-512 because the lookup tables will occupy too much memory. This issue and the use of extra memory, can be solved with the new instructions of the AVX-512. As a side remark, the authors do not compare their proposal to the standard C++ partition function. It is the only part of their algorithm that is vectorized.</ns0:p><ns0:p>In our previous work <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref>, we have proposed the first hybrid QS/Bitonic algorithm implemented with AVX-512. We have described how we can vectorize the partitioning algorithm and create a branchfree/vectorized Bitonic sorting kernel. To do so, we put the values of the input array into SIMD vectors.</ns0:p><ns0:p>Then, we sort each vector individually, and finally we exchange values between vectors either during the symmetric or stair stage. Our method was 8 times faster to sort small arrays and 1.7 times faster to sort large arrays compared to the Intel IPP library. However, our method was sequential and could not simply be converted to SVE when we consider that the vector size is unknown at compile time. In this study, we refer to this approach as the AVX-512-QS.</ns0:p><ns0:p>Hou et al. <ns0:ref type='bibr' target='#b17'>[25]</ns0:ref> designed a framework for the automatic vectorization of parallel sort on x86-based processors. Using a DSL, their tool generates a SIMD sorting network based on a formula. Their approach shows a significant speedup against STL, and especially they show a speedup of 6.7 in parallel against the sort from Intel TBB on Intel Knights Corner MIC. The method is of great interest as it avoids programming by hand the core of the sorting kernel. Any modification, such as the use of a new IS, requires upgrading the framework. To the best of our knowledge, they do not support SVE yet.</ns0:p><ns0:p>Yin et al. <ns0:ref type='bibr' target='#b18'>[26]</ns0:ref> described an efficient parallel sort on AVX-512-based multi-core and many-core architectures. 
Their approach is able to sort 1.1 billion floats per second on an Intel KNL (AVX-512).</ns0:p><ns0:p>Their parallel algorithm is similar to the one we use in the current study because they first sort sub-parts of the input array and then merge them by pairs until there is only one result. However, their parallel merging is out-of-place and requires doubling the needed memory, which is not the case for us. Besides, their Bitonic sorting kernel differs from ours, because we follow the Bitonic algorithm without the need for matrix transposition inside the registers.</ns0:p><ns0:p>Watkins et al. <ns0:ref type='bibr' target='#b19'>[27]</ns0:ref> provide an alternative approach to sort based on the merging of multiple vectors.</ns0:p><ns0:p>Their method is 2 times faster than the Intel IPP library and 5 times faster than the C-lib qsort. They can sort 500 million keys per second on an Intel KNL (AVX-512), but they also need an external array when merging, which we avoid in our approach.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Related work on vectorization with SVE</ns0:head><ns0:p>Developing optimized kernels with SVE is a recent research topic. We refer to studies that helped better understand this architecture, even if they did not focus on sorting.</ns0:p><ns0:p>Meyer et al. <ns0:ref type='bibr' target='#b20'>[28]</ns0:ref> studied the assembly code generated when implementing lattice quantum chromodynamics (LQCD) kernels. They evaluated whether the compiler was capable of generating vectorized assembly from a scalar C code, which was the case. LQCD has also been studied by Alappat et al. <ns0:ref type='bibr' target='#b22'>[29]</ns0:ref> in addition to the sparse matrix-vector product (SpMV). The authors studied various effects and properties of the A64FX, and demonstrated that for some kernels it competes with a V100 GPU.</ns0:p><ns0:p>Kodama et al. <ns0:ref type='bibr' target='#b23'>[30]</ns0:ref> tried evaluating the impact on performance when changing the vector size, while using the same hardware. Their objective was oriented to SVE since SVE kernels are vector size independent. At the time of the study, no hardware supporting SVE was available, hence the authors used an emulator. Additionally, they speculated on how the vector size could be changed, which does not match the current SVE technology. Nevertheless, they concluded that using a larger vector than the hardware registers could be beneficial, by reducing the number of instructions, but could also be detrimental due to the need for more registers.</ns0:p><ns0:p>Aoki et al. <ns0:ref type='bibr' target='#b24'>[31]</ns0:ref> implemented the H.265 video codec using SVE. Their implementation reduces the number of instructions by half, but no performance results were given.</ns0:p><ns0:p>Wan et al. <ns0:ref type='bibr' target='#b25'>[32]</ns0:ref> implemented level-2 basic linear algebra routines (BLAS). They evaluated their implementation using the Arm emulator ARMIE, which predicted a 17x speedup against Neon, the previous ARM vector ISA.</ns0:p><ns0:p>Domle <ns0:ref type='bibr' target='#b26'>[33]</ns0:ref> evaluated the performance differences for various benchmarks and five different compilers on the A64FX.
The author recommended Fujitsu for Fortran codes, GNU for integer-intensive apps, and any clang-based compiler for C/C++, but concluded that there was not a single perfect compiler and that it is advisable to test for each application.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>SORTING WITH SVE</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Overview</ns0:head><ns0:p>Our SVE-QS shares similarities with the AVX-512-QS as it is composed of two key steps. First, we partition the data recursively using the sve_partition function described in Section 3.3, as in the classical QS. Second, once the partitions are smaller than a given threshold, we sort them with the sve_bitonic_sort_wrapper function from Section 3.2. To sort in parallel, we rely on the classical parallelization scheme for the divide-and-conquer algorithm, but propose several optimizations. This allows an easy parallelization method, which can be fully implemented using OpenMP.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Bitonic-based sort on SVE vectors</ns0:head><ns0:p>In this section, we detail our Bitonic-sort to sort small arrays that have fewer than 16 times VEC_SIZE elements, where VEC_SIZE is the size of a SIMD vector. We use this function in our QS implementation to sort partitions that are small enough.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Sorting one vector</ns0:head><ns0:p>We sort a single vector by applying the same operations as the ones shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a). We perform the compare and exchange following the indexes shown in the Bitonic sorting network figure. Thanks to the vectorization, we are able to work on an entire vector without the need to iterate over the values individually. However, we cannot hard-code the indices of the elements that should be compared and exchanged, because we do not know the size of the vector. Therefore, we use a loop-based scheme where we efficiently generate permutation and Boolean vectors to perform the correct comparisons. We use the same pattern for both the symmetric and the stair stages.</ns0:p><ns0:p>In the symmetric stage, the values are first compared by contiguous pairs, e.g. each value at an even index i is compared with the value at i + 1 and each value at an odd index j is compared with the value at j &#8722; 1. Additionally, we see in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a) that the width of comparison doubles at each iteration and that the comparisons are from the sides to the center. In our approach, we use three vectors. First, a Boolean vector that shows the direction of the comparisons, i.e. for each index it tells whether it has to be compared with a value at a greater index (and will take the minimum of both) or with a value at a lower index (and will take the maximum of both). Second, we need a shift coefficient vector which gives the step of the comparisons, i.e. it tells for each index the relative position of the index to be compared with. Finally, we need an index vector that contains increasing values from 0 to N-1 (for any index i, vec[i] = i) and that we sum with the shift coefficient vector to get a permutation vector. The permutation vector tells for any index which other index it should be compared against.</ns0:p><ns0:p>We give the pseudo-code of our vectorized implementation in Algorithm 1, where the corresponding SVE instructions and possible vector values are written in comments. In the beginning, the Boolean vector must contain repeated false and true values because the values are compared by contiguous pairs. To build it, we use the svzip1 instruction, which interleaves elements from the low halves of two inputs, and pass one vector of true and a vector of false as parameters (line 7). Then, at each iteration, the number of false
In the beginning, the Boolean vector must contain repeated false and true values because the values are compared by contiguous pairs. To build it, we use the svzip1 instruction which interleaves elements from low halves of two inputs, and pass one vector of true and a vector of false as parameters (line 7). Then, at each iteration, the number of false Manuscript to be reviewed</ns0:p><ns0:p>Computer Science should double and be followed by the same number of true. To do so, we use again the svzip1 instruction, but we pass the Boolean vector as parameters (line 20). The vector of increasing indexes is built with a single SVE instruction (line 5). The shift coefficients vector is built by interleaving 1 and &#8722;1 (line 9).</ns0:p><ns0:p>The permutation index is generated by summing the two vectors (line 12) and uses to permute the input (line 14). So, at each iteration, we use the updated Boolean vector to decide if we add or subtract two times the iteration index (line 22). Also, this algorithm is never used as presented here because each of its iterations must be followed by a stair stage.</ns0:p><ns0:p>Algorithm 1: SVE Bitonic sort for one vector, symmetric stage.</ns0:p><ns0:p>Input: vec: a SVE vector to sort. Output: vec: the vector sorted. </ns0:p><ns0:formula xml:id='formula_0'>5 vecIndexes = (i &#8712; [0, N-1] &#8594; i) 6 // svzip1 -[F, T, F, T, ..., F, T] 7 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; i is odd ? False : True) 8 // svneg/svdup -[1, -1, 1, -1, ..., 1, -1] 9 vecIndexesPermOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? -1 : 1)</ns0:formula><ns0:formula xml:id='formula_1'>12 premuteIndexes = (i &#8712; [0, N-1] &#8594; vecIndexes[i] + vecIndexesPermOut[i]) 13 // svtbl -[vec[1], vec[0], vec[3], vec[2], ..., vec[N-1], vec[N-2]] 14 vecPermuted = (i &#8712; [0, N-1] &#8594; vec[premuteIndexes[i]]) 15 // svsel/svmin/svmax -[..., Min(vec[i], vec[i+1]), Max(vec[i], vec[i+1]), ...] 16 vec = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? 17 ---------Max(vec[i], vecPermuted[i]): 18 ---------Min(vec[i], vecPermuted[i])) 19 // svzip1 -[F, F, T, T, F, F, T, T, ...] 20 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i/2]) 21 // svsel/svadd/svsub -[3, 2, 1, 0, 3, 2, 1, 0, ...] 22 vecIndexesPermOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i] ? 23 -----------------vecIndexesPermOut[i]-stepOut*2 : 24 -----------------vecIndexesPermOut[i]+stepOut*2) 25 end 26 return vec</ns0:formula><ns0:p>We use the same principle in the stair stage, with one vector for the Boolean that shows the direction of the exchange and another to store the relative index for comparison. We sum this last vector with the index vector to get the permutation indices. If we study again to Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>(a), we observe that the algorithm starts by working on parts of half the size of the previous symmetric stage. Then, at each iteration, the parts are subdivided until they contain two elements. Besides, the width of the exchange is the same for all elements in an iteration and is then divided by two for the next iteration.</ns0:p><ns0:p>We provide a pseudo-code of our vectorized algorithm in Algorithm 2. To manage the Boolean vector:</ns0:p><ns0:p>we use the svuzp2 instruction that select odd elements from two inputs and concatenate them. In our case, we give a vector that contains a repeated pattern composed of false x times, followed by true x times (x a power of two) to svuzp2 to get a vector with repetitions of size x/2. 
Therefore, we pass the vector of Boolean generated during the symmetric stage to svuzp2 to initialize the new Boolean vector (line 7).</ns0:p><ns0:p>We divide the exchange step by two for all elements (line 23). The permutation (line 15) and exchange (line 17) are similar to what is performed in the symmetric stage.</ns0:p><ns0:p>The complete function to sort a vector is a mix of the symmetric (sve bitonic sort 1v symmetric) and stair (sve bitonic sort 1v stairs) functions; each iteration of the symmetric stage is followed by the inner loop of the stair stage. The corresponding C++ source code of a fully vectorized implementation is given in Appendix A.1.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Sorting more than one vectors</ns0:head><ns0:p>To sort more than one vector, we profit that the same patterns are repeated at different scales; to sort V vectors, we re-use the function that sorts V /2 vectors and so on. We provide an example to sort two vectors in Algorithm 3, where we start by sorting each vector individually using the sve bitonic sort 1v function. Then, we compare and exchange values between both vectors (line 9), and we finish by applying the same stair stage on each vector individually. Our real implementation uses an optimization that consists in a full inlining followed by a merge of the same operations done on different data. For instance, instead of two consecutive calls to sve bitonic sort 1v (lines 7 and 8), we inline the functions. But since </ns0:p><ns0:formula xml:id='formula_2'>12 premuteIndexes = (i &#8712; [0, N-1] &#8594; vecIndexes[i] + 13 --------(falseTrueVecIn[i] ? -vecIncrement[i] : vecIncrement[i])) 14 // svtbl 15 vecPermuted = (i &#8712; [0, N-1] &#8594; vec[premuteIndexes[i]]) 16 // svsel/svmin/svmax 17 vec = (i &#8712; [0, N-1] &#8594; falseTrueVecIn[i] ? 18 ---------Max(vec[i], vecPermuted[i]): 19 ---------Min(vec[i], vecPermuted[i])) 20 // svuzp2 21 falseTrueVecIn = (i &#8712; [0, N-1] &#8594; falseTrueVecIn[(i*2+1)%N]) 22 // svdiv 23 vecIncrement = (i &#8712; [0, N-1] &#8594; vecIncrement[i] / 2); 24 end 25 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[i/2]) 26 end 27 return vec</ns0:formula><ns0:p>they are similar but on different data, we merge them into one that works on both vectors at the same time.</ns0:p><ns0:p>In our sorting implementation, we provide the functions to sort up to 16 vectors.</ns0:p><ns0:p>Algorithm 3: SIMD bitonic sort for two vectors of double floating-point values.</ns0:p><ns0:p>Input: vec1 and vec2: two double floating-point SVE vectors to sort. Output: vec1 and vec2: the two vectors sorted with vec1 lower or equal than vec2. 1 function sve bitonic exchange rev(vec1, vec2) </ns0:p><ns0:formula xml:id='formula_3'>2 vec1 copy = (i &#8712; [0, N-1] &#8594; vec1[N-1-i]) 3 vec1 = (i &#8712; [0, N-1] &#8594; Min(vec1[i], vec2[i]) 4 vec2 = (i &#8712; [0, N-1] &#8594; Max(vec1 copy[i], vec2[i])</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.2.3'>Sorting small arrays</ns0:head><ns0:p>Once a partition contains less than 16 SIMD-vector elements, it can be sorted with our SVE-Bitonic functions. We select the appropriate SVE-Bitonic function (the one that matches the size of the array to sort) with a switch statement, in a function interface that we refer to as sve bitonic sort wrapper.</ns0:p><ns0:p>However, the partitions obtained from the QS do not necessarily have a size multiple of the vector's length. 
Therefore, we pad the last vector with an extra value, which is the greatest possible value for the target data type. During the execution of the sort, these last values will be compared but never exchanged and will remain at the end of the last vector.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4'>Optimization by comparing vectors' min/max values or whether vectors are already sorted</ns0:head><ns0:p>There are two main points where we can apply optimization in our implementation. The first one is to avoid exchanging values between vectors if their contents are already in the correct order, i.e. no values will be exchanged between the vectors because their values respect the ordering objective. For instance, in Algorithm 3, we can compare if the greatest value in vector vec2 (SVE instruction svmaxv) is lower than or equal to the lowest value in vector vec1 (SVE instruction svminv). If this is the case, the function can simply sort each vector individually. The same mechanism can be applied to any number of vectors, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and it can be used at function entry or inside the loops to break when it is known that no more values will be exchanged. The second optimization can be applied when we want to sort a single vector by checking if it is already sorted. Similarly to the first optimization, this check can be done at function entry or in the loops, such as at lines 2 and 10, in Algorithm 2. We propose two implementations to test if a vector is sorted and provide the details in Appendix A.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Partitioning with SVE</ns0:head><ns0:p>Our partitioning strategy is based on the AVX-512-partition. In this algorithm, we start by saving the extremities of the input array into two vectors that remain unchanged until the end of the algorithm. By doing so, we free the extremity of the array that can be overwritten. Then, in the core part of the algorithm, we load a vector and compare it to the pivot. The values lower than the pivot are stored on the left side of the array, and the values greater than the pivot are stored on the right side while moving the corresponding cursor indexes. Finally, when there is no more value to load, the two vectors that were loaded at the beginning are compared to the pivot and stored in the array accordingly.</ns0:p><ns0:p>When we implement this algorithm using SVE we obtain a Boolean vector b when we compare a vector to partition with the pivot. We use b to compact the vector and move the values lower or equal than the pivot on the left, and then we generate a secondary Boolean vector to store only as a sub-part of the vector. We manage the values greater than the pivot similarly by using the negate of b.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Sorting key/value pairs</ns0:head><ns0:p>The sorting methods we have described are designed to sort arrays of numbers. However, some applications need to sort key/value pairs. More precisely, the sort is applied on the keys, and the values contain extra information such as pointers to arbitrary data structures, for example. We extend our SVE-Bitonic and SVE-Partition functions by making sure that the same permutations/moves apply to the keys and the values. In the sort kernels, we replace the minimum and maximum statements with a comparison operator that gives us a Boolean vector. We use this vector to transform both the vector of keys and the vector of values. 
For the partitioning kernel, we already use a comparison operator, therefore, we add extra code to apply the same transformations to the vector of values and the vector of keys.</ns0:p><ns0:p>In terms of high-level data structure, we support two approaches. In the first one, we store the keys and the values in two distinct arrays, which allow us to use contiguous load/store. In the second one, the key/value is stored by pair contiguously in a single array, such that loading/storing requires non-contiguous memory accesses.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Parallel sorting</ns0:head><ns0:p>Our parallel implementation is based on the QS-par that we extend with several optimizations. In the QS-par parallelization strategy, it is possible to avoid having too many tasks or tasks on too small partitions by stopping creating tasks after a given recursive level. This approach allows to fix the number of tasks at the beginning, but could end in an unbalanced configuration (if the tasks have different workload) that is difficult to resolve on the fly. Therefore, in our implementation, we create a task for every partition larger than the L1 cache, as shown in Algorithm 4 line 26. However, we do not rely on the OpenMP task statement because it is impossible to control the data locality. Instead, we use one task list per thread (lines 2, 11 and 33). Each thread uses its list as a stack to store the intervals of the recursive calls and also as a task list where each interval can be processed in a task. In a steady-state, each thread accesses only its list: after each partitioning, a thread puts the interval of the first sub-partition in the list and continues with the second sub-partition (line 35). When the partition is smaller than the L1 cache, the thread executes the sequential SVE-QS. We use a work-stealing strategy when a thread has an empty list such that the thread will try to pick a task in others' lists. The order of access to others' lists is done such that a thread accesses the lists from threads of closer ids to far ids, e.g. a thread of id i will look at i + 1, i &#8722; 1, i + 2, and so on. We refer to this optimized version as the SVE-QS-par.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE STUDY</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Configuration</ns0:head><ns0:p>We assess our method on an ARMv8.2 A64FX -Fujitsu with 48 cores at 1.8GHz and 512-bit SVE, i. shared L2 cache per CMG. For the sequential executions, we pinned the process with taskset -c 0, and for the parallel executions, we use OMP PROC BIND=TRUE. We use the ARM compiler 20.3 (based on LLVM 9.0.1) with the aggressive optimization flag -O3. We compare our sequential implementations against the GNU STL 20200312 from which we use the std::sort and std::partition functions. We also compare against SVE512-Bitonic, which is an implementation that we have obtained by performing a translation of our original AVX-512 into SVE. This implementation works only for 512-bit SVE, but this makes it possible to hard-code all the indices of the compare and exchange of the Bitonic algorithm. Moreover, SVE512-Bitonic does not use any loop, i.e. it can be seen as if we had fully unrolled the loops of the SVE-Bitonic.</ns0:p><ns0:p>We compare our parallel implementation against the Boost 4 1.73.0 from which we use the block in direct sort function. 
The test file used for the following benchmark is available online (https:// gitlab.inria.fr/bramas/arm-sve-sort) and includes the different sorts presented in this study, plus some additional strategies and tests. 5 Our QS uses a 5-values median pivot selection (whereas the STL sort function uses a 3-values median). The arrays to sort are populated with randomly generated values. Our implementation does not include the potential optimizations described in Section 3.2.4 that can be applied when there is a chance that parts or totality of the input array are already sorted. 6 As it is possible to virtually change the size of the SIMD vectors at runtime, we evaluated if using different vector sizes (128 or 256) could increase the performance of our approach. It appears that the performance was always worse, and consequently we decided not to include these results in the current study.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Performance to sort small arrays</ns0:head><ns0:p>We provide in Figure <ns0:ref type='figure' target='#fig_12'>4</ns0:ref> the execution times to sort arrays of 1 element to 16 &#215; VEC SIZE elements. This corresponds to at most 128 double floating-point values, or 256 integer values. We test all the sizes by step 1, such that we include sizes not multiple of the SIMD-vector's length. For more than 20 values, the SVE-Bitonic always delivers better performance than the STL. The speedup is significant and increases with the number of values to reach 5 for 256 integer values. The execution time per item increases every VEC SIZE values because the cost of sorting is not tied to the number of values but to the number of SIMD-vectors to sort, as explained in Section 3.2.3. For example, we have to sort two SIMD-vector of 16 values to process from 17 to 32 integers. Our method reaches a speedup of 3.6 to sort key/value pairs. To sort key/value pairs, we obtain similar performance if we sort pairs of integers stored contiguously or two arrays of integers, one for the keys and one for the values. Comparing our two SVE implementations, SVE-Bitonic appears more efficient than SVE512-bitonic, except for very small number of values. This means that considering a static vector size of 512 bits, with compare-exchange indices hard coded and no loops/branches, does not provide any benefit, and is even slower for more than 70 values. This means that, for our kernels, the CPU manages more easily loops with branches (SVE-bitonic) than a large amount of instructions without branches (SVE512-bitonic). Moreover, the hard-coded exchange indices are stored in memory and should be load to register, which appear to hurt the performance compared to building these indices using several instructions. Sorting double floating-points values or pairs of integers takes similar duration up to 64 values, then with more values it is faster to sort pairs of integers.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Partitioning performance</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>5</ns0:ref> shows the execution times to partition using our SVE-Partition or the STL's partition function.</ns0:p><ns0:p>Our method provides again a speedup of an average factor of 4 for integers and key/values (with two arrays), and 3 for floating-point values. We see no difference if the data fit in the caches L1/L2 or not, neither in terms of performance nor in the difference between the STL and our implementation. 
However, there is a significant difference between partitioning two arrays of integers (one for the key and the other for the values) or one array of pairs of integers. The only difference between both implementations is that we work with distinct svint32 t vectors in the first one, and with svint32x2 t vector pairs in the second.</ns0:p><ns0:p>But the difference is mainly in the memory accesses during the loads/stores. The partitioning of one array or two arrays of integers appears equivalent, and this can be unexpected because we need more instructions when managing the latter. Indeed, we have to apply the same transformations to the keys and the values, and we have twice memory accesses.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Performance to sort large arrays</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> shows the execution times to sort arrays up to a size of &#8776; 10 9 items. Our SVE-QS is always faster in all configurations. The difference between SVE-QS and the STL sort is stable for size greater than 10 3 values with a speedup of more than 4 to our benefit to sort integers. There is an effect when sorting 64 values (the left-wise points) as the execution time is not the same as the one observed when sorting less than 16 vectors (Figure <ns0:ref type='figure' target='#fig_12'>4</ns0:ref>). The only difference is that here we call the main SVE-QS functions, which call the SVE-Bitonic functions after just one test on the size, whereas in the previous results we call the SVE-Bitonic functions directly. We observe that when sorting key/value pairs, there is again a benefit when using two distinct arrays of scalars compared with a single array of pairs. From the previous results, it is clear that this difference comes from the partitioning for which the difference also exists (Figure <ns0:ref type='figure' target='#fig_13'>5</ns0:ref>), whereas the difference is negligible in the sorting of arrays smaller than 16 vectors (Figure <ns0:ref type='figure' target='#fig_12'>4</ns0:ref>). However, as the size of the array increases, this difference vanishes, and it becomes even faster to sort Floating-point values than keys/values.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Performance of the parallel version</ns0:head><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> shows the performance for different number of threads of a parallel sort implementation from the boost library (block indirect sort) against our task-based implementation (SVE-QS-par). Our approach is faster in all configurations, but the results show the benefit of using a merge strategy to sort large arrays, as in block indirect sort. Indeed, as the number of threads increases, the SVE-QS-par becomes faster but reaches a limit. Using more than 8 threads (16, 32 or 48 threads) does not provide any benefit, and the four curves for 8, 16, 32, and 48 threads, overlap or are very close for both data types. Moreover, while 1 thread executions show that our SVE-QS-par is faster for both data types, for large arrays the block indirect sort provides very close performance. This illustrates the limit of the divide-and-conquer parallelization strategy to process large arrays since the first steps, i.e. the partitioning steps, are not or poorly parallel. Moreover, using more than 12 threads (the number of threads per CMG), while working in place on the original array implies memory transfers across the memory nodes, which impacts the scalability. 
Whereas the block indirect sort, which uses a merge kernel but at the cost of using additional data buffers, has a more stable scalability. Nevertheless, our approach is trivial to implement and delivers high performance when using few threads, which is valuable when an application uses multiple processes per node.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.6'>Comparison with the AVX-512 implementation</ns0:head><ns0:p>The results obtained in our previous study on Intel Xeon Platinum 8170 Skylake CPU at 2.10GHz <ns0:ref type='bibr' target='#b2'>[9]</ns0:ref> shows that our AVX-512-QS was sorting at a speed of &#8776; 10 &#8722;9 second per element (obtained by T /N &#8226; ln(N)).</ns0:p><ns0:p>This was almost 10 times faster than the STL (10 &#8722;8 second per element). The speedup obtained with SVE in the current study is lower and does not come from our new implementation, which is generic regarding the vector size, because the SVE512-QS is not faster, either. The difference does not come either from the memory accesses, because it is significant for small arrays (that fit in the L1 cache), or the number of vectorial registers, which is 32 for both hardware. Profiling the code reveals that the cycles per instruction is around 1.7 for both SVE512-QS and SVE-QS in sequential, which is not ideal. The L2 cache miss rate is lower than 10%, which indicates that the memory access pattern is adequate. The memory bandwidth is given in Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>. We observe that the peak of HBM2 (256GB/s) is not reached. Additionally, the table indicates that to sort 4GB of data, our approach will read/write 252GB of data, but it was already the case for AVX-512-QS. From the hardware specification of the A64FX [34], we can observe that most SIMD SVE instructions have a latency between 4 and 9 cycles. Therefore, we conclude that the difference between our AVX-512 and SVE versions comes from the cost of the SIMD instructions and the pipelining of these because the memory access appears fine, and the difference is already significant for small arrays. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>In this paper, we described new implementations of the Bitonic sorting network and the partition algorithm that have been designed for the SVE instruction set. These two algorithms are used in our Quicksort variant, which makes it possible to have a fully vectorized implementation. Our approach shows superior performance on ARMv8.2 (A64FX) in all configurations against the GNU C++ STL. It provides a speedup up of 5 when sorting small arrays (less than 16 SIMD-vectors), and a speedup above 4 for large arrays. We also demonstrate that our algorithm is less efficient when we fully unroll the loops and use hard-coded exchange indices in the Bitonic stage (by considering that the vector if of size 512bits). This strategy was efficient when implemented with AVX512 and executed on Intel Skylake. Our parallel implementation is efficient, but it could be improved when working on large arrays by using a merge on sorted partitions instead of a recursive parallel strategy (at a cost of using external memory buffers). 
In addition, we would like to compare the performance obtained with different compilers because there are many ways to transform and optimize a C++ code with intrinsics into a binary.</ns0:p><ns0:p>Besides, these results is a good example to foster the community to revisit common problems that have kernels for x86 vectorial extensions but not for SVE yet. Indeed, as the ARM-based architecture will become available on more HPC platforms, having high-performance libraries of all domains will become critical. Moreover, some algorithms that were not competitive when implemented with x86 ISA may be easier to vectorize with SVE, thanks to the novelties it provides, and achieve high-performance. Finally, the source code of our implementation is publicly available and ready to be used and compared against.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>ACKNOWLEDGMENT</ns0:head><ns0:p>This work used the Isambard 2 UK National Tier-2 HPC Service (http://gw4.ac.uk/isambard/) operated by GW4 and the UK Met Office, which is an EPSRC project (EP/T022078/1). In addition, this work used the Farm-SVE library <ns0:ref type='bibr' target='#b27'>[35]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>A APPENDIX</ns0:head></ns0:div> <ns0:div><ns0:head>A.1 Source code of sorting one vector of integers</ns0:head><ns0:p>In Code 1, we provide the implementation of sorting one vector using Bitonic sorting network and SVE.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Quicksort example to sort [4, 2, 3, 1, 6] to [1, 2, 3, 4, 6]. The pivot is equal to the value in the middle: the first pivot is 3, then at the second recursion level it is 2 and 6. l is the left index, and r the right index.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(a). The execution of the example goes from left to right as a timeline. The values are moved from left to right, and when they cross an exchange unit they are potentially transferred along the vertical bar.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(b) provide a real example where we print the values along the horizontal lines when sorting 8</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Bitonic sorting network examples. In red boxes, the exchanges are done from extremities to the center, and we refer to it as the symmetric stage. Whereas in orange boxes, the exchanges are done with a linear progression, and we refer to it as the stair stage. (a) Bitonic sorting network for input of size 16. All vertical bars/switches exchange values in the same direction. (b) Example of 8 values sorted by a Bitonic sorting network.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Summation example of single precision floating-point values using : ( ) scalar standard C++ code and ( ) SVE SIMD-vector of 8 values (considering that a vector size of 256 bits). In the resulting vector, each element will be the summation of the values from a and b at the same position. A summation using a predicate vector ( ) is shown. 
In this case, the resulting vector contains 0 at the positions where t f is false.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>6/ 18</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>1 function sve bitonic sort 1v symmetric(vec) 2 // Number of values in a vector 3 N = get hardware size() 4 // svindex -[O, 1, ...., N-1]</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>10 for</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>stepOut from 1 to N-1, doubling stepOut at each step do 11 // svadd -[1, 0, 3, 2, ..., N-1, N-2]</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>7 / 18 PeerJ 8 /</ns0:head><ns0:label>7188</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021) Manuscript to be reviewed Computer Science Algorithm 2: SVE Bitonic sort for one vector, stair stage. The gray lines are copied from the symmetric stage (Algorithm 1) Input: vec: a SVE vector to sort. Output: vec: the vector sorted. 1 function sve bitonic sort 1v stairs(vec) 2 N = get hardware size() 3 vecIndexes = (i &#8712; [0, N-1] &#8594; i) 4 falseTrueVecOut = (i &#8712; [0, N-1] &#8594; i is odd ? False : True) 5 for stepOut from 1 to N-1, doubling stepOut at each step do 6 // svuzp2 7 falseTrueVecIn = (i &#8712; [0, N-1] &#8594; falseTrueVecOut[(i*2+1)%N]) / svdup -[stepOut/2, stepOut/2, ...] 9 vecIncrement = (i &#8712; [0, N-1] &#8594; stepOut/2) 10 for stepIn from stepOut/2 to 1, dividing stepIn by 2 at each step do 11 // svadd/svneg -[stepOut/4, stepOut/4,..., -stepOut/4, -stepOut/4]</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>5 9 [</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>return {vec1, vec2} 6 function sve bitonic sort 2v(vec1, vec2) 7 vec1 = sve bitonic sort 1v(vec1) 8 vec2 = sve bitonic sort 1v(vec2) vec1, vec2] = sve bitonic exchange rev(vec1, vec2) 10 vec1 = sve bitonic sort 1v stairs(vec1) 11 vec2 = sve bitonic sort 1v stairs(vec2)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>8 / 18 PeerJ</ns0:head><ns0:label>818</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>7 core 8 while 9 /if interval is null then 13 if current thread idle is False then 14 current 24 end 25 function 30 /</ns0:head><ns0:label>7891314242530</ns0:label><ns0:figDesc>e. a vector can contain 16 integers and 8 double floating-point values. The node has 32 GB HBM2 memory arranged in 4 core memory groups (CMGs) with 12 cores and 8GB each, 64KB private L1 cache, 8MB 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021) Manuscript to be reviewed Computer Science Algorithm 4: Simplified algorithm of SVE-QS-par. Input: array: data to sort. 1 function sve par sort(array, N) par sort(array, 0, N, buckets) nb threads idle = nb threads do / Try to get a task, from current thread's list 10 // then neighbors' lists, etc. 
11 interval = steal task(buckets) 12 sort(array, interval.start, interval.end, buckets) 23 end core par sort(array, start, end, buckets) 26 if (start-end) &#215; size of element &#8804; size of L1 then 27 // Sort sequentially 28 sve bitonic sort(array, start, end) 29 else / Partition the array 31 p = sve partition(array, start, end) 32 // Put first partition in the buckets 33 insert(buckets[current thread id()], p.second partition) 34 // Directly work on first partition 35 core par sort(array, p.first partition.start, p.first partition.end, buckets) 36 end</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Execution time divided by n ln(n) to sort from 1 to 16 &#215; VEC SIZE values. The execution time is obtained from the average of 2 &#8226; 10 3 sorts with different values for each size. The speedup of the SVE-Bitonic against the STL is shown above the SVE-Bitonic lines. Key/value integers as a std::pair are plot with dashed lines, and as two distinct integer arrays (int*[2]) are plot with dense lines.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Execution time divided by n of elements to partition arrays filled with random values with sizes from 64 to &#8776; 10 9 elements.The pivot is selected randomly. The execution time is obtained from the average of 20 executions with different values. The speedup of the SVE-partition against the STL is shown above the lines. The vertical lines represent the caches relatively to the processed data type (&#8722; for the integers and &#8226; &#8722; &#8226; for floating-points and the key/value integers). Key/value integers as a std::pair are plot with dashed lines, and as two distinct integer arrays (int*[2]) are plot with dense lines.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 7 . 18 PeerJ</ns0:head><ns0:label>718</ns0:label><ns0:figDesc>Figure 7. Execution time divided by n ln(n) to sort in parallel arrays filled with random values with sizes from 512 to &#8776; 10 9 elements. The execution time is obtained from the average of 5 executions with different values. The speedup of the parallel SVE-QS-par against the sequential execution is shown above the lines for 16 and 48 threads. The vertical lines represent the caches relatively to the processed data type (&#8722; for the integers and &#8226; &#8722; &#8226; for the floating-points).</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>4/18 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Execution time divided by n ln(n) to sort arrays filled with random values with sizes from 64 to &#8776; 10 9 elements. The execution time is obtained from the average of 5 executions with different values. The speedup of the SVE-QS against the STL is shown above the SVE-QS lines. The vertical lines represent the caches relatively to the processed data type (&#8722; for the integers and &#8226; &#8722; &#8226; for floating-points and the integer pairs). 
Key/value integers as a std::pair are plot with dashed lines, and as two distinct integer arrays (int*[2]) are plot with dense lines.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science 10 2 10 3 10 &#8722;8.6 10 &#8722;8.4 10 &#8722;8.2 10 &#8722;8 10 &#8722;7.8 2.89 3.63 4.17 2.70 2.29 2.58 Time in s/ n ln(n) Key/value integer pairs (std::pair) Integers (int) Floating-points (double) 10 4 10 5 4.32 4.39 2.71 2.70 Number of integer or floating-point values n std::sort 10 6 4.44 2.73 Key/value integers (int*[2]) std::sort 10 2 10 3 10 4 10 5 10 6 10 &#8722;8.6 10 &#8722;8.4 10 &#8722;8.2 10 &#8722;8 10 &#8722;7.8 1.54 2.75 3.43 3.63 3.74 3.83 2.54 2.57 2.51 2.40 2.29 2.23 Number of pair values n Time in s/ n ln(n) Integers (int) boost::block indirect sort SVE-QS-par 1 Thread 8 Thread 10 3 10 4 10 5 10 6 10 7 10 &#8722;10 10 &#8722;9 10 &#8722;8 1.07 1.00 0.94 4.92 8.66 11.96 10 7 4.46 SVE-QS 2.72 SVE-QS 10 7 4.05 2.18 16 Thread 32 Thread 10 8 10 8 4.45 SVE512-QS 10 9 4.49 2.75 2.73 SVE512-QS 10 8 10 9 4.10 4.16 2.14 2.12 48 Thread 10 9 12.08 14.65 Number of integer values n Time in s/ n ln(n) Floating-points (double) boost::block indirect sort SVE-QS-par 1 Thread 16 Thread 48 Thread 8 Thread 32 Thread 10 3 10 4 10 5 10 6 10 7 10 8 10 9 10 &#8722;8 1.02 1.00 6.10 10.46 12.04 13.57 13.54 Number of floating-point values n Figure 6. Computer Science 10 &#8722;9 1.59 Time in s/ n ln(n)</ns0:cell></ns0:row></ns0:table><ns0:note>13/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021) Manuscript to be reviewed 14/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Amount of memory accessed and corresponding bandwidth for SVE-QS and SVE512-QS to sort arrays of integers of size N. 
The accesses are measured by capturing the calls to SIMD loads/stores.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>SVE-QS</ns0:cell><ns0:cell cols='2'>SVE512-QS</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>Size (GB)</ns0:cell><ns0:cell>Memory</ns0:cell><ns0:cell>Bandwith</ns0:cell><ns0:cell>Memory</ns0:cell><ns0:cell>Bandwith</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>read/write (GB)</ns0:cell><ns0:cell>(GB/s)</ns0:cell><ns0:cell>read/write (GB)</ns0:cell><ns0:cell>(GB/s)</ns0:cell></ns0:row><ns0:row><ns0:cell>2 6</ns0:cell><ns0:cell>2.56E-07</ns0:cell><ns0:cell>5.12E-07</ns0:cell><ns0:cell>3.76E-01</ns0:cell><ns0:cell>2.75E-06</ns0:cell><ns0:cell>1.471</ns0:cell></ns0:row><ns0:row><ns0:cell>2 9</ns0:cell><ns0:cell>2.05E-06</ns0:cell><ns0:cell>1.38E-05</ns0:cell><ns0:cell>1.325</ns0:cell><ns0:cell>3.88E-05</ns0:cell><ns0:cell>3.370</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 12 1.64E-05</ns0:cell><ns0:cell>2.26E-04</ns0:cell><ns0:cell>2.427</ns0:cell><ns0:cell>3.96E-04</ns0:cell><ns0:cell>3.768</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 15 1.31E-04</ns0:cell><ns0:cell>2.67E-03</ns0:cell><ns0:cell>3.064</ns0:cell><ns0:cell>4.19E-03</ns0:cell><ns0:cell>4.289</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 18 1.05E-03</ns0:cell><ns0:cell>2.95E-02</ns0:cell><ns0:cell>3.651</ns0:cell><ns0:cell>4.05E-02</ns0:cell><ns0:cell>4.467</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 21 8.39E-03</ns0:cell><ns0:cell>3.10E-01</ns0:cell><ns0:cell>4.192</ns0:cell><ns0:cell>3.98E-01</ns0:cell><ns0:cell>4.870</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 24 6.71E-02</ns0:cell><ns0:cell>2.947</ns0:cell><ns0:cell>4.420</ns0:cell><ns0:cell>3.644</ns0:cell><ns0:cell>4.995</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 27 5.37E-01</ns0:cell><ns0:cell>27.726</ns0:cell><ns0:cell>4.645</ns0:cell><ns0:cell>33.051</ns0:cell><ns0:cell>5.087</ns0:cell></ns0:row><ns0:row><ns0:cell>2 30</ns0:cell><ns0:cell>4.294</ns0:cell><ns0:cell>252.233</ns0:cell><ns0:cell>4.822</ns0:cell><ns0:cell>299.740</ns0:cell><ns0:cell>5.257</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='1'>The current study provides a translation of our AVX-SORT into SVE but also a completely new approach which works when the vector size is unknown at compile time.2 The functions described in the current study are available at https://gitlab.inria.fr/bramas/sve-sort. This repository includes a clean header-only library and a test file that generates the performance study of the current manuscript. The code is under MIT license.2/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='4'>https://www.boost.org/5 It can be executed on any CPU using the Farm-SVE library (https://gitlab.inria.fr/bramas/farm-sve).6 This implementation is partially implemented in the branch optim of the code repository.10/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='16'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62923:2:0:NEW 12 Oct 2021)</ns0:note> </ns0:body> "
"A fast vectorized sorting implementation based on the ARM scalable vector extension (SVE). Answer to Reviewers Bérenger Bramas October 12, 2021 1 Summary We would like to thank the reviewer for the valuable comments. We have updated the paper according to all suggestions and we provide a point to point response in the following. We also provide a latex-diff version of the paper that highlights the modifications from the previous version. 2 Reviewer 1 (Anonymous) Thanks for the authors’ detailed response, and there is one remaining question. In the description of Figure 7, it’s said ”using more than 8 threads does not provide any benefit”, which is not true when the number of values is large, as shown in the figure. We would like to thank the reviewer for pointing that this sentence was not clear enough. Using more than 8 threads provides a benefit against boost, but does not provide a benefit against 8 threads (said differently using 16, 32 or 48 threads is not better than 8 threads). Therefore, we updated the sentence to make it clear. (1) Figure 7 is split so that maybe markers could be used to differentiate lines instead of line width. We updated the markers, but also keep the different lines. (2) there is only one subsubsection (2.2.1), and maybe it’s better to remove it. We would like to thank the reviewer, we removed the title. 1 "
Here is a paper. Please give your review comments after reading it.
274
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. A probabilistic ontology provides constructs for representing uncertainty in domain ontologies. While the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. To address the gap, this paper presents the Uncertainty Modeling Process for Semantic Technology (UMP-ST), a new methodology for modeling probabilistic ontologies. To explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-of-concept probabilistic ontology. The resulting domain model is intended to support identification of fraud in public procurements in Brazil. While the case study illustrates the development of a probabilistic ontology in the PR-OWL probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The ability to represent and reason with uncertainty is important across a wide range of domains. For this reason, there is a need for a well-founded integration of uncertainty representation into ontology languages. In recognition of this need, the past decade has seen a significant increase in formalisms that integrate uncertainty representation into ontology languages. This has given birth to several new languages such as: PR-OWL <ns0:ref type='bibr' target='#b15'>(Costa, 2005;</ns0:ref><ns0:ref type='bibr' target='#b18'>Costa et al., 2005</ns0:ref><ns0:ref type='bibr' target='#b20'>Costa et al., , 2008;;</ns0:ref><ns0:ref type='bibr' target='#b7'>Carvalho, 2011;</ns0:ref><ns0:ref type='bibr' target='#b13'>Carvalho et al., 2013)</ns0:ref>, OntoBayes <ns0:ref type='bibr' target='#b80'>(Yang and Calmet, 2005)</ns0:ref>, BayesOWL <ns0:ref type='bibr' target='#b28'>(Ding et al., 2006)</ns0:ref>, P-CLASSIC <ns0:ref type='bibr' target='#b52'>(Koller et al., 1997)</ns0:ref> and probabilistic extensions of SHIF(D) and SHOIN(D) <ns0:ref type='bibr' target='#b59'>(Lukasiewicz, 2008)</ns0:ref>.</ns0:p><ns0:p>However, the increased expressive power of these languages creates new challenges for the ontology designer. In addition to developing a formal representation of entities and relationships in a domain, the ontology engineer must develop a formal characterization of the uncertainty associated with attributes of entities and relationships among them. While a robust literature exists in both ontology engineering <ns0:ref type='bibr' target='#b2'>(Allemang and Hendler, 2008;</ns0:ref><ns0:ref type='bibr' target='#b38'>Gomez-Perez et al., 2004)</ns0:ref> and knoweldge engineering for probability models <ns0:ref type='bibr' target='#b56'>(Laskey and Mahoney, 2000;</ns0:ref><ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b53'>Korb and Nicholson, 2003;</ns0:ref><ns0:ref type='bibr' target='#b64'>O'Hagan et al., 2006)</ns0:ref>, these fields have developed largely independently. 
The literature contains very little guidance on how to build ontologies that capture knowledge about domain uncertainties.</ns0:p><ns0:p>To fill the gap, this paper describes the Uncertainty Modeling Process for Semantic Technology (UMP-ST), a methodology for defining a probabilistic ontology and using it for plausible reasoning in applications that use semantic technology. The methodology is illustrated through a use case in which semantic technology is applied to the problem of identifying fraud in public procurement in Brazil. The purpose of the use case is to show how to apply the methodology on a simplified but realistic problem, and to provide practical guidance to probabilistic ontology designers on how to apply the UMP-ST. This paper goes beyond previous work on UMP-ST (e.g., <ns0:ref type='bibr' target='#b41'>Haberlin, 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Carvalho et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alencar, 2015)</ns0:ref> to provide a comprehensive explanation of the methodology in the context of application to a real world problem, along with pragmatic suggestions for how to apply the methodology in practice. For purpose of exposition, our focus is primarily on the procurement fraud use case, but the UMP-ST is applicable to any domain in which semantic technology can be applied.</ns0:p><ns0:p>Our purpose is not to provide rigorous scientific evidence for the value of UMP-ST in comparison with any other methodology or no methodology. <ns0:ref type='bibr'>de Hoog (1998)</ns0:ref> says, 'it is extremely difficult to judge the value of a methodology in an objective way. Experimentation is of course the proper way to do it, but it is hardly feasible because there are too many conditions that cannot be controlled.' The value of a methodology like UMP-ST lies in its ability to support a complex system development effort that extends over a long period of time and requires significant resources to implement. Besides the large number of uncontrolled variables, the resources to implement a single case of sufficient complexity is difficult; experimentation requiring multiple parallel implementations is prohibitive. Nevertheless, the experience of our development team is that the structure provided by UMP-ST was essential to the ability to capture the expert's knowledge in a model whose results were a reasonable match to the expert's judgments.</ns0:p><ns0:p>The paper is organized as follows. The next section reviews existing design methodologies that provided inspiration for UMP-ST. The following section introduces the UMP-ST process. Next, we introduce our use case devoted to identifying fraud in public procurement in Brazil. The fifth section explains the four disciplines of the UMP-ST in the context of the fraud use case, and is followed by a section discussing applicability of the UMP-ST to other domains. The paper concludes with a section on future work and a final section presenting our concluding remarks.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Successful development of any complex system requires following a structured, systematic process for design, implementation and evaluation. Existing software and systems engineering processes are useful as starting points, but must be tailored for engineering a probabilistic ontology. 
The UMP-ST draws upon a number of related processes for software engineering, ontology engineering, and Bayesian network engineering to provide a process tailored to probabilistic ontology engineering. To provide a context for introducing the UMP-ST, this section reviews related literature on design processes that provided an initial basis for tailoring the UMP-ST.</ns0:p></ns0:div> <ns0:div><ns0:head>The Unified Process</ns0:head><ns0:p>The UP <ns0:ref type='bibr' target='#b50'>(Jacobson et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b54'>Kruchten, 2000;</ns0:ref><ns0:ref type='bibr' target='#b3'>Balduino, 2007)</ns0:ref> is a widely applied software engineering process. It has three main characteristics: it is iterative and incremental; it is architecture centric; and it is risk focused. Each project is divided into small chunks, called iterations, concluding in delivery of executable code. These frequent deliverables yield an incremental implementation of the system. A key deliverable is the executable architecture, which is a partial implementation of the system that validates the architecture and builds the foundation of the system. Finally, the UP mitigates risk by prioritizing the highest risk features for early implementation. The reasoning is simple: if a critical aspect of the system is going to fail, it is better to discover this early enough to rework the design or cancel the project, than to realize after the fact that large amounts of resources have been wasted on a non-viable project.</ns0:p><ns0:p>The UP defines the project lifecycle as composed of four phases: Inception; Elaboration; Construction; and Transition. Inception is usually the shortest phase. The main goal is to define the justification for the project, its scope, the risks involved, and the key requirements. In the elaboration phase, the primary concerns are to define most of the requirements, to address the known risks, and to define and validate the system architecture. The Construction phase is the longest phase, where most of the development process resides. This phase is usually broken down into small iterations with executable code being delivered at the end of each iteration. Finally, in the Transition phase the system is deployed, the users are trained, and initial feedback is collected to improve the system.</ns0:p><ns0:p>To support the project lifecycle, the UP defines several disciplines or workflows. Each discipline describes a sequence of activities in which actors or workers produce products or artifacts to achieve a result of observable value. For example, a developer might carry out a programming activity using the system specification in order to produce both source and executable code. There are several variations of the Unified Process (e.g., Rational Unified Process, Agile Unified Process, Enterprise Unified Process). While each has its own set of disciplines, the following disciplines are common to most: Business Modeling, responsible for documenting the business processes in a language common to both business is proposed by six of them, but not described in detail. 
The knowledge acquisition and documentation activities are proposed by three and described by two, while the evaluation activity is proposed by two and described by two in detail.</ns0:p></ns0:div> <ns0:div><ns0:head>Probability Elicitation</ns0:head><ns0:p>The literature on eliciting probabilities from experts has a long history (e.g., <ns0:ref type='bibr' target='#b79'>Winkler, 1967;</ns0:ref><ns0:ref type='bibr' target='#b48'>Huber, 1974;</ns0:ref><ns0:ref type='bibr' target='#b77'>Wallsten and Budescu, 1983)</ns0:ref>. At the interface between cognitive science and Bayesian probability theory, researchers have examined biases in unaided human judgment (e.g., <ns0:ref type='bibr' target='#b51'>Kahneman et al., 1982)</ns0:ref> and have devised ways to counteract those biases (e.g., <ns0:ref type='bibr' target='#b14'>Clemen and Reilly, 2004;</ns0:ref><ns0:ref type='bibr' target='#b5'>Burgman et al., 2006)</ns0:ref>. Several authors have defined structured processes or protocols for eliciting probabilities from experts (e.g., <ns0:ref type='bibr' target='#b14'>Clemen and Reilly, 2004;</ns0:ref><ns0:ref type='bibr' target='#b34'>Garthwaite et al., 2005)</ns0:ref>. There is general agreement on the steps in the elicitation process. The seven steps described by <ns0:ref type='bibr' target='#b14'>Clemen and Reilly (2004)</ns0:ref> are: understanding the problem; identifying and recruiting experts; motivating the experts; structuring and decomposition; probability and assessment training; probability elicitation and verification; and aggregating the probabilities. A recent comprehensive reference for probability elicitation is <ns0:ref type='bibr' target='#b64'>(O'Hagan et al., 2006)</ns0:ref>.</ns0:p><ns0:p>The advent of graphical probability models <ns0:ref type='bibr' target='#b66'>Pearl (1988)</ns0:ref> has created the problem of eliciting the very large number of probabilities needed to specify a graphical model with dozens to hundreds of variables (c.f., <ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b69'>Renooij, 2001)</ns0:ref>. <ns0:ref type='bibr' target='#b61'>Mahoney and Laskey (1998)</ns0:ref> defined a systematic process for constructing Bayesian network models. Their process considered elicitation of structural assumptions as well as probability distributions. It is an iterative and incremental process that produces a series of prototypes, in which each cycle is used to refine requirements for the next cycle.</ns0:p></ns0:div> <ns0:div><ns0:head>UNCERTAINTY MODELING PROCESS FOR SEMANTIC TECHNOLOGY</ns0:head><ns0:p>The process of creating and using a probabilistic ontology typically occurs in three stages: First is modeling the domain; next is populating the model with situation-specific information; and third is using the model and situation-specific information for reasoning. Modeling a domain means constructing a representation of aspects of the domain for purposes of understanding, explaining, predicting, or simulating those aspects. For our purposes, the model represents the kinds of entities that can exist in the domain, their attributes, the relationships they can have to each other, the processes in which they can participate, and the rules that govern their behavior. It also includes uncertainties about all these aspects. 
There are many sources of uncertainty: e.g., causes may be non-deterministically related to their effects; events may be only indirectly observable through noisy channels; association of observations to the generating events may be unknown; phenomena in the domain may be subject to statistical fluctuation; the structure of and associations among domain entities may exhibit substantial variation; and/or the future behavior of domain entities may be imperfectly predictable (e.g., <ns0:ref type='bibr' target='#b73'>Schum and Starace, 2001;</ns0:ref><ns0:ref type='bibr' target='#b58'>Laskey and Laskey, 2008;</ns0:ref><ns0:ref type='bibr' target='#b16'>Costa et al., 2012)</ns0:ref>. Once these and other relevant sources of uncertainty are captured in a domain model, the model can be applied to a specific situation by populating it with data about the situation. Finally, the inference engine can be called upon to answer queries about the specific situation. Unlike traditional semantic systems that can handle only deterministic queries, queries with a probabilistic ontology can return soft results. For example, consider a query about whether an inappropriate relationship exists between a procurement official and a bidder. A reasoning system for a standard ontology can return only procurements in which such a relationship can be proven, while a reasoner for a probabilistic ontology can return a probability that such a relationship exists.</ns0:p><ns0:p>The UMP-ST is an iterative and incremental process, based on the Unified Process (UP), for designing a probabilistic ontology. While UP serves as the starting point, UMP-ST draws upon and is consistent with the ontology engineering and probability elicitation processes described in the previous section, thus tailoring the UP for probabilistic ontology design.</ns0:p><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, the UMP-ST includes all phases of the UP, but focuses only on the Requirements, Analysis &amp; Design, Implementation, and Test disciplines. The figure depicts the intensity of each discipline during the UMP-ST. Like the UP, UMP-ST is iterative and incremental. The basic idea behind iterative enhancement is to model the domain incrementally, allowing the modeler to take advantage of what is learned during earlier iterations of the model in designing and implementing later iterations. For this reason, each phase includes all four disciplines, but the emphasis shifts from requirements in the earlier phases toward implementation and test in the later phases. Note that testing occurs even during the Inception phase, prior to beginning the implementation phase. This is because it is usually possible to test Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> presents the Probabilistic Ontology Modeling Cycle (POMC). This cycle depicts the major outputs from each discipline and the natural order in which the outputs are produced. Unlike the waterfall model <ns0:ref type='bibr' target='#b70'>(Royce, 1970)</ns0:ref>, the POMC cycles through the steps iteratively, using what is learned in one iteration to improve the result of the next. The arrows reflect the typical progression, but are not intended as hard constraints. Indeed, it is possible to have interactions between any pair of disciplines. For instance, it is not uncommon to discover a problem in the rules defined in the Analysis &amp; Design discipline during the activities in the Test discipline. 
As a result, the engineer might go directly from Test to Analysis &amp; Design in order to correct the problem.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, the Requirements discipline (blue box) defines the goals that must be achieved by reasoning with the semantics provided by our model. Usually, when designing a PO, one wants to be able to automate a reasoning process that involves uncertainty. By goals, we mean the kinds of questions the user wants the system to be able to answer via the PO reasoning. For instance, one of the main goals in the procurement fraud domain is to be able to answer with a certain degree of certainty whether a procurement presents any signs of fraud. However, this type of question is not straight-forward to answer. Thus, the system will typically need to evaluate a set of more specific questions, or queries, in order to better assess the probability of having fraud. Furthermore, in order to answer these more specific queries, the system will need some evidence. These goals, queries, and evidence comprise the requirements for the model being designed.</ns0:p><ns0:p>The Analysis &amp; Design discipline (green boxes) describes classes of entities, their attributes, how they relate to each other, and what rules apply to them in our domain. These definitions are independent of the language used to implement the model.</ns0:p><ns0:p>The Implementation discipline (red boxes) maps the design to a specific language that is both semantically rich and capable of representing uncertainty. This means encoding the classes, attributes, relationships and rules in the chosen language. For our case study, the mapping is to PR-OWL <ns0:ref type='bibr' target='#b13'>(Carvalho et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b20'>Costa et al., 2008)</ns0:ref>, but other semantically rich uncertainty representation languages could also be used (e.g., <ns0:ref type='bibr' target='#b21'>Cozman and Mau&#225;, 2015)</ns0:ref>.</ns0:p><ns0:p>Finally, the Test discipline (purple box) is responsible for evaluating whether the model developed during the Implementation discipline is behaving as expected from the rules defined during Analysis &amp; Design and whether the results achieve the goals elicited during the Requirements discipline. As noted previously, it is a good idea to test some of the rules and assumptions even before implementation. This is a crucial step to mitigate risk. Early testing can identify and correct problems before significant resources have been spent developing a complex model that turns out to be inadequate.</ns0:p><ns0:p>Like several of the ontology engineering processes considered by <ns0:ref type='bibr' target='#b38'>Gomez-Perez et al. (2004)</ns0:ref>, the UMP-ST does not cover ontology management, under the assumption that these activities can be imported from other frameworks. Although the UMP-ST does not cover maintenance and reuse, its iterative nature supports incremental evolution of the developed ontology. Of the ontology support activities described by <ns0:ref type='bibr' target='#b38'>Gomez-Perez et al. (2004)</ns0:ref>, the UMP-ST process explicitly addresses only the test discipline, which is similar to the evaluation activity. By following the steps in the UMP-ST, the ontology designer will be generating the documentation needed in order to describe not only the final PO, but also the whole process of building it. This supports the documentation activity of <ns0:ref type='bibr' target='#b38'>Gomez-Perez et al. 
(2004)</ns0:ref>. Like most ontology engineering processes, the UMP-ST does not address the ontology support activities of integration, merging, and alignment.</ns0:p><ns0:p>The primary focus of the UMP-ST is the ontology development activities. Because it is based on the UP, it uses a different nomenclature than Gomez-Perez et al. ( <ns0:ref type='formula'>2004</ns0:ref>), but there is a close resemblance: the specification activity is similar to the requirements discipline; the conceptualization and formalization activities are similar to the analysis &amp; design discipline; and the implementation activity is similar to the implementation discipline. The major difference between the methodologies reviewed by Gomez-Perez et al. ( <ns0:ref type='formula'>2004</ns0:ref>) and the UMP-ST is the focus. While <ns0:ref type='bibr' target='#b38'>Gomez-Perez et al. (2004)</ns0:ref> focuses on ways to build a glossary of terms, build taxonomies, and define concepts and properties, and deterministic rules, the UMP-ST presents techniques to identify and specify probabilistic rules, define dependency relations between properties based on these rules, and quantify the strength of these relations as parameters of local probability distributions. Thus, the UMP-ST extends other methodologies used for building ontologies, and should coexist with these methodologies. When creating deterministic parts of the ontology the user can follow existing methodologies proposed for standard ontology building. To incorporate uncertainty and therefore extend to a full probabilistic ontology, the user can follow the steps defined in the UMP-ST process.</ns0:p><ns0:p>Similarly, the UMP-ST can and should coexist with processes for eliciting probabilities, such as those defined by <ns0:ref type='bibr' target='#b14'>Clemen and</ns0:ref><ns0:ref type='bibr' target='#b14'>Reilly (2004) and</ns0:ref><ns0:ref type='bibr' target='#b64'>O'Hagan et al. (2006)</ns0:ref>. The probabilistic ontology engineer should refer to these resources when defining a protocol for eliciting probabilities from experts.</ns0:p></ns0:div> <ns0:div><ns0:head>6/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_1'>2015:10:7010:1:1:CHECK 16 May 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the next two sections, the UMP-ST process and the POMC are illustrated through a case study in procurement fraud detection and prevention. The case study walks step by step through the activities that must be executed in each discipline in the POMC. The case study has been kept simple enough for clear exposition of POMC, while being complex enough to convey key issues that arise in real-world ontology engineering. Implementation and plausible reasoning were carried out using the UnBBayes probabilistic ontology environment <ns0:ref type='bibr' target='#b6'>(Carvalho, 2008;</ns0:ref><ns0:ref type='bibr' target='#b63'>Matsumoto et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>PREVENTING AND DETECTING PROCUREMENT FRAUD IN BRAZIL</ns0:head><ns0:p>In Brazil, the main law that details the regulation of the public procurement process is the Federal Law 8,666/93 (Public Procurement Law). Nevertheless, the public procurement procedures are also mentioned in Section XXI, Article 37 of the Federal Constitution. The Public Procurement Law is applicable not only to the Federal Government, but also to the State and Municipal Governments. 
Although it is meant to provide just general guidelines, it is so detailed that there is little room for the States and Municipalities to further legislate <ns0:ref type='bibr' target='#b33'>(Frizzo and Oliveira, 2014)</ns0:ref>. The Public Procurement Law regulates public procurement procedures and contracts involving the government.</ns0:p><ns0:p>The Public Procurement Law defines three main procurement procedures: invitation to tender, which is a simpler and faster procedure where at least three competitors are invited to participate in the tender based on the request for proposals (RFP), which is not required to be advertised in the press; price survey, which requires the competitors to be previously registered before the public tender and requires a broader advertising of the RFP in the newspaper and official press; and competition, which is the most complex and longest procedure, allowing participation of all companies that meet the qualification criteria on the first day of the procedure and requiring more general advertising of the RFP as the price survey. In addition, Law 10,520/02 created the reverse auction, which involves alternative bids from the participating companies in the competitive phase, before the qualification documents are analyzed. Nowadays, the most common procedure for the acquisition of common goods and services is the electronic reverse auction, which is the same as the reverse auction, but the procedure happens in an electronic system. Its RFP must also be advertised in the official press as well as through the Internet.</ns0:p><ns0:p>The criteria for selecting the best proposal are defined in the RFP by the regulated agency. There are three main types of rules that must be followed: best price, where the company that presents the best bid and meets the minimum requirements is awarded the contract; best technique, where the company with the best technical solutions wins regardless of price; and a mix of the two, where the scores are given for both price and technique and the company with the highest joint score wins. <ns0:ref type='bibr' target='#b33'>Frizzo and Oliveira (2014)</ns0:ref> provide additional detail on thresholds for determining whether a contract is subject to the Public Procurement Law, freedom to choose which procedure to use, changes to an existing contract, and other aspects of the public procurement process in Brazil.</ns0:p><ns0:p>The procurement process presents many opportunities for corruption. Although laws attempt to ensure a competitive and fair process, perpetrators find ways to turn the process to their advantage while appearing to be legitimate. To aid in detecting and deterring such perversions of the procurement process, a specialist, who helped in this work, has didactically structured different kinds of procurement fraud encountered by the Brazilian Office of the Comptroller General (CGU) over the years.</ns0:p><ns0:p>These different fraud types are characterized by criteria, such as business owners working as a front for the company or use of accounting indices that are not commonly employed. Indicators have been established to help identify cases of each of these fraud types. For instance, one principle that must be followed in public procurement is that of competition. A public procurement should attempt to ensure broad participation in the bidding process by limiting requirements on bidders to what is necessary to guarantee adequate execution of the contract. 
Nevertheless, it is common to have a fake competition in which different bidders are, in fact, owned by the same person. This is usually done by having someone act as a front for the enterprise. An indicator that a bidder may be a front is that the listed owner has little or no education. Thus, an uneducated owner is a red flag suggesting that there may be a problem with the procurement. <ns0:ref type='bibr' target='#b40'>Gregorini (2009)</ns0:ref> identified a number of red flags that can be considered evidence of fraud. These include: concentration of power in the hands of a few people; rapid growth in concentration of goods and services contracted from a single company; competition restriction; transfer of funds to a Non Governmental Organization (NGO) close to elections; and others. While these factors are evidence of potential irregularities, they are not definitive indicators. A list of more serious and determinant conditions is presented by <ns0:ref type='bibr' target='#b32'>Flores (2004)</ns0:ref>. These include: choosing directors based on a political agenda; negotiating contracts in order to reserve money for an election campaign; negotiating contracts in order to favor friends and family; bribery in order to obtain certain privileges; and providing inside information.</ns0:p><ns0:p>A more formal definition of different types of fraud found in Brazil is presented by <ns0:ref type='bibr' target='#b65'>Oliveira (2009)</ns0:ref>. He presents three main groups of fraud, based on recent scandals in Brazil: frauds initiated by passive agents; frauds initiated by active agents; and frauds that represent collusion. The first is when an agent from the Public Administration acting in his public function, favors someone or himself by performing illicit actions (e.g., purchasing products that were never used, falsification of documents and signatures, favoring friends and family). The second is when an active agent, a person or a company, outside the Public Administration tries to corrupt an agent that works in the Public Administration or does something illegal in order to cheat the procurement process (e.g., acting as a front for a company, delivering contraband products, giving money to civil servants in order to favor a specific company). Finally, the third is when there is some type of collusion between companies participating in the procurement process or even between passive and active agents (e.g., delivering and accepting just part of the goods purchased, paying before receiving the merchandise, overpricing goods and services, directing and favoring a specific company in exchange of some financial compensation).</ns0:p><ns0:p>The types of fraud presented by <ns0:ref type='bibr' target='#b65'>Oliveira (2009)</ns0:ref>, although focused on the Brazilian context, are consistent with more recent work from <ns0:ref type='bibr' target='#b27'>Dhurandhar et al. (2015b)</ns0:ref>. This work, which presents a more general fraud taxonomy related to procurement fraud, was registered as a patent in 2015 <ns0:ref type='bibr' target='#b26'>(Dhurandhar et al., 2015a)</ns0:ref>. While Oliveira talks about passive and active agents, Dhurandhar et al. talks about fraud by employees and fraud by vendors, respectively. However, these fraud definitions do have a few differences. For example, while Dhurandhar et al. 
differentiates collusion among vendors and collusion between employee and vendors, Oliveira classifies both as simply collusion.</ns0:p><ns0:p>Formalizing knowledge about fraud in a computable form can lead to automated support for fraud detection and prevention. Specifically, analysts at the CGU must sift through vast amounts of information related to a large number of procurements. Automated support can improve analyst productivity by highlighting the most important cases and the most relevant supporting information. The ultimate goal of the procurement fraud probabilistic ontology is to structure the specialist's knowledge to enable automated reasoning from indicators to potential fraud types. Such an automated system is intended to support specialists and to help train new specialists, but not to replace them. Automated support for this task requires a semantically rich representation that supports uncertainty management.</ns0:p><ns0:p>As a case study, <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref> developed a proof-of-concept probabilistic ontology covering part of the procurement fraud domain. This paper uses a portion of this case study to illustrate how the POMC can support the creation of a PO. The full implementation and code for the case study is presented in <ns0:ref type='bibr' target='#b7'>(Carvalho, 2011)</ns0:ref> and is provided as supplemental material to this paper. This proof-of-concept implementation represents only a fragment of the specialist's knowledge of the procurement fraud domain. The plan is eventually to extend this PO to a full representation of the specialist's knowledge.</ns0:p></ns0:div> <ns0:div><ns0:head>UMP-ST FOR PROCUREMENT FRAUD</ns0:head><ns0:p>This section describes in detail the four disciplines in the UMP-ST process and their application to the procurement fraud case study. To facilitate the understanding of each discipline, we alternate between describing the discipline and illustrating its application to the public procurement fraud detection and prevention use case.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements</ns0:head><ns0:p>The objective of the requirements discipline is to define the objectives that must be achieved by representing and reasoning with a computable representation of domain semantics. For this discipline, it is important to define the questions that the model is expected to answer, i.e., the queries to be posed to the system being designed. For each question, a set of information items that might help answer the question (evidence) must be defined.</ns0:p><ns0:p>Requirements can be categorized as functional and non-functional <ns0:ref type='bibr' target='#b78'>(Wiegers, 2003;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sommerville, 2010)</ns0:ref>. Functional requirements concern outputs the system should provide, features it should have, how it should behave, etc. In our case, functional requirements relate to the goals, queries, and evidence that pertain to our domain of reasoning. Non-functional requirements, on the other hand, represent constraints on the system as a whole. For instance, in our use case a non-functional requirement could be that a given query has to be answered in less than a minute. Another example is that the posterior probability given as an answer to a given query has to be either exact or an approximation with an error bound of &#177;0.5%.
Non-functional requirements tend to be fairly straightforward and not specific to probabilistic ontology development. We therefore focus here on how to develop functional requirements for our use case.</ns0:p><ns0:p>We focus on a subset of our procurement use case to illustrate how a requirement is carried through the PO development cycle until it is eventually implemented and tested. To understand the requirements associated with this subset, we first have to explain some of the problems encountered when dealing with public procurements.</ns0:p><ns0:p>One of the principles established by Law No. 8,666/93 is equality among bidders. This principle prohibits the procurement agent from discriminating among potential suppliers. However, if the procurement agent is related to the bidder, he/she might feed information or define new requirements for the procurement in a way that favors the bidder.</ns0:p><ns0:p>Another problem arises because public procurement is quite complex and may involve large sums of money. Therefore, members forming the committee for a procurement must both be well prepared, and have a clean history with no criminal or administrative convictions. This latter requirement is necessary to the ethical guidelines that federal, state, municipal and district government employees must follow.</ns0:p><ns0:p>The above considerations give rise to the following set of goals, queries, and evidence:</ns0:p><ns0:p>1. Goal: Identify whether a given procurement violates fair competition policy (i.e., evidence suggests further investigation and/or auditing is warranted);</ns0:p><ns0:p>(a) Query: Is there any relation between the committee and the enterprises that participated in the procurement?</ns0:p><ns0:p>i. Evidence: Committee member and responsible person of an enterprise are related (mother, father, brother, or sister);</ns0:p><ns0:p>ii. Evidence: Committee member and responsible person of an enterprise live at the same address.</ns0:p><ns0:p>2. Goal: Identify whether the committee for a given procurement has improper composition.</ns0:p><ns0:p>(a) Query: Is there any member of committee who does not have a clean history?</ns0:p><ns0:p>i. Evidence: Committee member has criminal history;</ns0:p><ns0:p>ii. Evidence: Committee member has been subject to administrative investigation.</ns0:p><ns0:p>(b) Query: Is there any relation between members of the committee and the enterprises that participated in previous procurements?</ns0:p><ns0:p>i. Evidence: Member and responsible person of an enterprise are relatives (mother, father, brother, or sister);</ns0:p><ns0:p>ii. Evidence: Member and responsible person of an enterprise live at the same address.</ns0:p><ns0:p>In defining requirements, the availability of evidence must be considered. For example, information about whether persons are related might be drawn from a social network database; evidence about criminal history might come from a police database; an evidence about cohabitation might be drawn from an address database. One important role for semantic technology is to support interoperability among these various data sources and the fraud detection model.</ns0:p><ns0:p>Another important aspect of the Requirements discipline is defining traceability of requirements. 
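As a concrete illustration (ours, not part of the original case study artifacts), the goals, queries, and evidence listed above can be recorded in a small machine-readable specification tree whose items are later traced to rules and MFrags. The Python sketch below assumes illustrative identifiers such as G1 and G1.Q1 and a traces_to field that starts out empty; it is a minimal stand-in for the kind of bookkeeping the process requires, not part of the UnBBayes tooling.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    ident: str                                            # e.g. "G1", "G1.Q1", "G1.Q1.E1"
    kind: str                                             # "goal", "query", or "evidence"
    text: str
    children: List["Requirement"] = field(default_factory=list)
    traces_to: List[str] = field(default_factory=list)    # rules / MFrags that cover this item

    def add(self, child: "Requirement") -> "Requirement":
        self.children.append(child)
        return child

# Goal 1 from the list above, with its query and evidence items.
g1 = Requirement("G1", "goal", "Procurement violates fair competition policy")
q1 = g1.add(Requirement("G1.Q1", "query", "Relation between committee and participating enterprises?"))
q1.add(Requirement("G1.Q1.E1", "evidence", "Committee member and enterprise responsible are relatives"))
q1.add(Requirement("G1.Q1.E2", "evidence", "Committee member and enterprise responsible share an address"))

def untraced(req: Requirement) -> List[str]:
    """Report requirements not yet linked to any rule or MFrag (used to check coverage)."""
    missing = [] if req.traces_to else [req.ident]
    for child in req.children:
        missing.extend(untraced(child))
    return missing

print(untraced(g1))   # before Analysis & Design, every item is still untraced

A check of this kind can be rerun after each iteration to flag requirements that are not yet covered by Analysis &amp; Design or Implementation work products.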
<ns0:ref type='bibr' target='#b39'>Gotel and Finkelstein (1994)</ns0:ref> define requirements traceability as:</ns0:p><ns0:p>Requirements traceability refers to the ability to describe and follow the life of a requirement, in both forwards and backwards direction.</ns0:p><ns0:p>To provide traceability, requirements should be arranged in a specification tree, so that each requirement is linked to its 'parent' requirement. A specification tree for the requirements for our procurement model is shown in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. In this hierarchy, each item of evidence is linked to a query it supports, which in turn is linked to its higher level goal. This linkage supports requirements traceability. In addition to the hierarchical decomposition of the specification tree, requirements should also be linked to work products of other disciplines, such as the rules in the Analysis &amp; Design discipline, MFrags in the Implementation discipline, and goals, queries, and evidence elicited in the Requirements discipline. These links provide traceability that is essential to validation and management of change. Subsequent sections show how UMP-ST supports requirements tracing.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis &amp; Design</ns0:head><ns0:p>Once we have defined our goals and described how to achieve them, it is time to start modeling the entities, their attributes, relationships, and rules to make that happen. This is the purpose of the Analysis &amp; Design discipline.</ns0:p><ns0:p>The major objective of this discipline is to define the semantics of the model. In fact, much of the semantics can be defined using traditional ontologies, including the deterministic rules that the concepts described in our model must obey. The focus of this paper is on representing uncertain aspects of the domain. Information on defining traditional ontologies can be found in <ns0:ref type='bibr' target='#b2'>Allemang and Hendler (2008)</ns0:ref> and <ns0:ref type='bibr' target='#b38'>Gomez-Perez et al. (2004)</ns0:ref>.</ns0:p><ns0:p>The first step in defining the domain model is to define the classes and relationships that are important to represent for the procurement fraud detection problem. For our case study, we use the Unified Modeling Language (UML) <ns0:ref type='bibr' target='#b71'>(Rumbaugh et al., 1999)</ns0:ref> for this purpose. Because UML is insufficiently expressive to represent complex rule definitions, we record the rules separately for later incorporation into the PR-OWL probabilistic ontology. While experienced ontology engineers might prefer to define classes, relationships and rules directly in OWL, we chose UML for its popularity, understandability, ease of communication with domain experts, and widely available and usable software tools. We see UML-style diagrams as a way to capture knowledge about classes and relationships that could be automatically translated into an OWL ontology or PR-OWL probabilistic ontology (cf., <ns0:ref type='bibr' target='#b35'>Gasevic et al. (2004))</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> depicts a simplified model of the classes and relationships in the procurement fraud domain. A Person has a name, a mother and a father (also Person). Every Person has a unique identification that in Brazil is called CPF. A Person also has an Education and livesAt a certain Address. In addition, everyone is obliged to file his/her TaxInfo every year, including his/her an-nualIncome 1 . 
These entities can be grouped as Personal Information. A PublicServant is a Person who worksFor a PublicAgency, which is a Government Agency. Every public Procurement is owed by a PublicAgency, has a committee formed by a group of PublicServants, and has a group of participants, which are Enterprises. One of these will be the winner of the Procurement. Eventually, the winner of the Procurement will receive a Contract of some value with the PublicAgency owner of the Procurement. The entities just described can be grouped as Procurement Information. Every Enterprise has at least one Person that is responsible for its legal acts.</ns0:p><ns0:p>An Enterprise also has an identification number, the General List of Contributors CGC, which can be used to inform that this Enterprise is suspended from procuring with the public administration, isSuspended. These are grouped as the Enterprise Information. We also have AdministrativeInvestigation, which has information about investigations that involve one or more PublicServant. Its finalReport, the JudgmentAdministrativeReport, contains information about the penalty applied, if any. These entities form the Administrative Judgment Information. Finally we have the Criminal Judgment Information group that describes the CriminalInvestigation that involves a Person, with its finalReport, the JudgmentCriminalReport, which has information about the verdict.</ns0:p><ns0:p>1 Every Brazilian citizen is required to file tax information, even if only to state that his or her income is below a certain amount and no taxes are owed.</ns0:p><ns0:p>Notice that just a subset of this UML model is of interest to us in this paper, since we are dealing with just a subset of the requirements presented in <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>.</ns0:p><ns0:p>In addition to the cardinality and uniqueness rules defined above for the entities depicted in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, the probabilistic rules for the requirements described in the previous section include:</ns0:p><ns0:p>1. If a member of the committee has a relative (mother, father, brother, or sister) responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which inhibits competition.</ns0:p><ns0:p>2. If a member of the committee lives at the same address as a person responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which lowers competition.</ns0:p><ns0:p>3. If 1 or 2, then the procurement is more likely to violate policy for fair competition.</ns0:p><ns0:p>4. If a member of the committee has been convicted of a crime or has been penalized administratively, then he/she does not have a clean history. If he/she was recently investigated, then it is likely that he/she does not have a clean history.</ns0:p><ns0:p>5. If the relation defined in 1 and 2 is found in previous procurements, then it is more likely that there will be a relation between this committee and future bidders.</ns0:p><ns0:p>Typically the probabilistic rules are described initially using qualitative likelihood statements. Implementing a probabilistic ontology requires specifying numerical probabilities.
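As a hedged illustration of this transition (the sketch below is ours, with simplified variable names and placeholder values rather than the probabilities elicited in the case study), the rules above can be read as a dependency structure over the uncertain properties together with a qualitative ordering that is only later made numeric.

# Illustrative sketch only: the rules induce (i) a dependency structure and
# (ii) a qualitative ordering of probabilities to be quantified during Implementation.
PARENTS = {
    "existsRelationToCommittee": ["isRelative", "livesAtSameAddress"],                # rules 1-2
    "violatesFairCompetition":   ["existsRelationToCommittee"],                       # rule 3
    "hasCleanHistory":           ["hasCriminalHistory", "hasAdministrativeHistory"],  # rule 4
}

def p_violates_fair_competition(n_supporting_parents: int) -> float:
    """Rule 3 made numeric: more supporting evidence means a higher probability (placeholder values)."""
    return {0: 0.001, 1: 0.7, 2: 0.8}.get(n_supporting_parents, 0.9)

# The qualitative likelihood statements only constrain an ordering; make that explicit.
assert p_violates_fair_competition(0) < p_violates_fair_competition(1) < p_violates_fair_competition(2)

for node, parents in PARENTS.items():
    print(f"{node} <- {', '.join(parents)}")

Writing the ordering down as an assertion keeps the qualitative constraint explicit before any particular numbers are committed to.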
Probability values can be elicited from domain experts (e.g., <ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b64'>O'Hagan et al., 2006)</ns0:ref> or learned from observation. The growing literature in statistical relational learning (e.g., <ns0:ref type='bibr' target='#b37'>Getoor and Taskar, 2007)</ns0:ref> provides a wealth of methods for learning semantically rich probability models from observations. In the Analysis &amp; Design stage, information is identified for specifying the probability distributions (expert judgment and/or data sources). This information is encoded into the target representation during the Implementation stage.</ns0:p><ns0:p>The traceability matrix of Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> depicts how the probabilistic rules defined above are traced to the goals, queries and evidence items defined in the Requirements discipline. This traceability matrix is an important tool to help designers to ensure that all requirements have been covered. It also supports maintainability by helping ontology engineers to identify how requirements are affected by changes in the model. It is also important at this stage to trace each of the rules to the source of information used to define the rule (e.g., notes from interview with expert, training manual, policy document, data source).</ns0:p></ns0:div> <ns0:div><ns0:head>Implementation</ns0:head><ns0:p>Once the Analysis &amp; Design step has been completed, the next step is to implement the model in a specific language. How this discipline is carried out depends on the specific language being used. Our case study was developed using the PR-OWL probabilistic ontology language <ns0:ref type='bibr' target='#b15'>(Costa, 2005;</ns0:ref><ns0:ref type='bibr' target='#b6'>Carvalho, 2008)</ns0:ref>. PR-OWL (pronounced 'prowl') adds new definitions to OWL to allow the modeler to incorporate probabilistic knowledge into an OWL ontology. This section shows how to use PR-OWL to express uncertainty about the procurement fraud domain.</ns0:p><ns0:p>PR-OWL uses Multi-Entity Bayesian Networks (MEBN) <ns0:ref type='bibr' target='#b55'>(Laskey, 2008)</ns0:ref> to express uncertainty about properties and/or relations defined on OWL classes. A probability model is defined as a set of MEBN Fragments (MFrags), where each MFrag expresses uncertainty about a small number of attributes of and/or relationships among entities. A set of properly defined MFrags taken together comprise a MEBN theory (MTheory), which can express a joint probability distribution over complex situations involving many entities in the domain. Unlike most expressive probabilistic languages that assume the domain is finite (e.g., <ns0:ref type='bibr' target='#b47'>Heckerman et al., 2004)</ns0:ref>, an MTheory can express knowledge about an unbounded or even infinite set of entities. A properly defined PR-OWL model expresses an MTheory, and thus expresses a global joint distribution over the random variables mentioned in the theory. For more detailed explanations on the key features of MEBN logic the reader should refer to <ns0:ref type='bibr' target='#b55'>Laskey (2008)</ns0:ref>.</ns0:p><ns0:p>On a typical usage of a PR-OWL PO, during execution time (e.g., in response to a query) a logical reasoning process would instantiate the MFrags that are needed to respond to the query. 
The result of this process is a situation-specific Bayesian network (SSBN), which is the minimal Bayesian network sufficient to obtain the posterior distribution for a set of target random variable instances given a set of finding random variable instances. In a PR-OWL probabilistic ontology, the entity types correspond to OWL classes, the attributes correspond to OWL properties, and the relationships correspond to OWL relations. Thus, PR-OWL allows the ontology designer to specify probability distributions to express uncertainty about properties and relations in an OWL ontology.</ns0:p><ns0:p>The expressive power of MEBN/PR-OWL makes it an attractive choice for implementing probabilistic ontologies in complex domains. Its compatibility with OWL, a widely used ontology language, allows for the expression of uncertainty in existing OWL ontologies, and for integrating PR-OWL probabilistic ontologies with other ontologies expressed in OWL. These are the primary reasons for the choice of MEBN/PR-OWL as the implementation language in our case study.</ns0:p><ns0:p>The first step in defining a PR-OWL probabilistic ontology for the procurement fraud domain is to represent the entities, attributes and relations of Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> as OWL classes, properties and relations. Our proof-of-concept made a few simplifications to the representation of Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. For example, we removed the PublicServant entity and connected Person directly to PublicAgency with the workFor relationship. As another simplification, we assumed that every Person and Enterprise instance is uniquely identified by its name, so there was no need to represent the CPF and CGC entities. Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> presents the entities as entered into our PR-OWL ontology implemented in UnBBayes <ns0:ref type='bibr' target='#b12'>(Carvalho et al., 2009)</ns0:ref>.</ns0:p><ns0:p>After defining the entities, we consider characteristics that may be uncertain. An uncertain attribute of an entity or an uncertain relationship among entities is represented in MEBN by a random variable (RV). For example, the RV livesAt(person) corresponds to the relation livesAt from Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>.</ns0:p><ns0:p>To define a probability distribution for an uncertain attribute or relationship, we must declare it as resident in some MFrag, where its probability distribution will then be defined. For example, Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> shows how to define uncertainty about whether two persons are related. This is accomplished by selecting the OWL property isRelated and dragging the property and dropping it inside the PersonalInfo MFrag. The yellow oval on the right-hand side of Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> shows the RV defined by the PR-OWL plug-in for UnBBayes <ns0:ref type='bibr' target='#b62'>(Matsumoto, 2011)</ns0:ref> to represent uncertainty about whether persons are related. In the background what actually happens is that an instance of the DomainResidentNode class, which is a random variable that has its probability distribution defined in the current MFrag, is created.
Besides that, an assertion is also added saying that this instance definesUncertaintyOf the OWL property isRelated.</ns0:p><ns0:p>Once RVs have been created for all uncertain attributes and relationships, probabilistic dependencies can be identified by analyzing how the RVs influence each other. The rules defined as part of the Analysis &amp; Design discipline describe probabilistic relationships that are formally defined as part of the Implementation discipline. For example, rule 2 indicates that there is a dependence between hasCriminalHistory(person), hasAdministrativeHistory(person), and hasCle-anHistory(person).</ns0:p><ns0:p>For this paper, we focus on the Judgment History, Improper Committee, and Improper Procurement MFrags. Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows a partial MTheory consisting of these three MFrags. Details on the complete PR-OWL MTheory can be found in <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref> The two main goals described in our requirements are defined in the Improper Procurement and Improper Committee MFrags. 2 The Judgment History MFrag has RVs representing the judgment (criminal and administrative) history of a Person.</ns0:p><ns0:p>There are three LPDs defined in the Judgment History MFrag: (1) a probability that a person has a criminal history; (2) a probability that a person has an administrative history, and (3) a probability that a person has a clean history given whether or not that person has a criminal and/or an administrative history. This latter probability is lowest if he/she has never been investigated, higher if he/she has been investigated, and extremely high if he/she has been convicted 3 .</ns0:p><ns0:p>The Improper Committee MFrag contains the resident RV hasImproperCommittee(procurement), defined under the context constraints that procurement is an entity of type Procurement, member is an entity of type Person, and member is a member of the committee for Procurement. The assumptions behind the LPD defined in this MFrag are that: if any committee member of this procurement does not have a clean history, or if any committee member was related to previous participants, then the committee is more likely to be improper; and that if these things happen together, the probability of a improper committee is even higher.</ns0:p><ns0:p>The Improper Procurement MFrag has the resident RV isImproperProcurement(procurement), created in the same way as the isRelated RV inside the PersonalInfo MFrag explained previously. The assumptions behind the LPD defined in this MFrag are that: if the competition is compromised, or if any owner of a participating enterprise owns a suspended enterprise, or if committee of this procurement is improper, then the procurement is more likely to be improper; and that if these things happen together, the probability of having an improper procurement is even higher.</ns0:p><ns0:p>The final step in constructing a probabilistic ontology in UnBBayes is to define the local probability distributions (LPDs) for all resident RVs. Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> shows the LPD for the resident node isImproper-Procurement(procurement), which is the main question we need to answer in order to achieve one of the main goals in our model. 
This distribution follows the UnBBayes-MEBN grammar for defining LPDs <ns0:ref type='bibr' target='#b6'>(Carvalho, 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Carvalho et al., 2008)</ns0:ref>. The distribution for isImproperProcurement depends on the values of the parent RVs isCompetitionCompromised, hasImproperCommittee and ownsSuspendedEnterprise. The LPD is defined through a series of if-then-else statements giving the probability of isImproperProcurement given each combination of truth-values of its parents. In this example, if all three parent RVs are true, then isImproperProcurement has probability 0.9; if any two parents are true, then isImproperProcurement has probability 0.8; if just one parent is true, then isImproperProcurement has probability 0.7; if none of the parents is true then isImproperProcurement has probability 0.0001. The probability values shown here were defined in collaboration with the specialist who supported the case study. In general, probability values for the MFrags are defined through some combination of expert elicitation and learning from data. However, in the PO described in this paper, all LPDs were defined based on the experience of the SMEs from CGU, since there is not enough structured data to learn the distribution automatically.</ns0:p><ns0:p>2 A more sophisticated model for deciding whether to do further investigation or change the committee would define a utility function and use expected utility to make the decision. Future versions of UnBBayes will support Multi-Entity Influence Diagrams <ns0:ref type='bibr' target='#b15'>(Costa, 2005)</ns0:ref> for modeling decision-making under uncertainty.</ns0:p><ns0:p>3 Maybe a better name for this node would be isTrustworthy. Nevertheless, the idea is that if someone was investigated and/or convicted then he might not be a good candidate for being part of a procurement committee.</ns0:p><ns0:p>It is important to ensure traceability between the MFrags defined in the Implementation stage and the rules defined in the Analysis &amp; Design stage. A traceability matrix similar to Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> was developed to trace MFrags to rules. This mapping, along with the mapping of the rules to the requirements as documented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, enables the probabilistic relationships expressed in the MFrags to be traced back to the requirements defined in the Goals stage.</ns0:p></ns0:div> <ns0:div><ns0:head>Test</ns0:head><ns0:p>As with any engineering methodology, test plays an essential role in UMP-ST. As <ns0:ref type='bibr' target='#b56'>Laskey and Mahoney (2000)</ns0:ref> point out, test should do more than showcase the model and demonstrate that it works as envisioned. Another important goal of the Test discipline is to find flaws and areas for improvement in the model.</ns0:p><ns0:p>The literature distinguishes two types of evaluation, verification and validation <ns0:ref type='bibr' target='#b0'>(Adelman, 1992)</ns0:ref>. Verification is concerned with establishing that 'the system was built right,' i.e., that the system elements conform to their defined performance specifications. Validation is concerned with establishing that the 'right system was built,' i.e.
that it achieves its intended use in its operational environment.</ns0:p><ns0:p>For example, in the model we have been describing in this Section we would like to verify that the system satisfies the non-functional requirements developed during the Requirements stage as described above, e.g., that the queries covered by the requirement are answered in less than a minute and that the posterior probability given as an answer to a given query is either exact or has an approximation with an error bound of .5% or less. <ns0:ref type='bibr' target='#b56'>Laskey and Mahoney (2000)</ns0:ref> present three types of evaluation: elicitation review, importance analysis, and case-based evaluation. Elicitation review is related to reviewing the model documentation, analyzing whether all the requirements were addressed in the final model, making sure all the rules defined during the Analysis &amp; Design stage were implemented, validating the semantics of the concepts described by the model, etc. This is an important step towards achieving consistency in our model, especially if it was designed by more than one expert. Elicitation review can also confirm that the rules as defined correctly reflect stakeholder requirements.</ns0:p><ns0:p>The traceability matrices are a useful tool for verifying whether all the requirements were addressed in the final implementation of the model. By looking at the matrix tracing MFrags to rules, we can verify that all the rules defined during Analysis &amp; Design have been covered. The traceability matrix of Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, defined during Analysis &amp; Design, ensured that the rules covered all the defined requirements. Therefore, by composing these matrices, we can infer that all the requirements have been implemented in our model. This review should also confirm that important stakeholder requirements were not missed during Analysis &amp; Design.</ns0:p><ns0:p>Of course, an initial implementation will often intentionally cover only a subset of the stakeholder requirements, with additional requirements being postponed for later versions. Lessons learned during implementation are reviewed at this stage and priorities for future iterations are revisited and revised.</ns0:p><ns0:p>Importance analysis is a model validation technique described by <ns0:ref type='bibr' target='#b56'>Laskey and Mahoney (2000)</ns0:ref>. A form of sensitivity analysis, its purpose is to verify that selected parts of the model behave as intended.</ns0:p><ns0:p>In importance analysis, one or more focus RVs are specified and their behavior is examined under different combinations of values for evidence RVs. The output is a plot for each focus RV that orders the evidence RVs by how much changes in the value of the evidence RV affect the probability of the focus RV. Importance analysis is an important type of unit testing. In the case of PR-OWL, we can analyze the behavior of the random variables of interest given evidence per MFrag. This MFrag testing is important to capture local consistency of the model and to help localize the source of any problems identified in the model.</ns0:p><ns0:p>The tests designed in this Section as well as the model described in this paper were developed with the help of experts from the Department of Research and Strategic Information (DIE) from CGU. 
They provided detailed information on the different types of frauds as well as on evidence that they usually search for when auditing contracts during the internal control activities. Furthermore, they have also validated the proof-of-concept model described in this paper with the tests we will describe as well as others that were omitted due to space restrictions.</ns0:p><ns0:p>As an example of unit testing, we demonstrate how to define different scenarios to test the Judgment History MFrag. Essentially, we want to verify how the query hasCleanHistory(person) will behave in light of different sets of evidence for a person's criminal and administrative history.</ns0:p><ns0:p>Results for just one combination of states for the parent RVs are shown in Figure <ns0:ref type='figure' target='#fig_13'>9</ns0:ref>, which shows three distinct scenarios for a 3-node model. The model assesses whether or not a given person (person 1 in the figure) has a clean history. It consists of a binary RV with two parents. Each parent represents whether the person has been convicted, investigated, or never investigated in a criminal process (left upper node in the model) or in an administrative process (right parent node in the model). The upper left depiction shows the model with no evidence entered (all nodes in yellow), which results in a marginal 'a priori' probability of 1.1% that any given person would not have a clean history. The upper right depiction shows the model results when knowledge about NeverInvestigated is entered in the hasCriminalHistory person1 RV, causing a slight reduction in the belief that person 1 does not have a clean history (i.e., down from 1.1% to 1.05%). Finally, the model depiction in the lower left shows the model's results when evidence on person 1 having a criminal conviction and never being investigated in an administrative process is entered. In this latter case, the belief in a non-clean history jumps to 99%.</ns0:p><ns0:p>A systematic unit test would examine other combinations as well <ns0:ref type='bibr' target='#b7'>(Carvalho, 2011)</ns0:ref>. It is important that unit testing achieve as much coverage as possible, and that results be analyzed by verifying that posterior probabilities behave as expected. In our case, the posterior probabilities are consistent with the expected result as defined by the expert.</ns0:p><ns0:p>Case-based evaluation is conducted by defining a range of different scenarios and examining the results produced by the system for each of the scenarios. Case-based evaluation is a system level test appropriate for integration testing. For our procurement PO, we define scenarios with evidence represented in different MFrags. This means that each query response will require instantiating multiple parts of the model, helping to validate how the model works as a whole. This validation is important for checking whether the model's global performance matches the specialist's knowledge.</ns0:p><ns0:p>It is important to try out different scenarios in order to capture the nuances of the model. In fact, it is a good practice to design the scenarios in order to cover the range of requirements the model must satisfy <ns0:ref type='bibr' target='#b78'>(Wiegers, 2003;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sommerville, 2010)</ns0:ref>. Although it is impossible to cover every scenario we might encounter, we should aim for good coverage, and especially look for important 'edge cases'.
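A minimal sketch of the Judgment History unit test described above is given below. It uses a hand-built three-node Bayesian network as a stand-in for the instantiated MFrag and checks, by enumeration, that the posterior belief in a non-clean history moves in the expected direction as evidence is entered. The probability values are illustrative placeholders of our own, so the posteriors will not reproduce the 1.1%, 1.05%, and 99% figures reported for the elicited model.

from typing import Optional

# Stand-in for the Judgment History fragment: one binary node with two three-state parents.
STATES = ("NeverInvestigated", "Investigated", "Convicted")
PRIOR = {"NeverInvestigated": 0.90, "Investigated": 0.08, "Convicted": 0.02}    # per parent
PENALTY = {"NeverInvestigated": 0.00, "Investigated": 0.15, "Convicted": 0.95}  # placeholder CPT inputs

def p_not_clean(criminal: str, administrative: str) -> float:
    """CPT sketch: a conviction dominates, an investigation raises the belief slightly."""
    return min(0.99, 0.01 + max(PENALTY[criminal], PENALTY[administrative]))

def posterior_not_clean(criminal: Optional[str] = None, administrative: Optional[str] = None) -> float:
    """P(hasCleanHistory = False | evidence), marginalizing any unobserved parent."""
    num = den = 0.0
    for c in STATES:
        if criminal is not None and c != criminal:
            continue
        for a in STATES:
            if administrative is not None and a != administrative:
                continue
            weight = PRIOR[c] * PRIOR[a]
            num += weight * p_not_clean(c, a)
            den += weight
    return num / den

# The checks mirror the behavior described above: evidence of no criminal record should
# lower the belief in a non-clean history, and a conviction should raise it sharply.
assert posterior_not_clean(criminal="NeverInvestigated") < posterior_not_clean()
assert posterior_not_clean(criminal="Convicted", administrative="NeverInvestigated") > 0.9
print(round(posterior_not_clean(), 4), round(posterior_not_clean(criminal="Convicted"), 4))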
A traceability matrix relating unit tests and case-based evaluation scenarios to MFrags is a useful tool to ensure that test scenarios have achieved sufficient coverage.</ns0:p><ns0:p>Keeping in mind the need to evaluate a range of requirements, we illustrate case-based evaluation with three qualitatively different scenarios. The first one concerns a regular procurement with no evidence to support the hypothesis of an improper procurement or committee. The second one has conflicting evidence in the sense that some supports the hypothesis of having an improper procurement or committee but some does not. Finally, in the third scenario there is overwhelming evidence supporting the hypothesis of an improper procurement or committee.</ns0:p><ns0:p>When defining a scenario, it is important to define the hypothesis being tested and what is the expected result, besides providing the evidence which will be used. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> presents a comparison between all three scenarios. It can be seen that the difference between the first and the second scenarios is that member 1 was never investigated administratively in the first scenario, but was in the second. In the third scenario, however, besides having the evidence that member 1 was investigated, we also have the evidence that person 1 and 3 live at the same address and that person 2 lives at the same address as member 3.</ns0:p><ns0:p>In the first scenario, we expect that procurement will not be deemed improper since the members of the committee have never been investigated in either administrative or criminal instances and we have no relevant information about the owners of the enterprises participating in the procurement.</ns0:p><ns0:p>When the query is presented to the system, the needed MFrags are retrieved and instantiated for the entities relevant to the scenario, resulting in an SSBN that answers the query. Figure <ns0:ref type='figure' target='#fig_14'>10</ns0:ref> shows part of the SSBN generated from scenario 1. Evidence includes the fact that member 2, who in this SSBN is part of the procurement process being assessed, has never being investigated in either an administrative process on in a criminal process. As expected, the probability of both isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are low, 2.35% and 2.33%, respectively. In other words, the procurement is unlikely to be improper given the evidence entered so far. In the second scenario, one of the three members of the committee was previously investigated in the administrative instance. All other evidence is the same as in the previous scenario. We expect that this new piece of evidence should not be strong enough to make the procurement improper, although the probability of being improper should be higher than in the first scenario.</ns0:p><ns0:p>The results of inference are as expected.</ns0:p><ns0:p>The probability of isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are 20.82% and 28.95%, respectively 4 . In other words, the probability increased but it is still relatively unlikely. However, depending on the stringency of the threshold, this case might be flagged as warranting additional attention.</ns0:p><ns0:p>Finally, in the third scenario, we have evidence that the owners of two different enterprises participating in the procurement process live at the same address. 
Since there are only three enterprises participating in the procurement, the competition requirement is compromised. Thus, the procurement is likely to be improper.</ns0:p><ns0:p>As expected, the probability of isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are much larger, at 60.08% and 28.95%, respectively 5 . Notice that although the probability of having an improper procurement correctly increased to a value greater than 50%, the probability of having an improper committee has not changed, since there is no new evidence supporting this hypothesis.</ns0:p><ns0:p>The cases presented here are meant to illustrate the UMP-ST. A full case-based evaluation would consider a broad range of cases with good coverage of the intended use of the model.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICABILITY OF UMP-ST TO OTHER DOMAINS</ns0:head><ns0:p>In this paper, we focused on the fraud identification use case as a means to illustrate the core ideas of the UMP-ST. We chose this use case because its applicability was clear and its benefits have been independently tested (the methodology is currently being evaluated for use by the Brazilian Comptroller Office). Nevertheless, the methodology is applicable to any problem requiring the development of a probabilistic ontology. Other examples of using the technique can be found in the terrorist identification domain <ns0:ref type='bibr' target='#b45'>(Haberlin et al., 2014)</ns0:ref> and the maritime domain awareness (MDA) domain <ns0:ref type='bibr' target='#b41'>(Haberlin, 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Carvalho et al., 2011)</ns0:ref>. For instance, the latter involved the development of a probabilistic ontology as part of the PROGNOS (Probabilistic OntoloGies for Net-centric Operation Systems) project <ns0:ref type='bibr' target='#b17'>(Costa et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Carvalho et al., 2010)</ns0:ref>, in which PR-OWL was chosen as the ontology language due to its comprehensive treatment of uncertainty, use of a highly expressive first-order Bayesian language, and compatibility with OWL.</ns0:p><ns0:p>The MDA probabilistic ontology is designed for the problem of identifying whether a given vessel is a ship of interest. The PO was written in PR-OWL, and its development employed the UMP-ST process. An important aspect is that the development of the PR-OWL ontology was initially based on an existing ontology of Western European warships that identifies the major characteristics of each combatant class through the attributes of size, sensors, weapons, missions, and nationality. Thus, its development was a good case study for applying UMP-ST to extend an existing ontology to incorporate uncertainty.</ns0:p><ns0:p>During its development, the MDA ontology was evaluated for face validity with the help of semantic technology experts with knowledge of the maritime domain.
This evaluation effort had issues in getting feedback from a sufficiently large number of experts, but the overall result of the evaluation suggests the UMP-ST not only as viable and applicable to the problem it supports but also a promising approach for using semantic technology in complex domains <ns0:ref type='bibr' target='#b41'>(Haberlin, 2013)</ns0:ref>.</ns0:p><ns0:p>More recently, <ns0:ref type='bibr' target='#b1'>Alencar (2015)</ns0:ref> applied the UMP-ST process to create a PO for supporting the decision of whether or not proceed with Live Data Forensic Acquisition. Besides using the UMP-ST, several tools and techniques shown in this paper were also applied: the use of UML Class Diagram to identify the main entities, attributes, and relations for the model; the use of a traceability matrix to facilitate further improvements in the model; the implementation of the PO using PR-OWL and UnBBayes; and the validation of the model using both unit testing and case-based evaluation.</ns0:p></ns0:div> <ns0:div><ns0:head>FUTURE WORK</ns0:head><ns0:p>A natural next step in this research is the development of automated tools to support the UMP-ST. It is useful to have a tool to guide the user through the steps necessary to create a probabilistic ontology and link this documentation to its implementation in the UnBBayes PR-OWL plug-in. A tool to support this documentation process has been developed by the Group of Artificial Intelligence (GIA) from the University of Bras&#237;lia, Brazil <ns0:ref type='bibr' target='#b24'>(de Souza, 2011;</ns0:ref><ns0:ref type='bibr' target='#b72'>Santos et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Carvalho et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Penetration of semantic technology into serious applications cannot rely only on hand-engineered ontologies. There has been a robust literature on ontology learning <ns0:ref type='bibr' target='#b46'>(Hazman et al., 2011)</ns0:ref> and learning of expressive probabilistic representations <ns0:ref type='bibr' target='#b37'>(Getoor and Taskar, 2007;</ns0:ref><ns0:ref type='bibr' target='#b23'>de Raedt, 1996;</ns0:ref><ns0:ref type='bibr' target='#b60'>Luna et al., 2010)</ns0:ref>, and there is robust research activity on the topic. The probability specification step of the POMC could combine both expert-specified probability distributions and probability distributions learned from data.</ns0:p><ns0:p>Practicing ontology engineers and the semantic technology community would benefit from widely available ontological engineering tools that support UMP-ST, provide scalable inference and support learning. In addition, there are other disciplines not discussed in this paper that are essential for practical ontology engineering, such as configuration management and user experience design. Further, the UMP-ST process would benefit from a more detailed description of the activities performed, roles involved, and artifacts produced in its application.</ns0:p><ns0:p>The Eclipse Process Framework (EPF) could be employed to provide a structured way to present the disciplines, activities, best practices, roles, etc. 
As a customizable software process engineering framework, EPF has two major goals <ns0:ref type='bibr'>(Eclipse Foundation, 2011)</ns0:ref>:</ns0:p><ns0:p>&#8226; 'To provide an extensible framework and exemplary tools for software process engineering -method and process authoring, library management, configuring and publishing a process.'</ns0:p><ns0:p>&#8226; 'To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications.'</ns0:p><ns0:p>Capturing UMP-ST within EPF would provide guidance and tools to a broad community of developers for applying the UMP-ST to develop probabilistic ontologies. A process that is made freely available with the EPF framework is the OpenUP <ns0:ref type='bibr' target='#b3'>(Balduino, 2007)</ns0:ref>, which is a minimally sufficient software development process. This process could be used as a starting point for describing the UMP-ST process, since OpenUP is extensible and can serve as a foundation on which process content can be added or tailored as needed.</ns0:p><ns0:p>Two major challenges must be addressed to enable broad use of semantically rich uncertainty management methods. The first is scalability. There have been some attempts to grapple with the inherent scalability challenges of reasoning with highly expressive probabilistic logics. For example, lifted inference <ns0:ref type='bibr' target='#b4'>(Braz et al., 2007)</ns0:ref> exploits repeated structure in a grounded model to avoid unnecessary repetition of computation. Approximate inference methods such as MC-SAT and lazy inference <ns0:ref type='bibr' target='#b29'>(Domingos and Lowd, 2009)</ns0:ref> have been applied to inference in Markov logic networks. Hypothesis management methods <ns0:ref type='bibr'>(Haberlin et al., 2010a,b;</ns0:ref><ns0:ref type='bibr' target='#b57'>Laskey et al., 2001)</ns0:ref> can help to control the complexity of the constructed ground network. Much work remains on developing scalable algorithms for particular classes of problems and integrating such algorithms into ontology engineering tools.</ns0:p><ns0:p>The second is evaluation: ontologies generated using the UMP-ST process would greatly benefit from methods that can assess how well and comprehensively the main aspects of uncertainty representation and reasoning are addressed. Thus, a natural path in further developing the UMP-ST process is to leverage ongoing work in this area, such as the Uncertainty Representation and Reasoning Evaluation Framework (URREF) <ns0:ref type='bibr' target='#b16'>(Costa et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b25'>de Villiers et al., 2015)</ns0:ref> developed by the International Society of Information Fusion's working group on Evaluation of Techniques for Uncertainty Reasoning (ETURWG). We are already participating in this effort and plan to leverage its results in the near future.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The Uncertainty Modeling Process for Semantic Technology (UMP-ST) addresses an unmet need for a probabilistic ontology modeling methodology.
While there is extensive literature on both probability elicitation and ontology engineering, these fields have developed nearly independently and there is little literature on how to bring them together to define a semantically rich domain model that captures relevant uncertainties. Such expressive probabilistic representations are important for a wide range of domains. There is a robust literature emerging on languages for capturing the requisite knowledge. However, modelers can as yet find little guidance on how to build these kinds of semantically rich probabilistic models.</ns0:p><ns0:p>This paper provides such a methodology. UMP-ST was described and illustrated with a use case on identifying fraud in public procurement in Brazil. The use case was presented with a focus on illustrating the activities that must be executed within each discipline in the POMC cycle in the context of the fraud identification problem. The core concepts in applying UMP-ST to the procurement domain can easily be migrated to completely distinct domains. For instance, it was also used in defining a PO for Maritime Domain Awareness (MDA) <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, which supports the identification of terrorist threats and other suspicious activities in the maritime domain. The MDA PO evolved through several versions, showing how the UMP-ST process supports iterative model evolution and enhancement.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Uncertainty Modeling Process for Semantic Technology (UMP-ST).</ns0:figDesc><ns0:graphic coords='7,141.73,98.16,413.58,193.52' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Probabilistic Ontology Modeling Cycle (POMC) -Requirements in blue, Analysis &amp; Design in green, Implementation in red, and Test in purple.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.60,307.16' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Specification Tree for Procurement Model Requirements</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Entities, their attributes, and relations for the procurement model.</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.57,319.85' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4.
As it is a functional relation, livesAt relates a Person to an Address. Hence, the possible values (or states) of this RV are instances of Address.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. PR-OWL Entities for the procurement domain.</ns0:figDesc><ns0:graphic coords='15,291.83,312.29,113.39,155.72' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Creating a RV in PR-OWL plug-in from its OWL property by drag-and-drop.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Part of the probabilistic ontology for fraud detection and prevention in public procurements.</ns0:figDesc><ns0:graphic coords='17,141.73,63.78,413.58,232.79' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. LPD for node isImproperProcurement(procurement).</ns0:figDesc><ns0:graphic coords='18,245.13,63.78,206.79,241.47' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Results of unit testing for the Judgment History MFrag.</ns0:figDesc><ns0:graphic coords='19,147.71,178.57,198.42,73.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Part of the SSBN generated for the first scenario.</ns0:figDesc><ns0:graphic coords='20,141.73,252.56,413.57,200.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,141.73,63.78,413.58,210.46' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Traceability Matrix Relating Rules to Requirements Rule.1 Rule.2 Rule.3 Rule.4 Rule.5 Rule.6</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Rq.1</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.1.a</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.1.a.i</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.1.a.ii</ns0:cell><ns0:cell /><ns0:cell>X</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a.i</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a.ii</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b.i</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b.ii</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>6. 
If 4 or 5, then it is more likely that the committee violates policy for proper committee composition.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of all three scenarios.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Scenario Hypothesis and Expected Result Evidence</ns0:cell></ns0:row></ns0:table><ns0:note>19/25PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='7'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='11'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='15'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016)</ns0:note> <ns0:note place='foot' n='16'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='17'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='4'>The SSBN generated for this scenario is shown in<ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, provided as supplemental material with this paper.5 The SSBN generated for this scenario is shown in<ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, provided as supplemental material with this paper.18/25PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016)</ns0:note> <ns0:note place='foot' n='21'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='25'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:1:1:CHECK 16 May 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
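The scenario probabilities reported above (for example, the 60.08% posterior for isImproperProcurement(procurement1)) come from the situation-specific Bayesian network that UnBBayes generates from the PR-OWL model; those LPDs are not reproduced here. Purely as an illustration of the kind of update involved, the following minimal Python sketch applies Bayes rule to a hypothetical two-node model in which evidence of few bidders raises the probability of an improper procurement. The structure and all numbers are invented for this sketch and do not correspond to the procurement PO.

    # Hypothetical two-node model: isImproper -> fewBidders.
    # The prior and conditional probabilities below are invented for this
    # illustration; they are NOT the LPDs of the procurement ontology.
    P_IMPROPER = 0.10                         # prior P(improper = true)
    P_FEW_GIVEN = {True: 0.70, False: 0.20}   # P(few bidders | improper)

    def posterior_improper(few_bidders_observed):
        """Posterior P(improper | evidence) by direct application of Bayes rule."""
        like_true = P_FEW_GIVEN[True] if few_bidders_observed else 1 - P_FEW_GIVEN[True]
        like_false = P_FEW_GIVEN[False] if few_bidders_observed else 1 - P_FEW_GIVEN[False]
        joint_true = P_IMPROPER * like_true
        joint_false = (1 - P_IMPROPER) * like_false
        return joint_true / (joint_true + joint_false)

    # Observing few bidders (e.g., only three enterprises) raises the posterior.
    print("P(improper | few bidders)  =", round(posterior_improper(True), 4))   # 0.28
    print("P(improper | many bidders) =", round(posterior_improper(False), 4))  # 0.04

In the actual model, the same kind of update is carried out over the generated SSBN, with evidence entered for the known relationships among the procurement, its bidders, and the committee.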
"Response to Comments from Editor and Reviewers Paper title: Uncertainty Modeling Process for Semantic Technology Paper number: CS-2015:10:7010:1:0:NEW Dear Editor: Please find a revised version of our submission, Uncertainty Modeling Process for Semantic Technology. We are grateful the reviewers for their many helpful comments and suggestions. We have substantially rewritten the paper to address the comments. A detailed point-by-point list of changes is included below. We have also included a document comparison of the old and new documents. The comparison is of limited use because the exchanges are so extensive, but we have included it as requested by PeerJ. We hope the reviewers will agree that their suggestions have helped us to make it a much better paper. Sincerely, Rommel Carvalho, Kathryn Laskey and Paulo Costa Response to comments from the Editor • • … clarification in regard to cited methods and ontologies would improve the paper … a comparison to the proposed method and the cited or applied ones, respectively, would clarify the relation between the presented and related work. We have added a new section containing substantial material reviewing the ontology engineering literature. We compare our work to the cited work and clarify the distinct contributions of our work over the existing literature. Don't use references in the abstract. This has been corrected. Response to comments from Reviewer 1 • The paper claims (in the abstract) that little has been done for the engineering of probabilistic ontologies. The statement is false … We did not intend to make the claim that there has been little work on probabilistic ontologies. In fact, elsewhere in the paper, we noted that this is an active research area. We apologize for misleading wording. We hope our revisions make our meaning clearer: “little has been written about engineering of probabilistic ontologies.” This statement is true to the best of our knowledge. We have performed a careful literature search attempting to find work of which we may not have been aware. Indeed, we undertook the work reported in this paper because we have received inquiries from people wanting to -1- build probabilistic ontologies, as to whether we were aware of any work on how to develop a probabilistic ontology. We felt this was a gap that needed to be filled. A sampling of emails we have received seeking help in probabilistic ontology engineering is included in the supplemental materials. • The use cases are too simple and not well vetted for applicability. The use case was deliberately kept simple. The purpose of the paper is to present and illustrate a methodology for probabilistic ontology engineering, not to describe an ontology for procurement fraud. We agree the use case is too simple for the latter purpose. For our purpose, including greater complexity would get in the way of explaining how to apply our knowledge engineering methodology. To avoid misunderstanding, we have included material clarifying the purpose of the use case. Although simple, the procurement fraud PO is a realistic proof of concept. One of the authors is a professional with years of experience in identifying procurement fraud. Furthermore, a specialist provided expert knowledge to support construction of the probabilistic ontology. • The process should follow definitions in Figure 2 to make it easier for the reader to follow. This has been done. • Math of uncertainty and the RV discussion is hard to discern how created, but is more associated with a discussion with BN. 
Some effort could be coordinated between a BN representation for the uncertainty quantification, that then is described in an ontology. The ontology then supports the query of analysis. We are unsure exactly what the reviewer’s concern is here. We have rewritten the explanations of the random variables and local distributions to be more clear. • Experimental design - Too limited to discern viability. As noted above, the focus of the paper, and the purpose of the use case, is to show how to apply the UMP-ST. We make no claim that the evaluation is complete or thorough. We claim only that it illustrates the kinds of evaluation that should be done as part of building a probabilistic ontology. • Validity - Does not seem reproducible of complete to validate. We are unsure about what is concern is being communicated. If the concern is with reproducibility of our use case, the full implementation with code is included in the first author’s doctoral dissertation. We are providing this as supporting material along with our submission. [INTRO] • “Uncertainty is fundamental across a wide range of domains to which semantic technology is applied” –awkward. We have rewritten this. • “This increase in complexity has been a major obstacle to penetration of expressive probabilistic representations into real-world applications.” - awkward; the complexity is not a reason what it has not been reported in real-world literature (that comes from reporting on products). This has been rewritten to clarify the meaning and improve readability. • “the literature contains little guidance on how to model a probabilistic ontology.” Seriously? Your references are 7-years old. We stand by our statement that, while there are robust literatures on ontology engineering and on probability elicitation, very little has been written on knowledge engineering for probabilistic languages -2- expressive enough to represent a probabilistic ontology. In the process of making revisions, we have performed additional literature search and have been unable to find guidance on modeling probabilistic ontologies. If the reviewer is aware of literature we have not identified, we would greatly appreciate pointers to it. • “use a probabilistic ontology in real world problems, the first step is creating and documenting the model (TBox).” - what is the model – mathematical? Linguistic? Etc… “t needs to be populated (A-Box).” - What is A, what is T? Thank you for pointing out the need for greater clarity in terminology. We removed the reference to A-Box and T-Box (these are commonly used terms in the literature on knowledge-based systems, but are unnecessary here). We also clarified our use of the term “model” at the beginning of the section titled “Uncertainty Modeling Process for Semantic Technology.” • “possible to query the model given the specific situation provided by the data available.” – OK, a “query” is a way to access a system, but here you assume it is human or a machine. http://www.britannica.com/technology/query-language. At the point this statement was made (end of intro), we added a sentence to clarify what we mean by a query. [UNCERTAINTY MODELING] • Three stages? How can you build the model, without the parameters? I am unsure of the question being asked here. The 3 stages are: (1) model the domain; (2) populate the model with information about a specific situation; and (3) use the model and situation-specific information for reasoning. Stage (1) includes specifying both the model structure and the model parameters. 
It does not include specifying the data for a particular situation. For example, in our use case, Stage (1) included developing a rule, with parameters to represent uncertainty, indicating that low education of an owner of an enterprise is a potential indicator that the enterprise may be a front. Stage (2) might include populating the model with information that a specific individual named as the enterprise owner on a procurement being investigated has not finished high school. Stage (3) would then involve running the model to alert investigators that this procurement is suspicious. We expanded the discussion at the beginning of the second section to clarify these points. • Uncertainty types – see the URREF. References have been added to URREF and other works that discuss types and sources of uncertainty. Regarding URREF, it should be noted that its focus is on evaluation of uncertainty reasoning within the context of information fusion systems. Although related, URREF does not address the goals of UMP-ST. More specifically, UMP-ST provides a method for building POs, while URREF is an ontology to support the evaluation of uncertainty representation and reasoning in information systems. We noted in the future work section that evaluation of ontologies built using UMP-ST can definitely leverage URREF’s work, but UMP-ST and URREF have different foci. • “Disciplines” – that is the word. Analysis and Design are separate concepts. In most engineering applications, you design first and then do the analysis of the design against testing. Not sure what the “unified process” is, but seems like it is myopically applied. This terminology is drawn from the literature on the unified process. We -3- have added material defining terms, reviewing literature on the UP, and noting its widespread use in the software engineering community. • “in Figure 2, the requirements discipline (blue circle)” is labeled as Goals and not requirements. Do goals consist of queries and evidence? Seems like a disconnect? The other circisl do have some association with the terms. We have revised Figure 2 and the surrounding text to make it more clear. • Figure 2, LPD not described. This has been addressed. [FRAUD] • What are the fraud types? All that is listed in “fake competition.” Where is a reference to a document that explores fraud? We have added references to the literature on detecting fraud. • “the prob ontology designed by experts transforms millions of items into dozens” – HOW? We have rewritten this section to clarify how the probabilistic ontology can be used to sift through evidence and alert analysts. • 1. Goal: evidence – looks like a social network diagram. 2. Goal, query: evidence – question should be a “criminal history” extracted from a police database. A comment was added about sources of data used for evidence. • RTM – you end with this, but it should be the start of the requirements section. We have included additional discussion on traceability, including an example specification tree and a traceability matrix. • Figure 4 is good. Thank you. • Categories for development should follow Figure 2. Thus, there is no category “implementation” in Figure 2. We have added a legend to Figure 2 and improved the correspondence between Figure 1, Figure 2 and the explanatory text. • how does Figure 6 create the RV? We have expanded the explanation. • there is no example of the coordination of the math of the RV. 
For example “The Improper Procurement MFrag has the resident RV isImproperProcurement(procurement),” is not a RV. isImproperProcurement(procurement) is a Boolean (true/false) RV that represents whether a procurement is improper or not. It is resident in the Improper Procurement MFrag, which shows its parents. Its local distribution is defined in Figure 8, and an instance of this RV appears in the situation-specific Bayesian network of Figure 10. An instance of this RV will be created for each procurement being evaluated. We have expanded our explanation of how this works. • If you have a LPD, you must have a GPD. It seems that you should look at the efforts for distributed fusion that works with graphs for distribution fusion. Yes, the MFrags define a global probability distribution that can be used to define a situation-specific Bayesian network to respond to any query. We added a couple of paragraphs on page 12 explaining the fact that PR-OWL ontologies are based on MEBN logic, which has a set of restrictions that ensure the existence of a GPD that can be approximated by a sequence of finite Bayesian networks. We also mentioned how this is done in practice via the situation-specific Bayesian network construction process. However, conveying the specific restrictions, mathematical proofs, process details, and other aspects that support these characteristics of MEBN models would fall outside the -4- • • • • • scope of this paper, so we pointed the interested reader to the main reference on MEBN logic. Can you provide a easy to follow RTM matrix? There are no matrices presented. This has been done. “The purpose of importance analysis is to verify that selected parts of the model behave as intended.” This does not seem to follow as importance sampling is a decision process. You probable want to discuss “verification” that the model works as intended. Importance analysis is not the same thing as importance sampling. We took the term importance analysis from Laskey and Mahoney (2000) and the references therein. It is a kind of sensitivity analysis that measures the impact (importance) of information about one random variable on the distribution of another. We added a note on the source of the term. Too many references to Carvalho (2011) are required to learn from this presentation alone. We have included additional information to make this article more selfcontained, and are also providing supplementary materials that space precludes including in the main article. Need to highlight the scenario as a separate section as a use case. There is not enough detail to work through the examples as scenarios to follow the process. Numbers in Figure 9, 10 are not described in relation to the scenario. We have added more detailed explanations within the text for both figures 9 and 10. The numbers for the scenarios are not compared to be able to learn from the examples. A combined table would be useful. We have added a combined table. [CONCLUSIONS] • The quick look at the scenarios does not seem to convey the viability of the approach. More is needed to make the claim. We wrote this paper because we have received communications from people seeking guidance on how to develop probabilistic ontologies, and we want a publication to which we can refer them. There are robust literatures on how to build ordinary ontologies, how to build Bayesian networks, how to elicit probabilities, and how to learn probabilistic models from data. 
But there is virtually no literature on how to combine these elements to engineer a probabilistic ontology. Our purpose in writing this paper is to address this hole in the literature by providing a methodology for how to go about constructing a PO. The purpose of the use case is to illustrate the methodology. We do not claim that the probabilistic ontology we construct for the use case is complete or correct. The purpose is to provide guidance on how to construct a probabilistic ontology. The purpose is not to present a full probabilistic ontology for procurement fraud detection. • The second paragraph in the conclusions should go in the intro. We noted in the introduction that although the main use case of procurement fraud is used for illustration, the methodology is applicable to any domain that uses semantic technology. This was also emphasized in a new section, 'Applicability of UMP-ST to Other Domains' that was inserted after the main use case section to emphasize UMP-ST is general in its applicability and that the Brazil Procurement case was just one scenario that we detailed a bit more for illustration purposes. -5- “the next step” is part of future work. This material has been moved to the future work section. ”Future work” should be a discussion section before the conclusions. This has been done. Reviewer 2 [BASIC REPORTING] • it is not clear in the paper what are the main differences concerning the modelling process when one has or has not an uncertain domain, i.e., besides making use of a language suitable for handling uncertainty, the paper should clearly present the specificities in the overall disciplines and phases that make possible to engineering an ontology representing uncertainty. Otherwise, the proposed methodology is the same as the ones handling traditional ontologies and what changes is only the implementation using a probabilistic ontology language. We have added additional material to the section titled “Uncertainty Modeling Process for Semantic Technology” clarifying the parts of the UMP-ST that are standard ontology engineering and the parts that are specific to a probabilistic ontology. • there are lots of citations to the previous work of the authors stating that more details and better explanation can be found there. I believe this harms the paper to be selfcontained. We have added additional material to make the paper more selfcontained. Space considerations preclude making the full text completely selfcontained, but we have extracted the case study material from Carvalho’s dissertation and provided as supplementary material with our submission. • would like to see how the uncertainty of the domain makes the phases and disciplines to be different from a standard ontology engineering modeling, as this is the gap that the paper proposes to fulfill; in addition what are the particularities of the UMP-ST process and the POMC compared to the traditional software development process. We have added several paragraphs to the section titled “Uncertainty Modeling Process for Semantic Technology” comparing methodologies for building standard ontologies with our proposed methodology. We also added more information on the Unified Process and how we adapted it to PO modeling. • in case it is not possible to highlight how the overall process changes when facing uncertainty, I suggest the authors direct the paper to the development of a probabilistic ontology in the Brazilian procurement fraud domain. 
But, in this case, it is necessary to point out the contributions of the current paper compared to the previous ones by the same authors. As described above, we have added material in the latter part of the section “Uncertainty Modeling Process for Semantic Technology” explaining the parts of the process that are specific to uncertainty modeling. • would like to see the reasons for choosing the PR-OWL instead of others existing in the literature. PR-OWL is the implementation language we used, but is not required for UMP-ST, which could be applied using a different expressive probabilistic language. A discussion about why we chose MEBN/PR-OWL was inserted on page 12, when explaining that MEBN logic provides a global probability distribution and also on how the SSBN construction method works. • -6- would like to see how this paper contributes over the previous ones, and possibly related work concerning probabilistic ontology modeling (even if they are from the same authors) We added a paragraph on this in the introduction, near the bottom of the first page. In short, this paper provides a comprehensive explanation of UMP-ST, which we developed in response to requests for guidance in constructing probabilistic ontologies. Our earlier publications were much shorter papers that focused on explaining how UMP-ST helps practitioners to make more effective POs. These works did not provide sufficient detail to replicate the use cases or build one’s own probabilistic ontology. This paper, together with the supplementary material, addresses this gap. [EXPERIMENTAL DESIGN] • Some explanations about the domain are required, particularly, it should be clear in the paper what are the meaning of procurement, bidders, suppliers, etc, in the Brazilian context. At the beginning of the “Preventing and Detecting Procurement Fraud” section, we added explanations of the Brazilian procurement process and how it works. We believe this new information, together with the explanation of Figure 4, should give the reader sufficient understanding of the domain for the purpose of the proof of concept. • in the current paper, only a small subset of the requirements is tackled, while the whole set is presented in the previous paper. Please, make clear how this subset was chosen, and how many requirements are there, and how this choice affects the overall process and the final developed probabilistic ontology, providing some examples. The use case was intended to be a proof of concept to illustrate the steps of the UMP-ST. The examples were chosen because we thought they would be easy to understand and provided a good illustration of the method. • when defining the entities, its properties and relationships, the UML language is used, and the authors claim that this is because of the popularity of UML. However, I believe that nowadays OWL and its tools and editors are also quite popular among the ontology community. This also brings the necessity of specifying the rules separately, as UML has no support for that. Maybe there was a failed attempt of using OWL editors, but the popularity is not a hard claim for choosing UML, from my point of view. The UML diagrams of Figure 4 were very fast to produce using UML tools, and provided a good visualization that can be understood by the domain expert. We could also easily add annotations to the UML diagrams. 
It would be possible to create the ontology directly in OWL without going through the step of making a UML class diagram, but the capability for visualization is much more limited in commonly available OWL tools, which makes it harder to share with experts to get their feedback. In our experience, it is quite common for ontology engineers to start out with easily visualized representations such as UML class diagrams, and later migrate to OWL. This may change as the visualization capability for OWL tools improves, but at present it helps to have the ability to do “quick and dirty” UML diagrams. We have tried to clarify this point in the paper. • it is stated in the paper that everyone is obliged to file his/her TaxInfo, but this is a bit strong. It should be said that any person relevant to this domain are obliged to do that, not every Brazilian citizen. Every Brazilian citizen is required to file tax information, • -7- • • • • • • • • even if only to state that his or her income is below a certain amount and no taxes are owed. In page 8, in Implementation section, it is stated that PR-OWL2 is not currently implemented. At the end of this page, it is said that PR-OWL2 plugin is used. This is a little confusing. I believe this can be clearer if the authors simply explain the limitations of PW-OWL 1 in this section and the modifications that were introduced in order to attend such limitations. Then, PR-OWL2 can be left to the Future work section. We apologize for inadvertently leaving in a statement from an earlier draft that was written before PR-OWL 2 was available. We have removed any discussion of the distinction between PR-OWL 1 and PR-OWL 2, as not being germane to this paper. We believe the PR-OWL material is more understandable now. Resident nodes, input nodes and the difference among them need to be explained in the paper. Actually, I believe the reader would benefit from some explicit background knowledge regarding this language. Explanations have been added. please, explain how the LPDs are obtained, whether they are manually defined or learned from data. An explanation was added on page 15. In general, probability values for the MFrags are defined through some combination of expert elicitation and learning from data. However, in the PO described in this paper, all LPDs were defined based on the experience of the SMEs from CGU, since there is not enough structured data to learn the distribution automatically. the role of the expert should be more detailed in the Test section. Is there only one expert? How this choice can threaten the validity of the proof of concept? An explanation was added to the Test section. [VALIDITY OF FINDINGS] the methodology is presented as general, however, this is not proved in the paper, as only one domain is taken into account and nothing is said about the people handling it. We have added references to and discussion of application of UMP-ST to other domains. We have emphasized that the procurement fraud use case was chosen to provide a step-by-step illustration of the methodology, but the UMP-ST can be applied to any domain where a semantically rich uncertainty representation is needed. Not only the process should be presented but also a discussion about how following the POMC and UMP-ST helped or changed the previous way of modeling the domain. We have received inquiries from people wanting to build probabilistic ontologies, requesting resources for how to develop a probabilistic ontology. 
A sampling of emails we have received seeking help in probabilistic ontology engineering is included in the supplemental materials. it needs to be discussed in the paper the point of view of the experts, i.e., how a number of them would evaluate the proposed process, compared to following a different one, or no one. The purpose of this paper is not to evaluate which methodology is the best for building POs nor to compare building a PO with and without the UMP-ST methodology. As de Hoog says (1998), 'it is extremely difficult to judge the value of a methodology in an objective way. Experimentation is of course the proper way to do it, but it is hardly feasible because there are too many conditions that cannot be controlled'. On the one hand 'introducing an experimental toy problem will violate -8- the basic assumption behind the need for a methodology: a complex development process'. On the other hand, if we extrapolate the argument that de Hoog provides for knowledge based systems to the probabilistic ontology domain, it is not likely that someone, CGU in this instance, will pay twice for building the same complex PO using different methodologies. Nevertheless, as stated by the SME at CGU, 'Before the UMP-ST process we did not even know where to begin when trying to build a PO for finding frauds in the procurement process. However, with UMP-ST, we were able to test our ideas and create a simple PO as a proof of concept.” We have added discussion to clarify this point. -9- "
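The response above describes isImproperProcurement(procurement) as a Boolean random variable that is resident in an MFrag and instantiated once for each procurement under evaluation. The following Python sketch illustrates that template-and-instance idea in miniature; the class and function names are invented for illustration and do not mirror the UnBBayes or PR-OWL APIs.

    # Minimal illustration of grounding a template RV for each entity instance.
    # Names are hypothetical and do not mirror the UnBBayes/PR-OWL APIs.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RVInstance:
        template: str          # e.g. "isImproperProcurement"
        entity: str            # e.g. "procurement1"
        states: tuple = ("true", "false")

        @property
        def name(self):
            return "{}({})".format(self.template, self.entity)

    def ground(template, entities):
        """Create one RV instance per entity, as an SSBN-style grounding would."""
        return [RVInstance(template, e) for e in entities]

    for rv in ground("isImproperProcurement", ["procurement1", "procurement2"]):
        print(rv.name, rv.states)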
Here is a paper. Please give your review comments after reading it.
275
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. A probabilistic ontology provides constructs for representing uncertainty in domain ontologies. While the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. To address the gap, this paper presents the Uncertainty Modeling Process for Semantic Technology (UMP-ST), a new methodology for modeling probabilistic ontologies. To explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-ofconcept probabilistic ontology. The resulting domain model can be used to support identification of fraud in public procurements in Brazil. While the case study illustrates the development of a probabilistic ontology in the PR-OWL probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The ability to represent and reason with uncertainty is important across a wide range of domains. For this reason, there is a need for a well-founded integration of uncertainty representation into ontology languages. In recognition of this need, the past decade has seen a significant increase in formalisms that integrate uncertainty representation into ontology languages. This has given birth to several new languages such as: PR-OWL <ns0:ref type='bibr' target='#b16'>(Costa, 2005;</ns0:ref><ns0:ref type='bibr' target='#b19'>Costa et al., 2005</ns0:ref><ns0:ref type='bibr' target='#b20'>Costa et al., , 2008;;</ns0:ref><ns0:ref type='bibr' target='#b7'>Carvalho, 2011;</ns0:ref><ns0:ref type='bibr' target='#b14'>Carvalho et al., 2013)</ns0:ref>, OntoBayes <ns0:ref type='bibr' target='#b80'>(Yang and Calmet, 2005)</ns0:ref>, BayesOWL <ns0:ref type='bibr' target='#b28'>(Ding et al., 2006)</ns0:ref>, P-CLASSIC <ns0:ref type='bibr' target='#b52'>(Koller et al., 1997)</ns0:ref> and probabilistic extensions of SHIF(D) and SHOIN(D) <ns0:ref type='bibr' target='#b59'>(Lukasiewicz, 2008)</ns0:ref>.</ns0:p><ns0:p>However, the increased expressive power of these languages creates new challenges for the ontology designer. In addition to developing a formal representation of entities and relationships in a domain, the ontology engineer must develop a formal characterization of the uncertainty associated with attributes of entities and relationships among them. While a robust literature exists in both ontology engineering <ns0:ref type='bibr' target='#b2'>(Allemang and Hendler, 2008;</ns0:ref><ns0:ref type='bibr' target='#b39'>Gomez-Perez et al., 2004)</ns0:ref> and knoweldge engineering for probability models <ns0:ref type='bibr' target='#b56'>(Laskey and Mahoney, 2000;</ns0:ref><ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b53'>Korb and Nicholson, 2003;</ns0:ref><ns0:ref type='bibr' target='#b65'>O'Hagan et al., 2006)</ns0:ref>, these fields have developed largely independently. 
The literature contains very little guidance on how to build ontologies that capture knowledge about domain uncertainties.</ns0:p><ns0:p>To fill the gap, this paper describes the Uncertainty Modeling Process for Semantic Technology (UMP-ST), a methodology for defining a probabilistic ontology and using it for plausible reasoning in applications that use semantic technology. The methodology is illustrated through a use case in which semantic technology is applied to the problem of identifying fraud in public procurement in Brazil. The purpose of the use case is to show how to apply the methodology on a simplified but realistic problem, and to provide practical guidance to probabilistic ontology designers on how to apply the UMP-ST. This paper goes beyond previous work on UMP-ST (e.g., <ns0:ref type='bibr' target='#b42'>Haberlin, 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Carvalho et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alencar, 2015)</ns0:ref> to provide a comprehensive explanation of the methodology in the context of application to a real world problem, along with pragmatic suggestions for how to apply the methodology in practice. For purpose of exposition, our focus is primarily on the procurement fraud use case, but the UMP-ST is applicable to any domain in which semantic technology can be applied.</ns0:p><ns0:p>Our purpose is not to provide rigorous scientific evidence for the value of UMP-ST in comparison with any other methodology or no methodology. <ns0:ref type='bibr'>de Hoog (1998)</ns0:ref> says, 'it is extremely difficult to judge the value of a methodology in an objective way. Experimentation is of course the proper way to do it, but it is hardly feasible because there are too many conditions that cannot be controlled.' The value of a methodology like UMP-ST lies in its ability to support a complex system development effort that extends over a long period of time and requires significant resources to implement. Besides the large number of uncontrolled variables, the resources to implement a single case of sufficient complexity is difficult; experimentation requiring multiple parallel implementations is prohibitive. Nevertheless, the experience of our development team is that the structure provided by UMP-ST was essential to the ability to capture the expert's knowledge in a model whose results were a reasonable match to the expert's judgments.</ns0:p><ns0:p>The paper is organized as follows. The next section reviews existing design methodologies that provided inspiration for UMP-ST. The following section introduces the UMP-ST process. Next, we introduce our use case devoted to identifying fraud in public procurement in Brazil. The fifth section explains the four disciplines of the UMP-ST in the context of the fraud use case, and is followed by a section discussing applicability of the UMP-ST to other domains. The paper concludes with a section on future work and a final section presenting our concluding remarks.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Successful development of any complex system requires following a structured, systematic process for design, implementation and evaluation. Existing software and systems engineering processes are useful as starting points, but must be tailored for engineering a probabilistic ontology. 
The UMP-ST draws upon a number of related processes for software engineering, ontology engineering, and Bayesian network engineering to provide a process tailored to probabilistic ontology engineering. To provide a context for introducing the UMP-ST, this section reviews related literature on design processes that provided an initial basis for tailoring the UMP-ST.</ns0:p></ns0:div> <ns0:div><ns0:head>The Unified Process</ns0:head><ns0:p>The Unified Process (UP) is a widely applied software engineering process <ns0:ref type='bibr' target='#b50'>(Jacobson et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b54'>Kruchten, 2000;</ns0:ref><ns0:ref type='bibr' target='#b3'>Balduino, 2007)</ns0:ref>. It has three main characteristics: (1) it is iterative and incremental;</ns0:p><ns0:p>(2) it is architecture centric; and (3) it is risk focused. Each project is divided into small chunks, called iterations, each concluding in delivery of executable code. These frequent deliverables yield an incremental implementation of the system. A key deliverable is the executable architecture, which is a partial implementation of the system that validates the architecture and builds the foundation of the system. Finally, the UP mitigates risk by prioritizing the highest risk features for early implementation. The reasoning is simple: if a critical aspect of the system is going to fail, it is better to discover this early enough to rework the design or cancel the project, than to realize after the fact that large amounts of resources have been wasted on a non-viable project.</ns0:p><ns0:p>The UP defines the project lifecycle as composed of four phases: (1) Inception; (2) Elaboration; (3) Construction; and (4) Transition. Inception is usually the shortest phase. The main goal is to define the justification for the project, its scope, the risks involved, and the key requirements. In the elaboration phase, the primary concerns are to define most of the requirements, to address the known risks, and to define and validate the system architecture. The Construction phase is the longest phase, where most of the development process resides. This phase is usually broken down into small iterations with executable code being delivered at the end of each iteration. Finally, in the Transition phase the system is deployed, the users are trained, and initial feedback is collected to improve the system.</ns0:p><ns0:p>To support the project lifecycle, the UP defines several disciplines or workflows. Each discipline describes a sequence of activities in which actors or workers produce products or artifacts to achieve a result of observable value. For example, a developer might carry out a programming activity using the system specification in order to produce both source and executable code. There are several variations of the Unified Process (e.g., Rational Unified Process, Agile Unified Process, Enterprise Unified Process). 
While each has its own set of disciplines, the following disciplines are common to most: Business Modeling, responsible for documenting the business processes in a language common to both business and software communities; Requirements, responsible for defining what the system should do based on the information gathered from the customer; Analysis &amp; Design, responsible for showing how the system will be realized in the implementation phase; Implementation, responsible for developing the code necessary to implement the elicited requirements; Test, which verifies and validates the code developed; and Deployment, responsible for delivering the software to the end user.</ns0:p></ns0:div> <ns0:div><ns0:head>Ontology Engineering</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref>, the first workshop on Ontological Engineering was held in conjunction with the 12TH European Conference on Artificial Intelligence in 1996. Since then, several methodologies for building ontologies have been proposed. <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref> compare several different methodologies used for building ontologies in the context of the METHONTOLOGY ontology development process <ns0:ref type='bibr' target='#b31'>(Fern&#225;ndez-L&#243;pez et al., 1997)</ns0:ref>. METHONTOLOGY identifies activities that are performed to build ontologies. The three main activities are: (1) ontology management activities; (2) ontology development oriented activities; and (3) ontology support activities.</ns0:p><ns0:p>The ontology management activities include: (1) scheduling, which identifies activities, their dependency, the resources needed in each, and how long they will take; (2) control, which guarantees that the project is going according to schedule; and (3) quality assurance, which verifies if the products generated from the scheduled activities are satisfactory. These activities are general enough that they can be imported from other frameworks that are not specific to ontology engineering, such as the Project Management Body of Knowledge (PMBoK), which is a general guide to project management. PMBoK includes activities such as scheduling, control, among others <ns0:ref type='bibr' target='#b75'>(Sun, 2004)</ns0:ref>. Because these are generic activities, it is not surprising that only one, the On-To-Knowledge (OTKM) methodology <ns0:ref type='bibr' target='#b76'>(Sure et al., 2004)</ns0:ref>, out of seven methodologies analyzed and compared by <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref> describes these activities in detail. The METHONTOLOGY methodology only proposes these activities, but does not describe them in detail.</ns0:p><ns0:p>The ontology development oriented activities are divided into three different steps: (1) predevelopment; (2) development; and (3) post-development activities. Pre-development involves: (1a) an environment study to understand where the ontology will be used, which applications will use it, etc.; and (1b) a feasibility study in order to assess if it is worthwhile, feasible, and cost-effective to build this ontology. Although these are important activities, they are not addressed in most of the methodologies for building ontologies. According to <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. 
(2004)</ns0:ref> only the METHONTOLOGY methodology proposes the environment study and describes the feasibility study.</ns0:p><ns0:p>Development activities include: (2a) the specification activity, which describes why the ontology is being built, its purpose, etc.; (2b) the conceptualization activity, which describes and organizes the domain knowledge; (2c) the formalization activity, which evolves the conceptual model into a more formal model; and (2d) the implementation activity, which creates the desired ontology in the chosen language. As expected, these are the main activities addressed by the ontology engineering methodologies. The methodologies analyzed by <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref> proposed or described most of these development activities, with the exception of Cyc <ns0:ref type='bibr' target='#b68'>(Reed et al., 2002)</ns0:ref>, which only addresses the implementation activity and does not mention the others.</ns0:p><ns0:p>Post-development activities involve (3a) maintenance, which updates and fixes the ontology, and (3b) the (re)use of the ontology being developed by other ontologies and applications. These are also important activities; however, most of the methodologies only address them as a natural step during the ontology's life cycle, which can be incremental, producing a sequence of evolving prototypes. None of the methodologies presented by <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref> describes these activities. Only the METHONTOLOGY and OTKM methodologies propose some of these activities, but do not provide much detail.</ns0:p><ns0:p>Finally, ontology support activities include: (1) knowledge acquisition, which extracts domain knowledge from subject matter experts (SME) or through some automatic process, called ontology learning; (2) evaluation, in order to validate the ontology being created; (3) integration, which is used when other ontologies are used; (4) merging, which is important for creating a new ontology based on a mix of several other ontologies from the same domain; (5) alignment, which involves mapping different concepts to/from the involved ontologies; (6) documentation, which describe all activities completed and Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>products generated for future reference; and (7) configuration management, which controls the different versions generated for all ontologies and documentation. Out of the seven methodologies compared by <ns0:ref type='bibr' target='#b39'>Gomez-Perez et al. (2004)</ns0:ref>, five neither propose nor mention configuration management, merging, or alignment. The integration activity is proposed by six of them, but not described in detail. The knowledge acquisition and documentation activities are proposed by three and described by two, while the evaluation activity is proposed by two and described by two in detail.</ns0:p></ns0:div> <ns0:div><ns0:head>Probability Elicitation</ns0:head><ns0:p>The literature on eliciting probabilities from experts has a long history (e.g., <ns0:ref type='bibr' target='#b79'>Winkler, 1967;</ns0:ref><ns0:ref type='bibr' target='#b48'>Huber, 1974;</ns0:ref><ns0:ref type='bibr' target='#b77'>Wallsten and Budescu, 1983)</ns0:ref>. 
At the interface between cognitive science and Bayesian probability theory, researchers have examined biases in unaided human judgment (e.g., <ns0:ref type='bibr' target='#b51'>Kahneman et al., 1982)</ns0:ref> and have devised ways to counteract those biases (e.g., <ns0:ref type='bibr' target='#b15'>Clemen and Reilly, 2004;</ns0:ref><ns0:ref type='bibr' target='#b5'>Burgman et al., 2006)</ns0:ref>. Several authors have defined structured processes or protocols for eliciting probabilities from experts (e.g., <ns0:ref type='bibr' target='#b15'>Clemen and Reilly, 2004;</ns0:ref><ns0:ref type='bibr' target='#b35'>Garthwaite et al., 2005)</ns0:ref>. There is general agreement on the steps in the elicitation process. The seven steps described by <ns0:ref type='bibr' target='#b15'>Clemen and Reilly (2004)</ns0:ref> are: understanding the problem; identifying and recruiting experts; motivating the experts; structuring and decomposition; probability and assessment training; probability elicitation and verification; and aggregating the probabilities. A recent comprehensive reference for probability elicitation is <ns0:ref type='bibr' target='#b65'>(O'Hagan et al., 2006)</ns0:ref>.</ns0:p><ns0:p>The advent of graphical probability models <ns0:ref type='bibr' target='#b67'>Pearl (1988)</ns0:ref> has created the problem of eliciting the many probabilities needed to specify a graphical model containing dozens to hundreds of random variables (cf., <ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b69'>Renooij, 2001)</ns0:ref>. <ns0:ref type='bibr' target='#b62'>Mahoney and Laskey (1998)</ns0:ref> defined a systematic process for constructing Bayesian network models. Their process considered elicitation of structural assumptions as well as probability distributions. It is an iterative and incremental process that produces a series of prototype models. The lessons learned from building each prototype model are used to identify requirements for refining the model during the next cycle.</ns0:p></ns0:div> <ns0:div><ns0:head>UNCERTAINTY MODELING PROCESS FOR SEMANTIC TECHNOLOGY</ns0:head><ns0:p>The process of creating and using a probabilistic ontology typically occurs in three stages: First is modeling the domain; next is populating the model with situation-specific information; and third is using the model and situation-specific information for reasoning. Modeling a domain means constructing a representation of aspects of the domain for purposes of understanding, explaining, predicting, or simulating those aspects. For our purposes, the model represents the kinds of entities that can exist in the domain, their attributes, the relationships they can have to each other, the processes in which they can participate, and the rules that govern their behavior. It also includes uncertainties about all these aspects. 
There are many sources of uncertainty: e.g., causes may be non-deterministically related to their effects; events may be only indirectly observable through noisy channels; association of observations to the generating events may be unknown; phenomena in the domain may be subject to statistical fluctuation; the structure of and associations among domain entities may exhibit substantial variation; and/or the future behavior of domain entities may be imperfectly predictable (e.g., <ns0:ref type='bibr' target='#b73'>Schum and Starace, 2001;</ns0:ref><ns0:ref type='bibr' target='#b58'>Laskey and Laskey, 2008;</ns0:ref><ns0:ref type='bibr' target='#b17'>Costa et al., 2012)</ns0:ref>. Once these and other relevant sources of uncertainty are captured in a domain model, the model can be applied to a specific situation by populating it with data about the situation. Finally, the inference engine can be called upon to answer queries about the specific situation. Unlike traditional semantic systems that can handle only deterministic queries, queries with a probabilistic ontology can return soft results. For example, consider a query about whether an inappropriate relationship exists between a procurement official and a bidder. A reasoning system for a standard ontology can return only procurements in which such a relationship can be proven, while a reasoner for a probabilistic ontology can return a probability that such a relationship exists.</ns0:p><ns0:p>The UMP-ST is an iterative and incremental process, based on the Unified Process (UP), for designing a probabilistic ontology. While UP serves as the starting point, UMP-ST draws upon and is consistent with the ontology engineering and probability elicitation processes described in the previous section, thus tailoring the UP for probabilistic ontology design.</ns0:p><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, the UMP-ST includes all phases of the UP, but focuses only on the Requirements, Analysis &amp; Design, Implementation, and Test disciplines. The figure depicts the intensity of each discipline during the UMP-ST. Like the UP, UMP-ST is iterative and incremental. The basic idea behind iterative enhancement is to model the domain incrementally, allowing the modeler to take advantage of what is Manuscript to be reviewed Computer Science learned during earlier iterations of the model in designing and implementing later iterations. For this reason, each phase includes all four disciplines, but the emphasis shifts from requirements in the earlier phases toward implementation and test in the later phases. Note that testing occurs even during the Inception phase, prior to beginning the implementation phase. This is because it is usually possible to test some aspects of the model during the Analysis &amp; Design stage prior to implementation. It is well known that early testing reduces risk, saves cost, and leads to better performance <ns0:ref type='bibr' target='#b49'>(INCOSE, 2015)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> presents the Probabilistic Ontology Modeling Cycle (POMC). This cycle depicts the major outputs from each discipline and the natural order in which the outputs are produced. Unlike the waterfall model <ns0:ref type='bibr' target='#b70'>(Royce, 1970)</ns0:ref>, the POMC cycles through the steps iteratively, using what is learned in one iteration to improve the result of the next. The arrows reflect the typical progression, but are not intended as hard constraints. 
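Before walking through the disciplines, the three-stage flow described above (model the domain, populate it with situation-specific data, and reason over it) can be made concrete with a small sketch. Everything in it is hypothetical: the names, the single-indicator model, and the probabilities are invented for illustration. The point is only that, unlike a deterministic reasoner, a probabilistic ontology returns a soft result.

```python
# Minimal sketch of the three-stage flow; all names and probabilities are hypothetical.
# Stage 1 -- model: prior belief in an inappropriate official/bidder relationship and the
# likelihood of the "lives at the same address" indicator under each hypothesis.
PRIOR_INAPPROPRIATE = 0.01
P_SAME_ADDRESS = {True: 0.60, False: 0.02}   # P(same address | inappropriate relationship?)

def posterior_inappropriate(same_address_observed: bool) -> float:
    """Stage 3 -- reason: return a soft (probabilistic) answer via Bayes' rule."""
    p_e_h = P_SAME_ADDRESS[True] if same_address_observed else 1 - P_SAME_ADDRESS[True]
    p_e_not_h = P_SAME_ADDRESS[False] if same_address_observed else 1 - P_SAME_ADDRESS[False]
    joint_h = PRIOR_INAPPROPRIATE * p_e_h
    joint_not_h = (1 - PRIOR_INAPPROPRIATE) * p_e_not_h
    return joint_h / (joint_h + joint_not_h)

# Stage 2 -- populate: situation-specific information about one procurement.
situation = {"official": "member1", "bidder": "enterprise3", "same_address": True}

# A deterministic ontology could only prove or fail to prove the relationship;
# the probabilistic model instead returns a degree of belief (about 0.23 here).
print(posterior_inappropriate(situation["same_address"]))
```

In the case study this role is played by the probabilistic ontology and its reasoner rather than a hand-written function, but the contract is the same: evidence in, posterior probability out.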
As the arrows in Figure 2 suggest, it is possible to have interactions between any pair of disciplines. For instance, it is not uncommon to discover a problem in the rules defined in the Analysis & Design discipline during the activities in the Test discipline. As a result, the engineer might go directly from Test to Analysis & Design in order to correct the problem.

In Figure 2, the Requirements discipline (blue box) defines the goals that must be achieved by reasoning with the semantics provided by our model. Usually, when designing a PO, one wants to be able to automate a reasoning process that involves uncertainty. By goals, we mean the kinds of questions the user wants the system to be able to answer via the PO reasoning. For instance, one of the main goals in the procurement fraud domain is to be able to answer with a certain degree of certainty whether a procurement presents any signs of fraud. However, this type of question is not straightforward to answer. Thus, the system will typically need to evaluate a set of more specific questions, or queries, in order to better assess the probability of having fraud. Furthermore, in order to answer these more specific queries, the system will need some evidence. These goals, queries, and evidence comprise the requirements for the model being designed.

The Analysis & Design discipline (green boxes) describes classes of entities, their attributes, how they relate to each other, and what rules apply to them in our domain. These definitions are independent of the language used to implement the model.

The Implementation discipline (red boxes) maps the design to a specific language that is both semantically rich and capable of representing uncertainty. This means encoding the classes, attributes, relationships and rules in the chosen language. For our case study, the mapping is to PR-OWL (Carvalho et al., 2013; Costa et al., 2008), but other semantically rich uncertainty representation languages could also be used (e.g., Cozman and Mauá, 2015).

Finally, the Test discipline (purple box) is responsible for evaluating whether the model developed during the Implementation discipline is behaving as expected from the rules defined during Analysis & Design and whether the results achieve the goals elicited during the Requirements discipline. As noted previously, it is a good idea to test some of the rules and assumptions even before implementation. This is a crucial step to mitigate risk. Early testing can identify and correct problems before significant resources have been spent developing a complex model that turns out to be inadequate.

Like several of the ontology engineering processes considered by Gomez-Perez et al. (2004), the UMP-ST does not cover ontology management, under the assumption that these activities can be imported from other frameworks. Although the UMP-ST does not cover maintenance and reuse, its iterative nature supports incremental evolution of the developed ontology.
Of the ontology support activities described by Gomez-Perez et al. (2004), the UMP-ST process explicitly addresses only the test discipline, which is similar to the evaluation activity. By following the steps in the UMP-ST, the ontology designer will be generating the documentation needed in order to describe not only the final PO, but also the whole process of building it. This supports the documentation activity of Gomez-Perez et al. (2004). Like most ontology engineering processes, the UMP-ST does not address the ontology support activities of integration, merging, and alignment.

The primary focus of the UMP-ST is the ontology development activities. Because it is based on the UP, it uses a different nomenclature than Gomez-Perez et al. (2004), but there is a close resemblance: the specification activity is similar to the requirements discipline; the conceptualization and formalization activities are similar to the analysis & design discipline; and the implementation activity is similar to the implementation discipline. The major difference between the methodologies reviewed by Gomez-Perez et al. (2004) and the UMP-ST is the focus. While Gomez-Perez et al. (2004) focuses on ways to build a glossary of terms, build taxonomies, and define concepts, properties, and deterministic rules, the UMP-ST presents techniques to identify and specify probabilistic rules, define dependency relations between properties based on these rules, and quantify the strength of these relations as parameters of local probability distributions. Thus, the UMP-ST extends other methodologies used for building ontologies, and should coexist with these methodologies. When creating the deterministic parts of the ontology, the user can follow existing methodologies proposed for standard ontology building. To incorporate uncertainty and therefore extend to a full probabilistic ontology, the user can follow the steps defined in the UMP-ST process.

Similarly, the UMP-ST can and should coexist with processes for eliciting probabilities, such as those defined by Clemen and Reilly (2004) and O'Hagan et al. (2006). The probabilistic ontology engineer should refer to these resources when defining a protocol for eliciting probabilities from experts.

In the next two sections, the UMP-ST process and the POMC are illustrated through a case study in procurement fraud detection and prevention. The case study walks step by step through the activities that must be executed in each discipline in the POMC. The case study has been kept simple enough for clear exposition of the POMC, while being complex enough to convey key issues that arise in real-world ontology engineering.
Implementation and plausible reasoning were carried out using the UnBBayes probabilistic ontology environment <ns0:ref type='bibr' target='#b6'>(Carvalho, 2008;</ns0:ref><ns0:ref type='bibr' target='#b64'>Matsumoto et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>PREVENTING AND DETECTING PROCUREMENT FRAUD IN BRAZIL</ns0:head><ns0:p>In Brazil, the main law that details the regulation of the public procurement process is the Federal Law 8,666/93 (Public Procurement Law). Nevertheless, the public procurement procedures are also mentioned in Section XXI, Article 37 of the Federal Constitution. The Public Procurement Law is applicable not only to the Federal Government, but also to the State and Municipal Governments. Although it is meant to provide just general guidelines, it is so detailed that there is little room for the States and Municipalities to further legislate <ns0:ref type='bibr' target='#b34'>(Frizzo and Oliveira, 2014)</ns0:ref>. The Public Procurement Law regulates public procurement procedures and contracts involving the government.</ns0:p><ns0:p>The Public Procurement Law defines three main procurement procedures: invitation to tender, which is a simpler and faster procedure where at least three competitors are invited to participate in the tender based on the request for proposals (RFP), which is not required to be advertised in the press; price survey, which requires the competitors to be previously registered before the public tender and requires a broader advertising of the RFP in the newspaper and official press; and competition, which is the most complex and longest procedure, allowing participation of all companies that meet the qualification criteria on the first day of the procedure and requiring more general advertising of the RFP as the price survey. In addition, Law 10,520/02 created the reverse auction, which involves alternative bids from the participating companies in the competitive phase, before the qualification documents are analyzed. Nowadays, the most common procedure for the acquisition of common goods and services is the electronic reverse auction, which is the same as the reverse auction, but the procedure happens in an electronic system. Its RFP must also be advertised in the official press as well as through the Internet.</ns0:p><ns0:p>The criteria for selecting the best proposal are defined in the RFP by the regulated agency. There are three main types of rules that must be followed: best price, where the company that presents the best bid and meets the minimum requirements is awarded the contract; best technique, where the company with the best technical solutions wins regardless of price; and a mix of the two, where the scores are given for both price and technique and the company with the highest joint score wins. <ns0:ref type='bibr' target='#b34'>Frizzo and Oliveira (2014)</ns0:ref> provide additional detail on thresholds for determining whether a contract is subject to the Public Procurement Law, freedom to choose which procedure to use, changes to an existing contract, and other aspects of the public procurement process in Brazil.</ns0:p><ns0:p>The procurement process presents many opportunities for corruption. Although laws attempt to ensure a competitive and fair process, perpetrators find ways to turn the process to their advantage while appearing to be legitimate. 
To aid in detecting and deterring such perversions of the procurement process, a specialist, who helped in this work, has didactically structured different kinds of procurement fraud encountered by the Brazilian Office of the Comptroller General (CGU, Controladoria-Geral da Uni&#227;o, in Portuguese) over the years.</ns0:p><ns0:p>These different fraud types are characterized by criteria, such as business owners working as a front for the company or use of accounting indices that are not commonly employed. Indicators have been established to help identify cases of each of these fraud types. For instance, one principle that must be followed in public procurement is that of competition. A public procurement should attempt to ensure broad participation in the bidding process by limiting requirements on bidders to what is necessary to guarantee adequate execution of the contract. Nevertheless, it is common to have a fake competition in which different bidders are, in fact, owned by the same person. This is usually done by having someone act as a front for the enterprise. An indicator that a bidder may be a front is that the listed owner has little or no education. Thus, an uneducated owner is a red flag suggesting that there may be a problem with the procurement. <ns0:ref type='bibr' target='#b41'>Gregorini (2009)</ns0:ref> identified a number of red flags that can be considered evidence of fraud. These include: concentration of power in the hands of a few people; rapid growth in concentration of goods and services contracted from a single company; competition restriction; transfer of funds to a Non Governmental Organization (NGO) close to elections; and others. While these factors are evidence of potential irregularities, they are not definitive indicators. A list of more serious and determinant conditions is presented by <ns0:ref type='bibr' target='#b32'>Flores (2004)</ns0:ref>. These include: choosing directors based on a political agenda; negotiating contracts in order to reserve money for an election campaign; negotiating contracts in order to favor friends and family; bribery in order to obtain certain privileges; and providing inside information.</ns0:p><ns0:p>A more formal definition of different types of fraud found in Brazil is presented by <ns0:ref type='bibr' target='#b66'>Oliveira (2009)</ns0:ref>. He presents three main groups of fraud, based on recent scandals in Brazil: frauds initiated by passive agents; frauds initiated by active agents; and frauds that represent collusion. The first is when an agent from the Public Administration acting in his public function, favors someone or himself by performing illicit actions (e.g., purchasing products that were never used, falsification of documents and signatures, favoring friends and family). The second is when an active agent, a person or a company, outside the Public Administration tries to corrupt an agent that works in the Public Administration or does something illegal in order to cheat the procurement process (e.g., acting as a front for a company, delivering contraband products, giving money to civil servants in order to favor a specific company). 
Finally, the third is when there is some type of collusion between companies participating in the procurement process or even between passive and active agents (e.g., delivering and accepting just part of the goods purchased, paying before receiving the merchandise, overpricing goods and services, directing and favoring a specific company in exchange of some financial compensation).</ns0:p><ns0:p>The types of fraud presented by <ns0:ref type='bibr' target='#b66'>Oliveira (2009)</ns0:ref>, although focused on the Brazilian context, are consistent with more recent work from <ns0:ref type='bibr' target='#b27'>Dhurandhar et al. (2015b)</ns0:ref>. This work, which presents a more general fraud taxonomy related to procurement fraud, was registered as a patent in 2015 <ns0:ref type='bibr' target='#b26'>(Dhurandhar et al., 2015a)</ns0:ref>. While Oliveira talks about passive and active agents, Dhurandhar et al. talks about fraud by employees and fraud by vendors, respectively. However, these fraud definitions do have a few differences. For example, while Dhurandhar et al. differentiates collusion among vendors and collusion between employee and vendors, Oliveira classifies both as simply collusion.</ns0:p><ns0:p>Formalizing knowledge about fraud in a computable form can lead to automated support for fraud detection and prevention. Specifically, analysts at the CGU must sift through vast amounts of information related to a large number of procurements. Automated support can improve analyst productivity by highlighting the most important cases and the most relevant supporting information. The ultimate goal of the procurement fraud probabilistic ontology is to structure the specialist's knowledge to enable automated reasoning from indicators to potential fraud types. Such an automated system is intended to support specialists and to help train new specialists, but not to replace them. Automated support for this task requires a semantically rich representation that supports uncertainty management.</ns0:p><ns0:p>As a case study, <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref> developed a proof-of-concept probabilistic ontology covering part of the procurement fraud domain. This paper uses a portion of this case study to illustrate how the POMC can support the creation of a PO. The full implementation and code for the case study is presented in <ns0:ref type='bibr' target='#b7'>(Carvalho, 2011)</ns0:ref> and is provided as supplemental material to this paper. This proof-of-concept implementation represents only a fragment of the specialist's knowledge of the procurement fraud domain. The plan is eventually to extend this PO to a full representation of the specialist's knowledge.</ns0:p></ns0:div> <ns0:div><ns0:head>UMP-ST FOR PROCUREMENT FRAUD</ns0:head><ns0:p>This section describes in detail the four disciplines in the UMP-ST process and their application to the procurement fraud case study. To facilitate the understanding of each discipline, we alternate between describing the discipline and illustrating its application to the public procurement fraud detection and prevention use case.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements</ns0:head><ns0:p>The POMC begins with the Requirements discipline (R.1 in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). The requirements discipline defines the objectives that must be achieved by representing and reasoning with a computable representation of domain semantics. 
For this discipline, it is important to define the questions that the model is expected to answer, i.e., the queries to be posed to the system being designed. For each question, a set of information items that might help answer the question (evidence) must be defined.

Requirements can be categorized as functional and non-functional (Wiegers, 2003; Sommerville, 2010). Functional requirements concern outputs the system should provide, features it should have, how it should behave, etc. In our case, functional requirements relate to the goals, queries, and evidence that pertain to our domain of reasoning. Non-functional requirements, on the other hand, represent constraints on the system as a whole. For instance, in our use case a non-functional requirement could be that a given query has to be answered in less than a minute. Another example is that the posterior probability given as an answer to a given query has to be either exact or an approximation with an error bound of ±0.5%. Non-functional requirements tend to be fairly straightforward and not specific to probabilistic ontology development. We therefore focus here on how to develop functional requirements for our use case.

We focus on a subset of our procurement use case to illustrate how a requirement is carried through the PO development cycle until it is eventually implemented and tested. To understand the requirements associated with this subset, we first have to explain some of the problems encountered when dealing with public procurements.

One of the principles established by Law No. 8,666/93 is equality among bidders. This principle prohibits the procurement agent from discriminating among potential suppliers. However, if the procurement agent is related to the bidder, he/she might feed information or define new requirements for the procurement in a way that favors the bidder.

Another problem arises because public procurement is quite complex and may involve large sums of money. Therefore, members forming the committee for a procurement must both be well prepared and have a clean history with no criminal or administrative convictions. This latter requirement is necessary to comply with the ethical guidelines that federal, state, municipal and district government employees must follow.

The above considerations give rise to the following set of goals, queries, and evidence:

1. Goal: Identify whether a given procurement violates fair competition policy (i.e., evidence suggests further investigation and/or auditing is warranted);

(a) Query: Is there any relation between the committee and the enterprises that participated in the procurement?

i. Evidence: Committee member and responsible person of an enterprise are related (mother, father, brother, or sister);

ii. Evidence: Committee member and responsible person of an enterprise live at the same address.

2. Goal: Identify whether the committee for a given procurement has improper composition.

(a) Query: Is there any member of the committee who does not have a clean history?

i.
Evidence: Committee member has criminal history;

ii. Evidence: Committee member has been subject to administrative investigation.

(b) Query: Is there any relation between members of the committee and the enterprises that participated in previous procurements?

i. Evidence: Member and responsible person of an enterprise are relatives (mother, father, brother, or sister);

ii. Evidence: Member and responsible person of an enterprise live at the same address.

In defining requirements, the availability of evidence must be considered. For example, information about whether persons are related might be drawn from a social network database; evidence about criminal history might come from a police database; and evidence about cohabitation might be drawn from an address database. One important role for semantic technology is to support interoperability among these various data sources and the fraud detection model.

Another important aspect of the Requirements discipline is defining traceability of requirements. According to Gotel and Finkelstein (1994), 'requirements traceability refers to the ability to describe and follow the life of a requirement, in both the forward and backward directions.' A common way to document traceability is a specification tree, in which each requirement is linked to its 'parent' requirement. A specification tree for the requirements for our procurement model is shown in Figure 3. In this hierarchy, each item of evidence is linked to a query it supports, which in turn is linked to its higher level goal. This linkage supports requirements traceability. In addition to the hierarchical decomposition of the specification tree, requirements should also be linked to work products of other disciplines, such as the rules in the Analysis & Design discipline, MFrags in the Implementation discipline, and goals, queries, and evidence elicited in the Requirements discipline. These links provide traceability that is essential to validation and management of change. Subsequent sections show how UMP-ST supports requirements tracing.

Analysis & Design

Once we have defined our goals and described how to achieve them, it is time to start modeling the entities, their attributes, relationships, and rules to make that happen. This is the purpose of the Analysis & Design discipline.

The major objective of this discipline is to define the semantics of the model. In fact, much of the semantics can be defined using traditional ontologies, including the deterministic rules that the concepts described in our model must obey. The focus of this paper is on representing uncertain aspects of the domain. Information on defining traditional ontologies can be found in Allemang and Hendler (2008) and Gomez-Perez et al. (2004).

The first step in defining the domain model is to define the classes and relationships that are important to represent for the procurement fraud detection problem (AD.1 in Figure 2).
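Before modeling the domain itself, it is worth making the requirements artifact concrete. The specification tree of Figure 3 can be recorded as a small hierarchical structure from which traceability links are generated; the sketch below is illustrative only, and the labels are abbreviations of the goals, queries, and evidence listed above rather than the exact wording of the artifact.

```python
# Minimal sketch of the specification tree in Figure 3 (illustrative data structure only).
# Goals, queries, and evidence are abbreviated from the requirements listed above.
SPEC_TREE = {
    "G1: procurement violates fair-competition policy": {
        "Q1.a: relation between committee and participating enterprises?": [
            "E: committee member and enterprise responsible are relatives",
            "E: committee member and enterprise responsible share an address",
        ],
    },
    "G2: committee has improper composition": {
        "Q2.a: any committee member without a clean history?": [
            "E: member has criminal history",
            "E: member has administrative investigation",
        ],
        "Q2.b: relation between members and enterprises in previous procurements?": [
            "E: member and enterprise responsible are relatives",
            "E: member and enterprise responsible share an address",
        ],
    },
}

def trace_links(tree):
    """Flatten the tree into (goal, query, evidence) links for a traceability matrix."""
    return [(goal, query, evidence)
            for goal, queries in tree.items()
            for query, evidence_items in queries.items()
            for evidence in evidence_items]

for link in trace_links(SPEC_TREE):
    print(" -> ".join(link))
```

Each (goal, query, evidence) link produced this way is the kind of parent-child relationship that the traceability matrices of the later disciplines build on. With the requirements recorded, the first Analysis & Design step is to model the domain's classes and relationships.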
For our case study, we use the Unified Modeling Language (UML) (Rumbaugh et al., 1999) for this purpose. Analysis & Design also includes developing rules (AD.2 in Figure 2). Because UML is insufficiently expressive to represent complex rule definitions, we record the deterministic rules separately for later incorporation into the PR-OWL probabilistic ontology. While experienced ontology engineers might prefer to define classes, relationships and rules directly in OWL, we chose UML for its popularity, understandability, ease of communication with domain experts, and widely available and usable software tools. We see UML-style diagrams as a way to capture knowledge about classes and relationships that could be automatically translated into an OWL ontology or PR-OWL probabilistic ontology (cf., Gasevic et al. (2004)).

Figure 4 depicts a simplified model of the classes and relationships in the procurement fraud domain. A Person has a name, a mother and a father (also Person). Every Person has a unique identification that in Brazil is called CPF. A Person also has an Education and livesAt a certain Address. In addition, everyone is obliged to file his/her TaxInfo every year, including his/her annualIncome. These entities can be grouped as Personal Information. A PublicServant is a Person who worksFor a PublicAgency, which is a Government Agency. Every public Procurement is owned by a PublicAgency, has a committee formed by a group of PublicServants, and has a group of participants, which are Enterprises. One of these will be the winner of the Procurement. Eventually, the winner of the Procurement will receive a Contract of some value with the PublicAgency owner of the Procurement. The entities just described can be grouped as Procurement Information. Every Enterprise has at least one Person that is responsible for its legal acts. An Enterprise also has an identification number, the General List of Contributors (CGC), and an attribute isSuspended, which indicates whether the Enterprise is suspended from procuring with the public administration. These are grouped as the Enterprise Information. We also have AdministrativeInvestigation, which has information about investigations that involve one or more PublicServants. Its finalReport, the JudgmentAdministrativeReport, contains information about the penalty applied, if any. These entities form the Administrative Judgment Information. Finally, we have the Criminal Judgment Information group, which describes the CriminalInvestigation that involves a Person, with its finalReport, the JudgmentCriminalReport, which has information about the verdict.

Notice that just a subset of this UML model is of interest to us in this paper, since we are dealing with just a subset of the requirements presented in Carvalho (2011).

In addition to the cardinality and uniqueness rules defined above for the entities depicted in Figure 4, the AD.2 step in Figure 2 includes specifying probabilistic rules to address the requirements defined in the R.1 step. These include:

1.
If a member of the committee has a relative (mother, father, brother, or sister) responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which inhibits competition.</ns0:p><ns0:p>2. If a member of the committee lives at the same address as a person responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which lowers competition. 4. If a member of the committee has been convicted of a crime or has been penalized administratively, then he/she does not have a clean history. If he/she was recently investigated, then it is likely that he/she does not have a clean history.</ns0:p><ns0:p>5. If the relation defined in 1 and 2 is found in previous procurements, then it is more likely that there will be a relation between this committee and future bidders.</ns0:p><ns0:p>6. If 4 or 5, then it is more likely that the committee violates policy for proper committee composition.</ns0:p><ns0:p>Typically the probabilistic rules are described initially using qualitative likelihood statements. Implementing a probabilistic ontology requires specifying numerical probabilities. Probability values can be elicited from domain experts (e.g., <ns0:ref type='bibr' target='#b30'>Druzdzel and van der Gaag, 2000;</ns0:ref><ns0:ref type='bibr' target='#b65'>O'Hagan et al., 2006)</ns0:ref> or learned from observation. The growing literature in statistical relational learning (e.g., <ns0:ref type='bibr' target='#b38'>Getoor and Taskar, 2007)</ns0:ref> provides a wealth of methods for learning semantically rich probability models from observations. In the Analysis &amp; Design stage, information is identified for specifying the probability distributions (expert judgment and/or data sources). This information is encoded into the target representation during the Implementation stage.</ns0:p><ns0:p>The traceability matrix of Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> depicts how the probabilistic rules defined above are traced to the goals, queries and evidence items defined in the Requirements discipline. This traceability matrix is an important tool to help designers to ensure that all requirements have been covered. It also supports maintainability by helping ontology engineers to identify how requirements are affected by changes in the model. It is also important at this stage to trace each of the rules to the source of information used to define the rule (e.g., notes from interview with expert, training manual, policy document, data source).</ns0:p><ns0:p>Another important step in the Analysis &amp; Design discipline is to form natural groups of entities, rules, and dependencies (AD.3 in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). This step facilitates the Implementation discipline. The more complex the domain, the more important is the grouping activity. As shown in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, even in this simplified example there are five natural groups: (1) Personal Information; (2) Procurement Information; (3) Enterprise Information; (4) Administrative Judgment Information; and (5) Criminal Judgment Information.</ns0:p></ns0:div> <ns0:div><ns0:head>Implementation</ns0:head><ns0:p>Once the Analysis &amp; Design step has been completed, the next step is to implement the model in a specific language. How this discipline is carried out depends on the specific language being used. 
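Before committing to a specific implementation language, it may help to see the kind of structure that the Analysis & Design artifacts hand off to the Implementation discipline. The sketch below renders a few of the Figure 4 classes and relationships as plain Python dataclasses. It is a language-neutral illustration, not part of the PR-OWL implementation: the names follow the UML description above, but the attributes are simplified and the example instances are invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Person:
    name: str                              # used as the unique identifier in the simplified model
    education: Optional[str] = None
    lives_at: Optional[str] = None         # Address, reduced to a string here
    mother: Optional["Person"] = None
    father: Optional["Person"] = None

@dataclass
class Enterprise:
    name: str
    responsible: List[Person] = field(default_factory=list)  # person(s) answering for its legal acts
    is_suspended: bool = False

@dataclass
class Procurement:
    owner: str                             # the PublicAgency that owns the procurement, simplified
    committee: List[Person] = field(default_factory=list)
    participants: List[Enterprise] = field(default_factory=list)
    winner: Optional[Enterprise] = None

# Illustrative instances of the kind used later in the test scenarios:
member1 = Person(name="member1")
enterprise1 = Enterprise(name="enterprise1", responsible=[Person(name="person1")])
procurement1 = Procurement(owner="agency1", committee=[member1], participants=[enterprise1])
```

The probabilistic rules listed above attach to exactly these attributes and relationships; for example, lives_at and responsible are the inputs to rule 2.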
Our case study was developed using the PR-OWL probabilistic ontology language (Costa, 2005; Carvalho, 2008). PR-OWL (pronounced 'prowl') adds new definitions to OWL to allow the modeler to incorporate probabilistic knowledge into an OWL ontology. This section shows how to use PR-OWL to express uncertainty about the procurement fraud domain.

PR-OWL uses Multi-Entity Bayesian Networks (MEBN) (Laskey, 2008) to express uncertainty about properties and/or relations defined on OWL classes. A probability model is defined as a set of MEBN Fragments (MFrags), where each MFrag expresses uncertainty about a small number of attributes of and/or relationships among entities. A set of properly defined MFrags taken together comprises a MEBN theory (MTheory), which can express a joint probability distribution over complex situations involving many entities in the domain. Unlike most expressive probabilistic languages that assume the domain is finite (e.g., Heckerman et al., 2004), an MTheory can express knowledge about an unbounded or even infinite set of entities. A properly defined PR-OWL model expresses an MTheory, and thus expresses a global joint distribution over the random variables mentioned in the theory. For more detailed explanations of the key features of MEBN logic, the reader should refer to Laskey (2008).

In a typical usage of a PR-OWL PO, at execution time (e.g., in response to a query) a logical reasoning process instantiates the MFrags that are needed to respond to the query. The result of this process is a situation-specific Bayesian network (SSBN), which is the minimal Bayesian network sufficient to obtain the posterior distribution for a set of target random variable instances given a set of finding random variable instances. In a PR-OWL probabilistic ontology, the entity types correspond to OWL classes, the attributes correspond to OWL properties, and the relationships correspond to OWL relations. Thus, PR-OWL allows the ontology designer to specify probability distributions to express uncertainty about properties and relations in an OWL ontology.

The expressive power of MEBN/PR-OWL makes it an attractive choice for implementing probabilistic ontologies in complex domains. Its compatibility with OWL, a widely used ontology language, allows for the expression of uncertainty in existing OWL ontologies, and for integrating PR-OWL probabilistic ontologies with other ontologies expressed in OWL. These are the primary reasons for the choice of MEBN/PR-OWL as the implementation language in our case study.

The first step in defining a PR-OWL probabilistic ontology for the procurement fraud domain is to represent the entities, attributes and relations of Figure 4 as OWL classes, properties and relations (I.1 in Figure 2). Our proof-of-concept made a few simplifications to the representation depicted in Figure 4. For example, we removed the PublicServant entity and connected Person directly to PublicAgency with the workFor relationship.
As another simplification, we assumed that every Person and Enterprise instance is uniquely identified by its name, so there was no need to represent the CPF and CGC entities. Figure 5 presents the entities as entered into our PR-OWL ontology implemented in UnBBayes (Carvalho et al., 2009).

After defining the entities, we consider characteristics that may be uncertain. An uncertain attribute of an entity or an uncertain relationship among entities is represented in MEBN by a random variable (RV). For example, the RV livesAt(person) corresponds to the relation livesAt from Figure 4. To define a probability distribution for an uncertain attribute or relationship, we must declare it as resident in some MFrag. This occurs as part of I.1 in Figure 2; its probability distribution will be defined later as part of I.2. For example, Figure 6 shows how to define uncertainty about whether two persons are related. This is accomplished by selecting the OWL property isRelated and dragging the property and dropping it inside the PersonalInfo MFrag. The MFrags are language-specific groupings formed out of the grouping performed during the Analysis & Design discipline (AD.3 from Figure 2). The yellow oval on the right-hand side of Figure 6 shows the RV defined by the PR-OWL plug-in for UnBBayes (Matsumoto, 2011) to represent uncertainty about whether persons are related. In the background, what actually happens is that an instance of the DomainResidentNode class, which is a random variable that has its probability distribution defined in the current MFrag, is created. Besides that, an assertion is also added saying that this instance definesUncertaintyOf the OWL property isRelated.
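To make that mapping concrete, the assertions created behind the scenes can be pictured as a handful of RDF triples. The rough sketch below uses rdflib with placeholder namespace IRIs (they are not the actual PR-OWL or case-study IRIs), and the property name isResidentNodeIn is invented to stand in for however the plug-in records MFrag membership; only DomainResidentNode and definesUncertaintyOf are taken from the description above.

```python
# Rough sketch of the assertions created when isRelated is dropped into the PersonalInfo MFrag.
# Namespace IRIs are placeholders, not the real PR-OWL or case-study IRIs.
from rdflib import Graph, Namespace, RDF

PROWL = Namespace("http://example.org/pr-owl#")       # placeholder for the PR-OWL vocabulary
FRAUD = Namespace("http://example.org/procurement#")  # placeholder for the domain ontology

g = Graph()
rv = FRAUD.isRelated_RV                                        # the newly created resident random variable
g.add((rv, RDF.type, PROWL.DomainResidentNode))                # an instance of DomainResidentNode ...
g.add((rv, PROWL.definesUncertaintyOf, FRAUD.isRelated))       # ... that defines uncertainty of isRelated
g.add((rv, PROWL.isResidentNodeIn, FRAUD.PersonalInfo_MFrag))  # hypothetical link to its MFrag

print(g.serialize(format="turtle"))
```

In other words, the uncertainty layer is added as ordinary triples alongside the deterministic OWL axioms, which is what keeps the probabilistic ontology compatible with standard OWL tooling.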
For this paper, we focus on the Judgment History, Improper Committee, and Improper Procurement MFrags. Figure 7 shows a partial MTheory consisting of these three MFrags. Details on the complete PR-OWL MTheory can be found in Carvalho (2011) and are provided as supplemental material with this paper.

Each MFrag defines local probability distributions (LPDs) for its resident RVs, shown as yellow ovals. These distributions are conditioned on satisfaction of constraints expressed by context RVs, shown as green pentagons. The local distributions may depend on the values of input RVs, shown as gray trapezoids, whose distributions are defined in the MFrags in which they are resident.

The two main goals described in our requirements are defined in the Improper Procurement and Improper Committee MFrags. (A more sophisticated model for deciding whether to do further investigation or change the committee would define a utility function and use expected utility to make the decision; future versions of UnBBayes will support Multi-Entity Influence Diagrams (Costa, 2005) for modeling decision-making under uncertainty.) The Judgment History MFrag has RVs representing the judgment (criminal and administrative) history of a Person.

There are three LPDs defined in the Judgment History MFrag: (1) a probability that a person has a criminal history; (2) a probability that a person has an administrative history; and (3) a probability that a person has a clean history given whether or not that person has a criminal and/or an administrative history. The probability of not having a clean history is lowest if he/she has never been investigated, higher if he/she has been investigated, and extremely high if he/she has been convicted. (A better name for this node might be isTrustworthy; the idea is that someone who was investigated and/or convicted might not be a good candidate for being part of a procurement committee.)

The Improper Committee MFrag contains the resident RV hasImproperCommittee(procurement), defined under the context constraints that procurement is an entity of type Procurement, member is an entity of type Person, and member is a member of the committee for procurement. The assumptions behind the LPD defined in this MFrag are that, if any committee member of this procurement does not have a clean history, or if any committee member was related to previous participants, then the committee is more likely to be improper; and that, if these things happen together, the probability of an improper committee is even higher.

The Improper Procurement MFrag has the resident RV isImproperProcurement(procurement), created in the same way as the isRelated RV inside the PersonalInfo MFrag explained previously. The assumptions behind the LPD defined in this MFrag are that, if the competition is compromised, or if any owner of a participating enterprise owns a suspended enterprise, or if the committee of this procurement is improper, then the procurement is more likely to be improper; and that, if these things happen together, the probability of having an improper procurement is even higher.

The final step in constructing a probabilistic ontology in UnBBayes is to define the local probability distributions (LPDs) for all resident RVs (I.2 in Figure 2). Figure 8 shows the LPD for the resident node isImproperProcurement(procurement), which is the main question we need to answer in order to achieve one of the main goals in our model. This distribution follows the UnBBayes-MEBN grammar for defining LPDs (Carvalho, 2008; Carvalho et al., 2008). The distribution for isImproperProcurement depends on the values of the parent RVs isCompetitionCompromised, hasImproperCommittee and ownsSuspendedEnterprise. The LPD is defined through a series of if-then-else statements giving the probability of isImproperProcurement given each combination of truth-values of its parents. In this example, if all three parent RVs are true, then isImproperProcurement has probability 0.9; if any two parents are true, then isImproperProcurement has probability 0.8; if just one parent is true, then isImproperProcurement has probability 0.7; if none of the parents is true, then isImproperProcurement has probability 0.0001. The probability values shown here were defined in collaboration with the specialist who supported the case study. In general, probability values for the MFrags are defined through some combination of expert elicitation and learning from data. However, in the PO described in this paper, all LPDs were defined based on the experience of the SMEs from CGU, since there is not enough structured data to learn the distribution automatically.
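To show the shape of the distribution in Figure 8 outside of the UnBBayes-MEBN grammar, the following minimal Python sketch reproduces the same if-then-else pattern with the probability values quoted above. It is an illustration of the LPD's logic only, not the actual artifact stored in the ontology.

```python
# Minimal sketch of the if-then-else LPD for isImproperProcurement (values from the text above).
# Illustrative only; in the ontology the distribution is written in the UnBBayes-MEBN LPD grammar.
def p_is_improper_procurement(competition_compromised: bool,
                              improper_committee: bool,
                              owns_suspended_enterprise: bool) -> float:
    true_parents = sum([competition_compromised, improper_committee, owns_suspended_enterprise])
    if true_parents == 3:
        return 0.9
    if true_parents == 2:
        return 0.8
    if true_parents == 1:
        return 0.7
    return 0.0001

# Full conditional probability table implied by the rule:
for cc in (True, False):
    for ic in (True, False):
        for ose in (True, False):
            print(cc, ic, ose, p_is_improper_procurement(cc, ic, ose))
```

Enumerating the eight parent combinations in this way is also a convenient sanity check before the distribution is entered into the tool.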
It is important to ensure traceability between the MFrags defined in the Implementation stage and the rules defined in the Analysis & Design stage. A traceability matrix similar to Table 1 was developed to trace MFrags to rules. This mapping, along with the mapping of the rules to the requirements documented in Table 1, enables the probabilistic relationships expressed in the MFrags to be traced back to the requirements defined during the Requirements discipline.

Test

As with any engineering methodology, test (T.1 in Figure 2) plays an essential role in UMP-ST. As Laskey and Mahoney (2000) point out, test should do more than showcase the model and demonstrate that it works as envisioned. Another important goal of the Test discipline is to find flaws and areas for improvement in the model.

The literature distinguishes two types of evaluation, verification and validation (Adelman, 1992). Verification is concerned with establishing that 'the system was built right,' i.e., that the system elements conform to their defined performance specifications. Validation is concerned with establishing that the 'right system was built,' i.e., that it achieves its intended use in its operational environment.

For example, in the model we have been describing in this section, we would like to verify that the system satisfies the non-functional requirements developed during the Requirements stage as described above, e.g., that the queries covered by the requirement are answered in less than a minute and that the posterior probability given as an answer to a given query is either exact or an approximation with an error bound of 0.5% or less. Laskey and Mahoney (2000) present three types of evaluation: elicitation review, importance analysis, and case-based evaluation.

Elicitation review is related to reviewing the model documentation, analyzing whether all the requirements were addressed in the final model, making sure all the rules defined during the Analysis & Design stage were implemented, validating the semantics of the concepts described by the model, etc. This is an important step towards achieving consistency in our model, especially if it was designed by more than one expert. Elicitation review can also confirm that the rules as defined correctly reflect stakeholder requirements.

The traceability matrices are a useful tool for verifying whether all the requirements were addressed in the final implementation of the model. By looking at the matrix tracing MFrags to rules, we can verify that all the rules defined during Analysis & Design have been covered.
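Such a coverage check over the two matrices can be automated in a few lines. The mappings below are abbreviated, hypothetical stand-ins for the full matrices of the case study: the rule and MFrag names follow the text, but the exact assignments are illustrative.

```python
# Illustrative coverage check over the two traceability mappings (abbreviated, hypothetical data).
RULE_TO_REQUIREMENTS = {           # in the spirit of the Analysis & Design matrix (Table 1)
    "rule1": ["Q1.a"], "rule2": ["Q1.a"],
    "rule4": ["Q2.a"], "rule5": ["Q2.b"], "rule6": ["G2"],
}
MFRAG_TO_RULES = {                 # in the spirit of the Implementation matrix
    "PersonalInfo": ["rule1", "rule2"],
    "JudgmentHistory": ["rule4"],
    "ImproperCommittee": ["rule5", "rule6"],
}

implemented_rules = {r for rules in MFRAG_TO_RULES.values() for r in rules}
uncovered_rules = set(RULE_TO_REQUIREMENTS) - implemented_rules
covered_requirements = {req for r in implemented_rules for req in RULE_TO_REQUIREMENTS.get(r, [])}

print("rules not yet implemented:", uncovered_rules or "none")
print("requirements reached through implemented rules:", sorted(covered_requirements))
```

In a real project the same composition would be run over the full matrices, flagging any rule with no implementing MFrag and any requirement with no supporting rule.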
The traceability matrix of Table 1, defined during Analysis & Design, ensured that the rules covered all the defined requirements. Therefore, by composing these matrices, we can infer that all the requirements have been implemented in our model. This review should also confirm that important stakeholder requirements were not missed during Analysis & Design.

Of course, an initial implementation will often intentionally cover only a subset of the stakeholder requirements, with additional requirements being postponed for later versions. Lessons learned during implementation are reviewed at this stage and priorities for future iterations are revisited and revised.

Importance analysis is a model validation technique described by Laskey and Mahoney (2000). A form of sensitivity analysis, its purpose is to verify that selected parts of the model behave as intended.

In importance analysis, one or more focus RVs are specified and their behavior is examined under different combinations of values for evidence RVs. The output is a plot for each focus RV that orders the evidence RVs by how much changes in the value of the evidence RV affect the probability of the focus RV. Importance analysis is an important type of unit testing. In the case of PR-OWL, we can analyze the behavior of the random variables of interest given evidence per MFrag. This MFrag testing is important to capture local consistency of the model and to help localize the source of any problems identified in the model.

The tests designed in this section, as well as the model described in this paper, were developed with the help of experts from the Department of Research and Strategic Information from CGU. They provided detailed information on the different types of fraud as well as on evidence that they usually search for when auditing contracts during the internal control activities. Furthermore, they have also validated the proof-of-concept model described in this paper with the tests we will describe as well as others that were omitted due to space restrictions.

As an example of unit testing, we demonstrate how to define different scenarios to test the Judgment History MFrag. Essentially, we want to verify how the query hasCleanHistory(person) will behave in light of different sets of evidence for a person's criminal and administrative history. Results for one combination of states for the parent RVs are shown in Figure 9, which shows three distinct scenarios for a 3-node model. The model assesses whether or not a given person (person 1 in the figure) has a clean history. It consists of a binary RV with two parents. Each parent represents whether the person has been convicted, investigated, or never investigated in a criminal process (upper left node in the model) or in an administrative process (right parent node in the model). The upper left depiction shows the model with no evidence entered (all nodes in yellow), which results in a marginal 'a priori' probability of 1.1% that any given person would not have a clean history.
The upper right depiction shows the model results when knowledge about NeverInvestigated is entered in the hasCriminalHistory person1 RV, causing a slight reduction in the belief that person 1 does not have a clean history (i.e., down from 1.1% to 1.05%). Finally, the model depiction in the lower left shows the model's results when evidence on person 1 having a criminal conviction and never being investigated on an administrative process are entered. In this latter case, the belief on a non-clean history jumps to 99%.</ns0:p><ns0:p>A systematic unit test would examine other combinations as well <ns0:ref type='bibr' target='#b7'>(Carvalho, 2011)</ns0:ref>. It is important that unit testing achieve as much coverage as possible, and that results be analyzed by verifying that posterior probabilities behave as expected. In our case, the posterior probabilities are consistent with the expected result as defined by the expert.</ns0:p><ns0:p>Case-based evaluation is conducted by defining a range of different scenarios and examining the results produced by the system for each of the scenarios. Case-based evaluation is a system level test appropriate for integration testing. For our procurement PO, we define scenarios with evidence represented in different MFrags. This means that each query response will require instantiating multiple parts of the model, helping to validate how the model works as a whole. This validation is important to whether the model's global performance matches the specialist's knowledge.</ns0:p><ns0:p>It is important to try out different scenarios in order to capture the nuances of the model. In fact, it is a good practice to design the scenarios in order to cover the range of requirements the model must satisfy <ns0:ref type='bibr' target='#b78'>(Wiegers, 2003;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sommerville, 2010)</ns0:ref>. Although it is impossible to cover every scenario we might encounter, we should aim for good coverage, and especially look for important 'edge cases'. A traceability matrix relating unit tests and case-based evaluation scenarios to MFrags is a useful tool to ensure that test scenarios have achieved sufficient coverage.</ns0:p><ns0:p>Keeping in mind the need to evaluate a range of requirements, we illustrate case-based evaluation with three qualitatively different scenarios. The first one concerns a regular procurement with no evidence to support the hypothesis of an improper procurement or committee. The second one has conflicting evidence in the sense that some supports the hypothesis of having an improper procurement or committee but some does not. Finally, in the third scenario there is overwhelming evidence supporting the hypothesis of an improper procurement or committee.</ns0:p><ns0:p>When defining a scenario, it is important to define the hypothesis being tested and what is the expected result, besides providing the evidence which will be used. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> presents a comparison between all three scenarios. It can be seen that the difference between the first and the second scenarios is that member 1 was never investigated administratively in the first scenario, but was in the second. 
In the third scenario, however, besides having the evidence that member 1 was investigated, we also have the evidence that persons 1 and 3 live at the same address and that person 2 lives at the same address as member 3.

In the first scenario, we expect that the procurement will not be deemed improper, since the members of the committee have never been investigated in either administrative or criminal instances and we have no relevant information about the owners of the enterprises participating in the procurement.

When the query is presented to the system, the needed MFrags are retrieved and instantiated for the entities relevant to the scenario, resulting in an SSBN that answers the query. Figure 10 shows part of the SSBN generated from scenario 1. Evidence includes the fact that member 2, who in this SSBN is part of the procurement process being assessed, has never been investigated in either an administrative process or in a criminal process. As expected, the probabilities of both isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are low, 2.35% and 2.33%, respectively. In other words, the procurement is unlikely to be improper given the evidence entered so far.

In the second scenario, one of the three members of the committee was previously investigated in the administrative instance. All other evidence is the same as in the previous scenario. We expect that this new piece of evidence should not be strong enough to make the procurement improper, although the probability of being improper should be higher than in the first scenario.

The results of inference are as expected. The probabilities of isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are 20.82% and 28.95%, respectively. In other words, the probability increased but it is still relatively unlikely. However, depending on the stringency of the threshold, this case might be flagged as warranting additional attention.

Finally, in the third scenario, we have evidence that the owners of two different enterprises participating in the procurement process live at the same address. Since there are only three enterprises participating in the procurement, the competition requirement is compromised. Thus, the procurement is likely to be improper.

As expected, the probabilities of isImproperProcurement(procurement1) = true and isImproperCommittee(procurement1) = true are much larger, at 60.08% and 28.95%, respectively. Notice that although the probability of having an improper procurement correctly increased to a value greater than 50%, the probability of having an improper committee has not changed, since there is no new evidence supporting this hypothesis.

The cases presented here are meant to illustrate the UMP-ST. A full case-based evaluation would consider a broad range of cases with good coverage of the intended use of the model.

APPLICABILITY OF UMP-ST TO OTHER DOMAINS

In this paper, we focused on the fraud identification use case as a means to illustrate the core ideas of the UMP-ST.
<ns0:div><ns0:head>APPLICABILITY OF UMP-ST TO OTHER DOMAINS</ns0:head><ns0:p>In this paper, we focused on the fraud identification use case as a means to illustrate the core ideas of the UMP-ST. We chose this use case because its applicability was clear and its benefits have been independently tested (the methodology is currently being evaluated for use by the Brazilian Comptroller Office). Nevertheless, the methodology is applicable to any problem requiring the development of a probabilistic ontology. Other examples of using the technique can be found in the terrorist identification domain <ns0:ref type='bibr' target='#b45'>(Haberlin et al., 2014)</ns0:ref> and the maritime domain awareness (MDA) domain <ns0:ref type='bibr' target='#b42'>(Haberlin, 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Carvalho et al., 2011)</ns0:ref>. For instance, the latter involved the development of a probabilistic ontology as part of the PROGNOS (Probabilistic OntoloGies for Net-centric Operation Systems) project <ns0:ref type='bibr' target='#b18'>(Costa et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Carvalho et al., 2010)</ns0:ref>, in which PR-OWL was chosen as the ontology language due to its comprehensive treatment of uncertainty, use of a highly expressive first-order Bayesian language, and compatibility with OWL.</ns0:p><ns0:p>The MDA probabilistic ontology is designed for the problem of identifying whether a given vessel is a ship of interest. The PO was written in PR-OWL, and its development employed the UMP-ST process. An important aspect is that the development of the PR-OWL ontology was initially based on an existing ontology of Western European warships that identifies the major characteristics of each combatant class through the attributes of size, sensors, weapons, missions, and nationality. Thus, its development was a good case study for applying UMP-ST to extend an existing ontology to incorporate uncertainty.</ns0:p><ns0:p>During its development, the MDA ontology was evaluated for face validity with the help of semantic technology experts with knowledge of the maritime domain. This evaluation effort had issues in getting feedback from a sufficiently large number of experts, but the overall result of the evaluation suggests that the UMP-ST is not only viable and applicable to the problem it supports but also a promising approach for using semantic technology in complex domains <ns0:ref type='bibr' target='#b42'>(Haberlin, 2013)</ns0:ref>.</ns0:p><ns0:p>More recently, <ns0:ref type='bibr' target='#b1'>Alencar (2015)</ns0:ref> applied the UMP-ST process to create a PO for supporting the decision of whether or not to proceed with Live Data Forensic Acquisition. Besides using the UMP-ST, several tools and techniques shown in this paper were also applied: the use of a UML Class Diagram to identify the main entities, attributes, and relations for the model; the use of a traceability matrix to facilitate further improvements in the model; the implementation of the PO using PR-OWL and UnBBayes; and the validation of the model using both unit testing and case-based evaluation.</ns0:p><ns0:p>A natural next step is to capture the UMP-ST within the Eclipse Process Framework (EPF), whose stated goals include:</ns0:p><ns0:p>• 'To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications.'</ns0:p><ns0:p>Capturing UMP-ST within EPF will provide guidance and tools to a broad community of developers for following the UMP-ST to develop probabilistic ontologies.
A process that is made freely available with the EPF framework is the OpenUP <ns0:ref type='bibr' target='#b3'>(Balduino, 2007)</ns0:ref>, which is a minimally sufficient software development process. This process could be used as a starting point to describe the UMP-ST process, since OpenUP is extensible and can be used as a foundation on which process content can be added or tailored as needed.</ns0:p><ns0:p>Two major challenges must be addressed to enable broad use of semantically rich uncertainty management methods. The first is scalability. There have been some attempts to grapple with the inherent scalability challenges of reasoning with highly expressive probabilistic logics. For example, lifted inference <ns0:ref type='bibr' target='#b4'>(Braz et al., 2007)</ns0:ref> exploits repeated structure in a grounded model to avoid unnecessary repetition of computation. Approximate inference methods such as MC-SAT and lazy inference <ns0:ref type='bibr' target='#b29'>(Domingos and Lowd, 2009)</ns0:ref> have been applied to inference in Markov logic networks. Hypothesis management methods <ns0:ref type='bibr'>(Haberlin et al., 2010a,b;</ns0:ref><ns0:ref type='bibr' target='#b57'>Laskey et al., 2001)</ns0:ref> can help to control the complexity of the constructed ground network. Much work remains on developing scalable algorithms for particular classes of problems and integrating such algorithms into ontology engineering tools.</ns0:p><ns0:p>The second is evaluation: ontologies generated using the UMP-ST process would greatly benefit from methods that can assess how well and comprehensively the main aspects of uncertainty representation and reasoning are addressed. Thus, a natural path in further developing the UMP-ST process is to leverage ongoing work in this area, such as the Uncertainty Representation and Reasoning Evaluation Framework (URREF) <ns0:ref type='bibr' target='#b17'>(Costa et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b25'>de Villiers et al., 2015)</ns0:ref> developed by the International Society of Information Fusion's working group on Evaluation of Techniques for Uncertainty Reasoning (ETURWG). We are already participating in this effort and plan to leverage its results in the near future.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The Uncertainty Modeling Process for Semantic Technology (UMP-ST) addresses an unmet need for a probabilistic ontology modeling methodology. While there is extensive literature on both probability elicitation and ontology engineering, these fields have developed nearly independently and there is little literature on how to bring them together to define a semantically rich domain model that captures relevant uncertainties. Such expressive probabilistic representations are important for a wide range of domains.</ns0:p><ns0:p>There is a robust literature emerging on languages for capturing the requisite knowledge. However, modelers can as yet find little guidance on how to build these kinds of semantically rich probabilistic models.</ns0:p><ns0:p>This paper provides such a methodology. UMP-ST was described and illustrated with a use case on identifying fraud in public procurement in Brazil. The use case was presented with a focus on illustrating the activities that must be executed within each discipline in the POMC cycle in the context of the fraud identification problem. The core concepts in applying UMP-ST to the procurement domain can easily be migrated to completely distinct domains.
For instance, it was also used in defining a PO for Maritime Domain Awareness (MDA) <ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, which supports the identification of terrorist threats and other suspicious activities in the maritime domain. The MDA PO evolved through several versions, showing how the UMP-ST process supports iterative model evolution and enhancement.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Uncertainty Modeling Process for Semantic Technology (UMP-ST).</ns0:figDesc><ns0:graphic coords='6,141.73,145.95,413.60,193.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Probabilistic Ontology Modeling Cycle (POMC) -Requirements in blue, Analysis &amp; Design in green, Implementation in red, and Test in purple.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.60,307.05' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>' A common tool for 9/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Specification Tree for Procurement Model Requirements</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Entities, their attributes, and relations for the procurement model.</ns0:figDesc><ns0:graphic coords='12,141.73,123.30,413.57,319.85' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>As it is a functional relation, livesAt relates a Person to an Address. Hence, the possible values (or states) of this RV are instances of Address.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. PR-OWL Entities for the procurement domain.</ns0:figDesc><ns0:graphic coords='14,291.83,479.67,113.40,155.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Creating a RV in PR-OWL plug-in from its OWL property by drag-and-drop.</ns0:figDesc><ns0:graphic coords='15,141.73,147.65,413.60,210.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Part of the probabilistic ontology for fraud detection and prevention in public procurements.</ns0:figDesc><ns0:graphic coords='15,141.73,403.93,413.58,232.79' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. LPD for node isImproperProcurement(procurement).</ns0:figDesc><ns0:graphic coords='17,245.13,63.78,206.80,241.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Results of unit testing for the Judgment History MFrag.</ns0:figDesc><ns0:graphic coords='18,141.73,329.31,413.57,162.54' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Part of the SSBN generated for the first scenario.</ns0:figDesc><ns0:graphic coords='19,141.73,506.18,413.58,200.12' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Traceability Matrix Relating Rules to Requirements Rule.1 Rule.2 Rule.3 Rule.4 Rule.5 Rule.6</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Rq.1</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.1.a</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.1.a.i</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.1.a.ii</ns0:cell><ns0:cell /><ns0:cell>X</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rq.2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a.i</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.a.ii</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b.i</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell>Rq.2.b.ii</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>3. If 1 or 2, then the procurement is more likely to violate policy for fair competition.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of all three scenarios.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Scenario Hypothesis and Expected Result Evidence</ns0:cell></ns0:row></ns0:table><ns0:note>19/25PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='7'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='1'>Every Brazilian citizen is required to file tax information, even if only to state that his or her income is below a certain amount and no taxes are owed.10/25PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:note> <ns0:note place='foot' n='11'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='13'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7010:2:0:NEW 29 Jun 2016)</ns0:note> <ns0:note place='foot' n='17'>/25 PeerJ Comput. Sci. 
</ns0:note> <ns0:note place='foot' n='4'>The SSBN generated for this scenario is shown in<ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, provided as supplemental material with this paper.</ns0:note> <ns0:note place='foot' n='5'>The SSBN generated for this scenario is shown in<ns0:ref type='bibr' target='#b7'>Carvalho (2011)</ns0:ref>, provided as supplemental material with this paper.</ns0:note> </ns0:body> "
"Response to Comments from Editor and Reviewers Paper title: Uncertainty Modeling Process for Semantic Technology Paper number: CS-2015:10:7010:1:1:REVIEW Dear Editor: Please find a revised version of our submission, Uncertainty Modeling Process for Semantic Technology. We are delighted that both reviewers agree that our revisions addressed their major concerns. Their thoughtful comments and suggestions have helped us to substantially improve the paper. We are also grateful to Reviewer 1 for a careful reading and helpful comments that have helped us to improve clarity. A detailed enumeration of the changes we have made is included below. We also include a comparison of the old and new documents. We thank both reviewers for helping us to make this a much better paper. Sincerely, Rommel Carvalho, Kathryn Laskey and Paulo Costa Response to comments from the Editor • Please address the final remaining comments for minor revisions of Reviewer 1. This has been done. Response to comments from Reviewer 1 • • • • • Figure 1 words on left side hard to read. Change the font? This has been done. Figure 2 does not work when printed in B&W. We added symbols (R, A&D, I, T) to each of the blocks to enumerate the steps. These symbols both address the B&W problem and provide a reference for guiding the reader through the steps when applied to the case study. UP not described (but assumed from heading). Unified Process has been written out in the text. Thanks for catching this! Brazilian translation not described in text on Page 16 (DIE, CGU) which must be from Portugese words and not English. We included the Portugese for CGU and removed the acronym DIE. The latter was never referenced again, so there was no need to define an acronym. “Review section – might be better to italicize a few words in the descriptions …” We agree with Reviewer 1 that the review section was rather difficult to follow and the -1- • • • • • organization needed clarification. We used a combination of enumeration and italics to clarify the discussion. Thank you, Reviewer 1, for pointing this out. We think it is clearer now, and we hope you agree. “The focus ... is still the steps. This is still confusing. My guess is… [lists steps]” Again, we agree that it is difficult for someone not already familiar with the steps to see the mapping of the case study to Figure 2. The symbols we added to Figure 1 turned out to be very useful here too – each step now has an explicit label to which we can refer when describing how the step is applied in the case study. We thank Reviewer 1 for letting us know it was confusing, and we hope the reviewers agree that the mapping is clearer now. Abstract: “is intended to support” → “is demonstrated with an example to support.” We agree with the criticism, but think “can be used to support” reads a little better. Last line in Probability Elicitation section: “what is a prototype?” This is a prototype model. We clarified this in the text. Italicizing et al. Although both usages (italicized and not) are accepted, plain text appears to be more common and endorsed by several style guides. We removed all italics from e.g., i.e., and cf. page 9 requirements traceability should be justified. Because this quote is shorter than three lines, it should be in-line. We changed it to in-line. -2- "
Here is a paper. Please give your review comments after reading it.
276
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The COVID-19 pandemic is the most serious catastrophe since the Second World War. To predict the epidemic more accurately under the influence of policies, a framework based on Independently Recurrent Neural Network (IndRNN) with fine-tuning are proposed for predict the epidemic development trend of confirmed cases and deaths in the United Stated, India, Brazil, France, Russia, China, and the world to late May, 2021. The proposed framework consists of four main steps: data pre-processing, model pre-training and weight saving, the weight fine-tuning, trend predicting and validating. It is concluded that the proposed framework based on IndRNN and fine-tuning with high speed and low complexity, has great fitting and prediction performance. The applied fine-tuning strategy can effectively reduce the error by up to 20.94% and time cost. For most of the countries, the MAPEs of fine-tuned IndRNN model were less than 1.2%, the minimum MAPE and RMSE were 0.05%, and 1.17, respectively, by using Chinese deaths, during the testing phase.</ns0:p><ns0:p>According with the prediction and validation results, the MAPEs of the proposed framework were less than 6.2% in most cases, and it generated lowest MAPE and RMSE values of 0.05% and 2.14, respectively, for deaths in China. Moreover, Policies that play an important role in the development of COVID-19 have been summarized. Timely and appropriate measures can greatly reduce the spread of COVID-19; untimely and inappropriate government policies, lax regulations, and insufficient public cooperation are the reasons for the aggravation of the epidemic situations. The code is available at https://github.com/zhhongsh/COVID19-Precdiction.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The coronavirus disease 2019 (COVID-19) is brought on by infection from Severe Acute Respiratory Syndrome (SARS) Coronavirus 2, and the reports of related cases were first released by Wuhan, Hubei Province, China in December 2019 <ns0:ref type='bibr' target='#b0'>(Zhou et al., 2020)</ns0:ref>. The widespread COVID-19 epidemic is a serious threat, and has become one of the most challenging global catastrophes facing mankind since the Second World War <ns0:ref type='bibr' target='#b1'>(Chinadaily, 2020)</ns0:ref>. On March 11 th , 2020, the global development of COVID-19 was assessed by the World Health Organization (WHO) as having met the characteristics of a pandemic <ns0:ref type='bibr'>(World Health Organization, 2020a)</ns0:ref>. The mortality rate of COVID-19 is estimated to be between 2% and 5%, i.e., lower than that of SARS and Middle East Respirator Syndrome <ns0:ref type='bibr' target='#b4'>(Gasmi et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Wu, Chen &amp; Chan, 2020)</ns0:ref>. However, COVID-19 has a higher infection rate than bat-like SARS, and its pathogenicity is between that of SARS and bat-like SARS <ns0:ref type='bibr' target='#b6'>(Benvenuto et al., 2020)</ns0:ref>. The heart may be damaged and develop myocardial inflammation after recovery from COVID-19 <ns0:ref type='bibr' target='#b7'>(Puntmann et al., 2020)</ns0:ref>. 
In addition, the wide spread of the disease also slows down national and global economies, along with aggravating unemployment and hunger <ns0:ref type='bibr' target='#b8'>(Mckibbin &amp; Fernando, 2020;</ns0:ref><ns0:ref type='bibr' target='#b9'>Abcnews, 2020a)</ns0:ref>.</ns0:p><ns0:p>On January 15th, 2020, the Chinese Center for Disease Control and Prevention (China CDC) initiated a first-level emergency response <ns0:ref type='bibr'>(Li et al., 2020a)</ns0:ref>. A series of policies began to be implemented on January 23rd, 2020 <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020)</ns0:ref>, and the epidemic had stabilized by March 2020. In contrast, the epidemic situation overseas is not very optimistic. The number of infected people continues to increase substantially by tens of thousands. According to the report by the WHO on April 25th, 2021, the global epidemic has infected more than 146 million people, with 3,092,497 deaths, and the epidemics in the Americas, Europe, and South-East Asia are the worst, especially in India and the United States <ns0:ref type='bibr'>(World Health Organization, 2020b)</ns0:ref>.</ns0:p><ns0:p>Predicting the development trend based on the increase of cases is useful for the adjustment of epidemic prevention policy. However, in the current epidemic prediction work, the models used are complex and slow <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020;</ns0:ref><ns0:ref type='bibr'>Bandyopadhyay &amp; Dutta 2020)</ns0:ref>, and some methods are fast but not effective <ns0:ref type='bibr' target='#b18'>(Huang et al., 2020)</ns0:ref>. In addition, the cumulative increase is not stable but variable; in particular, a sudden rapid or modest increase affects the stability of modelling and therefore the accuracy of predicting future trends. Thus, it is necessary to propose a model combining high precision, high speed, and low model complexity to predict and analyze the development tendency of COVID-19 with more efficiency and accuracy, and to summarise the meaningful and positive policies.</ns0:p><ns0:p>In this paper, a framework combining the IndRNN model and a fine-tuning strategy is proposed. Different from the existing models in the prediction of COVID-19, the fine-tuning strategy is added to reduce error and time. The proposed framework consists of four steps. First, the original data are pre-processed, including normalization and sequence data generation. Then, the IndRNN model is trained iteratively to learn the information or features hidden in the COVID-19 sequence data, and the weight parameters are obtained. Thirdly, the fine-tuning strategy is added to load the recently updated data into the model with the existing weight parameters for partial parameter adjustment. Finally, the model that has learned the characteristics or information is used to predict the trend, the prediction results are compared with the real data for verification, and the policy events and socio-economic influences under the epidemic situation are analyzed. The contributions of this paper are as follows:</ns0:p><ns0:p>1) A framework based on IndRNN and a fine-tuning strategy, which is adept at longer sequence data, is used for COVID-19 epidemic prediction with higher accuracy and high speed. IndRNN is used to retain effective information and better learn the characteristics of the changeable sequence data in COVID-19, because its independent neurons prevent the gradient explosion and vanishing phenomena.
The fine-tuning strategy is utilized to further improve the accuracy in a short time, and it avoids the cost of retraining from scratch after each data update.</ns0:p><ns0:p>2) The fitting performance of the proposed framework based on IndRNN and fine-tuning is verified on the cumulative cases of India by comparison with the Long-Short-Term-Memory (LSTM), bidirectional LSTM (Bi-LSTM), Gated-Recurrent-Unit (GRU), Stacked_Bi_GRU, Convolutional Neural Network_LSTM (CNN_LSTM), and Deep_CNN models. The prediction accuracy of the model is validated by the error between the prediction results and the real data of the next four weeks in the cumulative case data of the United States, India, Brazil, France, Russia, China, and the world.</ns0:p><ns0:p>3) The growth of cumulative cases is analyzed in combination with current epidemic policies and activities to assess how positive those policies are. We find that the same proactive policies have been implemented inconsistently in different countries because of lax regulation and inadequate public cooperation.</ns0:p><ns0:p>The rest of the paper is organized as follows: Section II describes the related work in this research direction. Section III introduces the data sources and study area, the proposed framework, and the performance metrics; the experimental results are presented in Section IV. In Section V, the impact of fine-tuning on the prediction and validation accuracy and the influence of policies are discussed. Finally, some conclusions are drawn in Section VI.</ns0:p></ns0:div> <ns0:div><ns0:head>Related work</ns0:head><ns0:p>There are two main methods for forecasting the epidemic development of COVID-19: mathematical models and deep learning models.</ns0:p><ns0:p>A typical mathematical model in epidemic dynamics is the Susceptible-Exposed-Infectious-Removed (SEIR) model, which uses mathematical formulas to reflect the relationships between the flows of people among four states: susceptible, exposed, infectious, and recovered <ns0:ref type='bibr' target='#b13'>(Fang, Nie &amp; Penny, 2020)</ns0:ref>. The SEIR model was used to effectively predict the peaks and sizes of COVID-19 epidemiological data with sufficient fitting performance <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fang, Nie &amp; Penny, 2020)</ns0:ref>. A modified SEIR model also showed a good effect for predicting the peaks and sizes <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020)</ns0:ref>. The peak deviation of another modified SEIR model in predicting epidemiological data in China was 3.02% <ns0:ref type='bibr' target='#b13'>(Fang, Nie &amp; Penny, 2020)</ns0:ref>. However, SEIR focuses on predicting trends in the susceptible, exposed, infected, and recovered groups <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fang, Nie &amp; Penny, 2020)</ns0:ref>, rather than cumulative confirmed and death cases. Moreover, it is necessary to comprehensively consider the changes in some parameter values as affected by the changes in epidemic policies and regional differences, such as the effective reproductive number, number of contacts, and infection rate. These parameters are not easy to obtain and are uncertain.</ns0:p>
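<ns0:p>For illustration only, the compartmental dynamics described above can be written as a small system of ordinary differential equations; the sketch below uses SciPy and purely illustrative parameter values that are not taken from the cited studies.</ns0:p>
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    # Standard SEIR flows: Susceptible -> Exposed -> Infectious -> Removed.
    s, e, i, r = y
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return ds, de, di, dr

n = 1e6                                 # population size (illustrative)
y0 = (n - 1, 0.0, 1.0, 0.0)             # one initial infectious case
t = np.linspace(0, 180, 181)            # days
# beta: transmission rate, sigma: 1/incubation period, gamma: removal rate
curves = odeint(seir, y0, t, args=(0.4, 1 / 5.2, 1 / 10, n))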
<ns0:p>Deep learning automatically extracts the features from the data and builds the model without the need for other specific parameters. This method generates a series of sequence data from the COVID-19 epidemiological data and looks for regular changes in the sequence data. Some deep learning models based on LSTM were used to predict the COVID-19 trend, and the results demonstrated that LSTM has good prospects for predicting the trend; however, the fitting effect needs to be improved <ns0:ref type='bibr' target='#b0'>(Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>Zandavi, Rashidi &amp; Vafaee, 2020)</ns0:ref>, the many internal parameters of LSTM increase the complexity of model training, and it can only learn in the forward direction. The Bi-LSTM, which adds bidirectional learning capability to LSTM, and the GRU, which simplifies the gates in the internal structure of LSTM, were used by <ns0:ref type='bibr' target='#b16'>Shahid, Zameer &amp; Muneeb (2020)</ns0:ref> for epidemic prediction. Moreover, a stacked-Bi-GRU model was applied for COVID-19 trend forecasting, owing to the learning adequacy of the bidirectional recurrent network <ns0:ref type='bibr'>(Bandyopadhyay &amp; Dutta 2020)</ns0:ref>. LSTM and GRU solve the gradient vanishing and explosion problems of RNN, but gradient attenuation across layers remains. In addition, the entanglement of neurons in the same layer makes their behavior difficult to interpret.</ns0:p><ns0:p>The existence of multiple gate operations in the recurrent units of these RNN variants, namely LSTM, Bi-LSTM, and GRU, makes the parameters numerous. CNN does not need many parameters, by virtue of weight sharing, in comparison with these recurrent networks. A deep CNN model <ns0:ref type='bibr' target='#b18'>(Huang et al., 2020)</ns0:ref> and a CNN-LSTM model <ns0:ref type='bibr' target='#b19'>(Dutta, Bandyopadhyay &amp; Kim, 2020)</ns0:ref> were proposed for analysing and predicting the number of confirmed cases in China. However, although the training speed of the deep CNN is fast, the effect is not significantly improved, and the CNN_LSTM model combines CNN and LSTM but increases the model complexity.</ns0:p><ns0:p>In the above deep learning models, high precision, high speed, and low model complexity cannot coexist. Moreover, the cumulative epidemic data are time series, so recurrent networks are more suitable than CNN for processing such serialized data. Similar to LSTM, GRU, and Bi-LSTM, IndRNN is one of the variants of RNN; it can learn longer sequence data, has no redundant gate operations and fewer parameters, and can be more easily trained <ns0:ref type='bibr' target='#b21'>(Li et al., 2018)</ns0:ref>. Simultaneously, IndRNN is designed to solve the problems of gradient disappearance and explosion, and can be utilized to process sequence data like LSTM, GRU and Bi-LSTM. IndRNN was used to learn relationships between plant gene sequences, overcoming the uncertainty of artificially acquired traits, and has higher accuracy than LSTM <ns0:ref type='bibr' target='#b22'>(Zhang et al., 2020)</ns0:ref>. Therefore, IndRNN is adopted as the basic model in this paper to improve the accuracy of epidemic prediction.</ns0:p><ns0:p>However, the epidemic data are updated daily, and the network weights need to be retrained after obtaining the data of the last few weeks, which is time-consuming. Fine-tuning can transfer the trained network model parameters to the required network for partial parameter adjustments, without the need to train from scratch. <ns0:ref type='bibr' target='#b23'>Tajbakhsh et al. (2016)</ns0:ref> used fine-tuned pre-trained CNN networks for medical image analysis and found that the effect was better than that of networks trained from scratch.</ns0:p>
<ns0:p>Testing data are input for testing by applying the model with the saved weights, to determine the gap between the actual output of the test data and the test labels. After that, the model is used for direct trend prediction, and the prediction results are compared with the real data to judge the prediction performance of the proposed framework.</ns0:p><ns0:p>Specifically, seven deep learning models, namely LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN, are used in this research for model comparison. As shown in Figure <ns0:ref type='figure'>3</ns0:ref>, the structures of the LSTM, Bi-LSTM, GRU, and IndRNN models are approximately the same. The flatten layer in the LSTM, Bi-LSTM and GRU models is used to convert three-dimensional data into two-dimensional data. The IndRNN model does not add the Flatten layer because the output of the IndRNN layer is 2-dimensional data. The details of the proposed framework are described below.</ns0:p></ns0:div> <ns0:div><ns0:head>Pre-processing</ns0:head><ns0:p>Pre-processing includes the MinMaxScaler operation and sequence data generation. To keep the data at the same order of magnitude and facilitate characteristic analysis and model convergence, the cumulative confirmed cases and deaths data are scaled to 0~1 by the MinMaxScaler operation based on the minimum and maximum values, and the original proportion of the data is retained.</ns0:p><ns0:p>Sequence data generation is a crucial step, and it is the premise of sequence model training. The sequence data can reflect changes for which no regularity can be traced in a single data point. Therefore, the individual data were organised into sequential data. As shown in Figure <ns0:ref type='figure'>4</ns0:ref>, the dimensionality of the individual data for n weeks was (n, 1); a window with a sequence length of x and a step size of 1 was initially selected for sliding orderly through the weekly data. All of the x data points contained in the window were considered as one data sequence, and the next week's data point outside the window was the corresponding label. After the window slid through all of the weekly data, sequence data with dimension (n - x, x) were generated. It is noteworthy that the content of the sequence data represents the features. For example, when the sequence length is 3, the data from days 1 to 3 form a sequence, and the data from day 4 is the label data; next, the data from days 2 to 4 form a sequence, and the data from day 5 is the label data. Sequence data generation is complete when the label data reaches the last data point of the input data.</ns0:p></ns0:div>
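<ns0:p>As a minimal illustration of this pre-processing step, the Python sketch below applies min-max scaling and the sliding-window sequence generation described above; the window length x = 45 and the (samples, 1, x) reshaping mirror the settings reported later, while everything else (function name, defaults) is illustrative rather than the authors' released code.</ns0:p>
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_sequences(weekly_values, x=45):
    # Scale the cumulative weekly values to 0~1 while keeping their proportions.
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(
        np.asarray(weekly_values, dtype=float).reshape(-1, 1)).ravel()

    # Slide a window of length x with step 1: each window is one sequence,
    # and the value immediately after the window is its label.
    sequences, labels = [], []
    for start in range(len(scaled) - x):
        sequences.append(scaled[start:start + x])
        labels.append(scaled[start + x])

    # Shape (n - x, 1, x): one time step of x features, as used in the paper.
    return np.asarray(sequences).reshape(-1, 1, x), np.asarray(labels), scaler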
<ns0:div><ns0:head>LSTM and Bi-LSTM</ns0:head><ns0:p>In RNN, too many cyclic units may lead to the loss of previously learned regularity information and the existence of long-term dependence, and can lead to problems such as gradient disappearance or gradient explosion <ns0:ref type='bibr' target='#b27'>(Goodfellow, Bengio &amp; Courville, 2016)</ns0:ref>. LSTM is mainly used to learn and overcome the long-term dependence of RNN <ns0:ref type='bibr' target='#b28'>(Hochreiter &amp; Schmidhuber, 1997)</ns0:ref>. The internal units of the LSTM include the memory cell, forget gate, input gate, and output gate <ns0:ref type='bibr' target='#b27'>(Goodfellow, Bengio &amp; Courville, 2016)</ns0:ref>. In the LSTM internal unit, the memory cell carries the necessary information between the LSTM internal circulation units, thereby solving the problem of gradient disappearance and learning long-term dependence <ns0:ref type='bibr' target='#b29'>(Olah, 2015)</ns0:ref>. The sigmoid layers in the three gates constrain the value between 0 and 1, so as to determine which information should be saved or forgotten <ns0:ref type='bibr' target='#b27'>(Goodfellow, Bengio &amp; Courville, 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Olah, 2015)</ns0:ref>. When the information flow enters the circulation unit, the operations are as follows <ns0:ref type='bibr' target='#b29'>(Olah, 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>(1) forget_t = σ(W_forget · [h_{t-1}, x_t] + b_forget)
(2) input_t = σ(W_input · [h_{t-1}, x_t] + b_input)
(3) C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)
(4) C_t = forget_t * C_{t-1} + input_t * C̃_t
(5) output_t = σ(W_output · [h_{t-1}, x_t] + b_output)
(6) h_t = output_t * tanh(C_t)</ns0:formula><ns0:p>In the above, t stands for the moment, and forget_t, input_t, and output_t represent the outputs of the forget, input, and output gates after their sigmoid layers, respectively; C_t is the memory cell state and h_t is the output at time t. LSTM can only predict the output based on the previous content, but later information also helps to understand the current moment. Therefore, the Bi-LSTM proposed by <ns0:ref type='bibr' target='#b30'>Schuster &amp; Paliwal (1997)</ns0:ref> performs bidirectional input and makes full use of the context information. The forward input is the sequence input at time t and the output at time t-1, and the backward input is the sequence input at time t and the output at time t+1. The final output is a combination of the forward output h_forward = (hf_0, hf_1, ..., hf_{n-1}) and the backward output h_backward = (hb_{n-1}, hb_{n-2}, ..., hb_0).</ns0:p></ns0:div> <ns0:div><ns0:head>GRU</ns0:head><ns0:p>The difference between GRU and LSTM is that the GRU merges the forget gate and input gate of LSTM into an update gate, and reduces the number of gate controls <ns0:ref type='bibr' target='#b31'>(Chung et al., 2014)</ns0:ref>. The GRU transmits information directly through the hidden state, rather than through the memory cell. Whether the input information of the current moment and the output information of the previous moment are updated is first determined by the update gate. The reset gate is then used to control whether the information is set to 0, i.e., to determine the amount of information to be retained in the candidate information. Finally, the update gate controls how much information in the output at the previous time is forgotten and how much information is added to the hidden information, and the retained information forms the output of the GRU recurrent unit at the current moment. The main formulas of the GRU internal structure are as follows <ns0:ref type='bibr' target='#b31'>(Chung et al., 2014</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_1'>(7) update_t = σ(W_update · [h_{t-1}, x_t])
(8) reset_t = σ(W_reset · [h_{t-1}, x_t])
(9) h̃_t = tanh(W · [reset_t * h_{t-1}, x_t])
(10) h_t = (1 - update_t) * h_{t-1} + update_t * h̃_t</ns0:formula></ns0:div>
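<ns0:p>A minimal Keras sketch of the common structure just described (recurrent layer, Flatten to go from 3-D to 2-D, then a Dense output) is given below; the unit count, optimizer, loss, and output activation are assumptions for illustration, not the exact settings of Table 2.</ns0:p>
from tensorflow import keras
from tensorflow.keras import layers

def build_recurrent_model(kind="LSTM", seq_len=45, units=64):
    recurrent = {
        "LSTM": layers.LSTM(units, return_sequences=True),
        "GRU": layers.GRU(units, return_sequences=True),
        "Bi-LSTM": layers.Bidirectional(layers.LSTM(units, return_sequences=True)),
    }[kind]
    model = keras.Sequential([
        keras.Input(shape=(1, seq_len)),        # (timesteps=1, features=seq_len)
        recurrent,                               # 3-D output (batch, 1, units)
        layers.Flatten(),                        # 3-D to 2-D, as in Figure 3
        layers.Dense(1, activation="sigmoid"),   # scaled target lies in 0~1 (assumption)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
<ns0:p>The IndRNN variant described next would omit the Flatten layer, since its recurrent layer already returns 2-D output.</ns0:p>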
<ns0:div><ns0:head>IndRNN</ns0:head><ns0:p>IndRNN makes the neurons within a layer independent of each other and solves the gradient attenuation that appears in LSTM and GRU, and it can handle longer sequences. In addition, LSTM and GRU add gate computations, which increases the computational complexity of the network layer.</ns0:p><ns0:p>In IndRNN, there are few parameters, and at any given moment a neuron only receives the input of that moment and its own state from the previous moment, making each neuron independent <ns0:ref type='bibr' target='#b21'>(Li et al., 2018)</ns0:ref>. The state update formula of the hidden layer is <ns0:ref type='bibr' target='#b21'>(Li et al., 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_4'>(11) h_t = σ(W x_t + u ⊙ h_{t-1} + b)</ns0:formula><ns0:p>Here, W and u are the weights of a certain neuron for the current moment and the previous moment, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Fine-tuning</ns0:head><ns0:p>Fine-tuning refers to copying the weights of the trained network model to the network that needs to be used, and continuing to train and adjust part of the weights <ns0:ref type='bibr' target='#b23'>(Tajbakhsh et al. 2016)</ns0:ref>. The advantage of fine-tuning is that it can achieve better results in a shorter time than training the network from scratch, owing to pre-training using a large amount of data <ns0:ref type='bibr' target='#b23'>(Tajbakhsh et al. 2016)</ns0:ref>.</ns0:p></ns0:div>
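<ns0:p>A minimal sketch of this weight transfer and partial freezing in Keras follows; freezing all layers except the final Dense head approximates the per-model settings of Table 3 (which are not reproduced here), and the optimizer, learning rate, and epoch count are illustrative assumptions.</ns0:p>
from tensorflow import keras

def fine_tune(model, X_recent, y_recent, epochs=50):
    # `model` is assumed to already carry the pre-trained weights, e.g.
    # restored with model.load_weights() after the pre-training step.
    # Freeze every layer before the dense head so that only the dense and
    # activation parameters are adjusted with the recently updated data.
    for layer in model.layers[:-1]:
        layer.trainable = False
    model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
    model.fit(X_recent, y_recent, epochs=epochs, verbose=0)
    return model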
<ns0:div><ns0:head>Assessment metrics</ns0:head><ns0:p>To evaluate the effectiveness of the model, the RMSE and MAPE were used to assess the fitting performance between the output (prediction data) and the label data of each sequence. The equations are as follows:</ns0:p><ns0:formula xml:id='formula_5'>(12) RMSE(L, h) = sqrt((1/m) Σ_{i=1}^{m} (h_i - L_i)²)
(13) MAPE(L, h) = (100%/m) Σ_{i=1}^{m} |h_i - L_i| / |L_i|</ns0:formula><ns0:p>Here, m denotes the number of sequence data, and h_i and L_i represent the output of sequence i after testing in the model and the corresponding label data, respectively. The smaller the values of RMSE and MAPE, the better the prediction effect, which means that the error between the prediction data and the label data is smaller.</ns0:p></ns0:div>
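<ns0:p>For reference, the two metrics in equations (12) and (13) can be computed with a few lines of NumPy; this is a generic sketch rather than the authors' released code.</ns0:p>
import numpy as np

def rmse(labels, outputs):
    labels, outputs = np.asarray(labels, float), np.asarray(outputs, float)
    return np.sqrt(np.mean((outputs - labels) ** 2))

def mape(labels, outputs):
    labels, outputs = np.asarray(labels, float), np.asarray(outputs, float)
    return 100.0 * np.mean(np.abs(outputs - labels) / np.abs(labels))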
<ns0:div><ns0:head>Results</ns0:head><ns0:p>This paper aims to achieve more accurate prediction of epidemic trends by the proposed framework based on IndRNN and the fine-tuning strategy on COVID-19 epidemiological data. The deadline for the training data is March 13th, 2021; the data from 3/20/2021 to 4/24/2021 are used for fine-tuning and testing, and cases from 1/22/2021 to 5/10/2021 are taken for validation. The deep learning environment of our experiment was mainly built on the Ubuntu 16.04 system, coded in Python with Keras, with the support of an Intel Core i9-10920X CPU @ 3.50GHz×24.</ns0:p><ns0:p>Three tasks are completed in this paper: 1) model comparison of the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, 2) the fine-tuned IndRNN model is utilized to predict the number of confirmed cases in the United States, India, Brazil, France, Russia, China and the world, and to verify their accuracy, and 3) the growth status of the cumulative cases in the 6 countries in combination with current policies is analyzed. All results in this section use the values in Table <ns0:ref type='table'>2</ns0:ref>. The experimental procedure for each model follows the steps in the method. Moreover, during fine-tuning, the numbers of layers in the seven models that do not participate in training are set as shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison of models</ns0:head><ns0:p>The performance of seven different models, namely the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, is compared based on the COVID-19 statistical data of India. The reason for this selection lies in the fact that India is the worst-affected country under the current situation; its weekly increase of cases is dramatically higher than in other countries, with greatly varying data. Before adding the updated data, the six countries and the world used in this paper have at least 55 weeks of data. To find the most suitable sequence length, and to ensure that the number of training data is larger than the number of fine-tuning data, the Indian dataset was used for training the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models with sequence lengths between 5 and 45 weeks (in steps of 4 weeks, i.e., one month), since a single data point cannot become sequence data. The best results were obtained when the sequence length was 45.</ns0:p><ns0:p>After normalization and sequence data generation, the dimensions of the sequence data generated from the weekly cumulative confirmed and death cases in India are (20, 1, 45) and (14, 1, 45), and the corresponding numbers of label data are 20 and 14, respectively. The last 6 of them are used for fine-tuning and testing, and the other data are used for training. The pre-trained models load the corresponding weights for iterative fine-tuning with the fine-tuning data. In this process, to find the appropriate amount of fine-tuning data, 1, 3, and 5 pieces of fine-tuning data are used for fine-tuning, and the numbers of corresponding testing data are 5, 3, and 1, respectively.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>, Table <ns0:ref type='table' target='#tab_2'>4, and Table 5</ns0:ref> show the comparison results. In Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>, 'f' and 't' in the column 'Split' represent the number of fine-tuning data and testing data, respectively. As can be seen from the table and the figure, the IndRNN model shows the best performance both without and with fine-tuning, compared to the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, and Deep_CNN models. The IndRNN model had the lowest MAPE and RMSE before and after fine-tuning, the lowest total runtime when using cumulative confirmed-case data, the second lowest total runtime when using cumulative death data, and the minimum number of total parameters. The MAPE and RMSE of LSTM, Bi-LSTM and GRU are similar, and the remaining models rank as Deep_CNN, Stacked_Bi_GRU, and CNN_LSTM, in that order. After fine-tuning, the RMSE and MAPE of the seven models are all significantly decreased; the MAPEs of the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models are decreased by 0.27%~12.36%, 0.12%~14.29%, 0.3%~10.91%, 0.69%~8.89%, 2.21%~11.84%, -1.59%~13.79%, and 0.36%~20.94%, respectively, with the largest reduction for IndRNN. The best test results were obtained when five fine-tuning data were used to fine-tune the parameters of the pre-trained IndRNN model. This proves that fine-tuning plays a very important role in reducing errors. In addition, IndRNN also exhibits superiority in computational efficiency, both with and without fine-tuning. Figure <ns0:ref type='figure'>6</ns0:ref> shows the fitting performance of the IndRNN model. The predicted values are very close to the true values and the two lines almost overlap, which demonstrates the effectiveness of the fine-tuned IndRNN model in predicting the development of COVID-19 cases.</ns0:p></ns0:div> <ns0:div><ns0:head>Predictive performance analysis of the COVID-19 epidemic situations in six countries</ns0:head><ns0:p>In this section, the proposed fine-tuned IndRNN model is used to predict the COVID-19 epidemic trend in the cumulative confirmed and death cases of China, the United States, India, Brazil, France and Russia with 5 fine-tuning data. The prediction period is one month, based on the fact that the impact of policies or events generally manifests within the following month. Then, the actual data are utilized to validate the prediction results. The development tendency of COVID-19 is analyzed, significant phenomena are detected, and the reasons responsible for them are further explored.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> shows the development tendency and prediction results of the COVID-19 cumulative confirmed and death cases in China, and the general development stages of the epidemic in China are summarized in Table <ns0:ref type='table'>S1</ns0:ref>. As can be seen from the figure, the cumulative confirmed cases in China experienced dramatic changes. They first started to rise sharply in mid-January 2020, while the number of deaths continued to rise, and the epidemic entered the outbreak stage (The State Council Information Office of the People's Republic of <ns0:ref type='bibr'>China, 2020)</ns0:ref>.
During this period, the Chinese government issued a timely emergency response: laboratory capacities for nucleic acid detection were expanded; the 'Huoshenshan', 'Leishenshan', and Fangcang shelter hospitals were constructed to accommodate more patients; medical observation was conducted on close contacts; people remained at home as much as possible; and wearing masks, avoiding gatherings, and keeping a physical distance of 1 meter when going out were strictly required <ns0:ref type='bibr' target='#b34'>(Li et al., 2020b;</ns0:ref><ns0:ref type='bibr'>Chinese Center for Disease Control and Prevention, 2020a)</ns0:ref>. The China CDC issued prevention guidelines for people of different ages and places to strengthen the awareness of safety (Chinese Center for Disease <ns0:ref type='bibr'>Control and Prevention, 2020b)</ns0:ref>. Figure <ns0:ref type='figure'>8</ns0:ref> illustrates the development trend of cumulative confirmed and death cases in the United States, India, Brazil, France, and Russia. Different from that of China, the growth of cumulative cases in these five countries was relatively slow before July 2020. After that, it began to increase sharply, especially in the United States and India. The general development of the COVID-19 epidemic in the United States is summarized in Table <ns0:ref type='table'>S2</ns0:ref>. After the first COVID-19 patient in the United States was recorded on January 20th, 2020 <ns0:ref type='bibr' target='#b37'>(Holshue et al., 2020)</ns0:ref>, the following measures were taken: non-Americans who had visited China in the past 14 days were banned from entering on February 2nd, 2020 (National Immigration Administration, 2020), a cruise ship ban was imposed, and masks were promoted by the CDC (Centers for Disease <ns0:ref type='bibr'>Control and Prevention, 2020)</ns0:ref>. Nevertheless, there were still mass protests and campaign rallies without protective measures <ns0:ref type='bibr' target='#b40'>(Schuchat, 2020)</ns0:ref>. As a result, the number of confirmed cases in the United States continued to increase significantly after surpassing China on March 26th, 2020. In addition, the 'opening up' measures promoted by the American government <ns0:ref type='bibr'>(Goodmorningamerica, 2020a)</ns0:ref>, and the opening of schools and the holding of presidential campaign rallies since August 2020 (USA TODEY, 2020), promoted the intensification of the epidemic; hence, the cumulative confirmed cases in the United States started to rise sharply in October 2020.</ns0:p><ns0:p>India had no obvious growth before July 2020, in which the 'city closure' measures implemented by the Indian government at the end of March played a key role <ns0:ref type='bibr' target='#b43'>(Xinhuanet, 2020a)</ns0:ref>. However, some areas had to be locked down again, and the number of people suffering from COVID-19 in India increased extraordinarily, because of an outbreak rebound caused by the increase of outdoor activities after a gradual unsealing since June 2020 <ns0:ref type='bibr' target='#b43'>(Xinhuanet, 2020a;</ns0:ref><ns0:ref type='bibr' target='#b44'>Abcnews, 2020b)</ns0:ref>.
Many massive rallies have been held in India since March 2021, including political rallies and festivals <ns0:ref type='bibr' target='#b45'>(Bhuyan, 2021)</ns0:ref>, and the number of cumulative confirmed cases has surged.</ns0:p><ns0:p>Brazil, France, and Russia do not show a significant increase in general, but the number of cases in Brazil is higher than that in France and Russia. Brazil implemented a social distancing policy, but some people neglected to wear masks at rallies (The paper, 2020), and professional football matches were held in Rio, Brazil on June 18th, 2020 <ns0:ref type='bibr' target='#b47'>(Xinhuanet, 2020b)</ns0:ref>; these events promoted the development of the epidemic. France implemented a lockdown on March 17th, 2020, and gradually lifted it from May 11th, 2020. However, the number of cumulative diagnoses in France started to rise in October 2020, owing to the emergence of cases in schools and surges in cases in many areas; hence, the French government closed cities for a second time on October 30th, 2020 <ns0:ref type='bibr' target='#b48'>(Gouvernement.fr., 2020;</ns0:ref><ns0:ref type='bibr'>CNN, 2020)</ns0:ref>. The Russian government did not take timely measures against the European epidemic, causing a large number of imported cases <ns0:ref type='bibr' target='#b50'>(Kankanews, 2020)</ns0:ref>; moreover, the holding of a military parade on June 24th, 2020 <ns0:ref type='bibr' target='#b51'>(Xinhuanet, 2020c)</ns0:ref> became one of the factors driving Russia's curve upward.</ns0:p><ns0:p>According to Figures <ns0:ref type='figure'>7 and 8</ns0:ref>, the trend forecasts for the six countries are basically in line with the actual curves. Table <ns0:ref type='table'>6</ns0:ref> presents the testing and validation accuracy of the cumulative confirmed and death COVID-19 cases in the six countries by the IndRNN model with fine-tuning. As indicated in the table, in most cases, the MAPEs of testing and validation are less than 1.2% and 6.2%, respectively. The lowest MAPE and RMSE values, 0.04% and 2.00 in the testing results and 0.05% and 1.17 in the validation results, respectively, are obtained for deaths in China, which shows the effectiveness of fitting and prediction by the proposed framework.</ns0:p></ns0:div> <ns0:div><ns0:head>Verification of prediction accuracy in global diagnosis</ns0:head><ns0:p>The IndRNN model was used to predict epidemic trends by training on the cumulative confirmed weekly cases for the world from February 15th, 2020 to April 24th, 2021. Figure <ns0:ref type='figure'>9</ns0:ref> displays the prediction diagram. By May 22nd, 2021, the number of global cumulative confirmed cases may surpass 171.4 million and deaths may reach 3.5 million; the deviations between these results and the real data are 2.81% and 2.30%, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Among the seven models introduced above, the IndRNN model with the fine-tuning strategy achieved the optimal result. In the different models, fine-tuning can continuously adjust the parameters in a relatively short time. Compared with the result without fine-tuning, the error is decreased by up to 20.94%.
It is confirmed that fine-tuning takes full advantage of the characteristics obtained from the pre-trained model, and this strategy plays an important role in increasing accuracy.</ns0:p><ns0:p>According to the experimental results of the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, the IndRNN model takes the second shortest time, but its RMSE and MAPE are the smallest. Among the seven networks, the errors of the stacked networks are increased compared with the simple networks, and overfitting occurs, because the stacked networks over-interpret the information, resulting in weak generalization ability of the models. Compared with CNN, the variant networks of RNN (i.e., LSTM, Bi-LSTM, GRU, and IndRNN) have higher accuracy, owing to the memory function of the internal structure of the RNN-series networks, which is able to extract features from the sequences at different moments. This ability to process time series is more consistent with the characteristics of COVID-19 epidemiological data changing over time. Among the variant networks of RNN, the
Among them, the fine-tuned model using 5 fine-tuning data makes full use of the features obtained from the pre-trained IndRNN model and obtains the best result.</ns0:p><ns0:p>The model's running time grows with its number of parameters, but the fitting quality does not. The IndRNN model has the fewest parameters because its internal units perform no gate calculations, in contrast with the LSTM, GRU, and Bi-LSTM units. The recurrent unit of the LSTM model contains three gates and a memory cell, whereas the GRU model has only two gates, and the Bi-LSTM model contains a bidirectional LSTM layer; hence, the LSTM model has more parameters than the GRU model, and the Bi-LSTM model has roughly twice as many parameters as the LSTM model. Stacked_Bi_GRU and CNN_LSTM have a large number of parameters due to the stacking of multiple network layers, which affects the running speed. Among the seven networks, Deep_CNN is the model with the fewest fine-tuning parameters because of the small number of dense-layer parameters resulting from parameter sharing, hence its fine-tuning speed is the fastest. However, the test RMSE of this model is up to 9.22% higher than that of IndRNN, which may be because the reduction of parameters weakens the feature extraction from the sequence data.</ns0:p><ns0:p>Based on the policies mentioned above, we divide these policies into active and inactive measures according to whether they contribute to the mitigation of the epidemic (Table <ns0:ref type='table'>7</ns0:ref>). We conclude that, for the final results of epidemic prevention and control, the timely release of prevention and control policies by the government is the most important factor. Local implementation and the cooperation of residents are also crucial. The COVID-19 epidemic in China gradually declined to a stable state in April 2020 thanks to good performance in these respects. Some countries adopted the same policies as China but the effect was not obvious, e.g. the United States and India, owing to deficient implementation in some regions and insufficient cooperation of residents, such as not wearing masks and organizing rallies. The short duration of blockade measures in some countries promoted the recurrence of the epidemic, for instance in India. Policy relaxation and economic restarts should be conducted after the epidemic situation is relatively stable, rather than prematurely. Large-scale activities with crowd gatherings should be cancelled as much as possible, owing to the large potential infection risk. At the resident level, some countries have promulgated corresponding measures, but citizens have not strictly implemented them. Some citizens resist the 'closing the city' measure because they think it conflicts with personal freedom. Some people have negative emotions about the severe epidemic, and their determination to respond together is insufficient, affecting the policy response. It is essential to actively respond to the relevant prevention and control policies issued by the state, and not to violate the prohibitions without authorisation.
Although the current epidemic situation in some countries is gradually stabilizing, remaining vigilant, paying attention to personal protection, and reducing long-distance travel remain important.</ns0:p><ns0:p>To ensure that the training data remain larger than the fine-tuning data and thus obtain good results, if the time interval is long it is necessary to download the continuously updated data for retraining, and then use the subsequent data for fine-tuning; in this way the model does not need to be retrained in the short term and the accuracy is improved.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this work, deep learning models were utilized to study the development of COVID-19 in China, the top five countries, and the world. The framework based on IndRNN and fine-tuning consists of four steps: data preprocessing, pre-training the model and saving the weights, fine-tuning the weights, and trend prediction and validation. The development tendency of COVID-19 was analyzed, predicted, and validated. Some conclusions are drawn as follows:</ns0:p><ns0:p>(1) The validity of the proposed framework is verified by comparing it with LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, and Deep_CNN. The results demonstrate that the IndRNN model shows the best performance and has low complexity. Compared with no fine-tuning, the fine-tuned IndRNN model can effectively reduce the prediction errors by up to 20.94% as well as the time cost. For most of the countries, the MAPE of the IndRNN model with fine-tuning was less than 1.2%, and the lowest MAPE and RMSE values of 0.04% and 2.00 in the testing results were generated for deaths in China, which indicates the effectiveness of the proposed framework.</ns0:p><ns0:p>(2) According to the prediction and validation results, the MAPE of the IndRNN model with fine-tuning was less than 6.2% in most cases, and the framework obtained a minimum MAPE and RMSE of 0.05% and 1.17, respectively, for deaths in China. The deviations between the predicted global cumulative confirmed and death cases in late May 2021 and the real data were 2.81% and 2.30%, respectively, which confirms the predictive performance of the proposed framework.</ns0:p><ns0:p>(3) Policies play an important role in the development of COVID-19. Timely and appropriate measures can greatly reduce the spread of COVID-19. Some countries adopted the same positive policies but the effect was not ideal because of untimely and inappropriate government policies, low implementation ability, and a low degree of public cooperation. The positive measures include the release of emergency responses, sealing off cities and closing entertainment venues, prohibiting activities, establishing temporary hospitals, border inspections, cruise ship bans, the release of prevention and control guidelines, the vigorous production of medical materials, and the improvement of COVID-19 detection capacity. The negative measures are relaxing restrictions too early for reopening, untimely policy implementation, hosting large events, and not making face masks mandatory. Additionally, untimely and inappropriate government policies, lax regulations, and insufficient public cooperation are the reasons for the aggravation of the epidemic situations.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>The fine-tuning model in this paper is consistent with the pre-training model, as shown in Figure 3. Firstly, the fine-tuning model copies the weights of the pre-training model. Then, the parameters of the network layers before the dense layer in the Stacked_Bi_GRU, CNN_LSTM, and Deep_CNN models, and the LSTM, Bi-LSTM, GRU, and IndRNN layers in the corresponding models, are frozen and do not participate in the following training. The parameters of the dense layer and the activation layer of each fine-tuning model remain active, waiting for adjustment. Finally, the fine-tuning data are imported to start the fine-tuning training. The error between the feature sequence data and the label data is gradually narrowed by iterative training, making full use of the features and knowledge acquired by the pre-training model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Error metrics used for evaluation, where $L_i$ denotes the real value, $h_i$ the predicted value, and $m$ the number of samples: $RMSE(L,h)=\sqrt{\frac{1}{m}\sum_{i=1}^{m}(L_i-h_i)^2}$ (13) and $MAPE(L,h)=\frac{100}{m}\sum_{i=1}^{m}\left|\frac{L_i-h_i}{L_i}\right|$.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>These 7 models use the Indian COVID-19 data to save the weights after pre-training, respectively. Under the condition of freezing the network layers before the fully connected layer, the pre-trained models load the corresponding weights for iterative fine-tuning with the fine-tuning data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>As a result, the epidemic has been promptly controlled, and the cumulative confirmed cases in China have gradually flattened and stabilized by late April 2020. The COVID-19 deaths in China have been basically flat since the end of April 2020, and have remained at 4,636 since January 25th, 2021. It is demonstrated that the epidemic in China has been adequately controlled since late April 2020.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Notation of the LSTM unit: $W_{forget}$, $W_{input}$, $W_{output}$, $W_{C}$ and $b_{forget}$, $b_{input}$, $b_{output}$, $b_{C}$ denote the weights and bias offsets of the forget gate, input gate, output gate, and memory cell, respectively; $\sigma$ is the activation function in the forget, input, and output gates; $h_{t-1}$ is the output at the previous moment and $x_t$ is the input at the current moment. The updated memory cell is obtained as $C_t = forget_t * C_{t-1} + input_t * \tilde{C}_t$, where the first term discards unnecessary information and the second term adds the information to be updated. Finally, the information in the memory cell $C_t$ is controlled by the output gate to return the final output $h_t$ of the LSTM cell.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4. Comparison among LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN in terms of MAPE.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Prediction of cumulative confirmed COVID-19 cases in India.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MAPE (%)</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
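To make the freezing-and-fine-tuning procedure summarized in the conclusions and figure captions above concrete, the following is a minimal sketch assuming a Keras/TensorFlow implementation, with an LSTM layer standing in for the recurrent feature extractor (IndRNN is not a built-in Keras layer, so this is an illustrative substitute rather than the authors' released code); the layer sizes, file name, and placeholder data shapes are assumptions.

import numpy as np
from tensorflow import keras

# Pre-training model: one recurrent layer followed by a dense output layer.
def build_model(seq_len=45):
    model = keras.Sequential([
        keras.layers.LSTM(64, input_shape=(1, seq_len)),   # recurrent feature extractor
        keras.layers.Dense(1, activation='linear'),        # output layer adjusted during fine-tuning
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# 1) Pre-train on the training sequences and save the weights.
model = build_model()
x_train, y_train = np.random.rand(20, 1, 45), np.random.rand(20, 1)   # placeholder data
model.fit(x_train, y_train, epochs=10, verbose=0)
model.save_weights('pretrained.weights.h5')

# 2) Fine-tuning model: copy the pre-trained weights, freeze the recurrent layer,
#    and leave only the dense layer trainable.
ft_model = build_model()
ft_model.load_weights('pretrained.weights.h5')
ft_model.layers[0].trainable = False             # freeze the layer before the dense layer
ft_model.compile(optimizer='adam', loss='mse')   # recompile after changing trainability

# 3) Fine-tune on the few most recent sequences.
x_ft, y_ft = np.random.rand(5, 1, 45), np.random.rand(5, 1)   # placeholder fine-tuning data
ft_model.fit(x_ft, y_ft, epochs=10, verbose=0)

Only the dense output layer is updated in the final fit, which mirrors the idea of reusing the pre-trained recurrent features while adapting quickly to the newest weeks of data.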
" Response to Reviewer 1 Comments from the Reviewer 1 Basic reporting No comment. Experimental design I think the details of data processing in the experimental design, including all variables from the preprocessing and coding of the original obtained data to how to train and test, should be clearer and add more details. Validity of the findings I suggest that the discussion of the results should be more fully compared with the previous studies. Please clarify and add. Additional comments The structure of this article is complete and professional, with clear diagrams and tables, and a clear sharing of materials. However, I think the reference literature, especially the collection of the literature on the epidemic, is slightly insufficient. Not only must we collect more literature about COVID-19, but even from the perspective of public health, I think it is necessary to go back to the Spanish flu... etc. The relevant literature on the spread of the epidemic is discussed in order to complete the discussion in the background. For your research design, I think the details of data processing in the experimental design, including all variables from the preprocessing and coding of the original obtained data to how to train and test, should be clearer and add more details. Finally, I suggest that the discussion of the results should be more fully compared with the previous studies. Please clarify and add. Comment 1: -I think the reference literature, especially the collection of the literature on the epidemic, is slightly insufficient. Not only must we collect more literature about COVID-19, but even from the perspective of public health, I think it is necessary to go back to the Spanish flu... etc. The relevant literature on the spread of the epidemic is discussed in order to complete the discussion in the background. Response 1: Thank you for your valuable comment. In this paper, three new literatures on epidemic prediction were added, and the methods used in the literatures and some methods originally cited were analyzed and summarized. Due to the addition of related work section in this paper, the detailed analysis the epidemic prediction works was recorded in this chapter, and a brief explanation was given in the third paragraph of the introduction. (1) The third paragraph of the introduction section in the revised paper (page 2 line 62-67): However, in the current epidemic prediction work, the models used are complex and slow (Yang et al., 2020; Bandyopadhyay & Dutta 2020), and some methods are fast but not effective (Huang et al., 2020). In addition, the number of cumulative increases is not stable but variable, particularly, a sudden rapid or modest increase affects the stability of modelling and therefore the accuracy of predicting future trends. Thus, it is necessary to propose a model with coexistence of high precision, high speed, and low model complexity to predict and analyze the development tendency of COVID-19 with more efficiency and accuracy, and to summarise the meaningful and positive policies. (2) The related work section in the revised paper (page 3-4 line 106-165): There are two main methods for forecasting the epidemic development of COVID-19: mathematical model and deep learning model. A typical mathematical model in epidemic dynamics is the Susceptible-Exposed-Infectious-Removed (SEIR) model, using mathematical formulas to reflect the relationships between the flows of people at four states: susceptible, exposed, infectious, and recovered (Fang, Nie & Penny, 2020). 
The SEIR model was used to effectively predict the peaks and sizes of COVID-19 epidemiological data with sufficient fitting performance (Yang et al., 2020; Fang, Nie & Penny, 2020). A modified SEIR model also showed a good effect for predicting the peaks and sizes (Yang et al., 2020). The peak deviation of the another modified SEIR model in predicting epidemiological data in China was 3.02% (Fang, Nie & Penny, 2020). However, SEIR focuses on predicting trends in sensitive, exposed, infected, and recovered groups (Yang et al., 2020; Fang, Nie & Penny, 2020), rather than cumulative confirmed and death cases. Moreover, it is necessary to comprehensively consider the changes in some parameter values as affected by the changes in epidemic policies and regional differences, such as the effective reproductive number, number of contacts, and infection rate. And these parameters are not easy to obtain, and are uncertain. Deep learning automatically extracts the features from the data and builds the model without the need for other specific parameters. This method generates a series of sequence data from COVID-19 epidemiological data and looks for regular changes in the sequence data. Some deep learning models based on LSTM was used to predict COVID-19 trend, and the results demonstrated that LSTM has good prospects for predicting the trend; however, the fitting effect needs to be improved (Yang et al., 2020; Zandavi, Rashidi & Vafaee, 2020), too many internal parameters of LSTM increase the complexity of model training, and it can only learn forward. The Bi-LSTM, which adds bidirectional learning capability on LSTM, and the GRU, which makes gates simplification on the internal structure of LSTM, are used by Shahid, Zameer & Muneeb (2020) for epidemic prediction. Moreover, a stacked-Bi-GRU model was applied for COVID-19 trend forecast, owing to the learning adequacy of the bidirectional cyclic network (Bandyopadhyay & Dutta 2020). LSTM and GRU solve the problem of gradient vanishing explosion of RNN, but there is a gradient attenuation problem of layer. In addition, the entanglement of neurons in the same layer makes their behavior difficult to interpret. The existence of these multiple gate operations in the recurrent unit of the RNN variants, which is LSTM, Bi-LSTM, GRU, makes the parameters complicated. CNN does not need many parameters in virtue of the advantage of weight sharing, in comparison with these recurrent networks. A deep CNN model (Huang et al., 2020) and a CNN-LSTM model (Dutta, Bandyopadhyay & Kim, 2020) were proposed for analysing and predicting the number of confirmed cases in China. However, the training speed of deep CNN is fast, but the effect is not significantly improved. And CNN_LSTM model combines CNN and LSTM, but increases the model complexity. In the above deep learning network model, high precision, high speed, and low model complexity cannot coexist. And the cumulative epidemic data is time series, so recurrent networks are more suitable for processing such serialized data in comparing with CNN. Similar to LSTM, GRU, Bi-LSTM, IndRNN is one of the variants of RNN, which can learn longer sequence data, and it has no redundant gate operations, fewer parameters, and can be more easily trained (Li et al., 2018). Simultaneously, IndRNN is designed to solve the problems of gradient disappearance and explosion, and can be utilized to process sequence data like LSTM, GRU and Bi-LSTM. 
IndRNN was used to learn relationships between plant gene sequences, overcoming the uncertainty of artificially acquired traits, and has higher accuracy than LSTM (Zhang et al., 2020). Therefore, Indrnn is adopted as the basic model in this paper to improve the accuracy of epidemic prediction. However, the epidemic data are updated daily, and the network weights need to be retrained after obtaining the data of the last few weeks, which are time-consuming. Fine-tuning can transfer the trained network model parameters to the required network for partial parameter adjustments, without the need to train from scratch. Tajbakhsh et al. (2016) used a fine-tuned pre-trained CNN network for medical image analysis and found that the effect was better than training all data from scratch CNN. Boyd, Czajka & Bowyer (2020) demonstrated that fine-tuning existing network weights to extract iris features was more accurate. Therefore, a fine-tuning strategy is added in this study to further improve the accuracy and speed of prediction. Combined with the above, in order to achieve rapid and more accurate epidemic prediction, a framework based on IndRNN and fine-tuning to predict COVID-19 epidemiological data is proposed in this paper. Comment 2: -For your research design, I think the details of data processing in the experimental design, including all variables from the preprocessing and coding of the original obtained data to how to train and test, should be clearer and add more details. Response 2: Thanks for your reminder. In order to illustrate the parameters involved in this experiment, two new Tables 2 and 3 are added. Where Table 2 is consistent parameters and their values in the models covered in this article, and Table 3 lists the numbers of network layers of the model and frozen network layers when fine-tuning. The newly added Table 2 and Table 3 are shown in the uploaded primary files. The quantity of training data, fine tuning data and test data in model comparison is explained in results section. Moreover, the details of framework are added in the method section to better understand the process. The parameter description added in results section as follows: (1) The second paragraph of the results in the revised paper (page 9 line 329-336): Three tasks are completed in this paper: 1) model comparison by LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, 2) the fine-tuned IndRNN model is utilized to predict the number of confirmed cases in the United Sates, India, Brazil, France, Russia, China and the world, and verify their accuracy, and 3) the growth status of the cumulative cases in 6 countries in combination with current policies are analyzed. All results in this section use the values in Table 2. The experimental procedure for each model follows the steps in the method. Moreover, during fine-tuning, the numbers of layers in the seven models that do not participate in training are set as shown in Table 3. (2) The second and third paragraph of the Section “Comparison of models” in the revised paper (page 9-10, line 350-359): After normalization and sequence data generation, the dimensions of the sequence data generated by using weekly cumulative confirmed and death cases in India are (20, 1, 45) and (14, 1, 45), and the corresponding label data are 20 and 14 respectively. The last 6 of them are used for fine-tuning and testing, and the other data are used for training. These 7 models use Indian COVID-19 data to save the weights after pre-training, respectively. 
Under the condition of freezing the network layer before the fully connected layer, the pre-trained models load the corresponding weights for iterative fine-tuning by the fine-tuning data. In this process, to find the appropriate amount of fine-tuning data, 1, 3, and 5 pieces of fine-tuning data are used for fine-tuning, and the number of corresponding testing data are 5, 3, and 1 respectively. (3) The first sentence of the third paragraph of the Section “Predictive performance analysis of the COVID-19 epidemic situations in six countries” in the revised paper (page 10 line 379-381): “In this section, the proposed fine-tuned IndRNN model is used to predict the COVID-19 epidemic trend in the cumulative confirmed and death cases of the China, the United States, India, Brazil, France and Russia with 5 fine-tuning data.” The detail description added in method section as follows: The first paragraph of the method in the revised paper (page 5-6 line 187-203): The flowchart of the proposed framework is illustrated in Figure 2. It mainly consists of four steps as follows: (1) the cumulative confirmed and death cases obtained from various websites are first preprocessed to generate sequence data of size in a uniform range, and the sequence data are divided into training data, fine-tuning data, and testing data; (2) the deep learning model trains and learns the features using the training data, then the weights are obtained. The training process is the process of establishing the optimal relationship between the training data and the training labels, in which the training data is the input and the training label is the perfect output. Through continuous iterative training, the characteristics of training data are found, and the parameters in the network model are adjusted to continuously fit the actual output and training labels; (3) the fine-tuning data is loaded with the weight file, and the parameters are adjusted by fine-tuning model, continuously reducing the error between the fine-tuning data and the fine-tuning labels; (4) though applying the model built from the previous training and fine-tuning, the testing data are used for testing, later trends are predicted, and validated using true data. The testing data are input for testing by applying the model with weights, to determine the gap between the actual output of test data and the test labels. After that, the model is used for direct trend prediction, and the prediction results are compared with the real updated data to judge the prediction performance of the proposed framework. (2) The second paragraph of the pre-processing section in the revised paper (page 6 line 225-228): For example, when the sequence length is 3, the data from the first to the third form a sequence, and the data from the fourth day is the label data; next, the data of days 2 to 4 form a sequence, and the data of day 5 is label data. The sequence data generation is completed until the label data reaches the last data of the input data. Comment 3: -I suggest that the discussion of the results should be more fully compared with the previous studies. Please clarify and add. Response 3: Thank for your instruction. This paper added some discussions (page 12) on the reasons why the proposed framework is more suitable for the epidemic data in the discussion section. And the discussion section compares the existing results with those in other literature and analyzes the reasons for the inconsistencies. 
The new discussion section is as follows: (1) The second paragraph of the discussion in the revised paper (page 12-13, line 466-484): According to the experimental results of LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, the IndRNN model takes the second least time, but its RMSE and MAPE are the smallest. Among the seven networks, the errors of the stacked networks are increased compared with the simple networks, and the phenomenon of overfitting occurs, which is because the stacked networks over-interpret information, resulting in the weak generalization ability of the models; compared with CNN, the variant network (i.e. LSTM, Bi-LSTM, GRU, IndRNN) of RNN has higher accuracy, due to the memory function of the internal structure of RNN series network, which is able to extract features from sequences of different moments. This ability to process time series is more consistent with the characteristics of COVID-19 epidemiological data changing over time. Among the variant networks of RNN, the accuracy of LSTM, Bi-LSTM and GRU is not much different, which is inconsistent with the result that Bi-LSTM is better than LSTM and GRU found by Shahid, Zameer & Muneeb (2020). The reason is that the Bi-LSTM has insufficient learning ability on cumulative data of India with variable growth rate. By contrast, IndRNN has the best performance as the non-interference of independent neurons with each other, it is more adaptable to the changeable epidemic data, and is capable of better extract and transmit information hidden in the COVID-19 sequence data. Therefore, it can be considered that the IndRNN model can understand the regular pattern of epidemic data, has good learning capabilities, and achieves the coexistence of high precision and low time consumption. (2) The fifth paragraph of the discussion in the revised paper (page 13, line 509-515): “The model speed is proportional to the number of parameters, but not to the fitting effect.” and “Stacked_Bi_GRU and CNN_LSTM have a large number of parameters due to the stack of multiple network layers, which affects the running speed. Among the seven networks, Deep_CNN is the model with the fewest fine-tuning parameters because of a small number of dense layer parameters caused by characteristic of shared parameters, hence the fine-tuning speed is the fastest. However, the test RMSE of this model is up to 9.22% higher than that of IndRNN, which may be because the reduction of parameters affects the effect of feature extraction of sequence data.” Response to Reviewer 2 Comments from the Reviewer 1 Basic reporting Many flaws in the article that are needed to be addressed by authors. Experimental design Need to be revised in terms of phases and draw a main figure the tell the whole story. Validity of the findings No benchmark scenario is found, so how authors can prove that their study better than existing ones. Additional comments The authors presented very traditional study which I think it is more to evaluation rather than development. However, several limitations are exist: - The title is too general and it reflects the contribution more to evaluation rather than development study ? - The abstract is too long and need to be revised in the context of academic article. - Authors mentioned 'Therefore, it is necessary to propose a model to predict and analyze the development tendencyof COVID-19, and to summarise the meaningful and positive policies' this is too general challenge which was handled by many studies. 
Furthermore, the contribution of the study is very poor. Moreover, the authors need to be specific about the contributions of the study. - Comparison with state of the art studies also is missing. - The related works section is missing. Thus, the study failed to fill up to highlight the gap of existing studies. Comment 1: -The title is too general and it reflects the contribution more to evaluation rather than development study? Response 1: Thank you for your instruction. According to the main content of this paper, the main work is to put forward a faster and more accurate framework based on IndRNN and fine-tuning for predicting the COVID-19 trend. Therefore, the title has been changed to “Prediction of COVID-19 epidemic situation via fine-tuned IndRNN”. Comment 2: -The abstract is too long and need to be revised in the context of academic article. Response 2: Thank you for your suggestion. The sentences in the original abstract have been refined, and the redundant parts removed. The revised abstract (page 1, lines 17-36) is shown below: The COVID-19 pandemic is the most serious catastrophe since the Second World War. To predict the epidemic more accurately under the influence of policies, a framework based on the Independently Recurrent Neural Network (IndRNN) with fine-tuning is proposed to predict the epidemic development trend of confirmed cases and deaths in the United States, India, Brazil, France, Russia, China, and the world up to late May 2021. The proposed framework consists of four main steps: data pre-processing, model pre-training and weight saving, weight fine-tuning, and trend prediction and validation. It is concluded that the proposed framework based on IndRNN and fine-tuning, with high speed and low complexity, has great fitting and prediction performance. The applied fine-tuning strategy can effectively reduce the error by up to 20.94% as well as the time cost. For most of the countries, the MAPEs of the fine-tuned IndRNN model were less than 1.2%, and the minimum MAPE and RMSE were 0.05% and 1.17, respectively, for Chinese deaths during the testing phase. According to the prediction and validation results, the MAPEs of the proposed framework were less than 6.2% in most cases, and it generated the lowest MAPE and RMSE values of 0.05% and 2.14, respectively, for deaths in China. Moreover, policies that play an important role in the development of COVID-19 have been summarized. Timely and appropriate measures can greatly reduce the spread of COVID-19; untimely and inappropriate government policies, lax regulations, and insufficient public cooperation are the reasons for the aggravation of the epidemic situations. The code is available at https://github.com/zhhongsh/COVID19-Precdiction. The predictions by the IndRNN model with fine-tuning are now available online (http://47.117.160.245:8088/IndRNNPredict). Comment 3: -Authors mentioned 'Therefore, it is necessary to propose a model to predict and analyze the development tendency of COVID-19, and to summarise the meaningful and positive policies' this is too general challenge which was handled by many studies. Response 3: Thank you for your reminder. In view of this suggestion, this paper first analyzes the shortcomings of the relevant literature on epidemic prediction, and then puts forward the necessity of more accurate methods for predicting the epidemic trend.
This part has been modified (page 2, lines 62-69) to “However, in the current epidemic prediction work, the models used are complex and slow (Yang et al., 2020; Bandyopadhyay & Dutta 2020), and some methods are fast but not effective (Huang et al., 2020). In addition, the number of cumulative increases is not stable but variable, particularly, a sudden rapid or modest increase affects the stability of modelling and therefore the accuracy of predicting future trends. Thus, it is necessary to propose a model with coexistence of high precision, high speed, and low model complexity to predict and analyze the development tendency of COVID-19 with more efficiency and accuracy, and to summarise the meaningful and positive policies.” Comment 4: -The contribution of the study is very poor. Moreover, the authors need to be specific about the contributions of the study. Response 4: Thank you for your valuable comment. We summarize the work done in this paper, and the contributions are clarified according to the actual effect of the proposed framework, the comparison of the model with other models to verify its good performance, and the analysis of the policies. The contributions of this paper (page 2, lines 81-98) are as follows: 1) A framework based on IndRNN and a fine-tuning strategy, which is adept at longer sequence data, was used for COVID-19 epidemic prediction with higher accuracy and high speed. IndRNN is used to retain effective information and better learn the characteristics of the changeable COVID-19 sequence data, because its independent neurons prevent the gradient explosion and vanishing phenomena. The fine-tuning strategy is utilized to further improve the accuracy in a short time, and it avoids the cost of retraining after each data update. 2) The fitting performance of the proposed framework based on IndRNN and fine-tuning is verified on the cumulative cases of India by comparison with the LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, and Deep_CNN models. The prediction accuracy of the model is validated by the error between the prediction results and the real data of the next four weeks for the cumulative case data of the United States, India, Brazil, France, Russia, China, and the world. 3) The growth of the cumulative cases is analyzed in combination with epidemic policies and activities to assess which measures are positive. It is found that the same proactive policies have been implemented inconsistently in different countries because of lax regulation and inadequate public cooperation. Comment 5: -Comparison with state of the art studies also is missing. Response 5: Thanks for your reminder. Three state-of-the-art studies, the Stacked_Bi_GRU, CNN_LSTM, and Deep_CNN models, are added to the comparison in this paper. The experimental results are shown in Table 4, Table 5, and Figure 5 in the uploaded primary files. An analysis of the models based on these results is added to the discussion section. (1) Some results are added (page 10, lines 363-372) after adding the three models: the IndRNN model had the lowest MAPE and RMSE before and after fine-tuning, the lowest total runtime when using the cumulative diagnosis data, the second lowest total runtime when using the cumulative death data, and the minimum number of total parameters. The MAPE and RMSE of LSTM, Bi-LSTM, and GRU are similar, and the remaining models rank Deep_CNN, Stacked_Bi_GRU, and CNN_LSTM in terms of effect.
After fine-tuning, the RMSE and MAPE of the seven models are all significantly decreased, especially for IndRNN, the MAPE of which is decreased by 0.27%~12.36%, 0.12%~14.29%, 0.3%~10.91%, 0.69%~8.89%, 2.21%~11.84%, -1.59%~13.79%, and 0.36%~20.94%, respectively. (2) The second paragraph of the discussion section (page 12-13 line 466-484): According to the experimental results of LSTM, Bi-LSTM, GRU, Stacked_Bi_GRU, CNN_LSTM, Deep_CNN, and IndRNN models, the IndRNN model takes the second least time, but its RMSE and MAPE are the smallest. Among the seven networks, the errors of the stacked networks are increased compared with the simple networks, and the phenomenon of overfitting occurs, which is because the stacked networks over-interpret information, resulting in the weak generalization ability of the models; compared with CNN, the variant network (i.e. LSTM, Bi-LSTM, GRU, IndRNN) of RNN has higher accuracy, due to the memory function of the internal structure of RNN series network, which is able to extract features from sequences of different moments. This ability to process time series is more consistent with the characteristics of COVID-19 epidemiological data changing over time. Among the variant networks of RNN, the accuracy of LSTM, Bi-LSTM and GRU is not much different, which is inconsistent with the result that Bi-LSTM is better than LSTM and GRU found by Shahid, Zameer & Muneeb (2020). The reason is that the Bi-LSTM has insufficient learning ability on cumulative data of India with variable growth rate. By contrast, IndRNN has the best performance as the non-interference of independent neurons with each other, it is more adaptable to the changeable epidemic data, and is capable of better extract and transmit information hidden in the COVID-19 sequence data. Therefore, it can be considered that the IndRNN model can understand the regular pattern of epidemic data, has good learning capabilities, and achieves the coexistence of high precision and low time consumption. (3) The fifth paragraph of the discussion (page 13 line 509-515): The model speed is proportional to the number of parameters, but not to the fitting effect.” and “Stacked_Bi_GRU and CNN_LSTM have a large number of parameters due to the stack of multiple network layers, which affects the running speed. Among the seven networks, Deep_CNN is the model with the fewest fine-tuning parameters because of a small number of dense layer parameters caused by characteristic of shared parameters, hence the fine-tuning speed is the fastest. However, the test RMSE of this model is up to 9.22% higher than that of IndRNN, which may be because the reduction of parameters affects the effect of feature extraction of sequence data. Comment 6: -The related works section is missing. Thus, the study failed to fill up to highlight the gap of existing studies. Response 6: Thank for your instruction. Related work section is added in this paper, mainly from the model prediction work of two categories of analysis (mathematical model and deep learning model), to summarize their shortcomings. The ways are searched to correct those deficiencies, and use these methods for research. The related work (page 3-4 line 106-165) added is as follows: There are two main methods for forecasting the epidemic development of COVID-19: mathematical model and deep learning model. 
A typical mathematical model in epidemic dynamics is the Susceptible-Exposed-Infectious-Removed (SEIR) model, using mathematical formulas to reflect the relationships between the flows of people at four states: susceptible, exposed, infectious, and recovered (Fang, Nie & Penny, 2020). The SEIR model was used to effectively predict the peaks and sizes of COVID-19 epidemiological data with sufficient fitting performance (Yang et al., 2020; Fang, Nie & Penny, 2020). A modified SEIR model also showed a good effect for predicting the peaks and sizes (Yang et al., 2020). The peak deviation of the another modified SEIR model in predicting epidemiological data in China was 3.02% (Fang, Nie & Penny, 2020). However, SEIR focuses on predicting trends in sensitive, exposed, infected, and recovered groups (Yang et al., 2020; Fang, Nie & Penny, 2020), rather than cumulative confirmed and death cases. Moreover, it is necessary to comprehensively consider the changes in some parameter values as affected by the changes in epidemic policies and regional differences, such as the effective reproductive number, number of contacts, and infection rate. And these parameters are not easy to obtain, and are uncertain. Deep learning automatically extracts the features from the data and builds the model without the need for other specific parameters. This method generates a series of sequence data from COVID-19 epidemiological data and looks for regular changes in the sequence data. Some deep learning models based on LSTM was used to predict COVID-19 trend, and the results demonstrated that LSTM has good prospects for predicting the trend; however, the fitting effect needs to be improved (Yang et al., 2020; Zandavi, Rashidi & Vafaee, 2020), too many internal parameters of LSTM increase the complexity of model training, and it can only learn forward. The Bi-LSTM, which adds bidirectional learning capability on LSTM, and the GRU, which makes gates simplification on the internal structure of LSTM, are used by Shahid, Zameer & Muneeb (2020) for epidemic prediction. Moreover, a stacked-Bi-GRU model was applied for COVID-19 trend forecast, owing to the learning adequacy of the bidirectional cyclic network (Bandyopadhyay & Dutta 2020). LSTM and GRU solve the problem of gradient vanishing explosion of RNN, but there is a gradient attenuation problem of layer. In addition, the entanglement of neurons in the same layer makes their behavior difficult to interpret. The existence of these multiple gate operations in the recurrent unit of the RNN variants, which is LSTM, Bi-LSTM, GRU, makes the parameters complicated. CNN does not need many parameters in virtue of the advantage of weight sharing, in comparison with these recurrent networks. A deep CNN model (Huang et al., 2020) and a CNN-LSTM model (Dutta, Bandyopadhyay & Kim, 2020) were proposed for analysing and predicting the number of confirmed cases in China. However, the training speed of deep CNN is fast, but the effect is not significantly improved. And CNN_LSTM model combines CNN and LSTM, but increases the model complexity. In the above deep learning network model, high precision, high speed, and low model complexity cannot coexist. And the cumulative epidemic data is time series, so recurrent networks are more suitable for processing such serialized data in comparing with CNN. 
Similar to LSTM, GRU, Bi-LSTM, IndRNN is one of the variants of RNN, which can learn longer sequence data, and it has no redundant gate operations, fewer parameters, and can be more easily trained (Li et al., 2018). Simultaneously, IndRNN is designed to solve the problems of gradient disappearance and explosion, and can be utilized to process sequence data like LSTM, GRU and Bi-LSTM. IndRNN was used to learn relationships between plant gene sequences, overcoming the uncertainty of artificially acquired traits, and has higher accuracy than LSTM (Zhang et al., 2020). Therefore, Indrnn is adopted as the basic model in this paper to improve the accuracy of epidemic prediction. However, the epidemic data are updated daily, and the network weights need to be retrained after obtaining the data of the last few weeks, which are time-consuming. Fine-tuning can transfer the trained network model parameters to the required network for partial parameter adjustments, without the need to train from scratch. Tajbakhsh et al. (2016) used a fine-tuned pre-trained CNN network for medical image analysis and found that the effect was better than training all data from scratch CNN. Boyd, Czajka & Bowyer (2020) demonstrated that fine-tuning existing network weights to extract iris features was more accurate. Therefore, a fine-tuning strategy is added in this study to further improve the accuracy and speed of prediction. Combined with the above, in order to achieve rapid and more accurate epidemic prediction, a framework based on IndRNN and fine-tuning to predict COVID-19 epidemiological data is proposed in this paper. "
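Since IndRNN is central to the framework defended in this response, a minimal sketch of its recurrence may help the reader: unlike a standard RNN, whose hidden state is mixed through a full recurrent weight matrix, IndRNN (Li et al., 2018) updates each neuron with its own scalar recurrent weight, h_t = act(W x_t + u * h_{t-1} + b), where the product u * h_{t-1} is element-wise. The NumPy sketch below illustrates that recurrence and is not the authors' released code.

import numpy as np

def indrnn_forward(x_seq, W, u, b, act=np.tanh):
    # x_seq: (timesteps, input_dim); W: (hidden, input_dim); u, b: (hidden,).
    # Each hidden unit only sees its own previous value via the element-wise product u * h.
    h = np.zeros(u.shape)
    outputs = []
    for x_t in x_seq:
        h = act(W @ x_t + u * h + b)
        outputs.append(h)
    return np.stack(outputs)

# Illustrative run on random data: 10 timesteps, 3 input features, 8 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))
W, u, b = rng.normal(size=(8, 3)), rng.uniform(0.5, 1.0, size=8), np.zeros(8)
print(indrnn_forward(x, W, u, b).shape)   # (10, 8)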
Here is a paper. Please give your review comments after reading it.
277
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Interacting with mobile applications can often be challenging for people with visual impairments due to the poor usability of some mobile applications. The goal of this paper is to provide an overview of the developments on usability of mobile applications for people with visual impairments based on recent advances in research and application development. This overview is important to guide decision-making for researchers and provide a synthesis of available evidence on the usability of mobile applications for people with visual impairments and indicate in which direction it is worthwhile to prompt further research. We performed a systematic literature review on the usability of mobile applications for people with visual impairments. A deep analysis following the Preferred Reporting Items for SLRs and Meta-Analyses (PRISMA) guidelines was performed to produce a set of relevant papers in the field. We first identified 932 papers published within the last six years. After screening the papers and employing a snowballing technique, we identified 60 studies that were then classified into seven themes: accessibility, daily activities, assistive devices, navigation, screen division layout, and audio guidance. The studies were then analyzed to answer the proposed research questions in order to illustrate the different trends, themes, and evaluation results of various mobile applications developed in the last six years. Using this overview as a foundation, future directions for research in the field of usability for the visually impaired (UVI) are highlighted.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The era of mobile devices and applications has begun. With the widespread use of mobile applications, designers and developers need to consider all types of users and develop applications for their different needs. One notable group of users is people with visual impairments. According to the World Health Organization, there are approximately 285 million people with visual impairments worldwide <ns0:ref type='bibr'>(WHO, 2020)</ns0:ref>. This is a huge number to keep in mind while developing new mobile applications.</ns0:p><ns0:p>People with visual impairments have urged more attention from the tech community to provide them with the assistive technologies they need <ns0:ref type='bibr'>(Khan &amp; Khusro, 2021)</ns0:ref>. Small tasks that we do daily, such as picking out outfits or even moving from one room to another, could be challenging for such individuals. Thus, leveraging technology to assist with such tasks can be life changing. Besides, increasing the usability of applications and developing dedicated ones tailored to their needs is essential. The usability of an application refers to its efficiency in terms of the time and effort required to perform a task, its effectiveness in performing said tasks, and its users' satisfaction <ns0:ref type='bibr' target='#b23'>(Ferreira, 2020)</ns0:ref>. Researchers have been studying this field intensively and proposing different solutions to improve the usability of applications for people with visual impairments.</ns0:p><ns0:p>This paper provides a systematic literature review (SLR) on the usability of mobile applications for people with visual impairments. 
The study aims to find discussions of usability issues related to people with visual impairments in recent studies and how they were solved using mobile applications. By reviewing published works from the last six years, this SLR aims to update readers on the newest trends, limitations of current research, and future directions in the research field of usability for the visually impaired (UVI). This SLR can be of great benefit to researchers aiming to become involved in UVI research and could provide the basis for new work to be developed, consequently improving the quality of life for the visually impaired. This review differs from previous review studies (i.e. <ns0:ref type='bibr'>Khan &amp; Khusro, 2021)</ns0:ref> because we classified the studies into themes in order to better evaluate and synthesize the studies and provide clear directions for future work. The following themes were chosen based on the issues addressed in the reviewed papers: 'Assistive Devices,' 'Navigation,' 'Accessibility,' 'Daily Activities,' 'Audio Guidance,' and 'Gestures.' Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the percentage of papers classified in each theme.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1: Percentages of classification themes</ns0:head><ns0:p>The remainder of this paper is organized as follows: the next section specifies the methodology, following this, the results section illustrates the results of the data collection, the discussion section consists of the research questions with their answers and the limitations and &#61623; ISI Web of Knowledge &#61623; Scopus.</ns0:p><ns0:p>The selected search engines were as follows:</ns0:p><ns0:p>&#61623; DBLP (Computer Science Bibliography Website)</ns0:p><ns0:formula xml:id='formula_0'>&#61623; Google Scholar &#61623; Microsoft Academic &#61656; SEARCH STRING</ns0:formula><ns0:p>The above databases were initially searched using the following keyword protocol: ('Usability' AND ('visual impaired' OR 'visually impaired' OR 'blind' OR 'impairment') AND 'mobile'). However, in order to generate a more powerful search string, the Network Analysis Interface for Literature Studies (NAILS) project was used. NAILS is an automated tool for literature analysis. Its main function is to perform statistical and social network analysis (SNA) on citation data <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref>. In this study, it was used to check the most important work in the relevant fields as shown in figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>.</ns0:p><ns0:p>NAILS produced a report displaying the most important authors, publications, and keywords and listed the references cited most often in the analysed papers <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref> . The new search string was generated after using the NAILS project as follows: ('Usability' OR 'usability model' OR 'usability dimension' OR 'Usability evaluation model' OR 'Usability evaluation dimension') AND ('mobile' OR 'Smartphone') AND ('Visually impaired' OR 'Visual impairment' OR 'Blind' OR 'Low vision' OR 'Blindness'). 
&#61623; The study must be relevant to the main topic (Usability of Mobile Applications for Visually Impaired Users).</ns0:p><ns0:p>&#61623; The study must be a full-length paper.</ns0:p><ns0:p>&#61623; The study must be written in English, because to consider any other language, the research team would need to use keywords in that language and work with search engines in that language in order to extract all studies related to our topic and form an SLR with a comprehensive view of the selected languages.</ns0:p><ns0:p>Therefore, the research team preferred to focus on studies in English to narrow the scope of this SLR.</ns0:p><ns0:p>A research study was excluded if it did not meet one or more items of the criteria.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>CONDUCTING STAGE:</ns0:head><ns0:p>The conducting stage of the review involved a systematic search based on relevant search terms. This consisted of three sub-stages: exporting citations, importing citations into Mendeley, and importing citations into Rayyan.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; EXPORTING CITATIONS</ns0:head><ns0:p>First, in exporting the citations and conducting the search through the mentioned databases, a total of 932 studies were found. The numbers are illustrated in Figure <ns0:ref type='figure'>3</ns0:ref> below. The highest number of papers was found in Google Scholar, followed by Scopus, ISI Web of Knowledge, ScienceDirect, IEEE Xplore, Microsoft Academic, and DBLP and ACM Library with two studies each. Finally, SpringerLink did not have any studies that met the inclusion criteria.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 3: Number of papers per database</ns0:head><ns0:p>The chance of encountering duplicate studies was determined to be high. Therefore, importing the citations into Mendeley was necessary in order to eliminate the duplicates.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; IMPORTING CITATIONS INTO MENDELEY</ns0:head><ns0:p>Mendeley is an open-source reference and citation manager. It can highlight paragraphs and sentences, and it can also list automatic references on the end page. Introducing the use of Mendeley is also expected to avoid duplicates in academic writing, especially for systematic literature reviews <ns0:ref type='bibr' target='#b10'>(Basri &amp; Patak, 2015)</ns0:ref>. Hence, in the next step, the 932 studies were imported into Mendeley, and each study's title and abstract were screened independently for eligibility. A total of 187 duplicate studies were excluded, and 745 studies remained after this first elimination process.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 4: Search stages</ns0:head></ns0:div> <ns0:div><ns0:head>&#61656; IMPORTING CITATIONS INTO RAYYAN</ns0:head><ns0:p>Rayyan QCRI is a free web and mobile application that helps expedite the initial screening of both abstracts and titles through a semi-automated process while incorporating a high level of usability. Its main benefit is to speed up the most tedious part of the systematic literature review process: selecting studies for inclusion in the review <ns0:ref type='bibr' target='#b58'>(Ouzzani et al., 2016)</ns0:ref>. Therefore, in the last step, another import was done using Rayyan to check for duplications a final time. Using Rayyan, a total of 124 duplicate studies were found, resulting in a total of 621 studies. Then, a two-step filtration was conducted in Rayyan to guarantee that the papers met the inclusion criteria of this SLR.
After filtering based on the abstracts, 564 papers did not meet the inclusion criteria. At this stage, 57 studies remained. The second step of filtration eliminated 11 more studies by reading the full papers; two studies were not written in the English language, and nine were inaccessible.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; SNOWBALLING</ns0:head><ns0:p>Snowballing is an emerging technique used to conduct systematic literature reviews that are considered both efficient and reliable using simple procedures. The procedure for snowballing consisted of three phases in each cycle. The first phase is refining the start set, the second phase is backward snowballing, and the third is forward snowballing. The first step, forming the start set, is basically identifying relevant papers that can have a high potential of satisfying the criteria and research question. Backward snowballing was conducted using the reference list to identify new papers to include. It shall start by going through the reference list and excluding papers that do not fulfill the basic criteria; the rest that fulfil criteria shall be added to the SLR. Forward snowballing refers to identifying new papers based on those papers that cited the paper being examined <ns0:ref type='bibr' target='#b31'>(Juneja &amp; Kaur, 2019)</ns0:ref>. Hence, in order to be sure that we concluded all related studies after we got the 46 papers, a snowballing step was essential. Forward and backward snowballing were conducted. Each of the 46 studies was examined by checking their references to take a look at any possible addition of sources and examining all papers that cited this study. The snowballing activity added some 38 studies, but after full reading, it became 33 that matched the inclusion criteria. A total of 79 studies were identified through this process.</ns0:p><ns0:p>&#61656; QUALITY ASSESSMENT A systematic literature review's quality is determined by the content of the papers included in the review. As a result, it is important to evaluate the papers carefully <ns0:ref type='bibr' target='#b81'>(Zhou et al., 2015)</ns0:ref>. Many influential scales exist in the software engineering field for evaluating the validity of individual primary studies and grading the overall intensity of the body of proof. Hence, we adapted the comprehensive guidelines specified by Kitchenhand and Charters <ns0:ref type='bibr' target='#b33'>(Keele, 2007)</ns0:ref>, and the quasigold standard (QGS) <ns0:ref type='bibr' target='#b33'>(Keele, 2007)</ns0:ref> was used to establish the quest technique, where a robust search strategy for enhancing the validity and reliability of a SLR's search process is devised using the QGS. By applying this technique, our quality assessment questions were focused and aligned with the research questions mentioned earlier.</ns0:p><ns0:p>In our last step, we had to verify the papers' eligibility; we conducted a quality check for each of the 79 studies. For quality assessment, we considered whether the paper answered the following questions: QA1: Is the research aim clearly stated in the research? QA2: Does the research contain a usability dimension or techniques for mobile applications for people with visual impairments? QA3: Is there an existing issue with mobile applications for people with visual impairments that the author is trying to solve? QA4: Is the research focused on mobile application solutions? 
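As a concrete illustration of how the answers to QA1-QA4 translate into an inclusion decision (the 2/1/0 scoring and the four-point threshold are described in the following paragraph), a small sketch is given below; the example scores are hypothetical.

# Hypothetical per-question scores for one paper: 2 = answered, 1 = partially answered, 0 = not answered.
scores = {'QA1': 2, 'QA2': 1, 'QA3': 2, 'QA4': 0}

total = sum(scores.values())
accepted = total >= 4   # papers scoring at least four points are kept in the SLR
print(total, 'accepted' if accepted else 'rejected')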
After discussing the quality assessment questions and attempting to find an answer in each paper, we agreed to score each study per question. If a study answered a question, it was given 2 points; if it only partially answered a question, it was given 1 point; and if there was no answer to a given question in the study, it received 0 points. The next step was to calculate the weight of each study. If the total weight was greater than or equal to four points, the paper was accepted in the SLR; if not, the paper was discarded since it did not reach the desired quality level. Figure <ns0:ref type='figure'>5</ns0:ref> below illustrates the quality assessment process. After applying the quality assessment, 39 papers were rejected since they received less than four points, which resulted in a final tally of 60 papers.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 5: Quality assessment process</ns0:head><ns0:p>To summarize, this review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) <ns0:ref type='bibr' target='#b45'>(Liberati et al., 2009)</ns0:ref>. The PRISMA diagram shown in Figure <ns0:ref type='figure' target='#fig_1'>6</ns0:ref> illustrates all of the systematic literature processes used in this study.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>ANALYSING STAGE:</ns0:head><ns0:p>All researchers involved in this SLR collected the data. The papers were distributed equally between them, and each researcher read each assigned paper completely to determine its topic, extract its limitations and future work, write a quick summary about it, and record this information in an Excel spreadsheet.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>All researchers worked intensively on this systematic literature review. After filtering the results and completing the previously mentioned steps, we divided the papers between us. Then, each researcher classified the main topics of their assigned studies to generate the general categories. We held several meetings to specify those classes. Afterwards, each researcher was assigned one class to summarize its studies and report the results.</ns0:p><ns0:p>In this section, we review the results of the data collection, where each paper was read completely and then classified according to the topic it covered into seven themes, as shown in Figure <ns0:ref type='figure'>7</ns0:ref> below. The themes were identified by the researchers of this SLR on the basis of the issues addressed in the reviewed papers.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 7: Results of the SLR</ns0:head></ns0:div> <ns0:div><ns0:head>A. Accessibility</ns0:head><ns0:p>Of a total of 60 studies, 10 focused on issues of accessibility. Accessibility is concerned with whether all users are able to have equivalent user experiences, regardless of abilities.
Six studies, <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref> , <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref>, and <ns0:ref type='bibr' target='#b63'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref>, gave suggestions for increasing accessibility, <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref>, gave some suggestions for making mobile map applications and Twitter accessible to visually impaired users, and <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref> focused on user interfaces and provided accessibility suggestions suitable for blind people. <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b63'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref> proposed a set of heuristics to evaluate the accessibility of mobile applications. Two studies, <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b15'>(Carvalho et al., 2018)</ns0:ref>, focused on evaluating usability and accessibility issues on some mobile applications, comparing them, and identifying the number and types of problems that visually impaired users faced. <ns0:ref type='bibr' target='#b4'>(Aqle, Khowaja &amp; Al-Thani, 2020)</ns0:ref> proposed a new web search interface designed for visually impaired users. One study, <ns0:ref type='bibr' target='#b50'>(McKay, 2017)</ns0:ref>, focused on accessibility challenges by applying usability tests on a hybrid mobile app with some visually impaired university students.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Assistive Devices</ns0:head><ns0:p>People with visual impairments have an essential need for assistive technology since they face many challenges when performing activities in daily life. Out of the 60 studies reviewed, 13 were related to assistive technology. The studies <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b71'>(Skulimowski et al.,2019)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>(Barbosa, Hayes &amp; Wang, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b69'>(Rosner &amp; Perlman, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Khan &amp; Khusro, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b75'>(Sonth &amp; Kallimani, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Kim et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b77'>(Vashistha et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b27'>), (Shirley et al., 2017)</ns0:ref>, and <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> were related to screen readers (voiceovers). On the other hand, <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> were related to proposing an assistant device for the visually impaired. 
Of the studies related to screen readers, <ns0:ref type='bibr' target='#b75'>(Sonth &amp; Kallimani, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b77'>(Vashistha et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Khan &amp; Khusro, 2020)</ns0:ref>, and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> cited challenges faced by visually impaired users. <ns0:ref type='bibr' target='#b8'>(Barbosa, Hayes &amp; Wang, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Kim et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> suggested new applications, while <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b69'>(Rosner &amp; Perlman, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b27'>(Shirley et al., 2017)</ns0:ref> evaluated existing work. The studies <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> proposed using wearable devices to improve the quality of life for people with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Daily Activities</ns0:head><ns0:p>In recent years, people with visual impairments have used mobile applications to increase their independence in their daily activities and learning, especially applications based on the braille method. We divide the daily activities section into braille-based applications and applications designed to enhance the independence of the visually impaired. Four studies, <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref>, implemented and evaluated the usability of mobile phone applications that use braille to help visually impaired people in their daily lives. Seven studies, <ns0:ref type='bibr' target='#b78'>(Vitiello et al., 2018)</ns0:ref>, (Kunaratana-Angkul, Wu, &amp; Shin-Renn, 2020), <ns0:ref type='bibr' target='#b24'>(Ghidini et al., 2016)</ns0:ref>, (Madrigal-Cadavid et al., 2019), <ns0:ref type='bibr' target='#b49'>(Marques, Carri&#231;o &amp; Guerreiro, 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, and <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref>, focused on building applications that enhance the independence and autonomy of people with visual impairments in their daily life activities.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Screen Division Layout</ns0:head><ns0:p>People with visual impairments encounter various challenges in identifying and locating non-visual items on touch screen interfaces like phones and tablets. Accidentally touching screen elements and repeatedly following incorrect patterns when attempting to access objects and screen artifacts hinder blind people from performing typical activities on smartphones <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>.
In this review, 9 out of 60 studies discuss screen division layout: <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>, <ns0:ref type='bibr'>(Khan, &amp; Khusro, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b28'>(Grussenmeyer &amp; Folmer, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b60'>(Palani et al., 2018)</ns0:ref>, and <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref> discuss touch screen (smartwatch tablets, mobile phones, and tablet) usability among people with visual impairments, while <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b3'>(Alnfiai &amp; Sampalli, 2019</ns0:ref>) concern text entry methods that increase the usability of apps among visually impaired people. <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref> provides a novel contribution to the literature regarding considerations that can be used as guidelines for designing a user-friendly and semantically enriched user interface for blind people. An experiment in <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref> was conducted comparing the two-button mobile interface usability with the one-finger method and voiceover. <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref> gathered information on the interaction challenges faced by visually impaired people when answering questions on a mobile touch-screen device, investigated possible solutions to overcome the accessibility and usability challenges.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Gestures</ns0:head><ns0:p>In total, 3 of 60 studies discuss gestures in usability. <ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017)</ns0:ref> compared the performance of BrailleEnter, a gesture based input method to the Swift Braille keyboard, a method that requires finding the location of six buttons representing braille dot, while <ns0:ref type='bibr' target='#b14'>(Buzzi et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b74'>(Smaradottir, Martinez &amp; Haland, 2017)</ns0:ref> provide an analysis of gesture performance on touch screens among visually impaired people.</ns0:p></ns0:div> <ns0:div><ns0:head>F. Audio Guidance</ns0:head><ns0:p>People with visual impairment primarily depend on audio guidance forms in their daily lives; accordingly, audio feedback helps guide them in their interaction with mobile applications. Four studies discussed the use of audio guidance in different contexts: one in navigation <ns0:ref type='bibr' target='#b25'>(Gintner et al., 2017)</ns0:ref>, one in games <ns0:ref type='bibr' target='#b5'>(Ara &#250;jo et al., 2017)</ns0:ref>, one in reading <ns0:ref type='bibr' target='#b70'>(Sabab &amp; Ashmafee, 2016)</ns0:ref>, and one in videos <ns0:ref type='bibr' target='#b22'>(Fa&#231;anha et al., 2016)</ns0:ref>.These studies were developed and evaluated based on usability and accessibility of the audio guidance for people with visual impairments and aimed to utilize mobile applications to increase the enjoyment and independence of such individuals.</ns0:p></ns0:div> <ns0:div><ns0:head>G. Navigation</ns0:head><ns0:p>Navigation is a common issue that visually impaired people face. Indoor navigation is widely discussed in the literature. 
<ns0:ref type='bibr' target='#b55'>(Nair et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b21'>(de Borba Campos et al., 2015)</ns0:ref> discuss how we can develop indoor navigation applications for visually impaired people. Outdoor navigation is also common in the literature, as seen in <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b29'>(Hossain, Qaiduzzaman &amp; Rahman, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b46'>(Long et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b64'>(Prerana et al., 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b7'>(Bandukda et al., 2020)</ns0:ref>. For example, in <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, Touch Explorer, an accessible digital map application, was presented to alleviate many of the problems faced by people with visual impairments while using highly visually oriented digital maps. Primarily, it focused on using non-visual output modalities like voice output, everyday sounds, and vibration feedback. Issues with navigation applications were also presented in <ns0:ref type='bibr' target='#b48'>(Maly et al., 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020)</ns0:ref> discussed commonly used technologies in navigation applications for blind people and highlighted the importance of using complementary technologies to convey information through different modalities to enhance the navigation experience. Interactive sonification of images for navigation has also been shown in <ns0:ref type='bibr' target='#b71'>(Skulimowski et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In this section, the research questions are addressed in detail to clearly achieve the research objective. A detailed overview of each theme is given below.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>Answers to the Research Questions</ns0:head><ns0:p>This section answers the research questions proposed earlier.</ns0:p></ns0:div> <ns0:div><ns0:head>RQ1: What existing UVI issues did authors try to solve with mobile devices?</ns0:head><ns0:p>Mobile applications can help people with visual impairments in their daily activities, such as navigation and writing. Additionally, mobile devices may be used for entertainment purposes. However, people with visual impairments face various difficulties while performing text entry operations, text selection, and text manipulation on mobile applications <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>. Thus, the authors of the studies tried to increase touch screens' usability by producing prototypes or simple systems and doing usability testing to understand the UX of people with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>RQ2: What is the role of mobile devices in solving those issues?</ns0:head><ns0:p>Mobile phones are widely used in modern society, especially among users with visual impairments; they are considered the most helpful tool for blind users to communicate with people worldwide <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>. In addition, assistive touch screen technology enables speech interaction between blind people and mobile devices and permits the use of gestures to interact with a touch user interface.
Assistive technology is vital in helping people living with disabilities perform actions or interact with systems <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RQ3: What are the publication trends on the usability of mobile applications among the visually impaired?</ns0:head><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_3'>8</ns0:ref> below, research into mobile applications' usability for the visually impaired has increased in the last five years, with a slight dip in 2018. Looking at the most frequent themes, we find that 'Assistive Devices' peaked in 2017, while 'Navigation' and 'Accessibility' increased significantly in 2020. On the other hand, we see that the prevalence of 'Daily Activities' stayed stable throughout the research years. The term 'Audio Guidance' appeared in 2016 and 2017 and has not appeared in the last three years. 'Gestures' also appeared only in 2017. 'Screen Layout Division' was present in the literature in the last five years and increased in 2019, but did not appear in 2020. We divide the answer to this question into two sections: first, we will discuss limitations; then, we will discuss future work for each proposed theme.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Limitations</ns0:head><ns0:p>Studies on the usability of mobile applications for visually impaired users in the literature have various limitations, and most of them were common among the studies. These limitations were divided into two groups. The first group concerns proposed applications; for example, <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018), and</ns0:ref><ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref> faced issues regarding camera applications in mobile devices due to the considerable effort needed for its usage and being heavily dependent on the availability of the internet. The other group of studies, <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref>, and (Ara &#250;jo et al., 2017), have shown limitations in visually impaired users' inability to comprehend a graphical user interface. <ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>(Alnfiai &amp; Sampalli, 2019</ns0:ref>) evaluated new braille input methods and found that the traditional braille keyboard, where knowing the exact position of letters QWERTY is required, is limited in terms of usability compared to the new input methods. Most studies faced difficulties regarding the sample size and the fact that many of the participants were not actually blind or visually impaired but only blindfolded. This likely led to less accurate results, as blind or visually impaired people can provide more useful feedback as they experience different issues on a daily basis and are more ideal for this type of study. So, the need for a good sample of participants who actually have this disability is clear to allow for better evaluation results and more feedback and recommendations for future research.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Future Work</ns0:head><ns0:p>A commonly discussed future work in the chosen literature is to increase the sample sizes of people with visual impairment and focus on various ages and geographical areas to generalize the studies. 
Table <ns0:ref type='table'>2</ns0:ref> summarizes suggestions for future work according to each theme. Those future directions could inspire new research in the field.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2: Theme-based future work</ns0:head></ns0:div> <ns0:div><ns0:head>RQ5: What is the focus of research on usability for visually impaired people, and what are the research outcomes in the studies reviewed?</ns0:head><ns0:p>There are a total of 60 outcomes in this research. Of these, 40 involve suggestions to improve usability of mobile applications; four of them address problems that are faced by visually impaired people that reduce usability. Additionally, 16 of the outcomes are assessments of the usability of the prototype or model. Two of the results are recommendations to improve usability. Finally, the last two outcomes are hardware solutions that may help the visually impaired perform their daily activities. Figure <ns0:ref type='figure'>9</ns0:ref> illustrates these numbers.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 9: Outcomes of studies</ns0:head></ns0:div> <ns0:div><ns0:head>Overview of the reviewed studies</ns0:head><ns0:p>In the following subsections, we will summarize all the selected studies based on the classified theme: accessibility, assistive devices, daily activities, screen division layout, gestures, audio guidance, and navigation. The essence of the studies will be determined, and their significance in the field will be explored. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>A. Accessibility</ns0:head><ns0:p>For designers dealing with mobile applications, it is critical to determine and fix accessibility issues in the application before it is delivered to the users <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref>. Accessibility refers to giving the users the same user experience regardless of ability. In <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b15'>(Carvalho et al., 2018)</ns0:ref>, the researchers focused on comparing the levels of accessibility and usability in different applications. They had a group of visually impaired users and a group of sighted users test out the applications to compare the number and type of problems they faced and determine which applications contained the most violations. Because people with visual impairments cannot be ignored in the development of mobile applications, many researchers have sought solutions for guaranteeing accessibility. For example, in <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref>, the study contributed to producing a new, effective design for mobile applications based on the suggestions of people with visual impairments and with the help of two expert mobile application developers. In <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref>, an adaptive user interface model for visually impaired people was proposed and evaluated in an empirical study with 63 visually impaired people. In <ns0:ref type='bibr'>(Aqle, Khowaja &amp; Al-Thani,2020)</ns0:ref>, the researchers proposed a new web search interface for users with visual impairments that is based on discovering concepts through formal concept analysis (FCA). Users interact with the interface to collect concepts, which are then used as keywords to narrow the search results and target the web pages containing the desired information with minimal effort and time. 
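As a schematic illustration of this concept-driven narrowing of results, the toy sketch below filters pages by whether they mention a collected concept; it is not the actual InteractSE or FCA implementation, and the data, matching rule, and names are invented for illustration.

```python
# Schematic sketch of narrowing search results with user-collected concepts,
# loosely inspired by the concept-based interface described above.
# This is NOT the InteractSE/FCA implementation; data and matching are invented.
results = [
    {"url": "https://example.org/a", "text": "screen reader gestures on android"},
    {"url": "https://example.org/b", "text": "recipes for dinner"},
    {"url": "https://example.org/c", "text": "accessible screen reader tutorial"},
]
collected_concepts = ["screen reader", "accessible"]

def narrow(pages, concepts):
    """Keep only pages whose text mentions at least one collected concept."""
    return [p for p in pages if any(c in p["text"] for c in concepts)]

for page in narrow(results, collected_concepts):
    print(page["url"])  # pages a and c remain
```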
The usability of the proposed search interface (InteractSE) was evaluated by experts in the field of HCI and accessibility, with a set of heuristics by Nielsen and a set of WCAG 2.0 guidelines.</ns0:p><ns0:p>In <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, the researchers proposed a solution for making mobile map applications accessible for people with blindness or visual impairment. They suggested replacing forests in the map with green color and birds' sound, replacing water with blue color and water sounds, replacing streets with grey color and vibration, and replacing buildings with yellow color and pronouncing the name of the building. The prototype showed that it was possible to explore a simple map through vibrations, sounds, and speech.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref>, the researchers utilized a multi-faceted technique to investigate how and why visually impaired individuals use Twitter and the difficulties they face in doing so. They noted that Twitter had become more image-heavy over time and that picture-based tweets are largely inaccessible to people with visual impairments. The researchers then made several suggestions for how Twitter could be amended to continue to be usable for people with visual impairments.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref> focused on how to evaluate proposed methods for ensuring the accessibility and usability of mobile applications. Their checklist, Acc-MobileCheck, contains 47 items that correspond to issues related to comprehension (C), operation (O), perception (P), and adaptation (A) in mobile interface interaction. To validate Acc-MobileCheck, it was reviewed by five experts and three developers and determined to be effective. In <ns0:ref type='bibr' target='#b63'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref>, the authors also suggest a set of heuristics to evaluate the accessibility of mobile e-commerce applications for visually impaired people. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Finally, (McKay, 2017) conducted an accessibility test for hybrid mobile apps and found that students with blindness faced many barriers to access based on how they used hybrid mobile applications. While hybrid apps can allow for increased time for marketing, this comes at the cost of app accessibility for people with disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Assistive Devices</ns0:head><ns0:p>A significant number of people with visual impairments use state-of-the-art software to perform tasks in their daily lives. These technologies are made up of electronic devices equipped with sensors and processors that can make intelligent decisions.</ns0:p><ns0:p>One of the most important and challenging tasks in developing such technologies is to create a user interface that is appropriate for the sensorimotor capabilities of users with blindness <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>. Several new hardware tools have proposed to improve the quality of life for people with visual impairments. 
Two tools were presented in this SLR: a smart stick that can notify the user of any obstacle, helping them to perform tasks easily and efficiently <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref>, and an eye device that can allow users to detect colors (medical evaluation is still required) <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref>.</ns0:p><ns0:p>The purpose of the study in <ns0:ref type='bibr' target='#b27'>(Shirley et al., 2017)</ns0:ref> was to understand how people with blindness use smartphone applications as assistive technology and how they perceive them in terms of accessibility and usability. An online survey with 259 participants was conducted, and most of the participants rated the applications as useful and accessible and were satisfied with them.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> designed and implemented EmoAssist, a smartphone application that assists with natural dyadic conversations and aims to promote user satisfaction by providing options for accessing non-verbal communication, predicting behavioural expressions, and offering interactive dimensions that provide valid feedback. The usability of this application was evaluated in a study with ten people with blindness in which several tools were applied within the application. The study participants found that the usability of EmoAssist was good and that it was an effective assistive solution.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Daily Activities</ns0:head><ns0:p>This theme contains two main categories: braille-based application studies and applications to enhance the independence of VI users. Both are summarized below.</ns0:p></ns0:div> <ns0:div><ns0:head>1-Braille-based applications</ns0:head><ns0:p>Braille is still the most popular method for assisting people with visual impairments in reading and studying, and most educational mobile phone applications are limited to sighted people. Recently, however, some researchers have developed assistive education applications for students with visual impairments, especially those in developing countries. For example, in India, the number of children with visual impairments is around 15 million, and only 5% receive an education <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref>. Three of the braille studies focused on education: <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>. These studies all used smartphone touchscreens and action gestures to gain input from the student, and then output was provided in the form of audio feedback. In <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, vibrational feedback was added to guide the users. The participants in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref> included students with blindness or visual impairment and their teachers.
The authors in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref> evaluated the usability of their applications following the same criteria (efficiency, learnability, memorability, errors, and satisfaction). The results showed that in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>, the applications met the required usability criteria. The authors in <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref> presented a braille-based solution to help people with visual impairments call and save contacts. A braille keypad on the smartphone touchscreen was used to gain input from the user, which was then converted into haptic and auditory feedback to let the user know what action was taken. The usability of this application was considered before it was designed. The participants' responses were positive because this kind of user-centric design simplifies navigation and learning processes.</ns0:p></ns0:div> <ns0:div><ns0:head>2-Applications to Enhance the Independence of People with Visual Impairments</ns0:head><ns0:p>The authors in the studies explored in this section focused on building applications that enhance independence and autonomy in daily life activities for users with visual impairments.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b78'>(Vitiello et al., 2018)</ns0:ref>, the authors presented their mobile application, an assistive solution for visually impaired users called 'Crania', which uses machine learning techniques to help users with visual impairments get dressed by recognizing the colour and texture of their clothing and suggesting suitable combinations. The system provides feedback through voice synthesis.</ns0:p><ns0:p>The participants in the study were adults and elderly people, some of whom were completely blind and the rest of whom had partial sight. After testing for usability, all the participants with blindness agreed that using the application was better than their original method, and half of the participants with partial sight said the same thing. At the end of the study, the application was determined to be accessible and easy to use.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b42'>(Kunaratana-Angkul, Wu, &amp; Shin-Renn, 2020)</ns0:ref>, an application which allows elderly people to measure low vision status at home through their smartphones instead of visiting hospitals was tested, and most of the participants considered it to be untrustworthy because the medical information was insufficient. Even when participants were able to learn how to use the application, most of them were still confused while using it and needed further instruction.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b24'>(Ghidini et al., 2016)</ns0:ref>, the authors studied the habits of people with visual impairments when using their smartphones in order to develop an electronic calendar with different interaction formats, such as voice commands, touch, and vibration interaction. 
The authors presented the lessons learned and categorized them based on usability heuristics such as feedback, design, user freedom and control, and recognition instead of remembering.</ns0:p><ns0:p>In <ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref>, the authors developed a drug information application for people with visual impairments to help them access the labels of medications. The application was developed based on a user-centered design process. By conducting a usability test, the authors recognized some usability issues for people with visual impairments, such as difficulty in locating the bar code. Given this, a new version will include a search function that is based on pictures. The application is searched by capturing the bar code or text or giving voice commands that allow the user to access medication information. The participants were people with visual impairments, and most of them required assistance using medications before using the application. This application will enhance independence for people with visual impairments in terms of using medications.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b49'>(Marques, Carri&#231;o &amp; Guerreiro, 2015)</ns0:ref>, an authentication method is proposed for users with visual impairments that allows them to protect their passwords. It is not secure when blind or visually impaired users spell out their passwords or enter the numbers in front of others, and the proposed solution allows the users to enter their password with one hand by tapping the screen. The blind participants in this study demonstrated that this authentication method is usable and supports their security needs.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, the author noted that people with visual impairments face challenges in reading, thus he proposed an application called LeR &#243;tulos. This application was developed and evaluated for the Android operating system and recognizes text from photos taken by the mobile camera and converts them into an audio description. The prototype was designed to follow the guidelines and recommendations of usability and accessibility. The requirements of the application are defined based on the following usability goals: the steps are easy for the user to remember; the application is efficient, safe, useful, and accessible; and user satisfaction is achieved.</ns0:p><ns0:p>Interacting with talkback audio devices is still difficult for people with blindness, and it is unclear how much benefit they provide to people with visual impairments in their daily activities. The author in <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref> investigates the smartphone adoption process of blind users by conducting experiments, observations, and weekly interviews. An eight-week study was conducted with five visually impaired participants using Samsung and an enabled talkback 2 screen reader. Focusing on understanding the experiences of people with visual impairments when using touchscreen smartphones revealed accessibility and usability issues. The results showed that the participants have difficulties using smartphones because they fear that they cannot use them properly, and that impacts their ability to communicate with family. However, they appreciate the benefits of using smartphones in their daily activities, and they have the ability to use them.</ns0:p></ns0:div> <ns0:div><ns0:head>D. 
Screen Division Layout</ns0:head><ns0:p>People with visual impairments encounter various challenges identifying and locating nonvisual items on touch screen interfaces, such as phones and tablets. Various specifications for developing a user interface for people with visual impairments must be met, such as having touch screen division to enable people with blindness to easily and comfortably locate objects and items that are non-visual on the screen <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>. Article <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref> highlighted the importance of aspects of the usability analysis, such as screen partitioning, to meet specific usability requirements, including orientation, consistency, operation, time consumption, and navigation complexity when users want to locate objects on their touchscreen. The authors of <ns0:ref type='bibr'>(Khan, &amp; Khusro, 2019)</ns0:ref> describe the improvements that people with blindness have experienced in using the smartphone while performing their daily tasks. This information was determined through an empirical study with 41 people with blindness who explained their user and interaction experiences operating a smartphone.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b60'>(Palani et al., 2018)</ns0:ref> provide design guidelines governing the accurate display of haptically perceived graphical materials. Determining the usability parameters and the various cognitive abilities required for optimum and accurate use of device interfaces is crucial. Also the authors of <ns0:ref type='bibr' target='#b28'>(Grussenmeyer &amp; Folmer, 2017)</ns0:ref> highlight the importance of usability and accessibility of smartphones and touch screens for people with visual impairments. The primary focus in <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref> is on interactive tasks used to finish exercises and to answer questionnaires or quizzes. These tools are used for evaluation tests or in games. When using gestures and screen readers to interact on a mobile device, difficulties may arise <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref>, The study has various objectives, including gathering information on the difficulties encountered by people with blindness during interactions with mobile touch screen devices to answer questions and investigating practicable solutions to solve the detected accessibility and usability issues. A mobile app with an educational game was used to apply the proposed approach. Moreover, in <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, an analysis of the single-tap braille keyboard created to help people with no or low vision while using touch screen smartphones was conducted. The technology used in <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016</ns0:ref>) was the talkback service, which provides the user with verbal feedback from the application, allowing users with blindness to key in characters according to braille patterns. To evaluate single tap braille, it was compared to the commonly used QWERTY keyboard. In <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, it was found that participants adapted quickly to single-tap Braille and were able to type on the touch screen within 15 to 20 minutes of being introduced to this system. 
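Both the single-tap and two-button schemes ultimately map a six-dot braille cell to a character, as in the minimal sketch below; the dot patterns for the letters shown follow standard braille, while the tap encoding is our own simplification rather than any of the cited keyboards' actual implementations.

```python
# Minimal sketch: decoding a six-dot braille cell into a character.
# Dot numbering follows the standard braille cell (1-3 left column, 4-6 right).
# The patterns for a-e are standard braille; the tap encoding is a simplification.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(tapped_dots):
    """Return the character for a set of tapped dots, or '?' if unknown."""
    return BRAILLE_TO_CHAR.get(frozenset(tapped_dots), "?")

word = [{1}, {1, 2}, {1, 4}]                   # taps for 'a', 'b', 'c'
print("".join(decode(cell) for cell in word))  # -> "abc"
```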
The main advantage of single-tap braille is that it allows users with blindness to enter letters based on braille coding, with which they are already familiar. The average error rate is lower using single-tap braille than it is on the QWERTY keyboard. The authors of <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref> found that minimal typing errors were made using the proposed keypad, which made it an easier option for people with blindness. In <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref>, the authors describe new text entry methods for the braille system, including a left-touch and a double-touch scheme that form a two-button interface for braille input, so that people with visual impairments are able to type textual characters without having to move their fingers to locate the target buttons.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Gestures</ns0:head><ns0:p>One of the main problems affecting the visually impaired is limited mobility for some gestures. We need to know which gestures are usable by people with visual impairments. Moreover, the technology of assistive touchscreen-enabled speech interaction between blind people and mobile devices permits the use of gestures to interact with a touch user interface. Assistive technology is vital in helping people living with disabilities to perform actions or interact with systems. <ns0:ref type='bibr' target='#b74'>(Smaradottir, Martinez &amp; Haland, 2017)</ns0:ref> analyses a voiceover screen reader used in Apple Inc.'s products. An assessment of this assistive technology was conducted with six visually impaired test participants. The main objectives were to pinpoint the difficulties related to the performance of gestures applicable in screen interactions and to analyze the system's response to the gestures. In this study, a user evaluation was completed in three phases. The first phase entailed training users regarding different hand gestures, the second phase was carried out in a usability laboratory where participants were familiarized with technological devices, and the third phase required participants to solve different tasks. In <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref>, the vital feature of the system is that it enables the user to interactively select a 3D scene region for sonification by merely touching the phone screen. It uses three different modes to increase usability. <ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017)</ns0:ref> explained a study done to compare the use of two data input methods to evaluate their efficiency with completely blind participants who had prior knowledge of braille. The comparison was made between the BrailleEnter input method, which uses gestures, and the Swift Braille keyboard, which necessitates finding six buttons representing braille dots. Blind people typically prefer rounded shapes to angular ones when performing complex gestures, as they experience difficulties performing straight gestures with right angles. Participants highlighted that they experienced difficulties particularly with gestures that have steep or right angles. In <ns0:ref type='bibr' target='#b14'>(Buzzi et al., 2017)</ns0:ref>, 36 visually impaired participants were selected and split into two groups of low-vision and blind people.
They examined their touch-based gesture preferences in terms of the number of strokes, multitouch, and shape angles. For this reason, a wireless system was created to record sample gestures from various participants simultaneously while monitoring the capture process.</ns0:p></ns0:div> <ns0:div><ns0:head>F. Audio Guidance</ns0:head><ns0:p>People with visual impairment typically cannot travel without guidance due to the inaccuracy of current navigation systems in describing roads and especially sidewalks. Thus, the author of <ns0:ref type='bibr' target='#b25'>(Gintner et al., 2017)</ns0:ref> aims to design a system to guide people with visual impairments based on geographical features and addresses them through a user interface that converts text to audio using a built-in voiceover engine (Apple iOS). The system was evaluated positively in terms of accessibility and usability as tested in a qualitative study involving six participants with visual impairment.</ns0:p><ns0:p>Based on challenges faced by visually impaired game developers, <ns0:ref type='bibr' target='#b5'>(Ara &#250;jo et al., 2017)</ns0:ref> provides guidance for developers to provide accessibility in digital games by using audio guidance for players with visual impairments. The interactions of the player can be conveyed through audio and other basic mobile device components with criteria focused on the game level and speed adjustments, high contrast interfaces, accessible menus, and friendly design. Without braille, people with visual impairments cannot read, but braille is expensive and takes effort, and so it is important to propose technology to facilitate reading for them. In <ns0:ref type='bibr' target='#b70'>(Sabab &amp; Ashmafee, 2016)</ns0:ref>, the author proposed developing a mobile application called 'Blind Reader' that reads an audio document and allows the user to interact with the application to gain knowledge. This application was evaluated with 11 participants, and the participants were satisfied with the application. Videos are an important form of digital media, and unfortunately people with visual impairment cannot access these videos. Therefore, <ns0:ref type='bibr' target='#b22'>(Fa&#231;anha et al., 2016)</ns0:ref> aims to discover sound synthesis techniques to maximize and accelerate the production of audio descriptions with lowcost phonetic description tools. This tool has been evaluated based on usability with eight people and resulted in a high acceptance rate among users.</ns0:p></ns0:div> <ns0:div><ns0:head>G. Navigation 1-Indoor Navigation</ns0:head><ns0:p>Visually impaired people face critical problems when navigating from one place to another. Whether indoors or outdoors, they tend to stay in one place to avoid the risk of injury or seek the help of a sighted person before moving <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>. Thus, aid in navigation is essential for those individuals. In <ns0:ref type='bibr' target='#b55'>(Nair et al., 2020)</ns0:ref>, Nair developed an application called ASSIST, which leverages 19 Bluetooth low energy (BLE) beacons and augmented reality (AR) to help visually impaired people move around cluttered indoor places (e.g., subways) and provide the needed safe guidance, just like having a sighted person lead the way. In the subway example, the beacons will be distributed across the halls of the subway and the application will detect them. 
Sensors and cameras attached to the individual will detect their exact location and send the data to the application. The application will then give a sequence of audio feedback explaining how to move around the place to reach a specific point (e.g., 'in 50 feet turn right', 'now turn left', 'you will reach the destination in 20 steps'). The application also has an interface for sighted and low-vision users that shows the next steps and instructions. A usability study was conducted to test different aspects of the proposed solution. The majority of the participants agreed that they could easily reach a specified location using the application without the help of a sighted person. A survey conducted to give suggestions from the participants for future improvements showed that most participants wanted to attach their phones to their bodies and for the application to consider the different walking speeds of users. They were happy with the audio and vibration feedback that was given before each step or turn they had to take.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>, the main purpose of the study was to provide an Arabiclanguage application for guidance inside buildings using Google Glass and an associated mobile application. First, the building plan must be set by a sighted person who configures the different locations needed. Ebsar will ask the map builder to mark each interesting location with a QR code and generate a room number, and the required steps and turns are tracked using the mobile device's built-in compass and accelerometer features. All of these are recorded in the application for the use of a visually impaired individual, and at the end, a full map is generated for the building. After setting the building map, a user can navigate inside the building with the help of Ebsar, paired with Google Glass, for input and output purposes. The efficiency, effectiveness, and levels of user satisfaction with this solution were evaluated. The results showed that the errors made were few, indicating that Ebsar is highly effective. The time consumed in performing tasks ranged from medium to low depending on the task; this can be improved later. Interviews with participants indicated the application's ease of use. <ns0:ref type='bibr' target='#b21'>(de Borba Campos et al., 2015)</ns0:ref> shows an application simulating a museum map for people with visual impairments. It discusses whether mental maps and interactive games can be used by people with visual impairments to recognize the space around them. After multiple usability evaluation sessions, the mobile application showed high efficiency among participants in understanding the museum's map without repeating the visitation. The authors make a few suggestions based on feedback from the participants regarding enhancing usability, including using audio cues, adding contextual help to realise the activities carried around in a space, and focusing on audio feedback instead of graphics.</ns0:p></ns0:div> <ns0:div><ns0:head>2-Outdoor Navigation</ns0:head><ns0:p>Outdoor navigation is also commonly discussed in the literature. In <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, Touch Explorer was presented to alleviate many of the problems faced by visually impaired people in navigation by developing a non-visual mobile digital map. The application relies on three major methods of communication with the user: voice output, vibration feedback, and everyday sounds. 
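In the spirit of the non-visual map feedback described here and earlier (forests with bird sounds, streets with vibration, and so on), such designs can be thought of as a lookup from a touched map element to one or more output modalities; the table and function below are a toy sketch invented for illustration, not Touch Explorer's actual design.

```python
# Toy sketch: mapping touched map elements to non-visual feedback channels,
# in the spirit of the voice, vibration, and everyday-sound output described above.
# The table below is invented for illustration only.
FEEDBACK = {
    "forest":   {"sound": "birds.wav"},
    "water":    {"sound": "water.wav"},
    "street":   {"vibration_ms": 150},
    "building": {"speech": "Announce the building name"},
}

def on_touch(element: str) -> dict:
    """Return the feedback actions to trigger for a touched map element."""
    return FEEDBACK.get(element, {"speech": "Unknown area"})

print(on_touch("street"))   # {'vibration_ms': 150}
print(on_touch("forest"))   # {'sound': 'birds.wav'}
```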
The prototype was developed using simple abstract visuals and mostly relies on voice to explain the content. Usability tests showed the great impact the prototype had on the understanding of the elements of the map. A few suggestions were given by the participants to increase usability, including GPS localization to locate the user on the map, a scale element for measuring the distance between two map elements, and an address search function. In <ns0:ref type='bibr' target='#b29'>(Hossain, Qaiduzzaman &amp; Rahman, 2020)</ns0:ref>, a navigation application called Sightless Helper was developed to provide a safe navigation method for people with visual impairments. It relies on footstep counting and GPS location to provide the needed guidance. It can also ensure safe navigation by detecting objects and unsafe areas, and it can detect unusual shaking of the user and alert an emergency contact about the problem. The user interaction categories are voice recognition, touchpad, buttons, and shaking sensors. After multiple evaluations, the application was found to be useful in different scenarios and was considered usable by people with visual impairments. The authors in <ns0:ref type='bibr' target='#b46'>(Long et al., 2016)</ns0:ref> propose an application that uses both updates from users and information about the real world to help visually impaired people navigate outdoor settings. After interviews with participants, some design goals were set, including the ability to tag an obstacle on the map, check the weather, and provide an emergency service. The application was evaluated and was found to be of great benefit; users made few errors and found it easy to use. In <ns0:ref type='bibr' target='#b64'>(Prerana et al., 2019)</ns0:ref>, a mobile application called STAVI was presented to help visually impaired people navigate from a source to a destination safely and avoid issues of re-routing. The application depends on voice commands and voice output. The application also has additional features, such as calling, messages, and emergency help. The authors in <ns0:ref type='bibr' target='#b7'>(Bandukda et al., 2020)</ns0:ref> helped people with visual impairments explore parks and natural spaces using a framework called PLACES. Different interviews and surveys were conducted to identify the issues visually impaired people face when they want to do any leisure activity. These were considered in the development of the framework, and some design directions were presented, such as the use of audio to share an experience.</ns0:p></ns0:div> <ns0:div><ns0:head>3-General Issues</ns0:head><ns0:p>The authors in <ns0:ref type='bibr' target='#b48'>(Maly et al., 2015)</ns0:ref> discuss implementing an evaluation model to assess the usability of a navigation application and to understand the issues of communication with mobile applications that people with visual impairments face. The evaluation tool was designed using a client-server architecture and was applied to test the usability of an existing navigation application. The tool was successful in capturing many issues related to navigation and user behavior, especially the issue of different timing between the actual voice instruction and the position of the user.
The authors in <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020)</ns0:ref> conducted a study to find out which navigation technologies blind people can use and to understand the complementarity between navigation technologies and their impact on navigation for visually impaired users. The results of the study show that visually impaired people use both assistive technologies and those designed for non-visually impaired users. Improving voice agents in navigation applications was discussed as a design implication for the visually impaired. In <ns0:ref type='bibr' target='#b71'>(Skulimowski et al.,2019)</ns0:ref>, the authors show how interactive sonification can be used in simple travel aids for the blind. It uses depth images and a histogram called U-depth, which is simple auditory representations for blind users. The vital feature of this system is that it enables the user to interactively select a 3D scene region for sonification by touching the phone screen. This sonic representation of 3D scenes allows users to identify the environment's general appearance and determine objects' distance. The prototype structure was tested by three blind individuals who successfully performed the indoor task. Among the test scenes used included walking along an empty corridor, walking along a corridor with obstacles, and locating an opening between obstacles. However, the results showed that it took a long time for the testers to locate narrow spaces between obstacles.</ns0:p><ns0:p>RQ6: What evaluation methods were used in the studies on usability for visually impaired people that were reviewed?</ns0:p><ns0:p>The most prevalent methods to evaluate the usability of applications were surveys and interviews. These were used to determine the usability of the proposed solutions and obtain feedback and suggestions regarding additional features needed to enhance the usability from the participants' points of view. Focus groups were also used extensively in the literature. Many of the participants selected were blindfolded and were not actually blind or visually impaired. Moreover, the samples selected for the evaluation methods mentioned above considered the age factor depending on the study's needs.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitation and future work</ns0:head><ns0:p>The limitations of this paper are mainly related to the methodology followed. Focusing on just eight online databases and restricting the search with the previously specified keywords and string may have limited the number of search results. Additionally, a large number of papers were excluded because they were written in other languages. Access limitations were also faced due to some libraries asking for fees to access the papers. Therefore, for future works, a study to expand on the SLR results and reveal the current usability models of mobile applications for the visually impaired to verify the SLR results is needed so that this work contributes positively to assessing difficulties and expanding the field of usability of mobile applications for users with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In recent years, the number of applications focused on people with visual impairments has grown, which has led to positive enhancements in those people's lives, especially if they do not have people around to assist them. 
In this paper, the research papers focusing on Usability for Visually Impaired Users were analyzed and classified into seven themes: accessibility, daily activities, assistive devices, gestures, navigation, screen division layout, and audio guidance. We found that various research studies focus on the accessibility of mobile applications to ensure that the same user experience is available to all users, regardless of their abilities. We found many studies that focus on how the design of applications can assist in performing daily life activities, including braille-based application studies and applications that enhance the independence of VI users. We also found papers that discuss the role of assistive devices like screen readers and wearable devices in solving challenges faced by VI users and thus improving their quality of life. We also found that some research papers discuss the limitations of certain gestures for VI users and investigate which gestures are usable by people with visual impairments. We found many research papers that focus on improving navigation for VI users by incorporating different output modalities like sound and vibration. We also found various studies focusing on screen division layout. By dividing the screen and focusing on visual impairment-related issues while developing user interfaces, designers can help visually impaired users easily locate the objects and items on the screen. Finally, we found papers that focus on audio guidance to improve usability. The proposed applications use voice-over and speech interactions to guide visually impaired users in performing different activities through their mobiles. Most of the researchers focused on usability in different applications and evaluated the usability issues of these applications with visually impaired participants. Some of the studies included sighted participants to compare the number and type of problems they faced. The usability evaluation was generally based on the following criteria: accessibility, efficiency, learnability, memorability, errors, safety, and satisfaction. Many of the studied applications show a good indication of usability and follow the participants' comments to ensure additional enhancements in usability. This paper aims to provide an overview of the developments on usability of mobile applications for people with visual impairments and use this overview to highlight potential future directions.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: NAILS output sample</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: PRISMA flow diagram</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>RQ1:</ns0:head><ns0:label /><ns0:figDesc>What existing UVI issues did authors try to solve with mobile devices?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Publication trends over time</ns0:figDesc></ns0:figure>
</ns0:body> "
"College of Computer and Information Sciences King Saud University Riyadh, Saudi Arabia Office Phone: 0118051621 September 9th, 2021 Dear Editor, Thank you for considering our manuscript for publication in PeerJ Computer Science. After careful revisions based on the reviewers’ valuable feedback, we would like to submit a revised version of our manuscript that addresses the points raised by the reviewers. We truly appreciate giving us the opportunity to revise and resubmit our work and look forward to your feedback. Best regards, Corresponding Author, Dr. Sarah Almoaiqel On behalf of all authors. Reviewer #1: Comments: 1- I suggest improving the abstract at lines 23- 24 to provide more justification for your study (specifically, you should expand upon the knowledge gap being filled). 2- How does this figure (line 63) illustrate the classification? You are repeating the same classification information you already mentioned in here and added a % to each theme. 3- What do you mean by 'concludes the paper' in line 68. I mean, What is 'that section: that concludes the paper? 4- Is this figure in line 85 really necessary? You already mentioned the stages on the text. Besides, you don't mention the figure. Figures should not be placed without having been mentioned in the text. 5- What do you mean by relevant? This is also mentioned in Table 2 but you need to be more specific about this unique inclusion/exclusion criteria (lines 135, 157, 158). 6- Is this figure necessary? (Line 136) You already mentioned the number of papers per database in the text. Besides, you did not mention the figure. Figures should not be placed without having been mentioned in the text. 7- At line 138- 139 It should say sub stage. I suggest to be more specific. Please rewrite this sentence as: 'Therefore, importing citations into MENDELEY was mandatory in order to eliminate the duplicates. 8- About figure 4: Same comment. This figure is not mentioned in the text. Figure is not a summary of just the stage mentioned, is a summary of all stages, hence, it should be mentioned before, as indicated. 9- Please rewrite lines 207 and 208 as: The PRISMA diagram shown in Figure 7 illustrates all systematic literature processes used in this study. 10- Be more specific in line 211. All researchers involved in this SLR... 11- Why just documents in English? You need to add some justification/explanation. 12- Using this overview as a foundation, future directions for research in the field of usability for the visually impaired (UVI) are highlighted. More discussion about those futures directions needs to be added. Response to Reviewer #1: We thank the reviewers for their valuable comments, which helped us to improve the quality of this manuscript. 1- We have included more justification for the study in the abstract. 2- We have specified that the percentage is referring to the percentage of reviewed studies classified under each category. 3- We have revised this section to express the organization and content of each section more clearly. 4- We agree and we have removed the figure previously on line 85. 5- We have revised the manuscript to express the meaning of relevant studies (studies that meet the inclusion criteria) more clearly. 6- We have revised the paragraph before the figure to include less detail (later included in the figure). 7- We have revised the paragraph as suggested. 8- We have revised the previous paragraph and we have mentioned the figure. 9- We have rewritten the sentence per the suggestion. 
10- We specified more clearly who is involved in the research and how involved. 11- We have provided justification for why “research written in the English language” was part of the inclusion criteria. 12- We have specified the future directions in more depth in Table 2. Reviewer 2: Comments: 1- My first suggestion for improvements is to include more details of the findings and type of knowledge synthesized in the study in the abstract. The information on results and their impact is very briefly described in the abstract, which would jeopardize the chances of reader having a broader view of what the paper has to offer before reading the full text. 2- There are very long paragraphs in the text. The first paragraph of the introduction is very long, and I would strongly recommend the authors break it down into shorter paragraphs. 3- The introduction has definitions to key terms, such as usability, with no reference to authoritative sources. 4- I also believe it is important that the authors highlight the research gap they identified and that motivated the present study. Have other studies performed systematic reviews on the usability of mobile applications for visually impaired people? Why did the authors choose the period of six years? 5- Line 52: fix citation style: evaluation tests. (Bastien, 2010). 6- The search protocol should describe the different adaptations the search string had to undergo to be executed in each database. 7- The quality assessment, starting on line 180, does not describe which authors participated in the procedures. It is important to describe how the process was conducted and, if there were disagreements, how they were solved. 8- The statement on line 227 is very confusing. What do authors mean by “Of a total of 60 studies, 10 discussed accessibility.”. Considering the topic of the study, how could they not consider accessibility? 9- In fact, further reading the paper reveals that the categories chosen to organize the studies has little justification and explanation in terms of conceptualization. There is very little depth in the description of the categories chosen for the papers. 10- There are several inconsistencies in the citation style that need to be fixed. 11- Unfortunately, the current version of the paper does not present enough consolidation of knowledge in the field, with superficial analyses with little conceptualization to provide a broader overview of the area. 12- In my view, the systematic review would need a new analysis, with a substantially deeper analysis and conceptualization. Response: We thank the reviewers for their valuable comments, which helped us to improve the quality of this manuscript. 1- We have revised the abstract to define the findings, type of knowledge synthesized, impact of the study, and contribution more clearly. 2- We have revised the introduction and broken it down into shorter paragraphs. 3- We have added a reference for the keyword, usability. 4- We have specified the research gap in the introduction, clarified whether other studies have been conducted, and the reason for choosing the six-year period. 5- We have fixed the citation. Thank you for bringing it to our attention. 6- We have revised the search string section according to the feedback from all reviewers. 7- We have specified more directly who was involved in the quality assessment and to what extent. There were no disagreements to be mentioned. 8- We have revised this paragraph to clarify what we meant by of a total of 60 studies, 10 discussed accessibility. 
9- We have specified throughout the manuscript how the categories were chosen. 10- We have fixed the inconsistencies in the citation style. 11- We have provided more information in the discussion section and included a revised future works section (found in table 2) to illustrate future directions based on themes. 12- We have included a deeper analysis throughout the paper to provide more insight. Reviewer 3: Comments: 1- The references used in the Introduction section (Brady et, al and Bastien et. Al.) are pretty old and should be updated. There are no similar literature review papers mentioned in this review. The description and the relevance of the problem should be extended. 2- The authors mention that they classified the studies into five different themes in the Introduction section, but they should also explain how they chose them. Also, there are seven themes in Figure 1. At this point of reading the paper it is not clear what the percentages mean in Figure 1 since this is explained a few pages below. 3- I don't see the contribution of Figure 2 to better undestandability of the paper and I propose the authors to exclude it. 4- The NAILS project webpage should be cited. 5- »Keyword protocol« in actually not a protocol - it is a search string. 6- Table 2 is also not needed. Everything can be explained in one or two sentences. 7- The legend »Series 1« at Figure 4 is not needed. The databases on y-axis should be listed in decreasing order. 8- Figure 9 is not a PRISMA 2009 Flow Diagram. Please refere to the guidances described at http://www.prisma-statement.org/ or. directly at (https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKE wi0yrv8wqjyAhVImYsKHU5cAGQQFnoECAQQAQ&url=http%3A%2F%2Fprismastatement.org%2Fdocuments%2FPRISMA%25202009%2520flow%2520diagram.pdf&u sg=AOvVaw1qm3ududj_3lshDSZ29LLL 9- You should mention that you folowed PRISMA 2009 guidances for SR in the begining of the methodology section. Also, mention when did the searcing through the databases took place. 10- The references should be listed in alphabetic order if the chosen citation style is used. Response: We thank the reviewers for their valuable comments, which helped us to improve the quality of this manuscript. 1- We have updated the references in the introduction and included a similar literature review paper and specified the gap that prompted our SLR. 2- We have explained how we choose the themes in the introduction and throughout the manuscript. We have revised the introduction to specify more clearly what the percentages mean in Figure 1 (the percentages are referring to the percentage of research papers classified within each theme). 3- We have referred to Figure 2 in the paragraph and specified its purpose. 4- The NAILS webpage was cited accordingly. 5- We have revised the term “keyword protocol” and clarified that it is a search string. Thank you for bringing this to our attention. 6- Table 2 was removed, and everything was explained in a paragraph. 7- Figure 4 was revised, and the legend was removed based on the reviewer’s feedback. 8- We have referred to the correct PRISMA 2009 flow diagram. 9- We have mentioned that we followed the PRISMA 2009 guidance in the beginning of the methodology section. 10- We have listed the references in alphabetical order. "
Here is a paper. Please give your review comments after reading it.
278
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Interacting with mobile applications can often be challenging for people with visual impairments due to the poor usability of some mobile applications. The goal of this paper is to provide an overview of the developments on usability of mobile applications for people with visual impairments based on recent advances in research and application development. This overview is important to guide decision-making for researchers and provide a synthesis of available evidence and indicate in which direction it is worthwhile to prompt further research. We performed a systematic literature review on the usability of mobile applications for people with visual impairments. A deep analysis following the Preferred Reporting Items for SLRs and Meta-Analyses (PRISMA) guidelines was performed to produce a set of relevant papers in the field. We first identified 932 papers published within the last six years. After screening the papers and employing a snowballing technique, we identified 60 studies that were then classified into seven themes: accessibility, daily activities, assistive devices, navigation, screen division layout, and audio guidance. The studies were then analyzed to answer the proposed research questions in order to illustrate the different trends, themes, and evaluation results of various mobile applications developed in the last six years. Using this overview as a foundation, future directions for research in the field of usability for the visually impaired (UVI) are highlighted.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The era of mobile devices and applications has begun. With the widespread use of mobile applications, designers and developers need to consider all types of users and develop applications for their different needs. One notable group of users is people with visual impairments. According to the World Health Organization, there are approximately 285 million people with visual impairments worldwide <ns0:ref type='bibr'>(WHO, 2020)</ns0:ref>. This is a huge number to keep in mind while developing new mobile applications.</ns0:p><ns0:p>People with visual impairments have urged more attention from the tech community to provide them with the assistive technologies they need <ns0:ref type='bibr'>(Khan &amp; Khusro, 2021)</ns0:ref>. Small tasks that we do daily, such as picking out outfits or even moving from one room to another, could be challenging for such individuals. Thus, leveraging technology to assist with such tasks can be life changing. Besides, increasing the usability of applications and developing dedicated ones tailored to their needs is essential. The usability of an application refers to its efficiency in terms of the time and effort required to perform a task, its effectiveness in performing said tasks, and its users' satisfaction <ns0:ref type='bibr' target='#b23'>(Ferreira, 2020)</ns0:ref>. Researchers have been studying this field intensively and proposing different solutions to improve the usability of applications for people with visual impairments.</ns0:p><ns0:p>This paper provides a systematic literature review (SLR) on the usability of mobile applications for people with visual impairments. The study aims to find discussions of usability issues related to people with visual impairments in recent studies and how they were solved using mobile applications. 
By reviewing published works from the last six years, this SLR aims to update readers on the newest trends, the limitations of current research, and future directions in the research field of usability for the visually impaired (UVI). This SLR can be of great benefit to researchers aiming to become involved in UVI research and could provide the basis for new work to be developed, consequently improving the quality of life for the visually impaired. This review differs from previous review studies (e.g., <ns0:ref type='bibr'>Khan &amp; Khusro, 2021)</ns0:ref> because we classified the studies into themes in order to better evaluate and synthesize the studies and provide clear directions for future work. The following themes were chosen based on the issues addressed in the reviewed papers: 'Assistive Devices,' 'Navigation,' 'Accessibility,' 'Daily Activities,' 'Screen Division Layout,' 'Audio Guidance,' and 'Gestures.' Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the percentage of papers classified in each theme.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1: Percentages of classification themes</ns0:head><ns0:p>The remainder of this paper is organized as follows: the next section specifies the methodology; following this, the results section illustrates the results of the data collection; the discussion section presents the research questions with their answers as well as the limitations and potential directions for future work; and the final section summarizes this paper's main findings and contribution.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Methodology</ns0:head><ns0:p>This systematic literature review used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses <ns0:ref type='bibr'>(PRISMA, 2009)</ns0:ref> guidelines to produce a set of relevant papers in the field. This SLR was undertaken to address the research questions described below. A deep analysis was performed based on a group of studies; the most relevant studies were documented, and the research questions were addressed.</ns0:p></ns0:div> <ns0:div><ns0:head>A. RESEARCH QUESTIONS</ns0:head><ns0:p>The research questions addressed by this study are presented in Table <ns0:ref type='table'>1</ns0:ref> with descriptions and the motivations behind them.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1: Research questions</ns0:head></ns0:div> <ns0:div><ns0:head>B. SEARCH STRATEGY</ns0:head><ns0:p>This review analysed and synthesised studies on usability for the visually impaired from a user perspective following a systematic approach. As proposed by Tranfield et al. <ns0:ref type='bibr' target='#b76'>(Tranfield, Denyer &amp; Smart, 2003)</ns0:ref>, the study followed a three-stage approach to ensure that the findings were both reliable and valid. These stages were planning the review, conducting the review by analysing papers, and reporting emerging themes and recommendations. These stages will be discussed further in the following section.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>PLANNING STAGE</ns0:head><ns0:p>The planning stage of this review included defining the data sources and the search string as well as the inclusion and exclusion criteria.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; DATA SOURCES</ns0:head><ns0:p>We aimed to use two types of data sources: digital libraries and search engines. The search process was manually conducted by searching through databases.
The selected databases and digital libraries are as follows:</ns0:p><ns0:p>The selected search engines were as follows:</ns0:p><ns0:p>&#61623; DBLP (Computer Science Bibliography Website)</ns0:p><ns0:p>&#61623; Google Scholar</ns0:p><ns0:p>&#61623; Microsoft Academic</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; SEARCH STRING</ns0:head><ns0:p>The above databases were initially searched using the following search string: ('Usability' AND ('visual impaired' OR 'visually impaired' OR 'blind' OR 'impairment') AND 'mobile'). However, in order to generate a more powerful search string, the Network Analysis Interface for Literature Studies (NAILS) project was used. NAILS is an automated tool for literature analysis. Its main function is to perform statistical and social network analysis (SNA) on citation data <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref>. In this study, it was used to check the most important work in the relevant fields, as shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>NAILS produced a report displaying the most important authors, publications, and keywords and listed the references cited most often in the analysed papers <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref>. The new search string generated after using the NAILS project was as follows: ('Usability' OR 'usability model' OR 'usability dimension' OR 'Usability evaluation model' OR 'Usability evaluation dimension') AND ('mobile' OR 'Smartphone') AND ('Visually impaired' OR 'Visual impairment' OR 'Blind' OR 'Low vision' OR 'Blindness').</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; INCLUSION AND EXCLUSION CRITERIA</ns0:head><ns0:p>To be included in this systematic review, each study had to meet the following screening criteria:</ns0:p><ns0:p>&#61623; The study must have been published between 2015 and 2020.</ns0:p><ns0:p>&#61623; The study must be relevant to the main topic (Usability of Mobile Applications for Visually Impaired Users).</ns0:p><ns0:p>&#61623; The study must be a full-length paper.</ns0:p><ns0:p>&#61623; The study must be written in English, because considering any other language would require the research team to use keywords in that language and to query the search engines in that language in order to extract all studies related to our topic and form an SLR with a comprehensive view of the selected languages.</ns0:p><ns0:p>Therefore, the research team preferred to focus on studies in English to narrow the scope of this SLR.</ns0:p><ns0:p>A research study was excluded if it did not meet one or more items of the criteria.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>CONDUCTING STAGE:</ns0:head><ns0:p>The conducting stage of the review involved a systematic search based on relevant search terms. This consisted of three substages: exporting citations, importing citations into Mendeley, and importing citations into Rayyan.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; EXPORTING CITATIONS</ns0:head><ns0:p>First, in exporting the citations and conducting the search through the mentioned databases, a total of 932 studies were found. The numbers are illustrated in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> below. The highest number of papers was found in Google Scholar, followed by Scopus, ISI Web of Knowledge, ScienceDirect, IEEE Xplore, Microsoft Academic, and DBLP and ACM Library with two studies each.
Finally, SpringerLink did not have any studies that met the inclusion criteria.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 3: Number of papers per database</ns0:head><ns0:p>The chance of encountering duplicate studies was determined to be high. Therefore, importing citations into Mendeley was necessary in order to eliminate the duplicates.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; IMPORTING CITATIONS INTO MENDELEY</ns0:head><ns0:p>Mendeley is an open-source reference and citation manager. It can highlight paragraphs and sentences, and it can also list automatic references on the end page. Introducing the use of Mendeley is also expected to avoid duplicates in academic writing, especially for systematic literature reviews <ns0:ref type='bibr' target='#b10'>(Basri &amp; Patak, 2015)</ns0:ref>. Hence, in the next step, the 932 studies were imported into Mendeley, and each study's title and abstract were screened independently for eligibility. A total of 187 duplicate studies were excluded, and 745 studies remained after the first elimination process.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; IMPORTING CITATIONS INTO RAYYAN</ns0:head><ns0:p>Rayyan QCRI is a free web and mobile application that helps expedite the initial screening of both abstracts and titles through a semi-automated process while incorporating a high level of usability. Its main benefit is to speed up the most tedious part of the systematic literature review process: selecting studies for inclusion in the review <ns0:ref type='bibr' target='#b58'>(Ouzzani et al., 2016)</ns0:ref>. Therefore, for the last step, another import was done using Rayyan to check for duplications a final time. Using Rayyan, a total of 124 duplicate studies were found, resulting in a total of 621 studies. A two-step filtration was then conducted in Rayyan to guarantee that the papers met the inclusion criteria of this SLR. After filtering based on the abstracts, 564 papers did not meet the inclusion criteria. At this stage, 57 studies remained. The second step of filtration eliminated 11 more studies by reading the full papers; two studies were not written in the English language, and nine were inaccessible.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; SNOWBALLING</ns0:head><ns0:p>Snowballing is an emerging technique for conducting systematic literature reviews that is considered both efficient and reliable and relies on simple procedures. The snowballing procedure consisted of three phases in each cycle: forming the start set, backward snowballing, and forward snowballing. Forming the start set basically means identifying relevant papers that have a high potential of satisfying the criteria and research questions. Backward snowballing was conducted using the reference list to identify new papers to include: the reference list of each examined paper is checked, papers that do not fulfill the basic criteria are excluded, and the rest are added to the SLR. Forward snowballing refers to identifying new papers based on those papers that cited the paper being examined <ns0:ref type='bibr' target='#b31'>(Juneja &amp; Kaur, 2019)</ns0:ref>. Hence, in order to be sure that we included all related studies after obtaining the 46 papers, a snowballing step was essential. Forward and backward snowballing were conducted.
Each of the 46 studies was examined by checking its references for possible additional sources and by examining all papers that cited it. The snowballing activity initially added 38 studies; after full reading, 33 of these matched the inclusion criteria, and a total of 79 studies were identified through this process.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61656; QUALITY ASSESSMENT</ns0:head><ns0:p>A systematic literature review's quality is determined by the content of the papers included in the review. As a result, it is important to evaluate the papers carefully <ns0:ref type='bibr' target='#b81'>(Zhou et al., 2015)</ns0:ref>. Many influential scales exist in the software engineering field for evaluating the validity of individual primary studies and grading the overall strength of the body of evidence. Hence, we adapted the comprehensive guidelines specified by Kitchenham and Charters <ns0:ref type='bibr' target='#b33'>(Keele, 2007)</ns0:ref>, and the quasi-gold standard (QGS) <ns0:ref type='bibr' target='#b33'>(Keele, 2007)</ns0:ref> was used to establish the search technique, whereby a robust search strategy that enhances the validity and reliability of an SLR's search process is devised using the QGS. By applying this technique, our quality assessment questions were focused and aligned with the research questions mentioned earlier.</ns0:p><ns0:p>In our last step, we had to verify the papers' eligibility; we conducted a quality check for each of the 79 studies. For quality assessment, we considered whether the paper answered the following questions: QA1: Is the research aim clearly stated in the research? QA2: Does the research contain a usability dimension or techniques for mobile applications for people with visual impairments? QA3: Is there an existing issue with mobile applications for people with visual impairments that the author is trying to solve? QA4: Is the research focused on mobile application solutions? After discussing the quality assessment questions and attempting to find an answer in each paper, we agreed to score each study per question: a study was given 2 points if it answered a question, 1 point if it only partially answered it, and 0 points if it did not answer it. The next step was to calculate the total weight of each study. If the total weight was greater than or equal to four points, the paper was accepted into the SLR; otherwise, the paper was discarded since it did not reach the desired quality level. Figure <ns0:ref type='figure'>5</ns0:ref> below illustrates the quality assessment process. After applying the quality assessment, 39 papers were rejected since they received less than four points, which resulted in a final tally of 60 papers.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 5: Quality assessment process</ns0:head><ns0:p>To summarize, this review was conducted according to the Preferred Reporting Items for SLRs and Meta-Analyses (PRISMA) <ns0:ref type='bibr' target='#b45'>(Liberati et al., 2009)</ns0:ref>. The PRISMA diagram shown in Figure <ns0:ref type='figure' target='#fig_3'>6</ns0:ref> illustrates all systematic literature processes used in this study.</ns0:p></ns0:div>
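As an illustration only, the following minimal Python sketch (our addition, not part of the original study; the study and its scores in the example are hypothetical) shows how the scoring rule described in the quality assessment above can be applied: each question is worth 0, 1, or 2 points, and a paper is kept when its total weight is at least four.

from typing import Dict

QA_QUESTIONS = ["QA1", "QA2", "QA3", "QA4"]  # the four quality assessment questions

def total_weight(scores: Dict[str, int]) -> int:
    # Each question is scored 2 (answered), 1 (partially answered), or 0 (not answered).
    return sum(scores[q] for q in QA_QUESTIONS)

def accepted(scores: Dict[str, int], threshold: int = 4) -> bool:
    # A paper is accepted only if its total weight reaches at least four points.
    return total_weight(scores) >= threshold

# Hypothetical example: a candidate study scoring 2, 1, 2, and 0 on QA1-QA4.
example_scores = {"QA1": 2, "QA2": 1, "QA3": 2, "QA4": 0}
print(total_weight(example_scores), accepted(example_scores))  # prints: 5 True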
<ns0:div><ns0:head n='3.'>ANALYSING STAGE:</ns0:head><ns0:p>All researchers involved in this SLR collected the data. The papers were distributed equally among them, and each researcher read each of their assigned papers completely to determine its topic, extract the paper's limitations and future work, write a quick summary about it, and record this information in an Excel spreadsheet.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>All researchers worked intensively on this systematic literature review. After completing the previously mentioned steps, the papers were divided among all the researchers. Then, each researcher read their assigned papers completely and classified them into themes according to the topic they covered. The researchers held several meetings to discuss and specify those themes. The themes were identified by the researchers based on the issues addressed in the reviewed papers. In the end, the researchers arrived at seven themes, as shown in Figure <ns0:ref type='figure'>7</ns0:ref> below. The references selected for each theme can be found in the Appendix. Afterwards, each researcher was assigned one theme to summarize its studies and report the results. In this section, we review the results.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 7: Results of the SLR</ns0:head></ns0:div> <ns0:div><ns0:head>A. Accessibility</ns0:head><ns0:p>Of a total of 60 studies, 10 focused on issues of accessibility. Accessibility is concerned with whether all users are able to have equivalent user experiences, regardless of abilities. Six studies, <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref>, and <ns0:ref type='bibr' target='#b62'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref>, gave suggestions for increasing accessibility: <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref> gave suggestions for making mobile map applications and Twitter accessible to visually impaired users, <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref> focused on user interfaces and provided accessibility suggestions suitable for blind people, and <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b62'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref> proposed sets of heuristics to evaluate the accessibility of mobile applications. Two studies, <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b15'>(Carvalho et al., 2018)</ns0:ref>, focused on evaluating usability and accessibility issues in some mobile applications, comparing them, and identifying the number and types of problems that visually impaired users faced. <ns0:ref type='bibr' target='#b4'>(Aqle, Khowaja &amp; Al-Thani, 2020)</ns0:ref> proposed a new web search interface designed for visually impaired users. One study, <ns0:ref type='bibr' target='#b50'>(McKay, 2017)</ns0:ref>, focused on accessibility challenges by applying usability tests on a hybrid mobile app with some visually impaired university students.</ns0:p></ns0:div> <ns0:div><ns0:head>B.
Assistive Devices</ns0:head><ns0:p>People with visual impairments have an essential need for assistive technology since they face many challenges when performing activities in daily life. Out of the 60 studies reviewed, 13 were related to assistive technology. The studies <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b71'>(Skulimowski et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>(Barbosa, Hayes &amp; Wang, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b69'>(Rosner &amp; Perlman, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Khan &amp; Khusro, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b75'>(Sonth &amp; Kallimani, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Kim et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b77'>(Vashistha et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b27'>(Shirley et al., 2017)</ns0:ref>, and <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> were related to screen readers (voiceovers). On the other hand, <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> proposed assistive devices for the visually impaired. Of the studies related to screen readers, <ns0:ref type='bibr' target='#b75'>(Sonth &amp; Kallimani, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b77'>(Vashistha et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Khan &amp; Khusro, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> cited challenges faced by visually impaired users. <ns0:ref type='bibr' target='#b8'>(Barbosa, Hayes &amp; Wang, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Kim et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> suggested new applications, while <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b69'>(Rosner &amp; Perlman, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b27'>(Shirley et al., 2017)</ns0:ref> evaluated existing work. The studies <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>(Lewis et al., 2016)</ns0:ref> proposed using wearable devices to improve the quality of life for people with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Daily Activities</ns0:head><ns0:p>In recent years, people with visual impairments have used mobile applications to increase their independence in their daily activities and learning, especially applications based on the braille method. We divide the daily activity section into braille-based applications and applications designed to enhance the independence of the visually impaired.
Four studies, <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref>, implemented and evaluated the usability of mobile phone applications that use braille to help visually impaired people in their daily lives. Seven studies, <ns0:ref type='bibr' target='#b78'>(Vitiello et al., 2018)</ns0:ref>, (Kunaratana-Angkul, Wu, &amp; Shin-Renn, 2020), <ns0:ref type='bibr' target='#b24'>(Ghidini et al., 2016)</ns0:ref>, <ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b49'>(Marques, Carri&#231;o &amp; Guerreiro, 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, and <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref>, focused on building applications that enhance the independence and autonomy of people with visual impairments in their daily life activities.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Screen Division Layout</ns0:head><ns0:p>People with visual impairments encounter various challenges in identifying and locating non-visual items on touch screen interfaces like phones and tablets. Incidents of accidentally touching a screen element and frequently following an incorrect pattern in attempting to access objects and screen artifacts hinder blind people from performing typical activities on smartphones <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>. In this review, 9 out of 60 studies discuss screen division layout: <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>, <ns0:ref type='bibr'>(Khan &amp; Khusro, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b28'>(Grussenmeyer &amp; Folmer, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b60'>(Palani et al., 2018)</ns0:ref>, and <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref> discuss touch screen (smartwatch, mobile phone, and tablet) usability among people with visual impairments, while <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b3'>(Alnfiai &amp; Sampalli, 2019)</ns0:ref> concern text entry methods that increase the usability of apps among visually impaired people. <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref> provides a novel contribution to the literature regarding considerations that can be used as guidelines for designing a user-friendly and semantically enriched user interface for blind people. An experiment in <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref> was conducted to compare the usability of the two-button mobile interface with the one-finger method and voiceover. <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref> gathered information on the interaction challenges faced by visually impaired people when answering questions on a mobile touch-screen device and investigated possible solutions to overcome the accessibility and usability challenges.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Gestures</ns0:head><ns0:p>In total, 3 of 60 studies discuss gestures in usability.
<ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017)</ns0:ref> compared the performance of BrailleEnter, a gesture-based input method, to the Swift Braille keyboard, a method that requires finding the location of six buttons representing the braille dots, while <ns0:ref type='bibr' target='#b14'>(Buzzi et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b73'>(Smaradottir, Martinez &amp; Haland, 2017)</ns0:ref> provide an analysis of gesture performance on touch screens among visually impaired people.</ns0:p></ns0:div> <ns0:div><ns0:head>F. Audio Guidance</ns0:head><ns0:p>People with visual impairments primarily depend on forms of audio guidance in their daily lives; accordingly, audio feedback helps guide them in their interaction with mobile applications. Four studies discussed the use of audio guidance in different contexts: one in navigation <ns0:ref type='bibr' target='#b25'>(Gintner et al., 2017)</ns0:ref>, one in games <ns0:ref type='bibr' target='#b5'>(Ara&#250;jo et al., 2017)</ns0:ref>, one in reading <ns0:ref type='bibr' target='#b70'>(Sabab &amp; Ashmafee, 2016)</ns0:ref>, and one in videos <ns0:ref type='bibr' target='#b22'>(Fa&#231;anha et al., 2016)</ns0:ref>. These studies were developed and evaluated based on the usability and accessibility of the audio guidance for people with visual impairments and aimed to utilize mobile applications to increase the enjoyment and independence of such individuals.</ns0:p></ns0:div> <ns0:div><ns0:head>G. Navigation</ns0:head><ns0:p>Navigation is a common issue that visually impaired people face. Indoor navigation is widely discussed in the literature. <ns0:ref type='bibr' target='#b55'>(Nair et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b21'>(de Borba Campos et al., 2015)</ns0:ref> discuss how we can develop indoor navigation applications for visually impaired people. Outdoor navigation is also common in the literature, as seen in <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b29'>(Hossain, Qaiduzzaman &amp; Rahman, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b46'>(Long et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b64'>(Prerana et al., 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b7'>(Bandukda et al., 2020)</ns0:ref>. For example, in <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, Touch Explorer, an accessible digital map application, was presented to alleviate many of the problems faced by people with visual impairments while using highly visually oriented digital maps. Primarily, it focused on using non-visual output modalities like voice output, everyday sound, and vibration feedback. Issues with navigation applications were also presented in <ns0:ref type='bibr' target='#b48'>(Maly et al., 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020)</ns0:ref> discussed commonly used technologies in navigation applications for blind people and highlighted the importance of using complementary technologies to convey information through different modalities to enhance the navigation experience.
Interactive sonification of images for navigation has also been shown in <ns0:ref type='bibr' target='#b71'>(Skulimowski et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In this section, the research questions are addressed in detail to clearly achieve the research objective. A detailed overview of each theme is also provided below.</ns0:p></ns0:div> <ns0:div><ns0:head>Answers to the Research Questions</ns0:head><ns0:p>This section answers the proposed research questions:</ns0:p></ns0:div> <ns0:div><ns0:head>RQ1: What existing UVI issues did authors try to solve with mobile devices?</ns0:head><ns0:p>Mobile applications can help people with visual impairments in their daily activities, such as navigation and writing. Additionally, mobile devices may be used for entertainment purposes. However, people with visual impairments face various difficulties while performing text entry operations, text selection, and text manipulation on mobile applications <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>. Thus, the authors of the studies tried to increase touch screens' usability by producing prototypes or simple systems and conducting usability testing to understand the UX of people with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>RQ2: What is the role of mobile devices in solving those issues?</ns0:head><ns0:p>Mobile phones are widely used in modern society, especially among users with visual impairments; they are considered the most helpful tool for blind users to communicate with people worldwide <ns0:ref type='bibr' target='#b72'>(Smaradottir, Martinez &amp; H&#229;land, 2017)</ns0:ref>. In addition, touch screen assistive technology enables speech interaction between blind people and mobile devices and permits the use of gestures to interact with a touch user interface. Assistive technology is vital in helping people living with disabilities perform actions or interact with systems <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RQ3: What are the publication trends on the usability of mobile applications among the visually impaired?</ns0:head><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_4'>8</ns0:ref> below, research into mobile applications' usability for the visually impaired has increased in the last five years, with a slight dip in 2018. Looking at the most frequent themes, we find that 'Assistive Devices' peaked in 2017, while 'Navigation' and 'Accessibility' increased significantly in 2020. On the other hand, we see that the prevalence of 'Daily Activities' stayed stable throughout the research years. The theme 'Audio Guidance' appeared in 2016 and 2017 and has not appeared in the last three years. 'Gestures' also appeared only in 2017. 'Screen Division Layout' was present in the literature in the last five years and increased in 2019 but did not appear in 2020. We divide the answer to the research question on limitations and future work into two sections: first, we discuss limitations; then, we discuss future work for each proposed theme.</ns0:p></ns0:div> <ns0:div><ns0:head>A. Limitations</ns0:head><ns0:p>Studies on the usability of mobile applications for visually impaired users in the literature have various limitations, and most of them were common among the studies. These limitations were divided into two groups.
The first group concerns proposed applications; for example, <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, and <ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref> faced issues regarding the use of the camera in mobile devices due to the considerable effort its use requires and its heavy dependence on the availability of the internet. The other group of studies, <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b5'>(Ara&#250;jo et al., 2017)</ns0:ref>, reported limitations related to visually impaired users' difficulty in comprehending a graphical user interface. <ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>(Alnfiai &amp; Sampalli, 2019)</ns0:ref> evaluated new braille input methods and found that the traditional braille keyboard, where knowing the exact position of the QWERTY letters is required, is limited in terms of usability compared to the new input methods. Most studies faced difficulties regarding the sample size and the fact that many of the participants were not actually blind or visually impaired but only blindfolded. This likely led to less accurate results, as blind or visually impaired people can provide more useful feedback because they experience different issues on a daily basis and are better suited to this type of study. The need for a good sample of participants who actually have this disability is therefore clear, as it allows for better evaluation results and more feedback and recommendations for future research.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Future Work</ns0:head><ns0:p>A commonly discussed direction for future work in the chosen literature is to increase the sample sizes of people with visual impairment and focus on various ages and geographical areas to generalize the studies. Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> summarizes suggestions for future work according to each theme. Those future directions could inspire new research in the field. There are a total of 60 outcomes in this research. Of these, 40 involve suggestions to improve the usability of mobile applications; four of them address problems that are faced by visually impaired people that reduce usability. Additionally, 16 of the outcomes are assessments of the usability of a prototype or model. Two of the results are recommendations to improve usability. Finally, the last two outcomes are hardware solutions that may help the visually impaired perform their daily activities. Figure <ns0:ref type='figure'>9</ns0:ref> illustrates these numbers.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 9: Outcomes of studies</ns0:head></ns0:div> <ns0:div><ns0:head>Overview of the reviewed studies</ns0:head><ns0:p>In the following subsections, we summarize all the selected studies based on the classified themes: accessibility, assistive devices, daily activities, screen division layout, gestures, audio guidance, and navigation. The essence of the studies will be determined, and their significance in the field will be explored.</ns0:p></ns0:div> <ns0:div><ns0:head>A.
Accessibility</ns0:head><ns0:p>For designers dealing with mobile applications, it is critical to determine and fix accessibility issues in the application before it is delivered to the users <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref>. Accessibility refers to giving the users the same user experience regardless of ability. In <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b15'>(Carvalho et al., 2018)</ns0:ref>, the researchers focused on comparing the levels of accessibility and usability in different applications. They had a group of visually impaired users and a group of sighted users test out the applications to compare the number and type of problems they faced and determine which applications contained the most violations. Because people with visual impairments cannot be ignored in the development of mobile applications, many researchers have sought solutions for guaranteeing accessibility. For example, in <ns0:ref type='bibr' target='#b65'>(Qureshi &amp; Wong, 2020)</ns0:ref>, the study contributed to producing a new, effective design for mobile applications based on the suggestions of people with visual impairments and with the help of two expert mobile application developers. In <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref>, an adaptive user interface model for visually impaired people was proposed and evaluated in an empirical study with 63 visually impaired people. In <ns0:ref type='bibr'>(Aqle, Khowaja &amp; Al-Thani,2020)</ns0:ref>, the researchers proposed a new web search interface for users with visual impairments that is based on discovering concepts through formal concept analysis (FCA). Users interact with the interface to collect concepts, which are then used as keywords to narrow the search results and target the web pages containing the desired information with minimal effort and time. The usability of the proposed search interface (InteractSE) was evaluated by experts in the field of HCI and accessibility, with a set of heuristics by Nielsen and a set of WCAG 2.0 guidelines.</ns0:p><ns0:p>In <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, the researchers proposed a solution for making mobile map applications accessible for people with blindness or visual impairment. They suggested replacing forests in the map with green color and birds' sound, replacing water with blue color and water sounds, replacing streets with grey color and vibration, and replacing buildings with yellow color and pronouncing the name of the building. The prototype showed that it was possible to explore a simple map through vibrations, sounds, and speech.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b52'>(Morris et al., 2016)</ns0:ref>, the researchers utilized a multi-faceted technique to investigate how and why visually impaired individuals use Twitter and the difficulties they face in doing so. They noted that Twitter had become more image-heavy over time and that picture-based tweets are largely inaccessible to people with visual impairments. The researchers then made several suggestions for how Twitter could be amended to continue to be usable for people with visual impairments.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref> focused on how to evaluate proposed methods for ensuring the accessibility and usability of mobile applications. 
Their checklist, Acc-MobileCheck, contains 47 items that correspond to issues related to comprehension (C), operation (O), perception (P), and adaptation (A) in mobile interface interaction. To validate Acc-MobileCheck, it was reviewed by five experts and three developers and determined to be effective. In <ns0:ref type='bibr' target='#b62'>(Pereda, Murillo &amp; Paz, 2020)</ns0:ref>, the authors also suggest a set of heuristics to evaluate the accessibility of mobile e-commerce applications for visually impaired people. Finally, <ns0:ref type='bibr' target='#b50'>(McKay, 2017)</ns0:ref> conducted an accessibility test for hybrid mobile apps and found that students with blindness faced many barriers to access based on how they used hybrid mobile applications. While hybrid apps can allow for a reduced time to market, this comes at the cost of app accessibility for people with disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Assistive Devices</ns0:head><ns0:p>A significant number of people with visual impairments use state-of-the-art software to perform tasks in their daily lives. These technologies are made up of electronic devices equipped with sensors and processors that can make intelligent decisions.</ns0:p><ns0:p>One of the most important and challenging tasks in developing such technologies is to create a user interface that is appropriate for the sensorimotor capabilities of users with blindness <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>. Several new hardware tools have been proposed to improve the quality of life for people with visual impairments. Among the tools presented in this SLR are a smart stick that can notify the user of any obstacle, helping them to perform tasks easily and efficiently <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref>, and an eye that allows users to detect colors (medical evaluation is still required) <ns0:ref type='bibr' target='#b44'>(Lewis et al.,2016)</ns0:ref>.</ns0:p><ns0:p>The purpose of the study in <ns0:ref type='bibr' target='#b27'>(Shirley et al., 2017)</ns0:ref> was to understand how people with blindness use smartphone applications as assistive technology and how they perceive them in terms of accessibility and usability. An online survey with 259 participants was conducted, and most of the participants rated the applications as useful and accessible and were satisfied with them.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b67'>(Rahman et al., 2017)</ns0:ref> designed and implemented EmoAssist, a smartphone application that assists with natural dyadic conversations and aims to promote user satisfaction by providing options for accessing non-verbal communication, predicting behavioural expressions, and offering interactive dimensions that provide valid feedback. The usability of this application was evaluated in a study with ten people with blindness, in which several tools were applied within the application. The study participants found that the usability of EmoAssist was good and that it was an effective assistive solution.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Daily Activities</ns0:head><ns0:p>This theme contains two main categories: braille-based application studies and applications to enhance the independence of VI users.
Both are summarized below.</ns0:p></ns0:div> <ns0:div><ns0:head>1-Braille-based applications</ns0:head><ns0:p>Braille is still the most popular method for assisting people with visual impairments in reading and studying, and most educational mobile phone applications are limited to sighted people. Recently, however, some researchers have developed assistive education applications for students with visual impairments, especially those in developing countries. For example, in India, the number of children with visual impairments is around 15 million, and only 5% receive an education <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref>. Three of the braille studies focused on education: <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>. These studies all used smartphone touchscreens and action gestures to gain input from the student, and then output was provided in the form of audio feedback. In <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, vibrational feedback was added to guide the users. The participants in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref> included students with blindness of visual impairment and their teachers. The authors in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref> evaluated the usability of their applications following the same criteria (efficiency, learnability, memorability, errors, and satisfaction). The results showed that in <ns0:ref type='bibr' target='#b54'>(Nahar, Sulaiman, &amp; Jaafar, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Nahar, Jaafar, &amp; Sulaiman, 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b6'>(Ara&#250;jo et al., 2016)</ns0:ref>, the applications met the required usability criteria. The authors in <ns0:ref type='bibr' target='#b26'>(Gokhale et al., 2017)</ns0:ref> presented a braille-based solution to help people with visual impairments call and save contacts. A braille keypad on the smartphone touchscreen was used to gain input from the user, which was then converted into haptic and auditory feedback to let the user know what action was taken. The usability of this application was considered before it was designed. The participants' responses were positive because this kind of user-centric design simplifies navigation and learning processes.</ns0:p></ns0:div> <ns0:div><ns0:head>2-Applications to Enhance the Independence of People with Visual Impairments</ns0:head><ns0:p>The authors in the studies explored in this section focused on building applications that enhance independence and autonomy in daily life activities for users with visual impairments.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b78'>(Vitiello et al., 2018)</ns0:ref>, the authors presented their mobile application, an assistive solution for visually impaired users called 'Crania', which uses machine learning techniques to help users with visual impairments get dressed by recognizing the colour and texture of their clothing and suggesting suitable combinations. 
The system provides feedback through voice synthesis.</ns0:p><ns0:p>The participants in the study were adults and elderly people, some of whom were completely blind and the rest of whom had partial sight. After testing for usability, all the participants with blindness agreed that using the application was better than their original method, and half of the participants with partial sight said the same thing. At the end of the study, the application was determined to be accessible and easy to use.</ns0:p><ns0:p>In (Kunaratana-Angkul, Wu, &amp; Shin-Renn, 2020), an application which allows elderly people to measure low vision status at home through their smartphones instead of visiting hospitals was tested, and most of the participants considered it to be untrustworthy because the medical information was insufficient. Even when participants were able to learn how to use the application, most of them were still confused while using it and needed further instruction.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b24'>(Ghidini et al., 2016)</ns0:ref>, the authors studied the habits of people with visual impairments when using their smartphones in order to develop an electronic calendar with different interaction formats, such as voice commands, touch, and vibration interaction. The authors presented the lessons learned and categorized them based on usability heuristics such as feedback, design, user freedom and control, and recognition instead of remembering.</ns0:p><ns0:p>In <ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref>, the authors developed a drug information application for people with visual impairments to help them access the labels of medications. The application was developed based on a user-centered design process. By conducting a usability test, the authors recognized some usability issues for people with visual impairments, such as difficulty in locating the bar code. Given this, a new version will include a search function that is based on pictures. The application is searched by capturing the bar code or text or giving voice commands that allow the user to access medication information. The participants were people with visual impairments, and most of them required assistance using medications before using the application. This application will enhance independence for people with visual impairments in terms of using medications.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b49'>(Marques, Carri&#231;o &amp; Guerreiro, 2015)</ns0:ref>, an authentication method is proposed for users with visual impairments that allows them to protect their passwords. It is not secure when blind or visually impaired users spell out their passwords or enter the numbers in front of others, and the proposed solution allows the users to enter their password with one hand by tapping the screen. The blind participants in this study demonstrated that this authentication method is usable and supports their security needs.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, the author noted that people with visual impairments face challenges in reading, thus he proposed an application called LeR &#243;tulos. This application was developed and evaluated for the Android operating system and recognizes text from photos taken by the mobile camera and converts them into an audio description. The prototype was designed to follow the guidelines and recommendations of usability and accessibility. 
The requirements of the application are defined based on the following usability goals: the steps are easy for the user to remember; the application is efficient, safe, useful, and accessible; and user satisfaction is achieved.</ns0:p><ns0:p>Interacting with talkback audio devices is still difficult for people with blindness, and it is unclear how much benefit they provide to people with visual impairments in their daily activities. The author in <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref> investigates the smartphone adoption process of blind users by conducting experiments, observations, and weekly interviews. An eight-week study was conducted with five visually impaired participants using Samsung and an enabled talkback 2 screen reader. Focusing on understanding the experiences of people with visual impairments when using touchscreen smartphones revealed accessibility and usability issues. The results showed that the participants have difficulties using smartphones because they fear that they cannot use them properly, and that impacts their ability to communicate with family. However, they appreciate the benefits of using smartphones in their daily activities, and they have the ability to use them.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Screen Division Layout</ns0:head><ns0:p>People with visual impairments encounter various challenges identifying and locating nonvisual items on touch screen interfaces, such as phones and tablets. Various specifications for developing a user interface for people with visual impairments must be met, such as having touch screen division to enable people with blindness to easily and comfortably locate objects and items that are non-visual on the screen <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>. Article <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref> highlighted the importance of aspects of the usability analysis, such as screen partitioning, to meet specific usability requirements, including orientation, consistency, operation, time consumption, and navigation complexity when users want to locate objects on their touchscreen. The authors of <ns0:ref type='bibr'>(Khan, &amp; Khusro, 2019)</ns0:ref> describe the improvements that people with blindness have experienced in using the smartphone while performing their daily tasks. This information was determined through an empirical study with 41 people with blindness who explained their user and interaction experiences operating a smartphone. using gestures and screen readers to interact on a mobile device, difficulties may arise <ns0:ref type='bibr' target='#b43'>(Leporini &amp; Palmucci, 2018)</ns0:ref>, The study has various objectives, including gathering information on the difficulties encountered by people with blindness during interactions with mobile touch screen devices to answer questions and investigating practicable solutions to solve the detected accessibility and usability issues. A mobile app with an educational game was used to apply the proposed approach. Moreover, in <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, an analysis of the single-tap braille keyboard created to help people with no or low vision while using touch screen smartphones was conducted. 
The technology used in <ns0:ref type='bibr' target='#b2'>(Alnfiai &amp; Sampalli, 2016)</ns0:ref> was the talkback service, which provides the user with verbal feedback from the application, allowing users with blindness to key in characters according to braille patterns. To evaluate single-tap braille, it was compared to the commonly used QWERTY keyboard. In <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>, it was found that participants adapted quickly to single-tap braille and were able to type on the touch screen within 15 to 20 minutes of being introduced to the system. The main advantage of single-tap braille is that it allows users with blindness to enter letters based on braille coding, with which they are already familiar. The average error rate is lower using single-tap braille than it is on the QWERTY keyboard. The authors of <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref> found that minimal typing errors were made using the proposed keypad, which made it an easier option for people with blindness <ns0:ref type='bibr' target='#b56'>(Niazi et al., 2016)</ns0:ref>. In <ns0:ref type='bibr' target='#b16'>(Cho &amp; Kim, 2017)</ns0:ref>, the authors describe new text entry methods for the braille system, including a left-touch and a double-touch scheme that form a two-button interface for braille input so that people with visual impairments are able to type textual characters without having to move their fingers to locate the target buttons.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Gestures</ns0:head><ns0:p>One of the main problems affecting the visually impaired is limited mobility for some gestures, so it is important to know which gestures are usable by people with visual impairments. Moreover, assistive touchscreen-enabled speech interaction between blind people and mobile devices permits the use of gestures to interact with a touch user interface. Assistive technology is vital in helping people living with disabilities to perform actions or interact with systems. <ns0:ref type='bibr' target='#b73'>(Smaradottir, Martinez &amp; Haland, 2017)</ns0:ref> analyses the VoiceOver screen reader used in Apple Inc.'s products. An assessment of this assistive technology was conducted with six visually impaired test participants. The main objectives were to pinpoint the difficulties related to the performance of gestures applicable in screen interactions and to analyze the system's response to the gestures. In this study, a user evaluation was completed in three phases. The first phase entailed training users regarding different hand gestures, the second phase was carried out in a usability laboratory where participants were familiarized with technological devices, and the third phase required participants to solve different tasks. In <ns0:ref type='bibr' target='#b41'>(Knutas et al., 2015)</ns0:ref>, the vital feature of the system is that it enables the user to interactively select a 3D scene region for sonification by merely touching the phone screen. It uses three different modes to increase usability. <ns0:ref type='bibr'>(Alnfiai &amp; Sampalli, 2017</ns0:ref>) reported a study comparing two data input methods to evaluate their efficiency with completely blind participants who had prior knowledge of braille.
The comparison was made between the braille enter input method that uses gestures and the swift braille keyboard, which necessitates finding six buttons representing braille dots. Blind people typically prefer rounded shapes to angular ones when performing complex gestures, as they experience difficulties performing straight gestures with right angles. Participants highlighted that they experienced difficulties particularly with gestures that have steep or right angles. In <ns0:ref type='bibr' target='#b14'>(Buzzi et al., 2017)</ns0:ref>, 36 visually impaired participants were selected and split into two groups of low-vision and blind people. They examined their touch-based gesture preferences in terms of the number of strokes, multitouch, and shape angles. For this reason, a wireless system was created to record sample gestures from various participants simultaneously while monitoring the capture process.</ns0:p></ns0:div> <ns0:div><ns0:head>F. Audio Guidance</ns0:head><ns0:p>People with visual impairment typically cannot travel without guidance due to the inaccuracy of current navigation systems in describing roads and especially sidewalks. Thus, the author of <ns0:ref type='bibr' target='#b25'>(Gintner et al., 2017)</ns0:ref> aims to design a system to guide people with visual impairments based on geographical features and addresses them through a user interface that converts text to audio using a built-in voiceover engine (Apple iOS). The system was evaluated positively in terms of accessibility and usability as tested in a qualitative study involving six participants with visual impairment.</ns0:p><ns0:p>Based on challenges faced by visually impaired game developers, <ns0:ref type='bibr' target='#b5'>(Ara &#250;jo et al., 2017)</ns0:ref> provides guidance for developers to provide accessibility in digital games by using audio guidance for players with visual impairments. The interactions of the player can be conveyed through audio and other basic mobile device components with criteria focused on the game level and speed adjustments, high contrast interfaces, accessible menus, and friendly design. Without braille, people with visual impairments cannot read, but braille is expensive and takes effort, and so it is important to propose technology to facilitate reading for them. In <ns0:ref type='bibr' target='#b70'>(Sabab &amp; Ashmafee, 2016)</ns0:ref>, the author proposed developing a mobile application called 'Blind Reader' that reads an audio document and allows the user to interact with the application to gain knowledge. This application was evaluated with 11 participants, and the participants were satisfied with the application. Videos are an important form of digital media, and unfortunately people with visual impairment cannot access these videos. Therefore, <ns0:ref type='bibr' target='#b22'>(Fa&#231;anha et al., 2016)</ns0:ref> aims to discover sound synthesis techniques to maximize and accelerate the production of audio descriptions with lowcost phonetic description tools. This tool has been evaluated based on usability with eight people and resulted in a high acceptance rate among users.</ns0:p></ns0:div> <ns0:div><ns0:head>G. Navigation 1-Indoor Navigation</ns0:head><ns0:p>Visually impaired people face critical problems when navigating from one place to another. 
Whether indoors or outdoors, they tend to stay in one place to avoid the risk of injury or seek the help of a sighted person before moving <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>. Thus, aid in navigation is essential for those individuals. In <ns0:ref type='bibr' target='#b55'>(Nair et al., 2020)</ns0:ref>, Nair developed an application called ASSIST, which leverages 19 Bluetooth low energy (BLE) beacons and augmented reality (AR) to help visually impaired people move around cluttered indoor places (e.g., subways) and provide the needed safe guidance, just like having a sighted person lead the way. In the subway example, the beacons will be distributed across the halls of the subway and the application will detect them. Sensors and cameras attached to the individual will detect their exact location and send the data to the application. The application will then give a sequence of audio feedback explaining how to move around the place to reach a specific point (e.g., 'in 50 feet turn right', 'now turn left', 'you will reach the destination in 20 steps'). The application also has an interface for sighted and low-vision users that shows the next steps and instructions. A usability study was conducted to test different aspects of the proposed solution. The majority of the participants agreed that they could easily reach a specified location using the application without the help of a sighted person. A survey conducted to give suggestions from the participants for future improvements showed that most participants wanted to attach their phones to their bodies and for the application to consider the different walking speeds of users. They were happy with the audio and vibration feedback that was given before each step or turn they had to take.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b0'>(Al-Khalifa &amp; Al-Razgan, 2016)</ns0:ref>, the main purpose of the study was to provide an Arabiclanguage application for guidance inside buildings using Google Glass and an associated mobile application. First, the building plan must be set by a sighted person who configures the different locations needed. Ebsar will ask the map builder to mark each interesting location with a QR code and generate a room number, and the required steps and turns are tracked using the mobile device's built-in compass and accelerometer features. All of these are recorded in the application for the use of a visually impaired individual, and at the end, a full map is generated for the building. After setting the building map, a user can navigate inside the building with the help of Ebsar, paired with Google Glass, for input and output purposes. The efficiency, effectiveness, and levels of user satisfaction with this solution were evaluated. The results showed that the errors made were few, indicating that Ebsar is highly effective. The time consumed in performing tasks ranged from medium to low depending on the task; this can be improved later. Interviews with participants indicated the application's ease of use. <ns0:ref type='bibr' target='#b21'>(de Borba Campos et al., 2015)</ns0:ref> shows an application simulating a museum map for people with visual impairments. It discusses whether mental maps and interactive games can be used by people with visual impairments to recognize the space around them. After multiple usability evaluation sessions, the mobile application showed high efficiency among participants in understanding the museum's map without repeating the visitation. 
The authors make a few suggestions based on feedback from the participants regarding enhancing usability, including using audio cues, adding contextual help to realise the activities carried around in a space, and focusing on audio feedback instead of graphics.</ns0:p></ns0:div> <ns0:div><ns0:head>2-Outdoor Navigation</ns0:head><ns0:p>Outdoor navigation is also commonly discussed in the literature. In <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, Touch Explorer was presented to alleviate many of the problems faced by visually impaired people in navigation by developing a non-visual mobile digital map. The application relies on three major methods of communication with the user: voice output, vibration feedback, and everyday sounds. The prototype was developed using simple abstract visuals and mostly relies on voice for explanation of the content. Usability tests show the great impact the prototype had on the understanding of the elements of the map. Few suggestions were given by the participants to increase usability, including GPS localization to locate the user on the map, a scale element for measuring the distance between two map elements, and an address search function. In <ns0:ref type='bibr' target='#b29'>(Hossain, Qaiduzzaman &amp; Rahman, 2020)</ns0:ref>, a navigation application called Sightless Helper was developed to provide a safe navigation method for people with visual impairments. It relies on footstep counting and GPS location to provide the needed guidance. It can also ensure safe navigation by detect objects and unsafe areas and can detect unusual shaking of the user and alert an emergency contact about the problem. The user interaction categories are voice recognition, touchpad, buttons, and shaking sensors. After multiple evaluations, the application was found to be useful in different scenarios and was considered usable by people with visual impairments. The authors in <ns0:ref type='bibr' target='#b46'>(Long et al., 2016)</ns0:ref> propose an application that uses both updates from users and information about the real world to help visually impaired people navigate outdoor settings. After interviews with participants, some design goals were set, including the ability to tag an obstacle on the map, check the weather, and provide an emergency service. The application was evaluated and was found to be of great benefit; users made few errors and found it easy to use. In <ns0:ref type='bibr' target='#b64'>(Prerana et al., 2019)</ns0:ref>, a mobile application called STAVI was presented to help visually impaired people navigate from a source to a destination safely and avoid issues of re-routing. The application depends on voice commands and voice output. The application also has additional features, such as calling, messages, and emergency help. The authors in <ns0:ref type='bibr' target='#b7'>(Bandukda et al., 2020)</ns0:ref> helped people with visual impairments explore parks and natural spaces using a framework called PLACES. Different interviews and surveys were conducted to identify the issues visually impaired people face when they want to do any leisure activity. 
These were considered in the development of the framework, and some design directions were presented, such as the use of audio to share an experience.</ns0:p></ns0:div> <ns0:div><ns0:head>3-General Issues</ns0:head><ns0:p>The authors in <ns0:ref type='bibr' target='#b48'>(Maly et al., 2015)</ns0:ref> discuss implementing an evaluation model to assess the usability of a navigation application and to understand the communication issues that people with visual impairments face with mobile applications. The evaluation tool was designed using a client-server architecture and was applied to test the usability of an existing navigation application. The tool was successful in capturing many issues related to navigation and user behavior, especially the mismatch in timing between the actual voice instruction and the position of the user. The authors in <ns0:ref type='bibr' target='#b32'>(Kameswaran et al., 2020)</ns0:ref> conducted a study to find out which navigation technologies blind people can use and to understand the complementarity between navigation technologies and their impact on navigation for visually impaired users. The results of the study show that visually impaired people use both assistive technologies and those designed for non-visually impaired users. Improving voice agents in navigation applications was discussed as a design implication for the visually impaired. In <ns0:ref type='bibr' target='#b71'>(Skulimowski et al., 2019)</ns0:ref>, the authors show how interactive sonification can be used in simple travel aids for the blind. The approach uses depth images and a U-depth histogram, which are converted into simple auditory representations for blind users. The vital feature of this system is that it enables the user to interactively select a 3D scene region for sonification by touching the phone screen. This sonic representation of 3D scenes allows users to identify the environment's general appearance and determine objects' distance. The prototype was tested by three blind individuals who successfully performed the indoor task. The test scenes included walking along an empty corridor, walking along a corridor with obstacles, and locating an opening between obstacles. However, the results showed that it took a long time for the testers to locate narrow spaces between obstacles.</ns0:p><ns0:p>RQ6: What evaluation methods were used in the studies on usability for visually impaired people that were reviewed?</ns0:p><ns0:p>The most prevalent methods to evaluate the usability of applications were surveys and interviews. These were used to determine the usability of the proposed solutions and to obtain feedback and suggestions from the participants' points of view regarding additional features needed to enhance usability. Focus groups were also used extensively in the literature. Many of the participants selected were blindfolded and were not actually blind or visually impaired. Moreover, the samples selected for the evaluation methods mentioned above considered the age factor depending on each study's needs.
Additionally, a large number of papers were excluded because they were written in other languages. Access limitations were also faced due to some libraries asking for fees to access the papers. Therefore, for future works, a study to expand on the SLR results and reveal the current usability models of mobile applications for the visually impaired to verify the SLR results is needed so that this work contributes positively to assessing difficulties and expanding the field of usability of mobile applications for users with visual impairments.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In recent years, the number of applications focused on people with visual impairments has grown, which has led to positive enhancements in those people's lives, especially if they do not have people around to assist them. In this paper, the research papers focusing on Usability for Visually Impaired Users were analyzed and classified into seven themes: accessibility, daily activities, assistive devices, gestures, navigation, screen division layout, and audio guidance. We found that various research studies focus on accessibility of mobile applications to ensure that the same user experience is available to all users, regardless of their abilities. We found many studies that focus on how the design of the applications can assist in performing daily life activities like braille-based application studies and applications to enhance the independence of VI users. We also found papers that discuss the role of assistive devices like screen readers and wearable devices in solving challenges faced by VI users and thus improving their quality of life. We also found that some research papers discuss limited mobility of some gestures for VI users and investigated ways in which we can know what gestures are usable by people with visual impairments. We found many research papers that focus on improving navigation for VI users by incorporating different output modalities like sound and vibration. We also found various studies focusing on screen division layout. By dividing the screen and focusing on visual impairmentrelated issues while developing user interfaces, visually impaired users can easily locate the objects and items on the screens. Finally, we found papers that focus on audio guidance to improve usability. The proposed applications use voice-over and speech interactions to guide visually impaired users in performing different activities through their mobiles. Most of the researchers focused on usability in different applications and evaluated the usability issues of these applications with visually impaired participants. Some of the studies included sighted participants to compare the number and type of problems they faced. The usability evaluation was generally based on the following criteria: accessibility, efficiency, learnability, memorability, errors, safety, and satisfaction. Many of the studied applications show a good indication of these applications' usability and follow the participants' comments to ensure additional enhancements in usability. This paper aims to provide an overview of the developments on usability of mobile applications for people with visual impairments and use this overview to highlight potential future directions. 
</ns0:p></ns0:div> <ns0:div><ns0:head>Theme</ns0:head><ns0:p>Suggestions for Future Work and Sources</ns0:p></ns0:div> <ns0:div><ns0:head>Accessibility</ns0:head><ns0:p>In terms of accessibility, in the future, there is potential in investigating how information should be introduced in a mobile application to increase accessibility for VI users. In addition, future work directions include extending frameworks for visually complex or navigationally dense applications. Furthermore, emotion-based UI design may also be investigated to improve accessibility. Moreover, the optimization of GUI layouts and elements could be considered, with a particular focus on gesture control systems and eye-tracking systems. <ns0:ref type='bibr'>(Darvishy, Hutter &amp; Frei, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>(Khan, Khusro &amp; Alam, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b59'>(Paiva et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b37'>(Khowaja et al., 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b15'>(Carvalho et al., 2018)</ns0:ref></ns0:p></ns0:div> <ns0:div><ns0:head>Assistive Devices</ns0:head><ns0:p>In terms of assistive devices for people with visual impairments, there is potential for future research into multimodal non-visual interaction (e.g., sonification methods). Also, since there is very little available literature about how to approach prototype development and evaluation activities for assistive devices for users with no or little sight, it is important to investigate this to further develop the field. <ns0:ref type='bibr' target='#b71'>(Skulimowski et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>(Bharatia, Ambawane &amp; Rane, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Csap&#243; et al., 2015)</ns0:ref>, and <ns0:ref type='bibr'>(Rahman et al., 2017)</ns0:ref></ns0:p></ns0:div> <ns0:div><ns0:head>Daily Activities</ns0:head><ns0:p>There is a need to evaluate the usability and accessibility of applications that aim to assist visually impaired users and reduce restrictions in daily activities. <ns0:ref type='bibr'>(Madrigal-Cadavid et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>(Oliveira et al., 2018)</ns0:ref>, and <ns0:ref type='bibr' target='#b68'>(Rodrigues et al., 2015)</ns0:ref></ns0:p></ns0:div> <ns0:div><ns0:head>Screen Division Layout</ns0:head><ns0:p>In terms of screen division layout, it is important to continuously seek to improve interfaces and provide feedback to make them more focused, more cohesive, and simpler to handle. A complete set of robust design guidelines ought to be created to provide a wide variety of non-visual applications with increased haptic access on a touchscreen device. <ns0:ref type='bibr' target='#b38'>(Khusro et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b3'>(Alnfiai &amp; Sampalli, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b60'>(Palani et al., 2018)</ns0:ref>, and <ns0:ref type='bibr'>(Khan, &amp; Khusro, 2019)</ns0:ref></ns0:p></ns0:div> <ns0:div><ns0:head>Gestures</ns0:head><ns0:p>Gesture-based interaction ought to be further investigated, as it has the potential to greatly improve the way VI users communicate with mobile devices. Performance of gestures with various sizes of touch screens ought to be compared, as the size might have a significant effect on what is considered a usable gesture.</ns0:p></ns0:div> <ns0:div><ns0:head>Navigation</ns0:head><ns0:p>Literature suggests that future work in the area of navigation should focus on eliminating busy graphical interfaces and relying on sounds. Studying more methods and integrating machine learning algorithms and hardware devices to provide accurate results regarding the identification of surrounding objects, and continuous updates for any upcoming obstacles, is also discussed in the literature as an important direction for future work. <ns0:ref type='bibr' target='#b19'>(Darvishy et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b29'>(Hossain, Qaiduzzaman &amp; Rahman, 2020)</ns0:ref>, and <ns0:ref type='bibr' target='#b7'>(Bandukda et al., 2020)</ns0:ref></ns0:p></ns0:div> <ns0:div><ns0:head>Audio Guidance</ns0:head><ns0:p>In terms of audio guidance, there is potential for future directions in expanding algorithms to provide audio guidance to assist in more situations. Authors also emphasise developing versions of the applications in more languages. <ns0:ref type='bibr' target='#b25'>(Gintner et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b70'>(Sabab &amp; Ashmafee, 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b22'>(Fa&#231;anha et al., 2016)</ns0:ref></ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>ISI Web of Knowledge; Scopus.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2:</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: NAILS output sample</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 4:</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Search stages</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 6:</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: PRISMA flow diagram</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 8:</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Publication trends over time</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2:</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Table 2: Theme-based future work</ns0:figDesc><ns0:table /></ns0:figure><ns0:p>RQ5: What is the focus of research on usability for visually impaired people, and what are the research outcomes in the studies reviewed?</ns0:p></ns0:body> "
"College of Computer and Information Sciences King Saud University Riyadh, Saudi Arabia Office Phone: 0118051621 October 9th, 2021 Dear Editor, Thank you very much for considering our manuscript for publication in PeerJ Computer Science. We have revised the manuscript based on the editor and reviewer’s valuable feedback. We have included a table in the Appendix with the references selected for each theme. We have also double checked the manuscript for typos, grammar, and other citations issues. Also, we followed the journal guidelines for paper structure. We believe the manuscript is now suitable for publication in PeerJ. Best regards, Corresponding Author, Dr. Sarah Almoaiqel On behalf of all authors. Editor: Comments: 1- The authors should include the references selected for the SLR in a tabular format. The table should contain the references selected for each theme. 2- In addition, the paper should be proofread for typos, grammar and other citations issues. Finally, follow the journal guidelines for paper structure. Response to the Editor: We thank you for your valuable feedback, which helped us to improve the quality of our manuscript. 1- We have included a table in the Appendix showing the references selected for each theme. As we did with the other tables, we have only included the Appendix heading in the manuscript and uploaded the table in a separate document. 2- We have double checked the manuscript for typos, grammar, and other citations issues. We have also followed the journal guidelines for paper structure. Reviewer #3: Comments: 1- The authors have included almost all of my suggestions in the reviewed manuscript. I still miss a table with a short description of the papers included in the SLR (60 papers). This table could be included in the appendix. Right now I can not even confirm that there were really 60 papers included in the SLR - I would have to extract unique references from the description of the results. Response: We thank the reviewer for their valuable comments, which helped us to improve the quality of this manuscript. 1- We have included a table in the Appendix showing the references selected for each theme. As we did with the other tables, we have only included the Appendix heading in the manuscript and uploaded the table in a separate document. "
Here is a paper. Please give your review comments after reading it.
279
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The real-time availability of the internet has engaged millions of users around the world.</ns0:p><ns0:p>The usage of regional languages is being preferred for effective and ease of communication. People share ideas, opinions and events that are happening globally i.e., sports, inflation, protest, explosion, and sexual assault etc. in different local languages that contain invaluable information. As a result, multilingual data were being generated rapidly on social networks and news channels. Extraction and classification of events from local languages is challenging task because of resource lacking. In this research paper, we presented the event classification of the Urdu language text by exploring the Urdu text (resource poor) language existing on social media and the news channels. The dataset contains more than 0.1 million (1,02,962) labeled instances of twelve (12) different types of events. Rather than other features vector generating technique, the Term Frequency-Inverse Document Frequency (tf-idf) showed the best results as a feature vector to evaluate the performance of the six popular machine learning classifiers. Among the classifiers used, the Random Forest (RF), Decision Tree and k-Nearest Neighbor outperformed among other classifiers.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the current digital and innovative era, the text is still the strongest and dominant source of communication instead of pictures, emoji, sounds and animations <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. The innovative environment of communication; real-time availability <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> of the Internet and the unrestricted communication mode of social networks have attracted billions of people around the world. Now, people are hooked together via Internet like a global village. They preferred to share insights about different topics, opinions, views, ideas, and events <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> on social networks in different languages. The one of the reasons i.e., Because social media and news channels have created space for local languages <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. Google input tool 1 provides language transliteration support for more than 88 different languages. Many tools provide the support to use local languages on social media for communication. The google language translator 2 is a platform that facilitates multilingual users of more than 100 languages for conversation. Generally, people prefer to communicate in local languages instead of non-local languages for sake of easiness. A cursive language Urdu is one of the local languages that is being highly adopted for communication. There are more than 300 million <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref> Urdu language users all around the world. The Urdu language is a mix-composition of different languages i.e., Arabic, Persian, Turkish, and Hindi <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref>. In Pakistan and India, more than 65 million people can speak, understand, and write the Urdu language <ns0:ref type='bibr' target='#b14'>[12]</ns0:ref>. It is one of the resource-poor, neglected languages <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref> and the national language <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref> of Pakistan: the 6 th the most populous 2 country in the world. 
Urdu is widely adopted as a second language all over Pakistan <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref><ns0:ref type='bibr' target='#b14'>[12]</ns0:ref><ns0:ref type='bibr' target='#b15'>[13]</ns0:ref><ns0:ref type='bibr' target='#b16'>[14]</ns0:ref>. In contrast to cursive languages, there exists noteworthy work of information extraction and classification for i.e., English, French, German, and many other non-cursive languages <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref><ns0:ref type='bibr' target='#b17'>[15]</ns0:ref>. In South-Asia other countries <ns0:ref type='bibr' target='#b17'>[15]</ns0:ref> i.e., Bangladesh, Iran, and Afghanistan also have a considerable number of Urdu language users. There are several tools that supports the usage of local languages on social media and news channels. Pak Urdu Installer 3 is also one of that software, it supports the Urdu language for textual communication. Sifting worthy insights from an immense amount of heterogeneous text existing on social media is an interesting and challenging task of Natural Language Processing (NLP). Event extraction and classification is one of those tasks. Event classification insights are helpful to develop various NLP applications i.e., to respond to emergencies, outbreaks, rain, flood and earthquake <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> etc. People share their intent, appreciation or criticism <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> i.e., enjoying discount offers by selling brands or criticizing the quality of the product. Earlier awareness of sentimental insights can be helpful to protect by business losses. The implementation of smart-cities possesses a lot of challenges; decision making, event management, communication and information retrieval. Extracting useful insights from an immense amount of text, dramatically enhance the worth of smart cities <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. Event information can be used to predict the effects of the event on the community, improve security and rescue the people. Classification of events can be used to collect relevant information about a specific topic, toptrends, stories, text summarization and question and answering systems <ns0:ref type='bibr' target='#b10'>[8]</ns0:ref><ns0:ref type='bibr' target='#b11'>[9]</ns0:ref>. Such information can be used to predict upcoming events, situations and happening. For example, protesting events reported on social media generally end with conflict among different parties, injuries, death of people and misuse of resources that cause anarchy. Some proactive measurements can be taken by the state to diffuse the situation and to prevent conflict. Similarly, event classification is crucial to monitor the law-and-order situation of the world. Extracting and classification of event information from Urdu language text is a unique, interesting, and challenging task. The characteristic features of the Urdu langue that made the event classification tasks more complex and challenging, are listed below. Similarly, the lack of resources i.e., the Part of speech tagger (PoS), words stemmer, datasets, and word annotators are some other factors that made the processing of the Urdu text complex. There exist a few noteworthy works related to the Urdu language text processing (See the literature for more details). 
All the above-mentioned factors motivated us to explore Urdu language text for our task.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Events</ns0:head><ns0:p>The definition of events varies from domain to domain. In literature, the event is defined in various aspects, such as a verb, adjective, and noun based depending on environmental situation <ns0:ref type='bibr' target='#b18'>[16]</ns0:ref><ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. In our research work event can be defined as 'An environmental change that occurs because of some reasons or actions for a specific period.' For example, the explosion of the gas container, a collision between vehicles, terrorist attacks and rainfall etc. There are several hurdles to process Urdu language text for event classification. Some of them are i.e., determining the boundary of events in a sentence, identifying event triggers, and assigning an appropriate label. Event Classification 'The automated way of assigning predefined labels of events to new instances by using pretrained classification models is called event classification.'. Classification is supervised machine learning; all the classifiers are trained on label instances of the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Multiclass Event Classification</ns0:head><ns0:p>It is the task of automatically assigning the most relevant one class from the given multiple classes. Some serious challenges of multiclassification are sentences overlapping in multiple classes <ns0:ref type='bibr' target='#b20'>[18]</ns0:ref><ns0:ref type='bibr' target='#b21'>[19]</ns0:ref> and imbalance instances of classes. These factors generally affect the overall performance of the classification system.</ns0:p></ns0:div> <ns0:div><ns0:head>Lack of Recourse</ns0:head><ns0:p>The researchers of cursive languages in the past were unexcited and vapid <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref> because of lacking resources i.e., dataset, part of speech tagger and word annotators etc. Therefore, a very low amount of research work exists for cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref>. But now, from the last few years, cursive languages have attracted researchers. The main reason behind the attraction is that a large amount of cursive language data was being generated rapidly over the internet. Now, some processing tools also have been developed i.e., Part of speech tagger, word stemmer, and annotator that play an important role by making research handier. But these tools are still limited, commercial, and close domain. Natural language processing is tightly coupled with resources i.e., processing resources, datasets, semantical, syntactical, and contextual information. Textual features i.e., Part of Speech (PoS) and semantic are important for text processing. Central Language of Engineering (CLE) 4 provides limited access to PoS tagger because of the close domain and paid that diverged the researcher to explore Urdu text more easily. Contextual features <ns0:ref type='bibr' target='#b23'>[21]</ns0:ref> i.e., grammatical insight (tense), and sequence of words play important role in text processing. Because of the morphological richness nature of Urdu, a word can be used for a different purpose and convey different meanings depending on the context of contents. Unfortunately, the Urdu language is still lacking such tools that are publicly available for research. Dataset is the core element of research. 
Dataset for the Urdu language generally exists for name entity extraction with a small number of instances which are &#61623; Enabling Minority Language Engineering (EMILLE) (only 200000 tokens) <ns0:ref type='bibr' target='#b24'>[22]</ns0:ref>.</ns0:p><ns0:p>&#61623; Becker-Riaz corpus (only 50000 tokens) <ns0:ref type='bibr' target='#b25'>[23]</ns0:ref> &#61623; International Joint Conference on Natural Language Processing (IJCNLP) workshop corpus (only 58252 tokens) &#61623; Computing Research Laboratory (CRL) annotated corpus (only 55,000 tokens are publicly available data corpora. <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> There is no specific dataset for events classification for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Our System</ns0:head><ns0:p>The overall working process of our proposed framework is given in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Contribution</ns0:head><ns0:p>&#61623; In this research article, we claim that we are the first ones who are exploring the Urdu language text to perform multi-class event classification at the sentence level using a machine learning approach, &#61623; A dataset that is larger than state-of-art used in experiments. In our best knowledge classification for twelve 12 different types of events never performed, &#61623; A comprehensive and detailed comparison of six machine learning algorithms is presented to find a more accurate model for event classification for the Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Limitations</ns0:head><ns0:p>&#61623; There is no specific Word2Vec model for Urdu language text, &#61623; There is also no availability of the free (open source) Part of Speech tagger and word stemmer for Urdu language text, &#61623; Also, there exists no publicly available dataset of Urdu text for sentence classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Classification of events from the textual dataset is a very challenging and interesting task of Natural Language Processing (NLP). An intent mining system developed <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> to facilitate citizens and cooperative authorities using a bag of the token model. The researchers explored the hybrid feature representation for binary classification and multi-label classification. It showed a 6% to 7% improvement in the top-down feature set processing approach. Intelligence information retrieval plays a vital role in the management of smart cities. Such information helps to enhance security and emergency management capabilities in smart cities <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. Textual contents on social media are explored in different ways to extract event information. Generally, the event has been defined as a verb, noun, and adjective <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref>. Event detection is a generic term that is further divided into event extraction and event classification. A combined neural network of convolutional and recurrent network was designed to extract event from English, Tamil, and Hindi languages. It showed 39.91%, 37.42% and 39.71% F_ Measure <ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. In the past, the researchers were impassive in cursive language, therefore a very limited amount of research work exist in cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b27'>[25]</ns0:ref>. 
Similarly, in the work of <ns0:ref type='bibr' target='#b27'>[25]</ns0:ref> the authors developed a multiple minimal reduct extraction algorithm, an improved version of the Quick Reduct algorithm <ns0:ref type='bibr' target='#b28'>[26]</ns0:ref>. The purpose of developing the algorithm was to produce a set of rules that assist in the classification of Urdu sentences. For evaluation purposes, an Arabic corpus containing more than 2500 documents was used, with each document classified into one of nine classes. In their experiments, the authors compared the results of the proposed approach when using multiple and single minimal reducts. The results showed that the proposed approach achieved an accuracy of 94% when using multiple reducts, outperforming the single-reduct method, which achieved an accuracy of 86%. The results of the experiments also showed that the proposed approach outperforms both the K-NN and J48 algorithms in terms of classification accuracy on the dataset at hand. Urdu textual content was explored in <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref> for classification using a majority voting algorithm. They categorized Urdu text into seven classes, i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. The dataset was evaluated using these algorithms: Linear SGD, Bernoulli Na&#239;ve Bayes, Linear SVM, Na&#239;ve Bayes, Random Forest, and Multinomial Na&#239;ve Bayes. Text classification is close to our problem, which is event classification from text at the sentence level, but it is not the same task. They did not report the overall accuracy of the system for multiple classes. Information about feature selection is also omitted by those researchers; in comparison, we disclose the feature selection, feature engineering, and classifier accuracy for multiple classes. Our dataset consists of 1,02,960 sentence instances and twelve (12) classes, which is comparatively much larger.</ns0:p><ns0:p>A framework <ns0:ref type='bibr' target='#b31'>[28]</ns0:ref> proposed a tweet classification system to help rescue people looking for assistance in a disaster such as a flood <ns0:ref type='bibr' target='#b32'>[29]</ns0:ref>. The developed system was based on the Markov Model and achieved 81% and 87% accuracy for classification and location detection, respectively. The features used in their system are described in <ns0:ref type='bibr' target='#b32'>[29]</ns0:ref>. Urdu news headlines were classified in <ns0:ref type='bibr' target='#b33'>[30]</ns0:ref> by using maximum indexes of vectors. They used stemmed and non-stemmed textual data for experiments. The system was specifically designed for text classification instead of event classification. The proposed system achieved 78.0% accuracy for the competitors and 86.6% accuracy for the proposed methodology. In comparison, we used Urdu-language sentences for classification and explored the textual features of the sentences. We have explored all the textual and numeric features, i.e., title, length, last-4-words, and their combinations (for more detail see Tab. 1), in detail in this paper; to our knowledge, these have not been reported in the state of the art. Twitter was used <ns0:ref type='bibr' target='#b34'>[31]</ns0:ref> to detect natural disasters, i.e., bush fires, earthquakes and cyclones, and humanitarian crises <ns0:ref type='bibr' target='#b35'>[32]</ns0:ref>.
To be aware of emergencies situation in natural disasters a framework work designed based on SVM and Na&#239;ve Bayes classifiers using word unigram, bi-gram, length, number of #Hash tag, and reply. These features were selected on sentence bases. SVM and Nave Bayes showed 87.5% and 86.2% accuracy respectively for tweet classification i.e., seeking help, offering for help, and none. A very popular social website (Twitter) textual data used <ns0:ref type='bibr' target='#b36'>[33]</ns0:ref> to extract and classify events for the Arabic language. Implementation and testing of Support Vector Machine (SVM) and Polynomial Network (PN) algorithms showed promising results for tweet classification 89.2% and 92.7%. Stemmer with PN and SVM magnified the classification 93.9% and 91.7% respectively. Social events <ns0:ref type='bibr' target='#b37'>[34]</ns0:ref> extracted assuming that to predict either parties or one of them aware of the event. The research aimed to find the relation between related events. Support Vector Machine (SVM) with kernel method was used on adopted annotated data of Automated Content Extraction (ACE). Structural information derived from the dependency tree and parsing tree is utilized to derive new structures that played important role in event identification and classification. The Tweet classification of the tweets related to the US Air Lines <ns0:ref type='bibr' target='#b43'>[40]</ns0:ref> is performed by the sentiment analysis companies that is not related to our work. We tried to classify events at sentence level that is challenging since the Urdu sentence contains very short features as compared to a tweet. It is pertinent to mention that the sentiment classification is different from the event classification.</ns0:p><ns0:formula xml:id='formula_0'>&#61623;</ns0:formula></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>Event classification for Urdu text is performed using supervised machine learning approach. A complete overview of the multi-class event classification methodology is given in Fig. <ns0:ref type='figure'>1</ns0:ref>. Textual data classification possesses a lot of challenges i.e., word similarity, poor grammatical structure, misuse of terms, and multilingual words. That is the reason, we decided to adopt a supervised classification approach to classify Urdu sentences into different categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>Urdu data were collected from popular social networks (Twitter), famous news channel blogs i.e., Geo News 5 ,Urdu Point 6 and BBC Urdu 7 . The data collection consists of the title, the main body, the published date, the location and the URL of the post. In the phase of data collection, a PHP based web scraper is used to crawl data from the above-mentioned social websites. A complete post is retrieved from the websites and stored in MariaDB (database). Our dataset consists of more than one million (1, 02, 960) label sentences of different types of events. All the different types of events used in our research work and their maximum number of instances are shown below in Fig. <ns0:ref type='figure'>2</ns0:ref>. There are twelve different types of events that we try to classify in our research work. These events are a factual representation of the state and situation of the people. In Fig. <ns0:ref type='figure'>2</ns0:ref>. imbalances number of instances of each event are given. 
It can be visualized that politics, sports, and Fraud &amp; Corruption have a higher number of instances while Inflation, Sexual Assault and Terrorist Attack have a lower number of instances. These imbalance number of instances made our classification more interesting and challenging. Multiclass events classification tasks are comprised of many classes. The different types of events that are used in our research work i.e., sports, Inflation, Murder &amp; Death, Terrorist Attack, Politics, Law and Order, Earthquake, Showbiz, Fraud &amp; Corruption, Weather, Sexual Assault, and Business. All the sentences of the dataset are labeled by the above-mentioned twelve <ns0:ref type='bibr' target='#b14'>(12)</ns0:ref> different types of events. Finally, a numeric (integer) value is assigned to each type of event label (See Tab. 2 for more details of label and its relevant numeric value).</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>The initial preprocessing steps are performed on the corpus to prepare it for machine learning algorithms. Because textual data cannot directly process by machine learning classifiers. It also contains many irrelevant words. The detail of all the preprocessing steps is given below. These steps were implemented in PHP-based environment. While the words tokenization is performed using the scikit library <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref> in python.</ns0:p></ns0:div> <ns0:div><ns0:head>Post Splitting</ns0:head><ns0:p>The PHP crawler extracted the body of the post. It comprises many sentences as a paragraph. In the Urdu language script, sentences end with a sign called '-'Hyphen (Khatma-&#8235;.)&#65175;&#65252;&#64423;&#8236; It is a standard punctuation mark in the Urdu language to represent the end of the sentence. As mentioned earlier, we are performing event classification at the sentence level. So, we split paragraphs of every post into sentences. Every line in the paragraphs ending at Hyphen is split as a single line.</ns0:p></ns0:div> <ns0:div><ns0:head>Stop Words Elimination</ns0:head><ns0:p>Generally, those words that occur frequently in text corpus are considered as stop words. These words merely affect the performance of the classifier. Punctuation marks ('!', '@',' #', etc.) and frequent words of the Urdu languages (&#8235;(&#64400;&#65166;&#8236;ka), &#8235;&#64400;&#64431;&#8236; (kay), &#8235;(&#64400;&#64509;&#8236;ki) etc.) are the common examples of stop words. All the stop words <ns0:ref type='bibr' target='#b31'>[28]</ns0:ref> that do not play an influential role in event classification for the Urdu language text are eliminated from the corpus. Stop words elimination reduces the memory and processing utilization and make the processing efficient.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Removal and Sentences Filtering</ns0:head><ns0:p>Our data were collected by different sources (see section 3). It contains a lot of noisy elements i.e., multilanguage words, links, mathematical characters, and special symbols, etc. To clean the corpus, we removed noise i.e., multilingual sentences, irrelevant links, and special characters. The nature of our problem confined us to define the limit of words per sentence. Because of the multiple types of events, it is probably hard to find the sentence of the same length. We decided to keep the maximum number of sentences in our corpus. All those sentences which are brief and extensive are removed from our corpus. 
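To make these preprocessing steps concrete, a minimal Python sketch of the splitting, cleaning, and length-filtering logic described above is shown below. The stop-word list and the length bounds are illustrative placeholders, not the study's full configuration, and the study itself implemented these steps in a PHP-based environment, using the scikit library only for tokenization.

import re

# Illustrative subset of Urdu stop words; the full list used in the study is larger.
URDU_STOP_WORDS = {"کا", "کے", "کی"}

def split_post(post_body):
    # Urdu sentences end with the full stop '۔' (khatma); split the post body on it.
    return [s.strip() for s in post_body.split("۔") if s.strip()]

def clean_and_tokenize(sentence):
    # Noise removal: drop links, Latin-script words, digits and common special symbols,
    # then drop stop words.
    sentence = re.sub(r"https?://\S+", " ", sentence)
    sentence = re.sub(r"[A-Za-z0-9@#&*_/\\|<>()\[\]{}]+", " ", sentence)
    return [tok for tok in sentence.split() if tok not in URDU_STOP_WORDS]

def keep_sentence(tokens, min_words=5, max_words=150):
    # Sentence filtering: discard sentences that are too brief or too extensive
    # (the bounds here are configurable placeholders).
    return min_words <= len(tokens) <= max_words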
In our dataset lot of sentences varying in length from 5 words to 250 words. We decided to use sentences that consist of 5 words to 150 words to lemmatize our research problem and to reduce the consumption of processing resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentence Labeling</ns0:head><ns0:p>In supervised learning, providing output (Label) detail in the corpus is a core element. Sentence labeling is an exhausting task that requires deep knowledge and an expert's skill of language. All the sentences were manually labeled by observing the title of the post and body of sentences by Urdu language experts (see Tab. 2 for sentence labeling). Three Urdu language experts were engaged in the task of sentence labeling. One of them is Ph.D. (Scholar) while the other two are M.Phil. To our best knowledge, it is the first largest labeled dataset for the multi-class event in the Urdu language.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>The performance of prediction or classification models is cohesively related to the selection of appropriate features. In our dataset six (6) features excluding 'Date' as a feature are considered valuable to classify Urdu news sentences into different classes. All the proposed features that are used in our research work are listed in Tab.1. Why these features selected? Last-4-Words of Sentence Occurrence, happening, and situations are generic terms that are used to represent events. In general, 'verb' represents event. The grammatical structure of Urdu language is Subject_ Object_ Verb (SOV) <ns0:ref type='bibr' target='#b34'>[31]</ns0:ref>, which depicts that verb, is laying in the last part of the sentences. For example, the sentence &#8235;&#65193;&#64510;&#65166;&#1748;'(&#8236; &#8235;&#64344;&#65166;&#65255;&#64509;&#8236; &#8235;&#64400;&#65262;&#8236; &#8235;&#64344;&#65262;&#65193;&#65261;&#64414;&#8236; &#8235;&#65255;&#64431;&#8236; &#8235;&#65165;&#65187;&#65252;&#65194;&#8236; -Ahmad ney podon ko pani dia'), (Ahmad watered the plants) follows the SOV format. 'Pani dia-&#8235;&#65193;&#64510;&#65166;&#8236; &#8235;'&#64344;&#65166;&#65255;&#64509;&#8236; is the verbal part of the sentence existing in the last two words of the sentence. It shows the happening or action of the event. Our research problem is to classify sentences into different classes of events. So, that last_4_ words are considered one of the vital features to identify events and non-event sentences. For example, in Tab. 3 in the event column underline/highlighted part of the sentence represents the happening of an event i.e., last_4_words in the sentence. While labeling the sentences we strictly concerned that only event sentences of different types should be labeled.</ns0:p></ns0:div> <ns0:div><ns0:head>Title of Post</ns0:head><ns0:p>Every conversation has a central point i.e., title. Textual, pictorial, or multimedia content that is posted on social networks as a blog post, at the paragraph level or sentence level describes the specific event. Although many posts contain irrelevant titles to the body of the message.</ns0:p><ns0:p>However, using the title as a feature to classify sentences is crucial because the title is assigned to the contents-based material.</ns0:p></ns0:div> <ns0:div><ns0:head>Length of Sentence</ns0:head><ns0:p>A sentence is a composition of many words. The length of the sentence is determined by the total number of words or tokens that exist in it. 
It can be used as a feature to classify sentences because many sentences of the same event have probably the same length.</ns0:p></ns0:div> <ns0:div><ns0:head>Features Engineering</ns0:head><ns0:p>Feature Engineering is a way of generating specific features from a given set of features and converting selected features to machine-understandable format. Our dataset is text-based that consists of more than 1 million (102960 labeled) instances i.e., sports, inflation, death, terrorist attack, and sexual assault, etc. 12 classes. As mentioned earlier that the Urdu language is one of the resource-poor languages and since there are no pre-trained word embedding models to generate the embedding vectors for Urdu language text, we could not use the facility of Word2Vec embedding technique. All the textual features are converted to numeric format i.e., (Term Frequency_ Inverse Document Frequency) TF_IDF and Count-Vectorizer. These two features TF_IDF and Count-Vectorizer are used in a parallel fashion. The scikit-learn package is used to transform text data into numerical value <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Count_ Vectorization</ns0:head><ns0:p>The process of converting words to numerical form is called vectorization. Its working strategy is based on term frequency. It counts the frequency of specific word w and builds the spare matrix-vector using bag-of-words (BOW). The length of the feature vector depends on the size of the bag-of-words i.e., dictionary.</ns0:p></ns0:div> <ns0:div><ns0:head>Term Frequency Inverse Document Frequency</ns0:head><ns0:p>It is a statistical measure of word w to understand the importance of that word for specific document d in the corpus. The importance of w proportionally related to frequency i.e., higher frequency more important. The mathematical formulas related to TF_IDF are given below: </ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>Classifiers are the algorithms used to classify data instances into predefined categories. Many classifiers exist that process the textual data using a machine learning approach. In our research work, we selected the six most popular machine learning algorithms i.e., Random Forest (RF) <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref>, K-Nearest Neighbor (KNN), Support Vector Machine (SVM, Decision Tree (DT), Na&#239;ve Bayes Multinomial (NBM) and Linear Regression (LR).</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Classifiers</ns0:head><ns0:p>In this section, we presented the detail of six classifiers that were used to classify the Urdu sentences using different proposed features.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>Random Forest (RF)</ns0:head><ns0:p>This model is comprised of several decision trees that acts as a building block of RF. Every decision tree is created using the rules i.e., if then else and the conditional statements etc. <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref>. These rules are then followed by the multiple decision trees to analyze the problem at discrete level. 2 k-Nearest Neighbor It is one of the statistical models that find the similarity among the data points using Euclidean distance <ns0:ref type='bibr' target='#b38'>[35]</ns0:ref>. 
It belongs to the category of lazy classifiers and is widely used for the classification and regression tasks.</ns0:p><ns0:formula xml:id='formula_1'>3</ns0:formula><ns0:p>Support Vector Machine It is based on statistical theory <ns0:ref type='bibr' target='#b39'>[36]</ns0:ref>, to draw hyperplane among points of dataset. It is highly recommended for regression and classification i.e., binary classification, multiclass classification and multilabel classification. It finds the decision boundary to identify different classes and maximize the margin.</ns0:p></ns0:div> <ns0:div><ns0:head>4</ns0:head><ns0:p>Decision Tree It is one of the supervised classifiers that works following certain rules. Data points/inputs are split according to specific condition <ns0:ref type='bibr' target='#b40'>[37]</ns0:ref>. It is used for regression and classification using nonparametric method because it can handle textual and numerical data. Learning from data point is accomplished by approximating sine curve with the combination of if-else like set of rules. The accuracy of model is related to the deepness and complexity of rules.</ns0:p></ns0:div> <ns0:div><ns0:head>5</ns0:head><ns0:p>Na&#239;ve Bayes Multinominal It is computationally efficient classifier for text classification using discrete features. It can also handle the textual data by converting into numerical <ns0:ref type='bibr' target='#b41'>[38]</ns0:ref> format using count vectorizer and term frequency inverse document frequency (tf-idf).</ns0:p></ns0:div> <ns0:div><ns0:head>6</ns0:head><ns0:p>Linear Regression It is highly recommended classifier for numerical output. It is used to perform prediction by learning linear relationship between independent variables (inputs) and dependent variable (output) <ns0:ref type='bibr' target='#b42'>[39]</ns0:ref>. Training Dataset A subpart of dataset that is used to train the models to learn the relationship among depended and independent variable is called training dataset. We divided our data into training and testing using train_ test_ split function of scikit library using python. Our training dataset consists of 70% of the dataset that is more than 70,000 labelled sentences of Urdu language text. Testing Dataset It also the subpart of the dataset that is usually smaller than size as compared to training dataset. In our research case, we decided to use 30% of dataset for testing and validating the performance of classifiers. It comprises of more than 30,000 instances/sentences of Urdu langue text.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Measuring Parameters</ns0:head><ns0:p>The most common performance measuring parameters <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref><ns0:ref type='bibr' target='#b17'>[15]</ns0:ref><ns0:ref type='bibr' target='#b18'>[16]</ns0:ref><ns0:ref type='bibr' target='#b19'>[17]</ns0:ref><ns0:ref type='bibr' target='#b20'>[18]</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>To evaluate our dataset, the Python package scikit-learn is used to perform event classification at the sentence level. A detailed comparison of results obtained by using six popular machine learning classifiers is given in the proceeding tables.</ns0:p><ns0:p>The results that are given in Tab. 4. describe the accuracy of all used machine learning classifiers using 'Last-4-words' as feature. It can be observed that out of classifiers Random Forest showed the highest score 52% accuracy. In Tab. 
5 among all classifiers Random Forest again showed the highest score 53% accuracy using 'Last-4-words and Length' as feature.</ns0:p><ns0:p>The accuracy of 'Title and Last-4-words' as features is 98% and 99% for Random Forest and K-Nearest Neighbor respectively as mentioned in Tab. <ns0:ref type='bibr' target='#b6'>6</ns0:ref>. In comparison to other features 'Title and Length' also showed the highest results 99% accurate for Random Forest and Decision Tree (See Tab. 7). Length as feature showed very low results, more detail of about other classifiers (See in Tab.8.).</ns0:p><ns0:p>To maintain the brevity of the paper and to provide validation details of our results, we expressed performance measuring parameter (precision, recall and f-measure) in details of only those two features that showed highest results i.e., 'Title and Last-4-words' and 'Title and Length'.</ns0:p><ns0:p>The performance measuring parameters of Random Forest and K-Nearest Neighbor using 'Title and Last-4-words' as feature is given in Tab. 9, Tab. 10, Tab.11 and Tab. 12.</ns0:p><ns0:p>Similarly, the Random forest and Decision Tree have showed the highest accuracy for sentence classification using 'Title and Length' as feature. The details of performance measuring parameters (precision, recall and f-measure) are also given in Tab. 13 and Tab. 14 to verify the results. Among all other features 'Title and Last-4-words' and 'Title and Last-4-words'of a sentence is used as a feature to classify events into different categories. Both features showed outstanding results to classify events at sentence level into different classes. The accuracy comparison of six classification algorithms for 'Title and Length' of the sentence and 'Title and Last-4-words'as features is shown in Fig. <ns0:ref type='figure'>3</ns0:ref> and Fig. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science number of users and a huge bulk of data on social networks. There is need to extract and classify information from other languages instead of only widely used languages like English. Because lot of information being shared in non-English languages. Local languages cannot be processed by the existing tools that are designed for English language.</ns0:p><ns0:p>Extracting and classification of events from resource-poor language is an interesting and challenging task. We explored many features but the 'Title and Last_4_ words' and 'Title and Length' showed outstanding results. The purpose of exploring multiple features and evaluating many classifiers is to design an accurate multiclass event classification for the Urdu language.</ns0:p><ns0:p>The lack of resources were the main barriers to use other feature vector generating techniques i.e., word2Vec, word embedding. TF_IDF feature generating technique played a vital role to achieve the highest accuracy for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A massive amount of Urdu textual data exists on social networks and news websites. Multiclass event classification for Urdu text at the sentence level is a challenging task because of few numbers of words and limited contextual information. We performed experiments by selecting appropriate features i.e., 'Title and Last_4_ words' and 'Title and Length'. These are the key features to achieve our expected results. 
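As a compact illustration of the experimental setup summarized here, the following scikit-learn sketch reproduces the pipeline described in the Materials &amp; Methods section: a 70/30 train-test split, tf-idf features, and one of the six compared classifiers. The corpus and feature strings below are toy placeholders rather than the paper's data. The classical weighting is tf-idf(t, d) = tf(t, d) × log(N / df(t)); scikit-learn's TfidfVectorizer applies a smoothed variant of the idf term, and classification_report prints the per-class precision (TP/(TP+FP)), recall (TP/(TP+FN)) and F1 values of the kind reported in Tables 9-14.

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer  # CountVectorizer is a drop-in alternative
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Toy placeholders: each string stands for a preprocessed Urdu feature string
# (e.g. title + last-4-words of a sentence); labels follow the 1..12 scheme of Tab. 2.
texts = ["sports title sports words", "sports title match words",
         "weather title rain words", "weather title storm words",
         "politics title speech words", "politics title vote words"]
labels = [1, 1, 10, 10, 5, 5]

# 70% training / 30% testing split, as in the experimental setup.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.30, random_state=0)

vectorizer = TfidfVectorizer()              # tf-idf feature vectors
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = RandomForestClassifier()              # one of the six compared classifiers
clf.fit(X_train_vec, y_train)
print(classification_report(y_test, clf.predict(X_test_vec), zero_division=0))

Swapping TfidfVectorizer for CountVectorizer gives the count-based variant, and the other five classifiers compared in the paper can be substituted for the random forest in the same way.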
Count_ Vectorizer and TF-IDF feature generating techniques are used to convert text into (numeric) real value for machine learning models because there no specific word embedding model like Word2Vec or Glove for Urdu language text. There exists no standard dataset for event classification from Urdu text. All other features i.e., length and Last_4_grams(words) showed poor results. Any one of the features i.e., length or last-4-words of sentence individually is inappropriate to classify sentences into multiple classes. All the necessary pre-processing steps were performed to prepare the dataset in appropriate format to learn more information. Without performing pre-processing steps, it utilizes more resources i.e., memory, time, and processing.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>&#61623; In a comprehensive review of Urdu literature, we found a few numbers of referential works related to Urdu text processing. One of the main issues associated with the Urdu language research is the unavailability of the appropriate corpus like the data set of Urdu sentences representing the event; the close-domain PoS tagger; the lexicons, and the annotator etc.</ns0:p><ns0:p>&#61623; There is a need to develop the supporting tools i.e., the PoS tagger, the annotation tools, the dataset of the Urdu-based languages having information about some information associated with the events, and the lexicons can be created to extend the research areas in the Urdu language.</ns0:p><ns0:p>&#61623; In the future, many other types of events and other domains of information like medical events, social, local, and religious events can be classified using the extension of machine learning i.e., deep learning. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science &#61623; In the future grammatical, contextual, and lexical information can be used to categorize events. Temporal information related to events can be further utilized to classify an event as real and retrospective.</ns0:p><ns0:p>&#61623; Classification of events can be performed at the document level and phrase level.</ns0:p><ns0:p>&#61623; Deep learning classifiers can be used for a higher number of event classes. </ns0:p></ns0:div> <ns0:div><ns0:head>&#8235;&#64344;&#64425;&#65248;&#64431;&#8236; &#8235;&#65193;&#65253;&#8236; &#8235;&#64380;&#65256;&#65194;&#8236; &#8235;&#65191;&#65262;&#65205;&#8236; &#8235;&#65247;&#65262;&#64402;&#8236; &#8235;&#65175;&#64429;&#64431;&#1748;&#8236;</ns0:head><ns0:p>Channd din pahly log khush they. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>&#61623;</ns0:head><ns0:label /><ns0:figDesc>Cursive nature of script &#61623; Morphologically enriched &#61623; Different structure of grammar &#61623; Right to the left writing style &#61623; No text capitalization</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Number of words in a tweet (w) &#61623; Verb in a tweet by (verb) &#61623; Number of verbs in a tweet by (v) &#61623; Position of the query by (Pos) &#61623; Word before query word (before) PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021) Manuscript to be reviewed Computer Science &#61623; Word after query word (after)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>i.e., precision, recall, and F1_measure are used to evaluate the proposed framework since these parameters are the key indicators while performing the classification in multiclass environment using imbalanced dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>4. In Fig 5., we presented the accuracy comparison of top the best classifiers with best proposed features.Event extraction and classification are tightly coupled with processing resources i.e., Part of speech tagger (PoS), Text annotators, and contextual insights. Usage of local languages being highly preferred over social media. Urdu is one of those languages that have a considerable</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Discussion</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Types of events and their labels in the dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Event</ns0:cell><ns0:cell cols='2'>Label Event</ns0:cell><ns0:cell>Label</ns0:cell></ns0:row><ns0:row><ns0:cell>Sports</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Inflation</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Murder and Death 3</ns0:cell><ns0:cell cols='2'>Fraud and Corruption 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Terrorist Attack</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Rain/Weather</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Politics</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Sexual Assault</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>Law and Order</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>12</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Last 4-words representing an event</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Event</ns0:cell><ns0:cell>Non_ Event</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8235;&#64400;&#65208;&#65252;&#64511;&#65198;&#8236; &#8235;&#65251;&#65204;&#65164;&#65248;&#64423;&#8236; &#8235;&#64400;&#65198;&#8236; &#8235;&#65247;&#64431;&#8236; &#8235;&#64400;&#65262;&#8236; &#8235;&#65165;&#65261;&#65197;&#8236; &#8235;&#64344;&#65166;&#64400;&#65204;&#65176;&#65166;&#65253;&#8236; &#8235;&#65251;&#64511;&#64415;&#8236; &#8235;&#65169;&#64429;&#65166;&#65197;&#65173;&#8236; &#8235;&#64380;&#64429;&#64397;&#8236; &#8235;&#65183;&#65256;&#64403;&#8236; &#8235;&#64424;&#64431;&#1748;&#8236; &#8235;&#64380;&#64401;&#64509;&#8236;</ns0:cell><ns0:cell>Massala Kashmir ko lay kar Pakistan aur Bharat mein jang cherr chuki hai.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Last _4_words 
accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>45%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>49% 49%</ns0:cell><ns0:cell>Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 52%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Title and last _4_words accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR</ns0:cell><ns0:cell>95%</ns0:cell><ns0:cell>Title and Last</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>97%</ns0:cell><ns0:cell>_4_words</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 98%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Length </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>17%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>32%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>32% 32%</ns0:cell><ns0:cell>Length</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 32%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>KNN TP, FN, FP and TN K-Nearest Neighbor</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Label</ns0:cell><ns0:cell>Type of Event</ns0:cell><ns0:cell>TP</ns0:cell><ns0:cell>FN</ns0:cell><ns0:cell>FP</ns0:cell><ns0:cell>TN</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Sports</ns0:cell><ns0:cell>5638</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Inflation</ns0:cell><ns0:cell>967</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Murder and Death</ns0:cell><ns0:cell>2077</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Terrorist Attack 858</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Politics</ns0:cell><ns0:cell>9931</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>law and order</ns0:cell><ns0:cell>2238</ns0:cell><ns0:cell>55</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>970</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>07</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>2242</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Fraud and corruption</ns0:cell><ns0:cell>3023</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Rain/weather</ns0:cell><ns0:cell>1031</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Sexual Assault</ns0:cell><ns0:cell>889</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>1001</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>04</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Random Forest performance using the title, and last _4_words features</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Label</ns0:cell><ns0:cell>Event</ns0:cell><ns0:cell cols='3'>Precision Recall F1_Measure</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Sports</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Inflation</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Murder and Death</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Terrorist Attack</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Politics</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>law and 
order</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Fraud and corruption</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Rain/weather</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='2'>Sexual Assault/Intercourse 1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Overall accuracy 98.53%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='2'>https://www.worldometers.info/world-population/population-by-country/ 3 http://www.mbilalm.com/download/pak-urdu-installer.php PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='4'>http://www.cle.org.pk/ PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:note> <ns0:note place='foot' n='5'>https://urdu.geo.tv/ 6 https://www.urdupoint.com/daily/</ns0:note> <ns0:note place='foot' n='7'>https://www.bbc.com/urdu PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:1:1:NEW 24 Feb 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" February 16, 20201 The Islamia University of Bahawalpur, Pakistan, Department of Software Engineering, Faculty of Computing. Dear Editor, We appreciate the reviewers for their constructive comments. All comments are addressed and included in the manuscript in the corresponding sections. Both the title and the abstract of the research article are edited as per suggestions given by the Editor to remove any ambiguity. We hope that the updated manuscript is in manageable form according to the criteria of the PeerJ Journal. The reply is sent on the behalf of all authors. Malik Daler Ali Awan (PhD) Lecturer Department of Software Engineering Faculty of Computing, The Islamia University of Bahawalpur. Responses to Reviewers Comments Reviewer 1 (Manar Alkhatib) Basic reporting The title of the paper is “Event classification from text existing on social media” it's not mentioned in the title and abstract, that the data is in Urdu language !!! Response: Thanks for your helping reply. The title is updated according to the suggestions of the reviewer, to clear the domain of our research work. Experimental design In the lines 182-183 the authors mentioned that “ In comparison, we used sentences of Urdu language for classification and explored the textual features of sentences. We disclosed all textual features in detail in this paper that were not reported ever in state-of-art.” Comment: The authors didn’t mention in detail all the textual features for the Urdu language, and they didn't provide enough examples for each feature so the reader can understand it. Response: The details of all the other features used in our research purpose are now included in the updated manuscript to prepare the document in manageable form. Some related examples are also included in the manuscript. The authors didn't add a section for the machine learning algorithms they used in the paper, or even add a references for these algorithms!!!! Response: “Machine Learning” section is added and the detail of all the algorithms used in our work is included in the updated manuscript, with suitable references. The authors didn't mention the stemmer or POS tool they used in the experiments. Response: The Urdu language is considered as one of the resource-poor languages and there exists only one part of speech tagger at Central Language of Engineering (CLE) that is available on payment. Therefore, we could not use any PoS tagger and stemmer in our work. The authors mentioned 'The dataset consists of more than 1 million (1,02,962) labelled instances of twelve (12) 22 different types of events.' what was the size of training data , and the size for testing data!!!! Response: The size of the dataset is more than 0.1 million (1,00,000) instances, for experimental purpose, we divided our dataset as 70% for training dataset (70, 000) and 30% for testing and validation dataset (30,000). The authors didn't include any example for their work!!! Response: The example of dataset is given in Table 2. Validity of the findings the authors didn't experiment all choices of the extracted features, and they just did the experiment on 'title and last 4 word' !!! only Response: We have selected all the features but in order to maintain the brevity of the article, we excluded those results having negligible accuracy, in submitted document. Now these results are also included. the tables 7-10 not explained. All the figures not explained. 
Response: The details of all the tables and the figures are excluded as per journal guidelines since it is recommended by the journal that not to repeat the contents that are already given in tables and figures. But now, on the recommendation of reviewer the detail of all tables and figures is added in the research paper. the findings not clear and accurate!!! Response: The code and dataset used in our experimental work are already provided to Peerj Journal. Reviewer 2 (Vaibhav Rupapara) Reviewer 2 (Vaibhav Rupapara) Basic reporting The author works in the text classification domain and performs an experiment on the event classification in the Urdu language. The author used state of the art machine learning models and show that the KNN outperform all machine learning models. The author used a dataset contain 1 million records and 12 different labels taken from social me dia. Experimental design 1-The author used TF-IDF and count vectorizer for features extraction. Why author used only TF-IDF and BoW count vectorizer? the author should use word2vec in comparison with TF-IDF and count vectorizer see the article 'Rustam, F., Ashraf, I., Mehmood, A., Ullah, S. and Choi, G.S., 2019. Tweets classification on the basis of sentiments for US airline companies. Entropy, 21(11), p.1078.' Response: The Word2Vec feature is not used in our work since the Urdu language is one of the resource-poor languages. The development of the customized W2Vec model from the available corpuses in hand is not worthy to use since it needs a huge amount of data to develop such models. The reference recommended by the reviewer is worthy to read but it solely describes the sentiment analysis performed on a resource rich language like English while our work consists of event classification for resource poor language Urdu. 2- Author should check the IDF formula again. Response: The typos and grammatical mistakes are also corrected and carefully proofread. 3- KNN is best performer when features set will be small. Find a supporting article for that where knn outperform on a large features set. Response: KNN is equally applicable for the large sets with some changes in parameters. Some noteworthy work of application of KNN on large datasets is give below: 1. Olivares, J., Kermarrec, A. M., & Chiluka, N. (2019). The out-of-core KNN awakens: the light side of computation force on large datasets. Computing, 101(1), 19-38. 2. Deng, Z., Zhu, X., Cheng, D., Zong, M., & Zhang, S. (2016). Efficient kNN classification algorithm for big data. Neurocomputing, 195, 143-148. 3. Yang, P., & Huang, B. (2008, December). KNN based outlier detection algorithm in large dataset. In 2008 International Workshop on Education Technology and Training & 2008 International Workshop on Geoscience and Remote Sensing (Vol. 1, pp. 611-613). IEEE. 4- Author used state of the art method which have already lots of used in classification what is the novality of author work. work in specific domain is not a novality. Response: It is pertinent to mention that we have used state-of-art classifiers that were not used for the work of same type on Urdu language, according to our knowledge. The novelty of our work lies in the fact that we have generated our own data set since there is no available any dataset that is suitable for our task. Secondly, the event generation in Urdu language is not performed to date, according to our knowledge. This paper may be a foundation work for the said task in near future. 
Validity of the findings No comment Comments for the Author 1- The experimental diagram should follow the full experimental flow. Response: The experimental diagram showed the details of flow of experimental work. 2- author work on specific language how to deal with preprocessing of text there should be a more clear discussion about the library and self-generated library if anything used by the author. Response: The pre-processing is performed at different level. We performed the pre-processing using PHP script to remove the noise, eliminate the stop words and hyperlinks and, the multilingual texts along with the Urdu numerals. In the second level, the tokenization is performed using python scikit library. 3- the author should perform a comparison with results without pre-processing of text. Response: It is vital to perform pre-processing in order achieve the better results. In phase of pre-processing all the unnecessary words, hyperlinks, noise, and multilingual words are removed. These words do not play any specific role to classify events. Although these words increase the processing time, decrease the performance of classifiers, and consumes more memory. In our work experiments without pre-processing leads to very poor results, that are unlucky unavailable. 4- grammar should be checked thoroughly. Response: The typos and grammatical mistakes are also corrected and carefully proofread. "
Here is a paper. Please give your review comments after reading it.
280
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Extraction and classification of multiclass events from local languages is challenging task because of resource lacking. In this research paper, we presented the event classification for the Urdu language text existing on social media and the news channels. The dataset contains more than 0.1 million (102,962) labeled instances of twelve ( <ns0:ref type='formula'>12</ns0:ref>) different types of events. Length and last-4-words of sentence are used as features to classify events. The Term Frequency-Inverse Document Frequency (tf-idf) showed the best results as a feature vector to evaluate the performance of the six popular machine learning classifiers.</ns0:p><ns0:p>Random Forest (RF), Decision Tree and k-Nearest Neighbor out-performed among the other classifiers. The highest accuracy achieved by Random Forest is 53.00%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the current digital and innovative era, the text is still the strongest and dominant source of communication instead of pictures, emoji, sounds and animations <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. The innovative environment of communication; real-time availability <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> of the Internet and the unrestricted communication mode of social networks have attracted billions of people around the world. Now, people are hooked together via Internet like a global village. They preferred to share insights about different topics, opinions, views, ideas, and events <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> on social networks in different languages. The one of the reasons i.e., Because social media and news channels have created space for local languages <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. Google input tool 1 provides language transliteration support for more than 88 different languages. Many tools provide the support to use local languages on social media for communication. The google language translator 2 is a platform that facilitates multilingual users of more than 100 languages for conversation. Generally, people prefer to communicate in local languages instead of non-local languages for sake of easiness. A cursive language Urdu is one of the local languages that is being highly adopted for communication. There are more than 300 million <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref> Urdu language users all around the world. The Urdu language is a mix-composition of different languages i.e., Arabic, Persian, Turkish, and Hindi <ns0:ref type='bibr' target='#b14'>[11]</ns0:ref>. In Pakistan and India, more than 65 million people can speak, understand, and write the Urdu language <ns0:ref type='bibr' target='#b15'>[12]</ns0:ref>. It is one of the resource-poor, neglected languages <ns0:ref type='bibr' target='#b16'>[13]</ns0:ref> and the national language <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref> of Pakistan: the 6 th the most populous 2 country in the world. Urdu is widely adopted as a second language all over Pakistan <ns0:ref type='bibr' target='#b14'>[11]</ns0:ref><ns0:ref type='bibr' target='#b15'>[12]</ns0:ref><ns0:ref type='bibr' target='#b16'>[13]</ns0:ref><ns0:ref type='bibr' target='#b17'>[14]</ns0:ref>. 
In contrast, noteworthy work on information extraction and classification exists for non-cursive languages such as English, French, and German <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref><ns0:ref type='bibr' target='#b18'>[15]</ns0:ref>. Other South-Asian countries <ns0:ref type='bibr' target='#b18'>[15]</ns0:ref>, i.e., Bangladesh, Iran, and Afghanistan, also have a considerable number of Urdu language users. Several tools support the usage of local languages on social media and news channels; Pak Urdu Installer 3 is one such software that supports the Urdu language for textual communication. Sifting worthy insights from an immense amount of heterogeneous text existing on social media is an interesting and challenging task of Natural Language Processing (NLP), and event extraction and classification is one of those tasks. Event classification insights are helpful in developing various NLP applications, e.g., responding to emergencies, outbreaks, rain, floods, and earthquakes <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. People share their intent, appreciation, or criticism <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref>, e.g., enjoying discount offers from selling brands or criticizing the quality of a product. Early awareness of such sentimental insights can help to protect against business losses. The implementation of smart cities poses many challenges: decision making, event management, communication, and information retrieval. Extracting useful insights from an immense amount of text dramatically enhances the worth of smart cities <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>. Event information can be used to predict the effects of an event on the community, improve security, and rescue people. Classification of events can be used to collect relevant information about a specific topic, top trends, stories, text summarization, and question answering systems <ns0:ref type='bibr' target='#b11'>[8]</ns0:ref><ns0:ref type='bibr' target='#b12'>[9]</ns0:ref>. Such information can also be used to predict upcoming events and situations. For example, protest events reported on social media generally end with conflict among different parties, injuries, deaths, and misuse of resources that cause anarchy; some proactive measures can be taken by the state to defuse the situation and prevent conflict. Similarly, event classification is crucial to monitor the law-and-order situation of the world. Extracting and classifying event information from Urdu language text is a unique, interesting, and challenging task. The characteristic features of the Urdu language that make event classification more complex and challenging are listed below.</ns0:p><ns0:p>&#61623; Cursive nature of script &#61623; Morphologically enriched &#61623; Different structure of grammar &#61623; Right to the left writing style &#61623; No text capitalization</ns0:p><ns0:p>Similarly, the lack of resources, i.e., a Part of Speech (PoS) tagger, word stemmers, datasets, and word annotators, is another factor that makes the processing of Urdu text complex. There exist only a few noteworthy works related to Urdu language text processing (see the related work for more details). All the above-mentioned factors motivated us to explore Urdu language text for our task.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Events</ns0:head><ns0:p>The definition of events varies from domain to domain.
In the literature, an event is defined in various ways, such as verb-, adjective-, or noun-based, depending on the environmental situation <ns0:ref type='bibr' target='#b19'>[16]</ns0:ref><ns0:ref type='bibr' target='#b20'>[17]</ns0:ref>. In our research work, an event is defined as 'an environmental change that occurs because of some reasons or actions for a specific period', for example, the explosion of a gas container, a collision between vehicles, a terrorist attack, or rainfall. There are several hurdles in processing Urdu language text for event classification, among them determining the boundary of events in a sentence, identifying event triggers, and assigning an appropriate label.</ns0:p></ns0:div> <ns0:div><ns0:head>Event Classification</ns0:head><ns0:p>Event classification can be defined as 'the automated way of assigning predefined event labels to new instances by using pretrained classification models'. Classification is supervised machine learning; all the classifiers are trained on labeled instances of the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Multiclass Event Classification</ns0:head><ns0:p>It is the task of automatically assigning the single most relevant class from the given multiple classes. Serious challenges of multiclass classification are sentences overlapping in multiple classes <ns0:ref type='bibr' target='#b21'>[18]</ns0:ref><ns0:ref type='bibr' target='#b22'>[19]</ns0:ref> and imbalanced numbers of instances per class. These factors generally affect the overall performance of the classification system.</ns0:p></ns0:div> <ns0:div><ns0:head>Lack of Resources</ns0:head><ns0:p>In the past, researchers paid little attention to cursive languages <ns0:ref type='bibr' target='#b16'>[13]</ns0:ref> because of the lack of resources, i.e., datasets, part of speech taggers, and word annotators. Therefore, very little research work exists for cursive languages such as Arabic, Persian, Hindi, and Urdu <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref>. Over the last few years, however, cursive languages have attracted researchers, mainly because a large amount of cursive language data is being generated rapidly over the internet. Some processing tools have now been developed, i.e., part of speech taggers, word stemmers, and annotators, that make research handier, but these tools are still limited, commercial, and closed-domain. Natural language processing is tightly coupled with resources, i.e., processing tools, datasets, and semantic, syntactic, and contextual information. Textual features, i.e., Part of Speech (PoS) and semantics, are important for text processing. Central Language of Engineering (CLE) 4 provides only limited access to its PoS tagger because it is closed-domain and paid, which discourages researchers from exploring Urdu text. Contextual features <ns0:ref type='bibr' target='#b25'>[21]</ns0:ref>, i.e., grammatical insight (tense) and the sequence of words, play an important role in text processing. Because of the morphological richness of Urdu, a word can be used for different purposes and convey different meanings depending on the context. Unfortunately, the Urdu language still lacks such tools that are publicly available for research. The dataset is the core element of research.
Dataset for the Urdu language generally exists for name entity extraction with a small number of instances which are &#61623; Enabling Minority Language Engineering (EMILLE) (only 200000 tokens) <ns0:ref type='bibr' target='#b26'>[22]</ns0:ref>.</ns0:p><ns0:p>&#61623; Becker-Riaz corpus (only 50000 tokens) <ns0:ref type='bibr' target='#b27'>[23]</ns0:ref> &#61623; International Joint Conference on Natural Language Processing (IJCNLP) workshop corpus (only 58252 tokens) &#61623; Computing Research Laboratory (CRL) annotated corpus (only 55,000 tokens are publicly available data corpora. <ns0:ref type='bibr' target='#b28'>[24]</ns0:ref> There is no specific dataset for events classification for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Our System</ns0:head><ns0:p>The overall working process of our proposed framework is given in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Contribution</ns0:head><ns0:p>&#61623; In this research article, we claim that we are the first ones who are exploring the Urdu language text to perform multi-class event classification at the sentence level using a machine learning approach, &#61623; A dataset that is larger than state-of-art used in experiments. In our best knowledge classification for twelve 12 different types of events never performed, &#61623; A comprehensive and detailed comparison of six machine learning algorithms is presented to find a more accurate model for event classification for the Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Limitations</ns0:head><ns0:p>&#61623; There is no specific Word2Vec model for Urdu language text, &#61623; There is also no availability of the free (open source) Part of Speech tagger and word stemmer for Urdu language text, &#61623; Also, there exists no publicly available dataset of Urdu language text for sentence classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Classification of events from the textual dataset is a very challenging and interesting task of Natural Language Processing (NLP). An intent mining system developed <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref> to facilitate citizens and cooperative authorities using a bag of the token model. The researchers explored the hybrid feature representation for binary classification and multi-label classification. It showed a 6% to 7% improvement in the top-down feature set processing approach. Intelligence information retrieval plays a vital role in the management of smart cities. Such information helps to enhance security and emergency management capabilities in smart cities <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>. Textual contents on social media are explored in different ways to extract event information. Generally, the event has been defined as a verb, noun, and adjective <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref>. Event detection is a generic term that is further divided into event extraction and event classification. A combined neural network of convolutional and recurrent network was designed to extract event from English, Tamil, and Hindi languages. It showed 39.91%, 37.42% and 39.71% F_ Measure <ns0:ref type='bibr' target='#b20'>[17]</ns0:ref>. In the past, the researchers were impassive in cursive language, therefore a very limited amount of research work exist in cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b29'>[25]</ns0:ref>. 
Similarly, in the work of <ns0:ref type='bibr' target='#b29'>[25]</ns0:ref>, the authors developed a multiple minimal reduct extraction algorithm, an improved version of the Quick reduct algorithm <ns0:ref type='bibr' target='#b30'>[26]</ns0:ref>. The purpose of the algorithm was to produce a set of rules that assist in the classification of Urdu sentences. For evaluation, an Arabic-based corpus containing more than 2500 documents was used to classify the documents into one of nine classes. In their experiments, the authors compared the results of the proposed approach when using multiple and single minimal reducts. The results showed that the approach achieved an accuracy of 94% when using multiple reducts, outperforming the single-reduct method, which achieved an accuracy of 86%. Their experiments also showed that the approach outperforms both the K-NN and J48 algorithms in classification accuracy on the dataset at hand. Urdu textual content was explored in <ns0:ref type='bibr' target='#b31'>[27]</ns0:ref> for classification using a majority voting algorithm. The authors categorized Urdu text into seven classes, i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. The dataset was evaluated using Linear SGD, Bernoulli Naïve Bayes, Linear SVM, Naïve Bayes, a random forest classifier, and Multinomial Naïve Bayes. A framework <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref> proposed a tweet classification system to rescue people looking for help in a disaster such as a flood <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref>. The developed system, based on the Markov Model, achieved 81% and 87% accuracy for classification and location detection, respectively. The features used in their system <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref> were the number of words in a tweet (w), the verb in a tweet (verb), the number of verbs in a tweet (v), the position of the query word (Pos), and the words before and after the query word. Urdu news headlines were classified in <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref> by using maximum indexes of vectors. They used stemmed and non-stemmed textual data for their experiments. The system was specifically designed for text classification instead of event classification and achieved 78.0% accuracy for the competitors and 86.6% for the proposed methodology. In comparison, we use Urdu language sentences for classification and explore their textual features. We have explored the textual and numeric features, i.e., title, length, last-4-words, and their combinations (for more detail see Tab. 1), which, to our knowledge, have not been reported in the state of the art. Twitter data were used in <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref> to detect natural disasters, i.e., bush fires, earthquakes, and cyclones, and humanitarian crises <ns0:ref type='bibr' target='#b36'>[32]</ns0:ref>. To stay aware of emergency situations during natural disasters, a framework was designed based on SVM and Naïve Bayes classifiers using word unigrams, bigrams, length, number of hashtags, and replies as features, selected at the sentence level. SVM and Naïve Bayes showed 87.5% and 86.2% accuracy, respectively, for classifying tweets as seeking help, offering help, or none. Textual data from a very popular social website (Twitter) were used in <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref> to extract and classify events for the Arabic language.
Implementation and testing of Support Vector Machine (SVM) and Polynomial Network (PN) algorithms showed promising results for tweet classification 89.2% and 92.7%. Stemmer with PN and SVM magnified the classification 93.9% and 91.7% respectively. Social events <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref> extracted assuming that to predict either parties or one of them aware of the event. The research aimed to find the relation between related events. Support Vector Machine (SVM) with kernel method was used on adopted annotated data of Automated Content Extraction (ACE). Structural information derived from the dependency tree and parsing tree is utilized to derive new structures that played important role in event identification and classification. The Tweet classification of the tweets related to the US Air Lines <ns0:ref type='bibr' target='#b45'>[40]</ns0:ref> is performed by the sentiment analysis companies that is not related to our work. We tried to classify events at sentence level that is challenging since the Urdu sentence contains very short features as compared to a tweet. It is pertinent to mention that the sentiment classification is different from the event classification. Multiclass event classification is reported <ns0:ref type='bibr' target='#b46'>[41]</ns0:ref> comprehensively, deep learning classifiers are used to classify events into different classes.</ns0:p><ns0:formula xml:id='formula_0'>&#61623;</ns0:formula></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>Event classification for Urdu text is performed using supervised machine learning approach. A complete overview of the multi-class event classification methodology is given in Fig. <ns0:ref type='figure'>1</ns0:ref>. Textual data classification possesses a lot of challenges i.e., word similarity, poor grammatical structure, misuse of terms, and multilingual words. That is the reason, we decided to adopt a supervised classification approach to classify Urdu sentences into different categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>Urdu data were collected from popular social networks (Twitter), famous news channel blogs i.e., Geo News 5 ,Urdu Point 6 and BBC Urdu 7 . The data collection consists of the title, the main body, the published date, the location and the URL of the post. In the phase of data collection, a PHP based web scraper is used to crawl data from the above-mentioned social websites. A complete post is retrieved from the websites and stored in MariaDB (database). Our dataset consists of 0.1 million (1 02, 960) label sentences of different types of events. All the different types of events used in our research work and their maximum number of instances are shown below in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>There are twelve different types of events that we try to classify in our research work. These events are a factual representation of the state and situation of the people. In Fig. <ns0:ref type='figure'>2</ns0:ref>. imbalances number of instances of each event are given. It can be visualized that politics, sports, and Fraud &amp; Corruption have a higher number of instances while Inflation, Sexual Assault and Terrorist Attack have a lower number of instances. These imbalance number of instances made our classification more interesting and challenging. Multiclass events classification tasks are comprised of many classes. 
The different types of events that are used in our research work i.e., sports, Inflation, Murder &amp; Death, Terrorist Attack, Politics, Law and Order, Earthquake, Showbiz, Fraud &amp; Corruption, Weather, Sexual Assault, and Business. All the sentences of the dataset are labeled by the above-mentioned twelve (12) different types of events. Finally, a numeric (integer) value is assigned to each type of event label (See Tab. 2 for more details of label and its relevant numeric value).</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>The initial preprocessing steps are performed on the corpus to prepare it for machine learning algorithms. Because textual data cannot directly process by machine learning classifiers. It also contains many irrelevant words. The detail of all the preprocessing steps is given below. These steps were implemented in PHP-based environment. While the words tokenization is performed using the scikit library <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref> in python.</ns0:p></ns0:div> <ns0:div><ns0:head>Post Splitting</ns0:head><ns0:p>The PHP crawler extracted the body of the post. It comprises many sentences as a paragraph. In the Urdu language script, sentences end with a sign called '-'Hyphen (Khatma-&#8235;.)&#65175;&#65252;&#64423;&#8236; It is a standard punctuation mark in the Urdu language to represent the end of the sentence. As mentioned earlier, we are performing event classification at the sentence level. So, we split paragraphs of every post into sentences. Every line in the paragraphs ending at Hyphen is split as a single line.</ns0:p></ns0:div> <ns0:div><ns0:head>Stop Words Elimination</ns0:head><ns0:p>Generally, those words that occur frequently in text corpus are considered as stop words. These words merely affect the performance of the classifier. Punctuation marks ('!', '@',' #', etc.) and frequent words of the Urdu languages (&#8235;(&#64400;&#65166;&#8236;ka), &#8235;&#64400;&#64431;&#8236; (kay), &#8235;(&#64400;&#64509;&#8236;ki) etc.) are the common examples of stop words. All the stop words <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref> that do not play an influential role in event classification for the Urdu language text are eliminated from the corpus. Stop words elimination reduces the memory and processing utilization and make the processing efficient.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Removal and Sentences Filtering</ns0:head><ns0:p>Our data were collected by different sources (see section 3). It contains a lot of noisy elements i.e., multilanguage words, links, mathematical characters, and special symbols, etc. To clean the corpus, we removed noise i.e., multilingual sentences, irrelevant links, and special characters. The nature of our problem confined us to define the limit of words per sentence. Because of the multiple types of events, it is probably hard to find the sentence of the same length. We decided to keep the maximum number of sentences in our corpus. All those sentences which are brief and extensive are removed from our corpus. In our dataset lot of sentences varying in length from 5 words to 250 words. We decided to use sentences that consist of 5 words to 150 words to lemmatize our research problem and to reduce the consumption of processing resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentence Labeling</ns0:head><ns0:p>In supervised learning, providing output (Label) detail in the corpus is a core element. 
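Before turning to labeling, the preprocessing pipeline described above (post splitting on the Urdu sentence-ending mark, stop-word elimination, noise removal, and the 5-to-150-word length filter) can be summarised in a short sketch. The authors implemented these steps in a PHP environment; the Python version below is only an illustrative re-expression, and the stop-word list, regular expressions, and example post are assumptions rather than the authors' actual resources.

```python
import re

URDU_FULL_STOP = "\u06d4"  # '۔' (Khatma), the sentence-ending mark in Urdu script

# Hypothetical, abbreviated stop-word list; the paper's full Urdu stop-word
# list is not reproduced here (ka, kay, ki are the examples given in the text).
URDU_STOP_WORDS = {"کا", "کے", "کی"}

def split_post_into_sentences(post_body):
    """Split a crawled post body into sentences on the Urdu full stop."""
    return [s.strip() for s in post_body.split(URDU_FULL_STOP) if s.strip()]

def clean_sentence(sentence):
    """Drop links, non-Urdu (Latin) words, special symbols, and stop words; return tokens."""
    sentence = re.sub(r"https?://\S+", " ", sentence)   # irrelevant links
    sentence = re.sub(r"[A-Za-z0-9]+", " ", sentence)   # multilingual words and digits
    sentence = re.sub(r"[^\w\s]", " ", sentence)        # punctuation marks and special symbols
    return [t for t in sentence.split() if t not in URDU_STOP_WORDS]

def keep_sentence(tokens, min_len=5, max_len=150):
    """Length filter described in the paper: keep sentences of 5 to 150 words."""
    return min_len <= len(tokens) <= max_len

# Toy post standing in for a crawled news item (first sentence is the paper's own example).
post = "احمد نے پودوں کو پانی دیا۔ یہ ایک مثال ہے۔"
corpus = []
for raw_sentence in split_post_into_sentences(post):
    tokens = clean_sentence(raw_sentence)
    if keep_sentence(tokens):
        corpus.append(" ".join(tokens))

print(corpus)  # only the first sentence survives the 5-word minimum
```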
Sentence labeling is an exhausting task that requires deep knowledge and an expert's skill of language. All the sentences were manually labeled by observing the title of the post and body of sentences by Urdu language experts (see Tab. 2 for sentence labeling). Three Urdu language experts were engaged in the task of sentence labeling. One of them is Ph.D. (Scholar) while the other two are M.Phil. To our best knowledge, it is the first largest labeled dataset for the multi-class event in the Urdu language.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>The performance of prediction or classification models is cohesively related to the selection of appropriate features. In our dataset six (6) features excluding 'Date' as a feature are considered valuable to classify Urdu news sentences into different classes. All the proposed features that are used in our research work are listed in Tab.1. Why these features selected? Last-4-Words of Sentence Occurrence, happening, and situations are generic terms that are used to represent events. In general, 'verb' represents event. The grammatical structure of Urdu language is Subject_ Object_ Verb (SOV) <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref>, which depicts that verb, is laying in the last part of the sentences. For example, the sentence &#8235;&#65193;&#64510;&#65166;&#1748;'(&#8236; &#8235;&#64344;&#65166;&#65255;&#64509;&#8236; &#8235;&#64400;&#65262;&#8236; &#8235;&#64344;&#65262;&#65193;&#65261;&#64414;&#8236; &#8235;&#65255;&#64431;&#8236; &#8235;&#65165;&#65187;&#65252;&#65194;&#8236; -Ahmad ney podon ko pani dia'), (Ahmad watered the plants) follows the SOV format. 'Pani dia-&#8235;&#65193;&#64510;&#65166;&#8236; &#8235;'&#64344;&#65166;&#65255;&#64509;&#8236; is the verbal part of the sentence existing in the last two words of the sentence. It shows the happening or action of the event. Our research problem is to classify sentences into different classes of events. So, that last_4_ words are considered one of the vital features to identify events and non-event sentences. For example, in Tab. 3 in the event column underline/highlighted part of the sentence represents the happening of an event i.e., last_4_words in the sentence. While labeling the sentences we strictly concerned that only event sentences of different types should be labeled.</ns0:p></ns0:div> <ns0:div><ns0:head>Length of Sentence</ns0:head><ns0:p>A sentence is a composition of many words. The length of the sentence is determined by the total number of words or tokens that exist in it. It can be used as a feature to classify sentences because many sentences of the same event have probably the same length.</ns0:p></ns0:div> <ns0:div><ns0:head>Features Engineering</ns0:head><ns0:p>Feature Engineering is a way of generating specific features from a given set of features and converting selected features to machine-understandable format. Our dataset is text-based that consists of more than 1 million (102960 labeled) instances i.e., sports, inflation, death, terrorist attack, and sexual assault, etc. 12 classes. As mentioned earlier that the Urdu language is one of the resource-poor languages and since there are no pre-trained word embedding models to generate the embedding vectors for Urdu language text, we could not use the facility of Word2Vec embedding technique.</ns0:p><ns0:p>All the textual features are converted to numeric format i.e., (Term Frequency_ Inverse Document Frequency) TF_IDF and Count-Vectorizer. 
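Before that conversion, the Length and Last-4-words columns themselves have to be derived from each labelled sentence. The minimal pandas sketch below shows one way to do this; the column names, toy sentences, and numeric label values are illustrative assumptions, not the authors' actual schema.

```python
import pandas as pd

# Toy frame standing in for the labelled corpus (102,960 sentences in the real dataset);
# the numeric event codes here are placeholders, not the actual Tab. 2 values.
df = pd.DataFrame({
    "sentence": [
        "احمد نے پودوں کو پانی دیا",   # example sentence from the paper (SOV order)
        "ٹیم نے فائنل میچ جیت لیا",    # toy sports-like sentence
    ],
    "event": [11, 0],
})

# Length feature: total number of words (tokens) in the sentence.
df["length"] = df["sentence"].str.split().str.len()

# Last-4-words feature: the final four tokens, where the Urdu verb usually sits.
df["last_4_words"] = df["sentence"].str.split().apply(lambda toks: " ".join(toks[-4:]))

print(df[["length", "last_4_words", "event"]])
```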
These two features TF_IDF and Count-Vectorizer are used in a parallel fashion. The scikit-learn package is used to transform text data into numerical value <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Count_ Vectorization</ns0:head><ns0:p>The process of converting words to numerical form is called vectorization. Its working strategy is based on term frequency. It counts the frequency of specific word w and builds the spare matrix-vector using bag-of-words (BOW). The length of the feature vector depends on the size of the bag-of-words i.e., dictionary. Term Frequency Inverse Document Frequency It is a statistical measure of word w to understand the importance of that word for specific document d in the corpus. The importance of w proportionally related to frequency i.e., higher frequency more important. The mathematical formulas related to TF_IDF are given below: </ns0:p><ns0:formula xml:id='formula_1'>Term Frequency (TF) =<ns0:label>(1)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>Classifiers are the algorithms used to classify data instances into predefined categories. Many classifiers exist that process the textual data using a machine learning approach. In our research work, we selected the six most popular machine learning algorithms i.e., Random Forest (RF) <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref>, K-Nearest Neighbor (KNN), Support Vector Machine (SVM, Decision Tree (DT), Na&#239;ve Bayes Multinomial (NBM) and Linear Regression (LR).</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Classifiers</ns0:head><ns0:p>In this section, we presented the detail of six classifiers that were used to classify the Urdu sentences using different proposed features. 1 Random Forest (RF) This model is comprised of several decision trees that acts as a building block of RF. Every decision tree is created using the rules i.e., if then else and the conditional statements etc. <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref>. These rules are then followed by the multiple decision trees to analyze the problem at discrete level.</ns0:p></ns0:div> <ns0:div><ns0:head>2</ns0:head><ns0:p>k-Nearest Neighbor It is one of the statistical models that find the similarity among the data points using Euclidean distance <ns0:ref type='bibr' target='#b39'>[35]</ns0:ref>. It belongs to the category of lazy classifiers and is widely used for the classification and regression tasks.</ns0:p></ns0:div> <ns0:div><ns0:head>3</ns0:head><ns0:p>Support Vector Machine It is based on statistical theory <ns0:ref type='bibr' target='#b40'>[36]</ns0:ref>, to draw hyperplane among points of dataset. It is highly recommended for regression and classification i.e., binary classification, multiclass classification and multilabel classification. It finds the decision boundary to identify different classes and maximize the margin.</ns0:p></ns0:div> <ns0:div><ns0:head>Decision Tree</ns0:head><ns0:p>It is one of the supervised classifiers that works following certain rules. Data points/inputs are split according to specific condition <ns0:ref type='bibr' target='#b42'>[37]</ns0:ref>. It is used for regression and classification using nonparametric method because it can handle textual and numerical data. Learning from data point is accomplished by approximating sine curve with the combination of if-else like set of rules. 
The accuracy of model is related to the deepness and complexity of rules.</ns0:p></ns0:div> <ns0:div><ns0:head>5</ns0:head><ns0:p>Na&#239;ve Bayes Multinominal It is computationally efficient classifier for text classification using discrete features. It can also handle the textual data by converting into numerical <ns0:ref type='bibr' target='#b43'>[38]</ns0:ref> format using count vectorizer and term frequency inverse document frequency (tf-idf).</ns0:p></ns0:div> <ns0:div><ns0:head>6</ns0:head><ns0:p>Linear Regression It is highly recommended classifier for numerical output. It is used to perform prediction by learning linear relationship between independent variables (inputs) and dependent variable (output) <ns0:ref type='bibr' target='#b44'>[39]</ns0:ref>. Training Dataset A subpart of dataset that is used to train the models to learn the relationship among depended and independent variable is called training dataset. We divided our data into training and testing using train_ test_ split function of scikit library using python. Our training dataset consists of 70% of the dataset that is more than 70,000 labelled sentences of Urdu language text. Testing Dataset It also the subpart of the dataset that is usually smaller than size as compared to training dataset. In our research case, we decided to use 30% of dataset for testing and validating the performance of classifiers. It comprises of more than 30,000 instances/sentences of Urdu langue text.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Measuring Parameters</ns0:head><ns0:p>The most common performance measuring parameters <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref><ns0:ref type='bibr' target='#b18'>[15]</ns0:ref><ns0:ref type='bibr' target='#b19'>[16]</ns0:ref><ns0:ref type='bibr' target='#b20'>[17]</ns0:ref><ns0:ref type='bibr' target='#b21'>[18]</ns0:ref> i.e., precision, recall, and F1_measure are used to evaluate the proposed framework since these parameters are the key indicators while performing the classification in multiclass environment using imbalanced dataset. </ns0:p></ns0:div> <ns0:div><ns0:head>Precision</ns0:head></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>To evaluate our dataset, the Python package scikit-learn is used to perform event classification at the sentence level. We extracted the last-4-words of each sentence and calculated the length of each sentence. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science (NBM). We proposed three features to classify sentences into different types of events i.e., Length, Last-4-words, and Length and Last-4-words. The resulting metrics of all the evaluated classifiers using length as feature is given in Tab 4. DT, RF, NBM and LR showed 32% accuracies that is very low. Comparatively second feature that is Last-4-words showed better result for these above-mentioned classifiers. Random Forest showed 52% accuracy that is considerable result as initiative for multiclass event classification in the Urdu language text. The results details of other classifiers can be seen in Tab 5. We also evaluated these classifiers using another feature that is the combination of the both Length and Last-4-grams. It also improved the 1% overall accuracy of proposed system. The Random Forest showed 53.00% accuracy. The further details of accuracies of other used machine learning models can be seen in Tab.6</ns0:p><ns0:p>The detail of the highest accuracies that is obtained using Last-4-words and Length of Last-4words is presented in Fig. <ns0:ref type='figure'>3</ns0:ref>. 
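To make the experimental setup concrete, the sketch below converts one textual feature into a tf-idf representation, performs the 70%/30% split with train_test_split, and reports accuracy, precision, recall, and F1 for the six classifiers. It is a minimal illustration only: the corpus is a tiny English-language placeholder for the preprocessed Urdu sentences, hyperparameters are scikit-learn defaults rather than the authors' settings, and LogisticRegression is used as a stand-in for the model the paper calls "Linear Regression".

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder corpus: short stand-ins for the engineered textual feature
# (the real experiment uses 102,960 labelled Urdu sentences and 12 classes).
texts = ["team won the cricket match", "goal scored in the final over",
         "petrol prices increased again", "inflation hit the food market",
         "heavy rain expected tomorrow", "storm damaged the coastal town"] * 4
labels = [0, 0, 1, 1, 2, 2] * 4

# Convert the textual feature to a numeric representation (tf-idf).
X = TfidfVectorizer().fit_transform(texts)

# 70% training / 30% testing split, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.30, random_state=42, stratify=labels)

classifiers = {
    "RF": RandomForestClassifier(random_state=42),
    "k-NN": KNeighborsClassifier(),
    "SVM": LinearSVC(),
    "DT": DecisionTreeClassifier(random_state=42),
    "NBM": MultinomialNB(),
    "LR": LogisticRegression(max_iter=1000),  # stand-in for the paper's 'Linear Regression'
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="weighted", zero_division=0)
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.2f} "
          f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```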
It can be observed that the accuracy of DT and k-NN using 'Length' and 'Last-4-words'as features fluctuated high to down and vice versa. While Random Forest maintained the consistency in accuracy for the above-mentioned features. In the evaluation of third feature that is the ' Length and Last-4-words' FR showed the improvement in results.</ns0:p><ns0:p>It is noteworthy to mention that SVM and k-NN showed better results for other languages i.e., English, Hindi, Arabic and Persian. In case of Urdu language text only Random Forest showed the highest accuracy. The reasons are that Urdu language has complex writing script, vocabulary and right to left writing style. The Urdu language is mixture of multiple languages i.e., Arabic, Persian, Hindi, Turkish and Urdu. The diversity in nature of Urdu language comparatively high from other languages.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Event extraction and classification are tightly coupled with processing resources i.e., Part of speech tagger (PoS), Text annotators, and contextual insights. Usage of local languages being highly preferred over social media. Urdu is one of those languages that have a considerable number of users and a huge bulk of data on social networks. There is need to extract and classify information from other languages instead of only widely used languages like English. Because lot of information being shared in non-English languages. Local languages cannot be processed by the existing tools that are designed for English language.</ns0:p><ns0:p>Extracting and classification of events from resource-poor language is an interesting and challenging task. There is no standard (benchmark) datasets and word embedding model like Word2Vec or Glove (Exists for English Language) for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A massive amount of Urdu textual data exists on social networks and news websites. Multiclass event classification for Urdu text at the sentence level is a challenging task because of few numbers of words and limited contextual information. We performed experiments by selecting appropriate features i.e., length, last-4-words and combination of both length and last-4-words. These are the key features to achieve our expected results. Count_ Vectorizer and TF-IDF &#61623; There is a need to develop the supporting tools i.e., the PoS tagger, the annotation tools, the dataset of the Urdu-based languages having information about some information associated with the events, and the lexicons can be created to extend the research areas in the Urdu language.</ns0:p><ns0:p>&#61623; In the future, many other types of events and other domains of information like medical events, social, local, and religious events can be classified using the extension of machine learning i.e., deep learning.</ns0:p><ns0:p>&#61623; In the future grammatical, contextual, and lexical information can be used to categorize events. Temporal information related to events can be further utilized to classify an event as real and retrospective.</ns0:p><ns0:p>&#61623; Classification of events can be performed at the document level and phrase level.</ns0:p><ns0:p>&#61623; Deep learning classifiers can be used for a higher number of event classes.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2 https://www.worldometers.info/world-population/population-by-country/3 http://www.mbilalm.com/download/pak-urdu-installer.php PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021) Manuscript to be reviewed Computer Science &#61623; Morphologically enriched &#61623; Different structure of grammar &#61623; Right to the left writing style &#61623; No text capitalization</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Number of words in a tweet (w) &#61623; Verb in a tweet by (verb) &#61623; Number of verbs in a tweet by (v) &#61623; Position of the query by (Pos) &#61623; Word before query word (before) &#61623; Word after query word (after)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>PeerJ&#61623;</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021)Manuscript to be reviewedComputer Sciencefeature generating techniques are used to convert text into (numeric) real value for machine learning models. Random Forest classification model showed 52% and 53% accuracy for Last-4words and combination of length and last-4-words.Future WorkIn a comprehensive review of Urdu literature, we found a few numbers of referential works related to Urdu text processing. One of the main issues associated with the Urdu language research is the unavailability of the appropriate corpus like the data set of Urdu sentences representing the event; the close-domain PoS tagger; the lexicons, and the annotator etc.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,357.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,322.50' type='bitmap' /></ns0:figure> <ns0:note place='foot' n='4'>http://www.cle.org.pk/ PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021)</ns0:note> <ns0:note place='foot' n='5'>https://urdu.geo.tv/ 6 https://www.urdupoint.com/daily/ 7 https://www.bbc.com/urdu PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:2:0:CHECK 7 Jul 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" July 07, 20201 The Islamia University of Bahawalpur, Pakistan, Department of Software Engineering, Faculty of Computing. Dear Editor Christopher Mungall, We appreciate the reviewers for their constructive comments. All comments are addressed and included in the manuscript in the corresponding sections. Both the title and the abstract of the research article are edited as per suggestions given by the Editor to remove any ambiguity. We hope that the updated manuscript is in manageable form according to the criteria of the PeerJ Journal. The reply is sent on the behalf of all authors. Malik Daler Ali Awan (PhD) Lecturer Department of Software Engineering Faculty of Computing, The Islamia University of Bahawalpur. Editor comments (Christopher Mungall) MAJOR REVISIONS In order to be published, major revisions are required in particular see Reviewer 2's comments about lack of detail in results. In multiple sections, you say your dataset consists of more than one million sentences/instances. But the actual number seems to be 0.1 million. Also do not write '1,02,962'. The comma is in the wrong place, and it misleads the reader into thinking there are one million, rather than one-tenth that. Write '102,962' instead. You also do not report the number of distinct posts. A single post has one title and many sentences. As reviewer 1 points out, you report unreasonably high accuracy when using Title as a feature. I suspect that when you split the dataset into training and test sets, you are splitting by sentence and not by post. It is therefore not surprising that you can predict with 100% accuracy based on the title! If this is true you need to redo the analysis to ensure that the features of the post do not end up polluting your results. I think your reporting of the results needs improving. Figures 4-6 are redundant with the tables. You should combine these into a smaller number of tables or figures. But this point is secondary if the evaluation is flawed. I also ask that you improve your literature review section. In particular, you say: > Urdu textual contents explored [27] for classification using the majority voting algorithm. They categorized Urdu text into seven classes i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. Dataset evaluated using these algorithms, Linear SGD, Bernoulli Naïve Bayes, Linear SVM, Naïve Bayes, random forest classifier, and Multinomial Naïve Bayes. Textual classification is close to our problem that is events classification by text at the sentence level, but it is completely different. They did not report the overall accuracy of the system for multiple classes. The information about feature selection is also omitted by the researchers but comparatively, we disclosed the feature selection, engineering, and accuracy of classifiers for multi-classes. Our dataset set consists of 1,02,960 instances of sentences and twelve (12) classes that are comparatively very greater Citation [27] (Daud et al) is a survey, summarizing the current state of the art. When you say 'they reported 94% precision and recall' and 'They did not report the overall accuracy of the system' this is misleading. Daud is summarizing the results of Sajjad and Schmid (2009) (amongst many others). It is not appropriate for you to criticize them for not reporting all statistics, as it is a review paper. 
I recommend that you take this section out, and instead cite the primary research in the papers cited in Daud et al, and compare your work against the primary research works. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter.  Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] Reply: Thank for your valuable suggestions. All the recommendation are incorporated in the article. Reviewer 1 (Manar Alkhatib) Basic reporting no comment Experimental design no comment Validity of the findings The accuracy for the ML algorithms with the features , are not reasonable , increasing from 17% to 85% , then the final results 99% !!! The authors didn't mention any weaknesses of their model. In many English and Arabic researches , the SVM and KNN , were the best ML algorithms , but in your research they are 's not, can the authors mention the reasons!! Reply: Thank for your valuable suggestions. All the recommendation are incorporated in the article. We agree with reviewer that performance of SVM and k-NN is better but in our case, both classifiers showed lower results. Furthermore, the structure and writing script of Urdu language is different from English and Arabic Language that cause the low performance of SVM and k-NN as compared to Random Forest. Comments for the Author The examples should be written in Urdu , and English language as well, Reply: Recommendations are incorporated in the article. Reviewer 2 (Vaibhav Rupapara) Basic reporting Comments are below Experimental design Comments are below Validity of the findings Comments are below Comments for the Author The author asks to revise the manuscript to improve the quality of the article and for this, I have mentioned some comments for the author. Lots of comments have cooperated but still, there are some weak areas in the manuscript. 1- Abstract's first 3 and 4 lines are too general and the overall abstract should be more attractive. best performers' results should be added to the abstract. 2- Results section should contain more detail about the results and represent the significance of the models. Reply: Thanks for your valuable suggestions, all possible changes has been made, and the recommendations are incorporated in the article. "
Here is a paper. Please give your review comments after reading it.
281
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Extraction and classification of multiclass events from local languages are challenging tasks because of resource lacking. In this research paper, we presented the event classification for the Urdu language text existing on social media and the news channels.</ns0:p><ns0:p>The dataset contains more than 0.1 million (102,962) labeled instances of twelve ( <ns0:ref type='formula'>12</ns0:ref>) different types of events. Title, Length, and last-4-words of a sentence are used as features to classify events. The Term Frequency-Inverse Document Frequency (tf-idf) showed the best results as a feature vector to evaluate the performance of the six popular machine learning classifiers. Random Forest (RF), Decision Tree, and k-Nearest Neighbor outperformed among the other classifiers. Random Forest and K-Nearest Neighbor are the classifiers that out-performed among other classifiers by achieving 98.00% and 99.00% accuracy, respectively.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the current digital and innovative era, the text is still the strongest and dominant source of communication instead of pictures, emoji, sounds, and animations <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. The innovative environment of communication; real-time availability <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> of the Internet and the unrestricted communication mode of social networks have attracted billions of people around the world. Now, people are hooked together via the Internet like a global village. They preferred to share insights about different topics, opinions, views, ideas, and events <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> on social networks in different languages. One of the reasons i.e., Because social media and news channels have created space for local languages <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. Google input tool 1 provides language transliteration support for more than 88 different languages. Many tools provide the support to use local languages on social media for communication. The google language translator 2 is a platform that facilitates multilingual users of more than 100 languages for conversation. Generally, people prefer to communicate in local languages instead of non-local languages for sake of easiness. A cursive language Urdu is one of the local languages that is being highly adapted for communication. There are more than 300 million <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref> Urdu language users all around the world. The Urdu language is a mix-composition of different languages i.e., Arabic, Persian, Turkish, and Hindi <ns0:ref type='bibr' target='#b14'>[11]</ns0:ref>. In Pakistan and India, more than 65 million people can speak, understand, and write the Urdu language <ns0:ref type='bibr' target='#b15'>[12]</ns0:ref>. It is one of the resource-poor, neglected languages <ns0:ref type='bibr' target='#b16'>[13]</ns0:ref> and the national language <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref> of Pakistan: the 6 th most populous 2 country in the world. Urdu is widely adopted as a second language all over Pakistan <ns0:ref type='bibr' target='#b14'>[11]</ns0:ref><ns0:ref type='bibr' target='#b15'>[12]</ns0:ref><ns0:ref type='bibr' target='#b16'>[13]</ns0:ref><ns0:ref type='bibr' target='#b17'>[14]</ns0:ref>. 
In contrast to cursive languages, there exists noteworthy work of information extraction and classification for i.e., English, French, German, and many other non-cursive languages <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref><ns0:ref type='bibr' target='#b18'>[15]</ns0:ref>. In South Asia other countries <ns0:ref type='bibr' target='#b18'>[15]</ns0:ref> i.e., Bangladesh, Iran, and Afghanistan also have a considerable number of Urdu language users. Several tools support the usage of local languages on social media and news channels. Pak Urdu Installer 3 is also one of that software, it supports the Urdu language for textual communication. Sifting worthy insights from an immense amount of heterogeneous text existing on social media is an interesting and challenging task of Natural Language Processing (NLP). Event extraction and classification is one of those tasks. Event classification insights are helpful to develop various NLP applications i.e., to respond to emergencies, outbreaks, rain, flood, and earthquake <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>, etc. People share their intent, appreciation, or criticism <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> i.e., enjoying discount offers by selling brands or criticizing the quality of the product. Earlier awareness of sentimental insights can be helpful to protect from business losses. The implementation of smart-cities possesses a lot of challenges; decision making, event management, communication, and information retrieval. Extracting useful insights from an immense amount of text, dramatically enhance the worth of smart cities <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>. Event information can be used to predict the effects of the event on the community, improve security and rescue the people. Classification of events can be used to collect relevant information about a specific topic, toptrends, stories, text summarization, and question and answering systems <ns0:ref type='bibr' target='#b11'>[8]</ns0:ref><ns0:ref type='bibr' target='#b12'>[9]</ns0:ref>. Such information can be used to predict upcoming events, situations, and happening. For example, protesting events reported on social media generally end with conflict among different parties, injuries, death of people, and misuse of resources that cause anarchy. Some proactive measurements can be taken by the state to diffuse the situation and to prevent conflict. Similarly, event classification is crucial to monitor the law-and-order situation of the world. Extracting and classification of event information from Urdu language text is a unique, interesting, and challenging task. The characteristic features of the Urdu langue that made the event classification tasks more complex and challenging are listed below.</ns0:p><ns0:p>&#61623; Cursive nature of the script 2 https://www.worldometers.info/world-population/population-by-country/ Similarly, the lack of resources i.e., the Part of speech tagger (PoS), words stemmer, datasets, and word annotators are some other factors that made the processing of the Urdu text complex. There exist a few noteworthy works related to the Urdu language text processing (See the literature for more details). All the above-mentioned factors motivated us to explore Urdu language text for our task.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Events</ns0:head><ns0:p>The definition of events varies from domain to domain. 
In literature, the event is defined in various aspects, such as a verb, adjective, and noun based depending on the environmental situation <ns0:ref type='bibr' target='#b19'>[16]</ns0:ref><ns0:ref type='bibr' target='#b20'>[17]</ns0:ref>. In our research work event can be defined as 'An environmental change that occurs because of some reasons or actions for a specific period.' For example, the explosion of the gas container, a collision between vehicles, terrorist attacks, and rainfall, etc. There are several hurdles to process Urdu language text for event classification. Some of them are i.e., determining the boundary of events in a sentence, identifying event triggers, and assigning an appropriate label. Event Classification 'The automated way of assigning predefined labels of events to new instances by using pretrained classification models is called event classification.'. Classification is supervised machine learning; all the classifiers are trained on label instances of the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Multiclass Event Classification</ns0:head><ns0:p>It is the task of automatically assigning the most relevant one class from the given multiple classes. Some serious challenges of multiclassification are sentences overlapping in multiple classes <ns0:ref type='bibr' target='#b21'>[18]</ns0:ref><ns0:ref type='bibr' target='#b22'>[19]</ns0:ref> and imbalanced instances of classes. These factors generally affect the overall performance of the classification system.</ns0:p></ns0:div> <ns0:div><ns0:head>Lack of Recourse</ns0:head><ns0:p>The researchers of cursive languages in the past were unexcited and vapid <ns0:ref type='bibr' target='#b16'>[13]</ns0:ref> because of lacking resources i.e., dataset, part of speech tagger and word annotators, etc. Therefore, a very low amount of research work exists for cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref>. But now, from the last few years, cursive languages have attracted researchers. The main reason behind the attraction is that a large amount of cursive language data was being generated rapidly over the internet. Now, some processing tools also have been developed i.e., Part of speech tagger, word stemmer, and annotator that play an important role by making research handier. But these tools are still limited, commercial, and close domain. Natural language processing is tightly coupled with resources i.e., processing resources, datasets, semantical, syntactical, and contextual information. Textual features i.e., Part of Speech (PoS) and semantic are important for text processing. Central Language of Engineering (CLE) 4 provides limited access to PoS tagger because of the close domain and paid that diverged the researcher to explore Urdu text more easily. Contextual features <ns0:ref type='bibr' target='#b24'>[21]</ns0:ref> i.e., grammatical insight (tense), and sequence of words play important role in text processing. Because of the morphological richness nature of Urdu, a word can be used for a different purpose and convey different meanings depending on the context of contents. Unfortunately, the Urdu language is still lacking such tools that are publicly available for research. Dataset is the core element of research. 
Dataset for the Urdu language generally exists for name entity extraction with a small number of instances that are &#61623; Enabling Minority Language Engineering (EMILLE) (only 200000 tokens) <ns0:ref type='bibr' target='#b26'>[22]</ns0:ref>.</ns0:p><ns0:p>&#61623; Becker-Riaz corpus (only 50000 tokens) <ns0:ref type='bibr' target='#b27'>[23]</ns0:ref> &#61623; International Joint Conference on Natural Language Processing (IJCNLP) workshop corpus (only 58252 tokens) &#61623; Computing Research Laboratory (CRL) annotated corpus (only 55,000 tokens are publicly available data corpora. <ns0:ref type='bibr' target='#b28'>[24]</ns0:ref> There is no specific dataset for events classification for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Our System</ns0:head><ns0:p>The overall working process of our proposed framework is given in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Contribution</ns0:head><ns0:p>&#61623; In this research article, we claim that we are the first ones who are exploring the Urdu language text to perform multi-class event classification at the sentence level using a machine learning approach, &#61623; A dataset that is larger than state-of-art used in experiments. In our best knowledge classification for twelve 12 different types of events never performed, &#61623; A comprehensive and detailed comparison of six machine learning algorithms is presented to find a more accurate model for event classification for the Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Limitations</ns0:head><ns0:p>&#61623; There is no specific Word2Vec model for Urdu language text, &#61623; There is also no availability of the free (open source) Part of Speech tagger and word stemmer for Urdu language text, &#61623; Also, there exists no publicly available dataset of Urdu language text for sentence classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Classification of events from the textual dataset is a very challenging and interesting task of Natural Language Processing (NLP). An intent mining system was developed <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> to facilitate citizens and cooperative authorities using a bag of the token model. The researchers explored the hybrid feature representation for binary classification and multi-label classification. It showed a 6% to 7% improvement in the top-down feature set processing approach. Intelligence information retrieval plays a vital role in the management of smart cities. Such information helps to enhance security and emergency management capabilities in smart cities <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>. The textual content on social media is explored in different ways to extract event information. Generally, the event has been defined as a verb, noun, and adjective <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref>. Event detection is a generic term that is further divided into event extraction and event classification. A combined neural network of the convolutional and recurrent network was designed to extract events from English, Tamil, and Hindi languages. It showed 39.91%, 37.42% and 39.71% F_ Measure <ns0:ref type='bibr' target='#b20'>[17]</ns0:ref>. In the past, the researchers were impassive in cursive language, therefore a very limited amount of research work exists in cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b29'>[25]</ns0:ref>. 
Similarly, in the work of <ns0:ref type='bibr' target='#b29'>[25]</ns0:ref>, the authors developed a multiple minimal reduct extraction algorithm which is an improved version of the Quick reduct algorithm <ns0:ref type='bibr' target='#b30'>[26]</ns0:ref>. The purpose of developing the algorithm is to produce a set of rules that assist in the classification of Urdu sentences. For evaluation purposes, an Arabic-based corpus containing more than 2500 documents was plugged in for classifying them into one of the nine classes. In the experiment, we compared the results of the proposed approach when using multiple and single minimal reducts. The results showed that the proposed approach had achieved an accuracy of 94% when using multiple reducts, which outperformed the single reduct method which achieved an accuracy of 86%. The results of the experiments also showed that the proposed approach outperforms both the K-NN and J48 algorithms regarding classification accuracy using the dataset on hand. Urdu textual contents were explored <ns0:ref type='bibr' target='#b31'>[27]</ns0:ref> for classification using the majority voting algorithm. They categorized Urdu text into seven classes i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. Dataset evaluated using these algorithms, Linear SGD, Bernoulli Na&#239;ve Bayes, Linear SVM, Na&#239;ve Bayes, random forest classifier, and Multinomial Na&#239;ve Bayes.</ns0:p><ns0:p>A framework <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref> proposed a tweet classification system to rescue people looking for help in a disaster like a flood <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref>. The developed system was based on the Markov Model achieve 81% and 87% accuracy for classification and location detection, respectively. The features used in their system are <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref>: To classify Urdu news headlines <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref> by using maximum indexes of vectors. They used stemmed and non-stemmed textual data for experiments. The system was specifically designed for text classification instead of event classification. The proposed system achieved 78.0% for competitors and 86.6% accuracy for the proposed methodology. In comparison, we used sentences of Urdu language for classification and explored the textual features of sentences. We have explored all the textual and numeric features i.e., title, length, last-4-words, and the combinations of these (for more detail see Tab. 1) in detail in this paper that were not reported ever in state-of-art according to our knowledge. Twitter <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref> to detect natural disasters i.e., bush fires, earthquakes and cyclones, and humanitarian crises <ns0:ref type='bibr' target='#b36'>[32]</ns0:ref>. To be aware of emergencies situation in natural disasters a framework work designed based on SVM and Na&#239;ve Bayes classifiers using word unigram, bi-gram, length, number of #Hash tag, and reply. These features were selected on a sentence basis. SVM and Nave Bayes showed 87.5% and 86.2% accuracy respectively for tweet classification i.e., seeking help, offering for help, and none. A very popular social website (Twitter) textual data was used <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref> to extract and classify events for the Arabic language. 
Implementation and testing of Support Vector Machine (SVM) and Polynomial Network (PN) algorithms showed promising results for tweet classification 89.2% and 92.7%. Stemmer with PN and SVM magnified the classification 93.9% and 91.7% respectively. Social events <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref> were extracted assuming that to predict either parties or one of them aware of the event. The research aimed to find the relation between related events. Support Vector Machine (SVM) with kernel method was used on adopted annotated data of Automated Content Extraction (ACE). Structural information derived from the dependency tree and parsing tree are utilized to derive new structures that played important role in event identification and classification. The Tweet classification of the tweets related to the US Air Lines <ns0:ref type='bibr' target='#b45'>[40]</ns0:ref> is performed by the sentiment analysis companies that are not related to our work. We tried to classify events at sentence level that is challenging since the Urdu sentence contains very short features as compared to a tweet. It is pertinent to mention that the sentiment classification is different from the event classification. Multiclass event classification is reported <ns0:ref type='bibr' target='#b46'>[41]</ns0:ref> comprehensively, deep learning classifiers are used to classify events into different classes.</ns0:p><ns0:formula xml:id='formula_0'>&#61623;</ns0:formula></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>Event classification for Urdu text is performed using a supervised machine learning approach. A complete overview of the multi-class event classification methodology is given in Fig. <ns0:ref type='figure'>1</ns0:ref>. Textual data classification possesses a lot of challenges i.e., word similarity, poor grammatical structure, misuse of terms, and multilingual words. That is the reason, we decided to adopt a supervised classification approach to classify Urdu sentences into different categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>Urdu data were collected from popular social networks (Twitter), famous news channel blogs i.e., Geo News 5 , Urdu Point 6 , and BBC Urdu 7 . The data collection consists of the title, the main body, the published date, the location, and the URL of the post. In the phase of data collection, a PHP-based web scraper is used to crawl data from the above-mentioned social websites. A complete post is retrieved from the websites and stored in MariaDB (database). Our dataset consists of 0.1 million (102, 960) label sentences of different types of events. All the different types of events used in our research work and their maximum number of instances are shown below in Fig. <ns0:ref type='figure'>2</ns0:ref>. There are twelve different types of events that we try to classify in our research work. These events are a factual representation of the state and the situation of the people. In Fig. <ns0:ref type='figure'>2</ns0:ref>. imbalances number of instances of each event are given. It can be visualized that politics, sports, and Fraud &amp; Corruption have a higher number of instances while Inflation, Sexual Assault, and Terrorist attacks have a lower number of instances. These imbalanced numbers of instances made our classification more interesting and challenging. Multiclass events classification tasks are comprised of many classes. 
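The class imbalance just described can be quantified directly from the labelled sentences. The sketch below does this with pandas; the class names mirror the high- and low-frequency events mentioned above, but the per-class counts are placeholders rather than the actual values plotted in Fig. 2.

```python
import pandas as pd

# Placeholder label column standing in for the 102,960 labelled sentences;
# the real per-class counts are those shown in Fig. 2.
labels = pd.Series(
    ["Politics"] * 900 + ["Sports"] * 800 + ["Fraud & Corruption"] * 700 +
    ["Inflation"] * 120 + ["Sexual Assault"] * 100 + ["Terrorist Attack"] * 90
)

counts = labels.value_counts()
print(counts)                                    # instances per event class
print("imbalance ratio:", counts.max() / counts.min())
```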
The different types of events that are used in our research work i.e., sports, Inflation, Murder &amp; Death, Terrorist attacks, Politics, Law and Order, Earthquake, Showbiz, Fraud &amp; Corruption, Weather, Sexual Assault, and Business. All the sentences of the dataset are labeled by the above-mentioned twelve (12) different types of events. Finally, a numeric (integer) value is assigned to each type of event label (See Tab. 2 for more details of the label and its relevant numeric value).</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>The initial preprocessing steps are performed on the corpus to prepare it for machine learning algorithms. Because textual data cannot directly process by machine learning classifiers. It also contains many irrelevant words. The detail of all the preprocessing steps is given below. These steps were implemented in a PHP-based environment. While the words tokenization is performed using the scikit library <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref> in python.</ns0:p></ns0:div> <ns0:div><ns0:head>Post Splitting</ns0:head><ns0:p>The PHP crawler extracted the body of the post. It comprises many sentences as a paragraph. In the Urdu language script, sentences end with a sign called '-'Hyphen (Khatma-&#8235;.)&#65175;&#65252;&#64423;&#8236; It is a standard punctuation mark in the Urdu language to represent the end of the sentence. As mentioned earlier, we are performing event classification at the sentence level. So, we split paragraphs of every post into sentences. Every line in the paragraphs ending at Hyphen is split as a single line.</ns0:p></ns0:div> <ns0:div><ns0:head>Stop Words Elimination</ns0:head><ns0:p>Generally, those words that occur frequently in text corpus are considered as stop words. These words merely affect the performance of the classifier. Punctuation marks ('!', '@',' #', etc.) and frequent words of the Urdu languages (&#8235;(&#64400;&#65166;&#8236;ka), &#8235;&#64400;&#64431;&#8236; (kay), &#8235;(&#64400;&#64509;&#8236;ki), etc.) are the common examples of stop words. All the stop words <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref> that do not play an influential role in event classification for the Urdu language text are eliminated from the corpus. Stop words elimination reduces memory and processing utilization and makes the processing efficient.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Removal and Sentences Filtering</ns0:head><ns0:p>Our data were collected from different sources (see section 3). It contains a lot of noisy elements i.e., multilanguage words, links, mathematical characters, and special symbols, etc. To clean the corpus, we removed noise i.e., multilingual sentences, irrelevant links, and special characters. The nature of our problem confined us to define the limit of words per sentence. Because of the multiple types of events, it is probably hard to find a sentence of the same length. We decided to keep the maximum number of sentences in our corpus. All those sentences which are brief and extensive are removed from our corpus. In our dataset lot of sentences varying in length from 5 words to 250 words. We decided to use sentences that consist of 5 words to 150 words to lemmatize our research problem and to reduce the consumption of processing resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentence Labeling</ns0:head><ns0:p>In supervised learning, providing output (Label) detail in the corpus is a core element. 
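Since each of the twelve event labels is mapped to an integer code (Tab. 2), a minimal sketch of such a mapping with scikit-learn's LabelEncoder is shown below. The concrete integer assigned to each label in Tab. 2 is not reproduced in the text, so the encoding here is illustrative only.

```python
from sklearn.preprocessing import LabelEncoder

# The twelve event types used in the study.
EVENT_TYPES = [
    "Sports", "Inflation", "Murder & Death", "Terrorist Attack", "Politics",
    "Law and Order", "Earthquake", "Showbiz", "Fraud & Corruption",
    "Weather", "Sexual Assault", "Business",
]

# LabelEncoder assigns codes alphabetically, which will generally differ from
# the specific numbering of Tab. 2; it is shown here only to illustrate the idea.
encoder = LabelEncoder().fit(EVENT_TYPES)
print(dict(zip(encoder.classes_, encoder.transform(encoder.classes_))))

# Encoding the labels attached to a few example sentences:
y = encoder.transform(["Politics", "Sports", "Weather"])
print(y)
```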
Sentence labeling is an exhausting task that requires deep knowledge and an expert's skill of language. All the sentences were manually labeled by observing the title of the post and body of sentences by Urdu language experts (see Tab. 2 for sentence labeling). Three Urdu language experts were engaged in the task of sentence labeling. One of them is Ph.D. (Scholar) while the other two are M.Phil. To our best knowledge, it is the first largest labeled dataset for the multi-class event in the Urdu language.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>The performance of prediction or classification models is cohesively related to the selection of appropriate features. In our dataset six (6) features excluding 'Date' as a feature are considered valuable to classify Urdu news sentences into different classes. All the proposed features that are used in our research work are listed in Tab.1. Why were these features selected? Last-4-Words of Sentence Occurrence, happening, and situations are generic terms that are used to represent events. In general, 'verb' represents an event. The grammatical structure of Urdu language is Subject_ Object_ Verb (SOV) <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref>, which depicts that verb, is laying in the last part of the sentences. For example, the sentence &#8235;&#65193;&#64510;&#65166;&#1748;'(&#8236; &#8235;&#64344;&#65166;&#65255;&#64509;&#8236; &#8235;&#64400;&#65262;&#8236; &#8235;&#64344;&#65262;&#65193;&#65261;&#64414;&#8236; &#8235;&#65255;&#64431;&#8236; &#8235;&#65165;&#65187;&#65252;&#65194;&#8236; -Ahmad ney podon ko pani dia'), (Ahmad watered the plants) follows the SOV format. 'Pani dia-&#8235;&#65193;&#64510;&#65166;&#8236; &#8235;'&#64344;&#65166;&#65255;&#64509;&#8236; is the verbal part of the sentence existing in the last two words of the sentence. It shows the happening or action of the event. Our research problem is to classify sentences into different classes of events. So, that last_4_ words are considered one of the vital features to identify events and non-event sentences. For example, in Tab. 3 in the event column underline/highlighted part of the sentence represents the happening of an event i.e., last_4_words in the sentence. While labeling the sentences we are strictly concerned that only event sentences of different types should be labeled.</ns0:p></ns0:div> <ns0:div><ns0:head>Title of Post</ns0:head><ns0:p>Every conversation has a central point i.e., title. Textual, pictorial, or multimedia content that is posted on social networks as a blog post, at the paragraph level or sentence level describes the specific event. Although many posts contain irrelevant titles to the body of the message. However, using the title as a feature to classify sentences is crucial because the title is assigned to the contents-based material.</ns0:p></ns0:div> <ns0:div><ns0:head>Length of Sentence</ns0:head><ns0:p>A sentence is a composition of many words. The length of the sentence is determined by the total number of words or tokens that exist in it. It can be used as a feature to classify sentences because many sentences of the same event have probably the same length.</ns0:p></ns0:div> <ns0:div><ns0:head>Title and Length</ns0:head><ns0:p>The proposed feature is the combination of the title of the post and the length of the sentence. 
The title represents the central idea of the post, and the length of the sentence varies from title to title.</ns0:p></ns0:div> <ns0:div><ns0:head>Title and Last-4-words</ns0:head><ns0:p>The combination of the title and the last_4_words of a sentence is very helpful for classifying Urdu sentences, because the last_4_words generally represent the occurrence/happening of some event.</ns0:p></ns0:div> <ns0:div><ns0:head>Length and Last-4-words</ns0:head><ns0:p>We also consider the combination of length with last_4_words as a valuable feature because the length of a sentence varies from event to event.</ns0:p></ns0:div> <ns0:div><ns0:head>Features Engineering</ns0:head><ns0:p>Feature engineering is a way of generating specific features from a given set of features and converting the selected features to a machine-understandable format. Our dataset is text-based and consists of more than 0.1 million (102,960) labeled instances spread over twelve (12) classes, i.e., sports, inflation, death, terrorist attack, sexual assault, etc. As mentioned earlier, the Urdu language is one of the resource-poor languages; since there are no pre-trained word embedding models to generate embedding vectors for Urdu language text, we could not use the Word2Vec embedding technique. All the textual features are therefore converted to numeric format using TF_IDF (Term Frequency-Inverse Document Frequency) and Count-Vectorizer. These two representations are used in a parallel fashion. The scikit-learn package is used to transform the text data into numerical values <ns0:ref type='bibr' target='#b23'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Count_ Vectorization</ns0:head><ns0:p>The process of converting words to numerical form is called vectorization. Its working strategy is based on term frequency: it counts the frequency of a specific word w and builds a sparse matrix using the bag-of-words (BOW) model. The length of the feature vector depends on the size of the bag-of-words, i.e., the dictionary.</ns0:p></ns0:div> <ns0:div><ns0:head>Term Frequency Inverse Document Frequency</ns0:head><ns0:p>It is a statistical measure of a word w that captures the importance of that word for a specific document d in the corpus. The importance of a word is proportionally related to its frequency, i.e., the higher the frequency, the more important the word. A minimal scikit-learn sketch of both vectorization schemes is given below. 
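The following short Python sketch illustrates how these two feature representations can be produced with scikit-learn, as described above. The example sentences (transliterated placeholders) and the default vectorizer parameters are assumptions for illustration; the paper does not report the exact settings that were used.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Two transliterated example sentences (hypothetical placeholders for Urdu text).
sentences = [
    "ahmad ney podon ko pani dia",
    "zalzalay kay jhatkay mehsoos kiye gaye",
]

# Bag-of-words counts: one column per vocabulary term, sparse matrix output.
count_vectorizer = CountVectorizer()
X_counts = count_vectorizer.fit_transform(sentences)

# TF-IDF weights: term counts re-weighted by inverse document frequency.
tfidf_vectorizer = TfidfVectorizer()
X_tfidf = tfidf_vectorizer.fit_transform(sentences)

print(X_counts.shape, X_tfidf.shape)   # (number of sentences, vocabulary size)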
The mathematical formulas related to TF_IDF are given below:</ns0:p><ns0:formula xml:id='formula_1'>Term Frequency (TF) = (Number of times term t appears in a document) / (Total number of terms in the document) <ns0:label>(1)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>Inverse Document Frequency (IDF) = Log e ((Total number of documents) / (Number of documents in which term t appears)) <ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>TF_IDF = TF * IDF <ns0:label>(3)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>Classifiers are the algorithms used to classify data instances into predefined categories. Many classifiers exist that process textual data using a machine learning approach. In our research work, we selected the six most popular machine learning algorithms, i.e., Random Forest (RF) <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref>, K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT), Na&#239;ve Bayes Multinomial (NBM), and Linear Regression (LR).</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Classifiers</ns0:head><ns0:p>In this section, we present the details of the six classifiers that were used to classify the Urdu sentences using the different proposed features. 1</ns0:p><ns0:p>Random Forest (RF) This model is comprised of several decision trees that act as the building blocks of RF. Every decision tree is created using rules, i.e., if-then-else conditional statements <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref>. These rules are then followed by the multiple decision trees to analyze the problem at a discrete level. 2 k-Nearest Neighbor It is one of the statistical models that find the similarity among data points using Euclidean distance <ns0:ref type='bibr' target='#b39'>[35]</ns0:ref>. It belongs to the category of lazy classifiers and is widely used for classification and regression tasks.</ns0:p></ns0:div> <ns0:div><ns0:head>3</ns0:head><ns0:p>Support Vector Machine It is based on statistical learning theory <ns0:ref type='bibr' target='#b40'>[36]</ns0:ref> and draws a hyperplane among the points of the dataset. It is highly recommended for regression and classification, i.e., binary classification, multiclass classification, and multilabel classification. It finds the decision boundary that identifies the different classes and maximizes the margin.</ns0:p></ns0:div> <ns0:div><ns0:head>4</ns0:head><ns0:p>Decision Tree It is one of the supervised classifiers that work by following certain rules. 
Data points/inputs are split according to the specific condition <ns0:ref type='bibr' target='#b41'>[37]</ns0:ref>. It is used for regression and classification using the non-parametric method because it can handle textual and numerical data. Learning from data points is accomplished by approximating the sine curve with the combination of an if-else-like set of rules. The accuracy of a model is related to the deepness and complexity of rules.</ns0:p></ns0:div> <ns0:div><ns0:head>5</ns0:head><ns0:p>Na&#239;ve Bayes Multinominal It is a computationally efficient classifier for text classification using discrete features. It can also handle the textual data by converting it into numerical <ns0:ref type='bibr' target='#b43'>[38]</ns0:ref> format using count vectorizer and term frequency-inverse document frequency (tf-idf).</ns0:p></ns0:div> <ns0:div><ns0:head>6</ns0:head><ns0:p>Linear Regression It is a highly recommended classifier for numerical output. It is used to perform prediction by learning linear relationships between independent variables (inputs) and dependent variables (output) <ns0:ref type='bibr' target='#b44'>[39]</ns0:ref>. Training Dataset A subpart of the dataset that is used to train the models to learn the relationship among dependent and independent variables is called the training dataset. We divided our data into training and testing using the train_ test_ split function of the scikit library using python. Our training dataset consists of 70% of the dataset that is more than 70,000 labeled sentences of Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Testing Dataset</ns0:head><ns0:p>It is also the subpart of the dataset that is usually smaller than size as compared to the training dataset. In our research case, we decided to use 30% of the dataset for testing and validating the performance of classifiers. It comprises more than 30,000 instances/sentences of Urdu langue text.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Measuring Parameters</ns0:head><ns0:p>The most common performance measuring parameters <ns0:ref type='bibr' target='#b17'>[14]</ns0:ref><ns0:ref type='bibr' target='#b18'>[15]</ns0:ref><ns0:ref type='bibr' target='#b19'>[16]</ns0:ref><ns0:ref type='bibr' target='#b20'>[17]</ns0:ref><ns0:ref type='bibr' target='#b21'>[18]</ns0:ref> i.e., precision, recall, and F1_measure are used to evaluate the proposed framework since these parameters are the key indicators while performing the classification in a multiclass environment using an imbalanced dataset. </ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>To evaluate our dataset, the Python package scikit-learn is used to perform event classification at the sentence level. We extracted the last-4-words of each sentence and calculated the length of each sentence. To obtain the best classification results we evaluated six machine learning classifiers among others i.e., Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), k-Nearest Neighbor, and Na&#239;ve Bayes Multinominal (NBM). We proposed three features i.e., Length, Last-4-words, and Length and Last-4-words to classify sentences into different types of events (see tab. 2). The results were obtained using 'length ' as the feature is shown in Tab. 4. The classifiers i.e., DT, RF, NBM, and LR showed 32% accuracies that is very low. The comparatively second feature that is Last-4-words showed better results for these above-mentioned classifiers. 
Random Forest showed 52% accuracy, which is a considerable result for a first attempt at multiclass event classification in Urdu language text. The detailed results for the other classifiers can be seen in Tab. 5. We also evaluated these classifiers using another feature, namely the combination of Length and Last-4-words. It improved the overall accuracy of the proposed system by 1%: the Random Forest showed 53.00% accuracy. Further details of the accuracies of the other machine learning models can be seen in Tab. <ns0:ref type='bibr' target='#b6'>6</ns0:ref>. Since the results obtained by using the above features are very low, we decided to use the title of the post as a feature to improve the performance of the system. We integrated the 'Title' of the post with each sentence of the same paragraph, which dramatically improves the accuracy of the system. We combined the 'Title' of the post with the other features, i.e., Length and Last-4-words. The details of the highest accuracies obtained by the combinations of these features, i.e., Last-4-words, Length, and Title, are given in Tab. 7 and Tab. 8. Random Forest and k-NN showed the highest accuracies. The details of the confusion matrix related to the proposed system (TP, FP, TN, FN) are also given in Tab. 9 and Tab. 10. The standard performance measuring parameters, i.e., precision, recall, and F1-measure, of the Random Forest and k-NN classifiers using 'Title and Last-4-words' as features are given in Tab. 11 and Tab. 12, respectively. Similarly, another combination of features, i.e., 'Title and Length', is used to enhance the accuracy of the system. The Decision Tree and Random Forest showed the highest results as compared to the other classifiers for this specific combination of features. A detailed summary of the results related to Decision Tree and Random Forest is given in Tab. 13 and Tab. 14, respectively. We finally present the comparison of the four classifiers that showed the highest results in fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p><ns0:p>The semantics of the script written in the Urdu language is quite different from that of the English and Arabic languages, which causes the lower performance of SVM and k-NN as compared to Random Forest.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Event extraction and classification are tightly coupled with processing resources, i.e., Part-of-Speech (PoS) taggers, text annotators, and contextual insights. The usage of local languages is highly preferred on social media, and Urdu is one of those languages that have a considerable number of users and a huge bulk of data on social networks. The evaluation reports obtained after analyzing multiple features, i.e., Length, Last-4-words, Title, and their combinations, led us to conclude that Length and Last-4-words are basic features for classifying multiclass events, but they achieved only 53% accuracy. To improve the accuracy of the proposed system, we integrated 'Title' as a feature with the other two features, i.e., Length and Last-4-words. The combination of 'Title' with 'Length and Last-4-words' improved the performance of the proposed system and showed the highest results. Furthermore, extracting and classifying events from a resource-poor language is an interesting and challenging task. 
There are no standard (benchmark) datasets and word embedding models like Word2Vec or Glove (Exists for the English Language) for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A massive amount of Urdu textual data exists on social networks and news websites. Multiclass event classification for Urdu text at the sentence level is a challenging task because of the few numbers of words and limited contextual information. We performed experiments by selecting appropriate features i.e., length, last-4-words, and combination of both length and last-4-words. These are the key features to achieve our expected results. Count_ Vectorizer and TF-IDF feature generating techniques are used to convert text into (numeric) real value for machine learning models. Random Forest classification model showed 52% and 53% accuracy for Last-4words and combination of length and last-4-words.</ns0:p><ns0:p>The title is the key feature that can dramatically improve the performance of event classification models that works on a sentence level. &#61623; There is a need to develop the supporting tools i.e., the PoS tagger, the annotation tools, the dataset of the Urdu-based languages having information about some information associated with the events, and the lexicons can be created to extend the research areas in the Urdu language.</ns0:p><ns0:p>&#61623; In the future, many other types of events and other domains of information like medical events, social, local, and religious events can be classified using the extension of machine learning i.e., deep learning.</ns0:p><ns0:p>&#61623; In the future grammatical, contextual, and lexical information can be used to categorize events. Temporal information related to events can be further utilized to classify an event as real and retrospective.</ns0:p><ns0:p>&#61623; Classification of events can be performed at the document level and phrase level.</ns0:p><ns0:p>&#61623; Deep learning classifiers can be used for a higher number of event classes. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>3 http://www.mbilalm.com/download/pak-urdu-installer.php PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021) Manuscript to be reviewed Computer Science &#61623; Morphologically enriched &#61623; Different structures of grammar &#61623; Right to the left writing style &#61623; No text capitalization</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Number of words in a tweet (w) &#61623; Verb in a tweet by (verb) &#61623; Number of verbs in a tweet by (v) &#61623; Position of the query by (Pos) &#61623; Word before query word (before) &#61623; Word after query word (after)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>PeerJ&#61623;</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021) In a comprehensive review of Urdu literature, we found a few numbers of referential works related to Urdu text processing. 
One of the main issues associated with the Urdu language research is the unavailability of the appropriate corpus like the data set of Urdu sentences representing the event; the close-domain PoS tagger; the lexicons, and the annotator, etc.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,357.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,178.87,525.00,352.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Proposed Features</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sr. No.</ns0:cell><ns0:cell>Feature _Name</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Length</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Last_4_ words</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Last_4_words and Length</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Title</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Title and Length</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Title and Last_4-words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Types of events and their labels in the dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Event</ns0:cell><ns0:cell cols='2'>Label Event</ns0:cell><ns0:cell>Label</ns0:cell></ns0:row><ns0:row><ns0:cell>Sports</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Inflation</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Murder and Death 3</ns0:cell><ns0:cell cols='2'>Fraud and Corruption 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Terrorist Attack</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Rain/Weather</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Politics</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Sexual Assault</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>Law and Order</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>12</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Length </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>17%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBM</ns0:cell><ns0:cell>32%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>32% 32%</ns0:cell><ns0:cell>Length</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 32%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Last _4_words accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>45%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>49% 49%</ns0:cell><ns0:cell>Last 
_4_words</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 52%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Last _4_words and Length Accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell cols='2'>Accuracy Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>46%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree Forest Random</ns0:cell><ns0:cell>49% 48% 53%</ns0:cell><ns0:cell>Length and Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>49%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Title and Last _4_words accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree Forest Random</ns0:cell><ns0:cell>95% 97% 98%</ns0:cell><ns0:cell>Title and Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>KNN TP, FN, FP and TN K-Nearest Neighbor</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Label</ns0:cell><ns0:cell>Type of Event</ns0:cell><ns0:cell>TP</ns0:cell><ns0:cell>FN</ns0:cell><ns0:cell>FP</ns0:cell><ns0:cell>TN</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Sports</ns0:cell><ns0:cell>5638</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Inflation</ns0:cell><ns0:cell>967</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Murder and Death</ns0:cell><ns0:cell>2077</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Terrorist Attack 858</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Politics</ns0:cell><ns0:cell>9931</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>law and order</ns0:cell><ns0:cell>2238</ns0:cell><ns0:cell>55</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>970</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>07</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>2242</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Fraud and corruption</ns0:cell><ns0:cell>3023</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Rain/weather</ns0:cell><ns0:cell>1031</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Sexual 
Assault</ns0:cell><ns0:cell>889</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>1001</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>04</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='4'>http://www.cle.org.pk/ PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021)</ns0:note> <ns0:note place='foot' n='5'>https://urdu.geo.tv/ 6 https://www.urdupoint.com/daily/ 7 https://www.bbc.com/urdu PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='3'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:3:0:CHECK 8 Aug 2021)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" August 08, 20201 The Islamia University of Bahawalpur, Pakistan, Department of Software Engineering, Faculty of Computing. Dear Editor Christopher Mungall, We appreciate the efforts of editor and reviewers for their constructive comments to improve the quality of research article. The concerns and comments raised by honourable editor and reviewers are addressed point by point and included in the manuscript. Both the title and the abstract of the research article are edited as per suggestions given by the Editor to remove any ambiguity. We hope that the updated manuscript is in manageable form according to the criteria of the PeerJ Journal. The reply is sent on the behalf of all authors. Malik Daler Ali Awan (PhD) Lecturer Department of Software Engineering Faculty of Computing, The Islamia University of Bahawalpur. Editor comments (Christopher Mungall) MAJOR REVISIONS In order to be published, major revisions are required in particular see Reviewer 2's comments about lack of detail in results. In multiple sections, you say your dataset consists of more than one million sentences/instances. But the actual number seems to be 0.1 million. Also do not write '1,02,962'. The comma is in the wrong place, and it misleads the reader into thinking there are one million, rather than one-tenth that. Write '102,962' instead. Reply to point 1: The statistics about the dataset are corrected in the updated script in order to avoid any confusion in understanding the data set details. You also do not report the number of distinct posts. A single post has one title and many sentences. As reviewer 1 points out, you report unreasonably high accuracy when using Title as a feature. I suspect that when you split the dataset into training and test sets, you are splitting by sentence and not by post. It is therefore not surprising that you can predict with 100% accuracy based on the title! If this is true you need to redo the analysis to ensure that the features of the post do not end up polluting your results. Reply to point 2: In our experimental work, the dataset contains the Urdu statements grouped under different events, as shown in Table 2. In our dataset, the features of the statement (or sentence) are Title of the statement, length of the statement in terms of number of words, the last 4 words. These features were used using all their possible combinations while training the classifier. We observed that the classifier showed the significant accuracy when the “Title” is combined with the “last 4 words” of the sentence when compared with the results in which the features were used individually. That is the reason; we have to include “Title” in combination with the other features to acquire higher accuracy. I think your reporting of the results needs improving. Figures 4-6 are redundant with the tables. You should combine these into a smaller number of tables or figures. But this point is secondary if the evaluation is flawed. Reply to the point 3: Since your concern is associated with the statements in the previous comment, we have to mention the Figure 4-6 to elaborate the results generated through performing set of experiments. > Urdu textual contents explored [27] for classification using the majority voting algorithm. They categorized Urdu text into seven classes i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. 
Dataset evaluated using these algorithms, Linear SGD, Bernoulli Naïve Bayes, Linear SVM, Naïve Bayes, random forest classifier, and Multinomial Naïve Bayes. Textual classification is close to our problem that is events classification by text at the sentence level, but it is completely different. They did not report the overall accuracy of the system for multiple classes. The information about feature selection is also omitted by the researchers but comparatively, we disclosed the feature selection, engineering, and accuracy of classifiers for multi-classes. Our dataset set consists of 1,02,960 instances of sentences and twelve (12) classes that are comparatively very greater Citation [27] (Daud et al) is a survey, summarizing the current state of the art. When you say 'they reported 94% precision and recall' and 'They did not report the overall accuracy of the system' this is misleading. Daud is summarizing the results of Sajjad and Schmid (2009) (amongst many others). It is not appropriate for you to criticize them for not reporting all statistics, as it is a review paper. I recommend that you take this section out, and instead cite the primary research in the papers cited in Daud et al, and compare your work against the primary research works. Reply to point 4: To make the literature review more comprehensive and meaningful, we presented the noteworthy work in informative way. Furthermore, we also opted out the work that presented the initial results rather than the comprehensive outcome. Furthermore we also removed out the suggested portion for literature. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter.  Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] Reply: Thank for your valuable suggestions. All the recommendation are incorporated in the article. Reviewer 1 (Manar Alkhatib) Basic reporting no comment Experimental design no comment Validity of the findings The accuracy for the ML algorithms with the features, are not reasonable, increasing from 17% to 85% , then the final results 99% !!! The authors didn't mention any weaknesses of their model. In many English and Arabic researches , the SVM and KNN , were the best ML algorithms , but in your research they are 's not, can the authors mention the reasons!! Reply: Thank for your valuable comment. We agree with the respected reviewer that performance of SVM and k-NN is better but in our case, both classifiers showed lower results since the semantics of the script written in Urdu language are quite different from that of English and Arabic Language that cause the low performance of SVM and k-NN as compared to Random Forest. Following are the limitations of SVM and K-NN that prevent the experimental work to produce significant results hen compared with Random Forest. The accuracy provided by applying the K-NN and SVM algorithms solely depend on the quality of the data i.e., the dataset must be noise free or has negligible noise in order to consume less computation time in pre-processing. 
In our experiment, our data set has a number of sentences in Urdu language in which each sentence may have repetitive terms and each term is having different semantics according to the rules of the Urdu language. The removal of these repetitive terms in pre-processing phase lead to produce a statement that will not be beneficial for extracting events that is our ultimate task. Therefore we have to work on this so called “noisy” data generating the less accuracy while using SVM and KNN. Comments for the Author The examples should be written in Urdu , and English language as well, Reply: Recommendations are incorporated in the article. Reviewer 2 (Vaibhav Rupapara) Basic reporting Comments are below Experimental design Comments are below Validity of the findings Comments are below Comments for the Author The author asks to revise the manuscript to improve the quality of the article and for this, I have mentioned some comments for the author. Lots of comments have cooperated but still, there are some weak areas in the manuscript. 1- Abstract's first 3 and 4 lines are too general and the overall abstract should be more attractive. best performers' results should be added to the abstract. Reply to point 1: The Abstract is updated to reflect the objective of our work. Furthermore, the results achieved are also depicted by giving the comparison analysis. 2- Results section should contain more detail about the results and represent the significance of the models. Reply to point 2: Thanks for your valuable suggestion. The Results section is updated according to the valuable suggestions of the respected reviewer and the details of various classifiers metrics are reported. "
Here is a paper. Please give your review comments after reading it.
282
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The real-time availability of the internet has engaged millions of users around the world.</ns0:p><ns0:p>The usage of regional languages is being preferred for effective and ease of communication that is causing multilingual data on social networks and news channels.</ns0:p><ns0:p>People share ideas, opinions, and events that are happening globally i.e., sports, inflation, protest, explosion, and sexual assault, etc. in regional (local) languages. Extraction and classification of events from multilingual data have become bottlenecks because of resource lacking. In this research paper, we presented the event classification for the Urdu language text existing on social media and the news channels by using machine learning classifiers. The dataset contains more than 0.1 million (102,962) labeled instances of twelve (12) different types of events. The title, its length, and last-4-words of a sentence are used as features to classify the events. The Term Frequency-Inverse Document Frequency (tf-idf) showed the best results as a feature vector to evaluate the performance of the six popular machine learning classifiers. The Random Forest (RF) and K-Nearest Neighbor (KNN) are among the classifiers that out-performed among other classifiers by achieving 98.00% and 99.00% accuracy, respectively. The novelty lies in the fact that the features aforementioned are not applied, up to the best of our knowledge, in the event extraction of the text written in the Urdu language.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the current digital and innovative era, text is still the strongest and dominant source of communication instead of pictures, emoji, sounds, and animations <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. The innovative environment of communication; real-time availability <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> of the Internet and unrestricted access for communication on social networks have attracted billions of people around the world. Now, people are hooked together via the Internet like a global village because of the Internet. They preferred to share detailed worthy information about different topics, opinions, views, ideas, and events <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> on social networks in different languages. The usage of different languages is being popular because social media and news channels have created space for local languages <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>.</ns0:p><ns0:p>Google input tool 1 provides language transliteration support for more than 88 different languages. Many other tools like the software ( Inpage and Pak-Urdu for the Urdu language) provide the support to use local languages on social media for communication. The google language translator 2 is a platform that facilitates multilingual users of more than 100 languages for conversation. Generally, people prefer to communicate in local languages instead of nonlocal languages for sake of easiness. A cursive language Urdu is one of the local languages that is being highly adapted for communication. There are more than 300 million <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref> Urdu language users all around the world that can speak, understand and write in the Urdu language. 
The Urdu language is a mixcomposition of different languages i.e., Arabic, Persian, Turkish, and Hindi <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref>. In Pakistan and India, more than 65 million people can speak, understand, and write the Urdu language <ns0:ref type='bibr' target='#b14'>[12]</ns0:ref>. It is one of the resource-poor, neglected languages <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref> and the national language <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref> of Pakistan: the 6 th most populous 2 country in the world. Urdu is also widely adopted and spoke as a second language all over Pakistan <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref><ns0:ref type='bibr' target='#b14'>[12]</ns0:ref><ns0:ref type='bibr' target='#b15'>[13]</ns0:ref><ns0:ref type='bibr' target='#b16'>[14]</ns0:ref>. In South Asia other countries <ns0:ref type='bibr' target='#b17'>[15]</ns0:ref> i.e., Bangladesh, Iran, and Afghanistan also have a considerable number of Urdu language users. Pak Urdu Installer 3 and Inpage are also common software, it support the Urdu language for textual writing (communication). In contrast to cursive languages, there exists noteworthy work of information extraction and classification for i.e., English, French, German, and many other non-cursive languages <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref><ns0:ref type='bibr' target='#b17'>[15]</ns0:ref>. Sifting worthy insights from an immense amount of heterogeneous text existing on social media is an interesting and challenging task of Natural Language Processing (NLP). Event extraction and classification is one of the NLP tasks. The information of event classification is helpful to develop various NLP applications i.e., to respond to emergencies, outbreaks, rain, flood, and earthquake <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>, etc. Generally, people share their intent, appreciation, or criticism <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> i.e., enjoying discount offers by selling brands or criticizing the quality of the product. Earlier awareness of sentimental insights can be helpful to protect from business losses. The implementation of smart-cities possess a lot of challenges; decision making, event management, communication, and information retrieval. Extracting useful insights from an immense amount of text, dramatically enhance the worth of smart cities <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. Event information can be used to predict the effects of the event on the community, improve security and rescue the people. Furthermore, classification of events can be used to collect relevant information about a specific topic, top-trends, stories, text summarization, and question and answering systems <ns0:ref type='bibr' target='#b10'>[8]</ns0:ref><ns0:ref type='bibr' target='#b11'>[9]</ns0:ref>. Such information can be used to predict upcoming events, situations, and happening. For example, protesting events reported on social media generally end with conflict among different parties, injuries, death of people, and misuse of resources that cause anarchy. Some proactive measurements can be taken by the state to diffuse the situation and to prevent conflict. Similarly, event classification is crucial to monitor the law-and-order situation of the world. Extracting and classification of event information from Urdu language text is a unique, interesting, and challenging task. 
The characteristic features of the Urdu langue that made the event classification tasks more complex and challenging are listed below. Similarly, the lack of resources i.e., the Part of speech tagger (PoS), words stemmer, datasets, and word annotators are some other factors that made the processing of the Urdu text complex. There exist a few noteworthy works related to the Urdu language text processing (See the literature for more details). All the above-mentioned factors motivated us to explore Urdu language text for our task.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head></ns0:div> <ns0:div><ns0:head>Concept of Events</ns0:head><ns0:p>The definition of events varies from domain to domain. In literature, the event is defined in various aspects, such as a verb, adjective, and noun based depending on the environmental situation <ns0:ref type='bibr' target='#b18'>[16]</ns0:ref><ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. In our research work event can be defined as 'An environmental change that occurs because of some reasons or actions for a specific period and influences the community.' For example, the explosion of the gas container, a collision between vehicles, terrorist attacks, and rainfall, etc. There are several hurdles to process Urdu language text for event classification. Some of them are i.e., determining the boundary of events in a sentence, identifying event triggers, and assigning an appropriate label. Event Classification 'The automated way of assigning predefined labels of events to new instances by using pretrained classification models is called event classification.'. Classification is supervised machine learning; all the classifiers are trained on label instances of the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Multiclass Event Classification</ns0:head><ns0:p>It is the task of automatically assigning the most relevant one class from the given multiple classes. Some serious challenges of multiclassification are sentences overlapping in multiple classes <ns0:ref type='bibr' target='#b20'>[18]</ns0:ref><ns0:ref type='bibr' target='#b21'>[19]</ns0:ref> and imbalanced instances of classes. These factors generally affect the overall performance of the classification system.</ns0:p></ns0:div> <ns0:div><ns0:head>Lack of Recourse</ns0:head><ns0:p>The researchers of cursive languages in the past were unexcited and vapid <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref> because of lacking resources i.e., dataset, part of speech tagger and word annotators, etc. Therefore, a very low amount of research work exists for cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref>. But now, from the last few years, cursive languages have attracted researchers. The main reason behind the attraction is that a large amount of cursive language data was being generated rapidly over the internet. Now, some processing tools also have been developed i.e., Part of speech tagger, word stemmer, and annotator that play an important role by making research handier. But these tools are still limited, commercial, and close domain. Natural language processing is tightly coupled with resources i.e., processing resources, datasets, semantical, syntactical, and contextual information. Textual features i.e., Part of Speech (PoS) and semantic are important for text processing. 
Central Language of Engineering (CLE) 4 provides limited access to PoS tagger because of the close domain and paid that diverged the researcher to explore Urdu text more easily. Contextual features <ns0:ref type='bibr' target='#b23'>[21]</ns0:ref> i.e., grammatical insight (tense), and sequence of words play important role in text processing. Because of the morphological richness nature of Urdu, a word can be used for a different purpose and convey different meanings depending on the context of contents. Unfortunately, the Urdu language is still lacking such tools that are publicly available for research. Dataset is the core element of research. Dataset for the Urdu language generally exists for name entity extraction with a small number of instances that are &#61623; Enabling Minority Language Engineering (EMILLE) (only 200000 tokens) <ns0:ref type='bibr' target='#b24'>[22]</ns0:ref>.</ns0:p><ns0:p>&#61623; Becker-Riaz corpus (only 50000 tokens) <ns0:ref type='bibr' target='#b25'>[23]</ns0:ref> &#61623; International Joint Conference on Natural Language Processing (IJCNLP) workshop corpus (only 58252 tokens) &#61623; Computing Research Laboratory (CRL) annotated corpus (only 55,000 tokens are publicly available data corpora. <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> There is no specific dataset for events classification for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Concept of Our System</ns0:head><ns0:p>The overall working process of our proposed framework is given in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Contribution</ns0:head><ns0:p>&#61623; In this research article, we claim that we are the first ones who are exploring the Urdu language text to perform multi-class event classification at the sentence level using a machine learning approach, &#61623; A dataset that is larger than state-of-art used in experiments. In our best knowledge classification for twelve 12 different types of events never performed, &#61623; A comprehensive and detailed comparison of six machine learning algorithms is presented to find a more accurate model for event classification for the Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Our Limitations</ns0:head><ns0:p>&#61623; There is no specific Word2Vec model for Urdu language text, &#61623; There is also no availability of the free (open source) Part of Speech tagger and word stemmer for Urdu language text, &#61623; Also, there exists no publicly available dataset of Urdu language text for sentence classification. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Classification of events from the textual dataset is a very challenging and interesting task of Natural Language Processing (NLP). An intent mining system was developed <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> to facilitate citizens and cooperative authorities using a bag of token model. The researchers explored the hybrid feature representation for binary classification and multi-label classification. It showed a 6% to 7% improvement in the top-down feature set processing approach. Intelligence information retrieval plays a vital role in the management of smart cities. Such information helps to enhance security and emergency management capabilities in smart cities <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. The textual content on social media is explored in different ways to extract event information. 
Generally, the event has been defined as a verb, noun, and adjective <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref>. Event detection is a generic term that is further divided into event extraction and event classification. A combined neural network of the convolutional and recurrent network was designed to extract events from English, Tamil, and Hindi languages. It showed 39.91%, 37.42% and 39.71% F_ Measure <ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. In the past, the researchers were impassive in cursive language, therefore a very limited amount of research work exists in cursive language i.e., Arabic, Persian Hindi, and Urdu <ns0:ref type='bibr' target='#b27'>[25]</ns0:ref>. Similarly, in the work of <ns0:ref type='bibr' target='#b27'>[25]</ns0:ref>, the authors developed a multiple minimal reduct extraction algorithm which is an improved version of the Quick reduct algorithm <ns0:ref type='bibr' target='#b28'>[26]</ns0:ref>. The purpose of developing the algorithm is to produce a set of rules that assist in the classification of Urdu sentences. For evaluation purposes, an Arabic-based corpus containing more than 2500 documents was plugged in for classifying them into one of the nine classes. In the experiment, we compared the results of the proposed approach when using multiple and single minimal reducts. The results showed that the proposed approach had achieved an accuracy of 94% when using multiple reducts, which outperformed the single reduct method which achieved an accuracy of 86%. The results of the experiments also showed that the proposed approach outperforms both the K-NN and J48 algorithms regarding classification accuracy using the dataset on hand. Urdu textual contents were explored <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref> for classification using the majority voting algorithm. They categorized Urdu text into seven classes i.e., Health, Business, Entertainment, Science, Culture, Sports, and Wired. They used 21769 news documents for classification and reported 94% precision and recall. Dataset evaluated using these algorithms, Linear SGD, Bernoulli Na&#239;ve Bayes, Linear SVM, Na&#239;ve Bayes, random forest classifier, and Multinomial Na&#239;ve Bayes.</ns0:p><ns0:p>A framework <ns0:ref type='bibr' target='#b31'>[28]</ns0:ref> proposed a tweet classification system to rescue people looking for help in a disaster like a flood <ns0:ref type='bibr' target='#b32'>[29]</ns0:ref>. The developed system was based on the Markov Model achieve 81% and 87% accuracy for classification and location detection, respectively. The features used in their system are <ns0:ref type='bibr' target='#b32'>[29]</ns0:ref>: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>&#61623;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To classify Urdu news headlines <ns0:ref type='bibr' target='#b33'>[30]</ns0:ref> by using maximum indexes of vectors. They used stemmed and non-stemmed textual data for experiments. The system was specifically designed for text classification instead of event classification. The proposed system achieved 78.0% for competitors and 86.6% accuracy for the proposed methodology. In comparison, we used sentences of Urdu language for classification and explored the textual features of sentences. We have explored all the textual and numeric features i.e., title, length, last-4-words, and the combinations of these (for more detail see Tab. 
1) in detail in this paper that were not reported ever in state-of-art according to our knowledge. Twitter <ns0:ref type='bibr' target='#b34'>[31]</ns0:ref> to detect natural disasters i.e., bush fires, earthquakes and cyclones, and humanitarian crises <ns0:ref type='bibr' target='#b35'>[32]</ns0:ref>. To be aware of emergencies situation in natural disasters a framework work designed based on SVM and Na&#239;ve Bayes classifiers using word unigram, bi-gram, length, number of #Hash tag, and reply. These features were selected on a sentence basis. SVM and Nave Bayes showed 87.5% and 86.2% accuracy respectively for tweet classification i.e., seeking help, offering for help, and none. A very popular social website (Twitter) textual data was used <ns0:ref type='bibr' target='#b36'>[33]</ns0:ref> to extract and classify events for the Arabic language. Implementation and testing of Support Vector Machine (SVM) and Polynomial Network (PN) algorithms showed promising results for tweet classification 89.2% and 92.7%. Stemmer with PN and SVM magnified the classification 93.9% and 91.7% respectively. Social events <ns0:ref type='bibr' target='#b37'>[34]</ns0:ref> were extracted assuming that to predict either parties or one of them aware of the event. The research aimed to find the relation between related events. Support Vector Machine (SVM) with kernel method was used on adopted annotated data of Automated Content Extraction (ACE). Structural information derived from the dependency tree and parsing tree is utilized to derive new structures that played important role in event identification and classification. The Tweet classification of the tweets related to the US Air Lines <ns0:ref type='bibr' target='#b43'>[40]</ns0:ref> is performed by the sentiment analysis companies that are not related to our work. We tried to classify events at sentence level that is challenging since the Urdu sentence contains very short features as compared to a tweet. It is pertinent to mention that the sentiment classification is different from the event classification. Multiclass event classification is reported <ns0:ref type='bibr' target='#b44'>[41]</ns0:ref> comprehensively, deep learning classifiers are used to classify events into different classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>Event classification for Urdu text is performed using a supervised machine learning approach. A complete overview of the multi-class event classification methodology is given in Fig. <ns0:ref type='figure'>1</ns0:ref>. Textual data classification possesses a lot of challenges i.e., word similarity, poor grammatical structure, misuse of terms, and multilingual words. That is the reason, we decided to adopt a supervised classification approach to classify Urdu sentences into different categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Urdu data were collected from popular social networks (Twitter), famous news channel blogs i.e., Geo News 5 , Urdu Point 6 , and BBC Urdu 7 . The data collection consists of the title, the main body, the published date, the location, and the URL of the post. In the phase of data collection, a PHP-based web scraper is used to crawl data from the above-mentioned social websites. 
A complete post is retrieved from the websites and stored in MariaDB (database). Our dataset consists of 0.1 million (102, 960) label sentences of different types of events. All the different types of events used in our research work and their maximum number of instances are shown below in Fig. <ns0:ref type='figure'>2</ns0:ref>. There are twelve different types of events that we try to classify in our research work. These events are a factual representation of the state and the situation of the people. In Fig. <ns0:ref type='figure'>2</ns0:ref>. imbalances number of instances of each event are given. It can be visualized that politics, sports, and Fraud &amp; Corruption have a higher number of instances while Inflation, Sexual Assault, and Terrorist attacks have a lower number of instances. These imbalanced numbers of instances made our classification more interesting and challenging. Multiclass events classification tasks are comprised of many classes. The different types of events that are used in our research work i.e., sports, Inflation, Murder &amp; Death, Terrorist attacks, Politics, Law and Order, Earthquake, Showbiz, Fraud &amp; Corruption, Weather, Sexual Assault, and Business. All the sentences of the dataset are labeled by the above-mentioned twelve (12) different types of events. Finally, a numeric (integer) value is assigned to each type of event label (See Tab. 2 for more details of the label and its relevant numeric value).</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>The initial preprocessing steps are performed on the corpus to prepare it for machine learning algorithms. Because textual data cannot directly process by machine learning classifiers. It also contains many irrelevant words. The detail of all the preprocessing steps is given below. These steps were implemented in a PHP-based environment. While the words tokenization is performed using the scikit library <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref> in python.</ns0:p></ns0:div> <ns0:div><ns0:head>Post Splitting</ns0:head><ns0:p>The PHP crawler extracted the body of the post. It comprises many sentences as a paragraph. In the Urdu language script, sentences end with a sign called '-'Hyphen (Khatma-&#8235;.)&#65175;&#65252;&#64423;&#8236; It is a standard punctuation mark in the Urdu language to represent the end of the sentence. As mentioned earlier, we are performing event classification at the sentence level. So, we split paragraphs of every post into sentences. Every line in the paragraphs ending at Hyphen is split as a single line.</ns0:p></ns0:div> <ns0:div><ns0:head>Stop Words Elimination</ns0:head><ns0:p>Generally, those words that occur frequently in text corpus are considered as stop words. These words merely affect the performance of the classifier. Punctuation marks ('!', '@',' #', etc.) and frequent words of the Urdu languages (&#8235;(&#64400;&#65166;&#8236;ka), &#8235;&#64400;&#64431;&#8236; (kay), &#8235;(&#64400;&#64509;&#8236;ki), etc.) are the common examples of stop words. All the stop words <ns0:ref type='bibr' target='#b31'>[28]</ns0:ref> that do not play an influential role in event classification for the Urdu language text are eliminated from the corpus. Stop words elimination reduces memory and processing utilization and makes the processing efficient.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Removal and Sentences Filtering</ns0:head><ns0:p>Our data were collected from different sources (see section 3). 
It contains a lot of noisy elements i.e., multilanguage words, links, mathematical characters, and special symbols, etc. To clean the corpus, we removed noise i.e., multilingual sentences, irrelevant links, and special characters. The nature of our problem confined us to define the limit of words per sentence. Because of the multiple types of events, it is probably hard to find a sentence of the same length. We decided to keep the maximum number of sentences in our corpus. All those sentences which are brief and extensive are removed from our corpus. In our dataset lot of sentences varying in length from 5 words to 250 words. We decided to use sentences that consist of 5 words to 150 words to lemmatize our research problem and to reduce the consumption of processing resources.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentence Labeling</ns0:head><ns0:p>In supervised learning, providing output (Label) detail in the corpus is a core element. Sentence labeling is an exhausting task that requires deep knowledge and an expert's skill of language. All the sentences were manually labeled by observing the title of the post and body of sentences by Urdu language experts (see Tab. 2 for sentence labeling). Three Urdu language experts were engaged in the task of sentence labeling. One of them is Ph.D. (Scholar) while the other two are M.Phil. To our best knowledge, it is the first largest labeled dataset for the multi-class event in the Urdu language.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>The performance of prediction or classification models is cohesively related to the selection of appropriate features. In our dataset six (6) features excluding 'Date' as a feature are considered valuable to classify Urdu news sentences into different classes. All the proposed features that are used in our research work are listed in Tab.1. Why were these features selected? Last-4-Words of Sentence Occurrence, happening, and situations are generic terms that are used to represent events. In general, 'verb' represents an event. The grammatical structure of Urdu language is Subject_ Object_ Verb (SOV) <ns0:ref type='bibr' target='#b34'>[31]</ns0:ref>, which depicts that verb, is laying in the last part of the sentences. For example, the sentence &#8235;&#65193;&#64510;&#65166;&#1748;'(&#8236; &#8235;&#64344;&#65166;&#65255;&#64509;&#8236; &#8235;&#64400;&#65262;&#8236; &#8235;&#64344;&#65262;&#65193;&#65261;&#64414;&#8236; &#8235;&#65255;&#64431;&#8236; &#8235;&#65165;&#65187;&#65252;&#65194;&#8236; -Ahmad ney podon ko pani dia'), (Ahmad watered the plants) follows the SOV format. 'Pani dia-&#8235;&#65193;&#64510;&#65166;&#8236; &#8235;'&#64344;&#65166;&#65255;&#64509;&#8236; is the verbal part of the sentence existing in the last two words of the sentence. It shows the happening or action of the event. Our research problem is to classify sentences into different classes of events. So, that last_4_ words are considered one of the vital features to identify events and non-event sentences. For example, in Tab. 3 in the event column underline/highlighted part of the sentence represents the happening of an event i.e., last_4_words in the sentence. While labeling the sentences we are strictly concerned that only event sentences of different types should be labeled.</ns0:p></ns0:div> <ns0:div><ns0:head>Title of Post</ns0:head><ns0:p>Every conversation has a central point i.e., title. 
Textual, pictorial, or multimedia content that is posted on social networks as a blog post describes a specific event at the paragraph level or the sentence level. Many posts have titles that are only loosely related to the body of the message; nevertheless, using the title as a feature to classify sentences is crucial because the title is assigned according to the content of the post.</ns0:p></ns0:div> <ns0:div><ns0:head>Length of Sentence</ns0:head><ns0:p>A sentence is a composition of many words. The length of a sentence is the total number of words or tokens it contains. It can be used as a feature to classify sentences because many sentences describing the same type of event tend to have a similar length.</ns0:p></ns0:div> <ns0:div><ns0:head>Title and Length</ns0:head><ns0:p>This proposed feature is the combination of the title of the post and the length of the sentence.</ns0:p><ns0:p>The title represents the central idea of the post, and the length of the sentence varies from title to title.</ns0:p></ns0:div> <ns0:div><ns0:head>Title and Last-4-words</ns0:head><ns0:p>The combination of the title and the last_4_words is very helpful for classifying Urdu sentences, because the last_4_words generally represent the occurrence of some event.</ns0:p></ns0:div> <ns0:div><ns0:head>Length and Last-4-words</ns0:head><ns0:p>We also consider the combination of the length with the last_4_words as a valuable feature, because the length of a sentence varies from event to event.</ns0:p></ns0:div> <ns0:div><ns0:head>Features Engineering</ns0:head><ns0:p>Feature engineering is a way of generating specific features from a given set of features and converting the selected features to a machine-understandable format. Our dataset is text based and consists of 0.1 million (102,960) labeled instances spread over 12 classes, e.g., sports, inflation, murder &amp; death, terrorist attack, and sexual assault. As mentioned earlier, the Urdu language is a resource-poor language, and since there are no pre-trained word embedding models to generate embedding vectors for Urdu text, we could not use the Word2Vec embedding technique. All the textual features are therefore converted to numeric format using Term Frequency-Inverse Document Frequency (TF-IDF) and Count-Vectorizer. These two representations are used in parallel. The scikit-learn package is used to transform the text data into numerical values <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Count_ Vectorization</ns0:head><ns0:p>The process of converting words to numerical form is called vectorization. Its working strategy is based on term frequency: it counts the frequency of a specific word w and builds a sparse feature matrix using the bag-of-words (BOW). The length of the feature vector depends on the size of the bag-of-words, i.e., the dictionary.</ns0:p></ns0:div> <ns0:div><ns0:head>Term Frequency Inverse Document Frequency</ns0:head><ns0:p>It is a statistical measure of a word w that quantifies the importance of that word for a specific document d in the corpus. The importance of a word is proportional to its frequency, i.e., the higher the frequency, the more important the word. A short illustrative sketch of both vectorization schemes is given below.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>Classifiers are the algorithms used to classify data instances into predefined categories. Many classifiers exist that process textual data using a machine learning approach.
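Before turning to the individual classifiers, here is a minimal sketch of the two vectorization schemes just described, using scikit-learn's CountVectorizer and TfidfVectorizer. This is our illustration; the toy corpus and the default settings are not the authors' exact configuration. The standard TF and IDF definitions are recalled in the comments.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# TF(t, d)     = (number of times term t appears in d) / (total number of terms in d)
# IDF(t)       = log_e(total number of documents / number of documents containing t)
# TF-IDF(t, d) = TF(t, d) * IDF(t)        (scikit-learn applies additional smoothing)

# toy corpus; in the paper each entry is a labeled Urdu sentence, possibly combined
# with the post title and the last-4-words feature
corpus = [
    "ahmad ney podon ko pani dia",
    "qomi team ney match jeet lia",
    "sona aur dollar mehnga ho gaya",
]

count_vec = CountVectorizer()                 # bag-of-words term counts
tfidf_vec = TfidfVectorizer()                 # TF-IDF weights
X_counts = count_vec.fit_transform(corpus)    # sparse document-term matrix of counts
X_tfidf = tfidf_vec.fit_transform(corpus)     # sparse document-term matrix of TF-IDF values
print(X_tfidf.shape, len(tfidf_vec.vocabulary_))
```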
In our research work, we selected the six most popular machine learning algorithms i.e., Random Forest (RF) <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref>, K-Nearest Neighbor (KNN), Support Vector Machine (SVM, Decision Tree (DT), Na&#239;ve Bayes Multinomial (NBM), and Linear Regression (LR).</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Classifiers</ns0:head><ns0:p>In this section, we presented the detail of six classifiers that were used to classify the Urdu sentences using different proposed features. 1</ns0:p><ns0:p>Random Forest (RF) This model is comprised of several decision trees that act as a building block of RF. Every decision tree is created using the rules i.e., if then else, and the conditional statements, etc. <ns0:ref type='bibr' target='#b12'>[10]</ns0:ref>. These rules are then followed by the multiple decision trees to analyze the problem at a discrete level.</ns0:p></ns0:div> <ns0:div><ns0:head>2</ns0:head><ns0:p>k-Nearest Neighbor It is one of the statistical models that find the similarity among the data points using Euclidean distance <ns0:ref type='bibr' target='#b38'>[35]</ns0:ref>. It belongs to the category of lazy classifiers and is widely used for classification and regression tasks.</ns0:p></ns0:div> <ns0:div><ns0:head>3</ns0:head><ns0:p>Support Vector Machine It is based on statistical theory <ns0:ref type='bibr' target='#b39'>[36]</ns0:ref>, to draw a hyperplane among points of the dataset. It is highly recommended for regression and classification i.e., binary classification, multiclass classification, and multilabel classification. It finds the decision boundary to identify different classes and maximize the margin.</ns0:p></ns0:div> <ns0:div><ns0:head>4</ns0:head><ns0:p>Decision Tree It is one of the supervised classifiers that work following certain rules. Data points/inputs are split according to the specific condition <ns0:ref type='bibr' target='#b40'>[37]</ns0:ref>. It is used for regression and classification using the non-parametric method because it can handle textual and numerical data. Learning from data points is accomplished by approximating the sine curve with the combination of an if-else-like set of rules. The accuracy of a model is related to the deepness and complexity of rules.</ns0:p></ns0:div> <ns0:div><ns0:head>5</ns0:head><ns0:p>Na&#239;ve Bayes Multinominal It is a computationally efficient classifier for text classification using discrete features. It can also handle the textual data by converting it into numerical <ns0:ref type='bibr' target='#b41'>[38]</ns0:ref> format using count vectorizer and term frequency-inverse document frequency (tf-idf).</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>Linear Regression</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>It is a highly recommended classifier for numerical output. It is used to perform prediction by learning linear relationships between independent variables (inputs) and dependent variables (output) <ns0:ref type='bibr' target='#b42'>[39]</ns0:ref>.</ns0:p><ns0:p>Training Dataset A subpart of the dataset that is used to train the models to learn the relationship among dependent and independent variables is called the training dataset. We divided our data into training and testing using the train_ test_ split function of the scikit library using python. 
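The split and model fitting just described can be sketched as follows. The sketch assumes a feature matrix X (for example, the TF-IDF matrix of the earlier sketch built on the full corpus) and the integer event labels y of Tab. 2; it uses scikit-learn defaults rather than the authors' exact hyperparameters.

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# X: sparse feature matrix (Count or TF-IDF), y: integer event labels from Tab. 2,
# both assumed to be prepared as in the previous sketch on the full corpus.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)     # 70% training, 30% testing

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", clf.score(X_test, y_test))
```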
Our training dataset consists of 70% of the dataset, that is, more than 70,000 labeled sentences of Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Testing Dataset</ns0:head><ns0:p>The testing dataset is the remaining subpart of the dataset and is usually smaller than the training dataset. In our case, we decided to use 30% of the dataset for testing and validating the performance of the classifiers. It comprises more than 30,000 instances/sentences of Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Measuring Parameters</ns0:head><ns0:p>The most common performance measuring parameters <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref><ns0:ref type='bibr' target='#b17'>[15]</ns0:ref><ns0:ref type='bibr' target='#b18'>[16]</ns0:ref><ns0:ref type='bibr' target='#b19'>[17]</ns0:ref><ns0:ref type='bibr' target='#b20'>[18]</ns0:ref>, i.e., precision, recall, and F1-measure, are used to evaluate the proposed framework, since these parameters are the key indicators when performing classification in a multiclass environment with an imbalanced dataset. They are computed per class as Precision = TP/(TP + FP), Recall = TP/(TP + FN), and F1-measure = 2 &#215; Precision &#215; Recall/(Precision + Recall); a short sketch of their computation in the multiclass setting is given below.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>To evaluate our dataset, the Python package scikit-learn is used to perform event classification at the sentence level. We extracted the last-4-words of each sentence and calculated the length of each sentence. To obtain the best classification results we evaluated six machine learning classifiers, i.e., Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), and Na&#239;ve Bayes Multinominal (NBM). We proposed three features, i.e., Length, Last-4-words, and the combination of Length and Last-4-words, to classify sentences into different types of events (see Tab. 2). The results obtained using 'Length' as the feature are shown in Tab. 4: the classifiers DT, RF, NBM, and LR showed 32% accuracy, which is very low. The second feature, Last-4-words, showed comparatively better results for these classifiers. Random Forest achieved 52% accuracy, which is a considerable result as a first attempt at multiclass event classification in Urdu language text. The detailed results for the other classifiers can be seen in Tab. 5.</ns0:p><ns0:p>We also evaluated these classifiers using the combination of Length and Last-4-words, which improved the overall accuracy of the proposed system by 1%: Random Forest reached 53.00% accuracy. Further details of the accuracies of the other machine learning models can be seen in Tab. <ns0:ref type='bibr' target='#b6'>6</ns0:ref>. Since the results obtained using the above features are quite low, we decided to use the title of the post as a feature to improve the performance of the system. We integrated the 'Title' of the post with each sentence of the same paragraph, which dramatically improves the accuracy of the system, and we combined the 'Title' with the other features, i.e., Length and Last-4-words. The highest accuracies, obtained by combining Last-4-words, Length, and Title, are given in Tab. 7 and Tab. 8; Random Forest and k-NN showed the highest accuracies. The details of the confusion matrix of the proposed system (TP, FP, TN, FN) are given in Tab. 9 and Tab. 10.
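Because TP, FP, TN, and FN are binary-classification notions, in the twelve-class setting they are read per class from the confusion matrix (one-versus-rest). The sketch below, assuming the fitted classifier clf and the held-out split from the previous sketch, shows how the reported precision, recall, and F1-measure can be obtained with scikit-learn; the standard per-class definitions are recalled in the comments.

```python
from sklearn.metrics import confusion_matrix, classification_report

# For each class c (one-vs-rest reading of the confusion matrix):
#   precision_c = TP_c / (TP_c + FP_c)
#   recall_c    = TP_c / (TP_c + FN_c)
#   F1_c        = 2 * precision_c * recall_c / (precision_c + recall_c)

y_pred = clf.predict(X_test)

# 12 x 12 confusion matrix: entry (i, j) counts sentences of event i predicted as event j
cm = confusion_matrix(y_test, y_pred)

# per-class precision, recall, and F1, as reported in Tab. 11 -- Tab. 14
print(classification_report(y_test, y_pred, digits=3))
```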
The standard performance measuring parameters i.e., precision, recall, and f1-measure of Random Forest and k-NN classifiers using 'Title and Last-4words' as features are given in Tab. 11 and Tab. 12 respectively. Similarly other combinations of features i.e., 'Title and Length' are used to enhance the accuracy of the system. The Decision Tree and Random Forest showed the highest results as compared to other classifiers for this specific combination of features. A detailed summary of the results related to Decision Tree and Random Forest is given in Tab. 13 and Tab. 14 respectively. We finally presented the comparison of four classifiers that showed the highest results in fig. <ns0:ref type='figure'>3</ns0:ref>. The semantics of the script written in the Urdu language is quite different from that of English and Arabic Language which causes the low performance of SVM and k-NN as compared to Random Forest.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Event extraction and classification are tightly coupled with processing resources i.e., Part of speech tagger (PoS), Text annotators, and contextual insights. Meanwhile, the usage of local languages being highly preferred over social media is creating problems to analyze by existing tools. Urdu is one of those languages that have a considerable number of users and a huge bulk of data on social networks. It contains worthy insights that are necessary to process for different purposes like to improve security, to understand the intentions, trends, and mindset of people. We performed event classification written in Urdu langue text on different social media like platforms. The evaluation of results is presented after analyzing multiple features i.e., Length, Last-4-words, Title, and combination of all these features converged our findings to conclude that length and last-4-words are basic features to classify multiclass events but showed 53% accuracy. To improve the accuracy of the proposed system, we integrated 'Title' as the feature with other two features i.e., Length and Last-4-words. The combination of 'Title' with 'Length and Last-4-words' improved the performance of the proposed system and showed the highest results as reported in the abstract.</ns0:p><ns0:p>As described in the dataset section that the dataset is imbalanced and contains multiple classes.</ns0:p><ns0:p>To validate the accuracy of results not only TP, FP, TN, FP reported but also the standard performance evaluations parameters i.e., precision, recall, and f1-measure are reported in Tab. 11, Tab. 12, Tab. 13, and Tab.14. Furthermore, extracting and classification of events from resource-poor language is an interesting and challenging task. There are no standard (benchmark) datasets and word embedding models like Word2Vec or Glove (Exists for the English Language) for Urdu language text.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A massive amount of Urdu textual data exists on social networks and news websites. Multiclass event classification for Urdu text at the sentence level is a challenging task because of the few numbers of words and limited contextual information.</ns0:p><ns0:p>The selection of appropriate features and approaches is necessary to classify multiclass events written in Urdu language text.</ns0:p><ns0:p>The deep analysis of the structure of sentences written in the Urdu language leads us to select these appropriate features i.e. 
title, length, last-4-words, and combinations of all these features.</ns0:p><ns0:p>Experimental results showed that no single feature on its own is capable of classifying multiclass events. In contrast, combinations of these features, i.e., title and last-4-words, title and length, and last-4-words and length, showed considerable results.</ns0:p><ns0:p>The Count-Vectorizer and TF-IDF feature generation techniques are used to convert text into numeric values for the machine learning models. The Random Forest classification model showed 52% and 53% accuracy for last-4-words and for the combination of length and last-4-words, respectively.</ns0:p><ns0:p>The title is the key feature that can dramatically improve the performance of event classification models that work at the sentence level. Combining the title with last-4-words and length showed the highest accuracies, i.e., 98.00% and 99.00% for the Random Forest and k-NN classifiers, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>&#61623; In a comprehensive review of the Urdu literature, we found only a few reference works related to Urdu text processing. One of the main issues associated with Urdu language research is the unavailability of appropriate resources, such as datasets of Urdu sentences representing events, a close-domain PoS tagger, lexicons, and annotators.</ns0:p><ns0:p>&#61623; There is a need to develop supporting tools, i.e., a PoS tagger, annotation tools, lexicons, and datasets of Urdu text carrying information associated with events, to extend the research areas in the Urdu language.</ns0:p><ns0:p>&#61623; In the future, many other types of events and other domains of information, such as medical, social, local, and religious events, can be classified using the extension of machine learning, i.e., deep learning.</ns0:p><ns0:p>&#61623; In the future, grammatical, contextual, and lexical information can be used to categorize events. Temporal information related to events can be further utilized to classify an event as real or retrospective.</ns0:p><ns0:p>&#61623; Classification of events can be performed at the document level and the phrase level.</ns0:p><ns0:p>&#61623; Deep learning classifiers can be used for a higher number of event classes. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Cursive nature of the script &#61623; Morphologically enriched &#61623; Different structures of grammar &#61623; Right to the left writing style &#61623; No text capitalization</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>4 http://www.cle.org.pk/ PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Number of words in a tweet (w) &#61623; Verb in a tweet by (verb) &#61623; Number of verbs in a tweet by (v) &#61623; Position of the query by (Pos) &#61623; Word before query word (before) &#61623; Word after query word (after) PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>higher frequency more important.
The mathematical formulas related to TF_IDF are given below: Term Frequency (TF) = (1) &#119873;&#119906;&#119898;&#119887;&#119890;&#119903; &#119900;&#119891; &#119905;&#119894;&#119898;&#119890; &#119905;&#119890;&#119903;&#119898; &#119905; &#119886;&#119901;&#119901;&#119890;&#119886;&#119903;&#119904; &#119894;&#119899; &#119889;&#119900;&#119888;&#119906;&#119898;&#119890;&#119899;&#119905; &#119879;&#119900;&#119905;&#119886;&#119897; &#119899;&#119906;&#119898;&#119887;&#119890;&#119903; &#119900;&#119891; &#119905;&#119890;&#119903;&#119898;&#119904; &#119894;&#119899; &#119889;&#119900;&#119888;&#119906;&#119898;&#119890;&#119899;&#119905;&#119904; Inverse Document Frequency (IDF) =Log e</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,357.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,178.87,525.00,352.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Proposed Features</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sr. No.</ns0:cell><ns0:cell>Feature _Name</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Length</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Last_4_ words</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Last_4_words and Length</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Title</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Title and Length</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Title and Last_4-words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Types of events and their labels in the dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Event</ns0:cell><ns0:cell cols='2'>Label Event</ns0:cell><ns0:cell>Label</ns0:cell></ns0:row><ns0:row><ns0:cell>Sports</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Inflation</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Murder and Death 3</ns0:cell><ns0:cell cols='2'>Fraud and Corruption 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Terrorist Attack</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Rain/Weather</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Politics</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Sexual Assault</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>Law and Order</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>12</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Length </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>17%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBM</ns0:cell><ns0:cell>32%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>32% 32%</ns0:cell><ns0:cell>Length</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 32%</ns0:cell><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Last _4_words accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>45%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree</ns0:cell><ns0:cell>49% 49%</ns0:cell><ns0:cell>Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Random Forest 52%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Last _4_words and Length Accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell cols='2'>Accuracy Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>46%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree Forest Random</ns0:cell><ns0:cell>49% 48% 53%</ns0:cell><ns0:cell>Length and Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>49%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Title and Last _4_words accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NBMN</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LR Decision Tree Forest Random</ns0:cell><ns0:cell>95% 97% 98%</ns0:cell><ns0:cell>Title and Last _4_words</ns0:cell></ns0:row><ns0:row><ns0:cell>K-NN</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>KNN TP, FN, FP and TN K-Nearest Neighbor</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Label</ns0:cell><ns0:cell>Type of Event</ns0:cell><ns0:cell>TP</ns0:cell><ns0:cell>FN</ns0:cell><ns0:cell>FP</ns0:cell><ns0:cell>TN</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Sports</ns0:cell><ns0:cell>5638</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Inflation</ns0:cell><ns0:cell>967</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Murder and Death</ns0:cell><ns0:cell>2077</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Terrorist Attack 858</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Politics</ns0:cell><ns0:cell>9931</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>law and order</ns0:cell><ns0:cell>2238</ns0:cell><ns0:cell>55</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Earthquake</ns0:cell><ns0:cell>970</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>07</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Showbiz</ns0:cell><ns0:cell>2242</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Fraud and corruption</ns0:cell><ns0:cell>3023</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Rain/weather</ns0:cell><ns0:cell>1031</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Sexual Assault</ns0:cell><ns0:cell>889</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Business</ns0:cell><ns0:cell>1001</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>04</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='5'>https://urdu.geo.tv/ 6 https://www.urdupoint.com/daily/ 7 https://www.bbc.com/urdu PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021)</ns0:note> <ns0:note place='foot' n='3'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55370:4:0:NEW 13 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" October 10, 20201 The Islamia University of Bahawalpur, Pakistan, Department of Software Engineering, Faculty of Computing. Dear Editor, Arkaitz Zubiaga, We appreciate the efforts of the editor and reviewers for their constructive comments to improve the quality of the research article. The concerns and comments raised by the honorable editor and reviewers are addressed point by point and included in the manuscript. The abstract, introduction and conclusion sections of the research article are edited as per suggestions given by respected editors and reviewers i.e., to mention the contribution. Furthermore, proofreading is also done as per recommendation to improve the quality of the research paper. We hope that the updated manuscript is in manageable form according to the criteria of the PeerJ Journal. The reply is sent on the behalf of all authors. Malik Daler Ali Awan (PhD) Lecturer Department of Software Engineering Faculty of Computing, The Islamia University of Bahawalpur. Editor comments (Arkaitz Zubiaga) MINOR REVISIONS I took over handling your manuscript as the previous Academic Editor became unavailable. While the content of the paper is deemed scientifically correct, the writing needs to improve that the contributions and the findings are clearly stated throughout, particularly in the abstract, introduction and conclusion. Please revise these sections to make this clear. Please proofread the paper to improve its readability. Reply We have gone through the research article and done the proofreading as per recommendation. Reviewer 1 (Manar Alkhatib) Basic reporting No comments Experimental design No comments Validity of the findings no comments Additional comments no comments Reviewer 2 (Vaibhav Rupapara) Basic reporting The author presented the event classification for the Urdu language text existing on social media and news channels. The dataset contains more than 0.1 million (102,962) labeled instances of twelve (12) different types of events. Title, Length, and last-4-words of a sentence are used as features to classify events. The Term Frequency-Inverse Document Frequency (tf-idf) showed the best results as a feature vector to evaluate the performance of the six popular machine learning classifiers. The author resolves the comments but still lots of things should be improved especially the abstract. such as: These mention two sentences are the same in the abstract why? Random Forest (RF), Decision Tree, and k-Nearest Neighbor outperformed among the other classifiers. Random Forest and K-Nearest Neighbor are the classifiers that out-performed among other classifiers by achieving 98.00% and 99.00% accuracy, respectively. The Abstract didn't contain detail about contribution and methodology. it's too general and short. Reply Thanks for highlighting these important sections. These recommended changes certainly improve the quality of research work. The detail of work and contribution is also incorporated in the abstract. It is specified to task and expanded to the appropriate length. Experimental design No commets Validity of the findings The dataset is imbalanced then the results are too significant justifies that the models are no overfitted on majority class data. The author gives the confusion matrix report in terms of TP, TN, FP, and FP.. These terms are useful for binary classification how they adjust them in multiclass classification. add some visual infromation. Reply Thank you for diverting our attention towards the validity of results for multiclass classification. 
To validate the results, we have already reported the standard performance measuring parameters in Tab. 11, Tab. 12, Tab. 13, and Tab. 14. To avoid redundancy of information, we did not also report them visually. Additional comments No comment "
Here is a paper. Please give your review comments after reading it.
283
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>It is well established that reduced precision arithmetic can be exploited to accelerate the solution of dense linear systems. Typical examples are mixed precision algorithms that reduce the execution time and the energy consumption of parallel solvers for dense linear systems by factorizing a matrix at a precision lower than the working precision. Much less is known about the efficiency of reduced precision in parallel solvers for sparse linear systems, and existing work focuses on single core experiments. We evaluate the benefits of using single precision arithmetic in solving a double precision sparse linear system using multiple cores. We consider both direct methods and iterative methods and we focus on using single precision for the key components of LU factorization and matrix-vector products. Our results show that the anticipated speedup of 2 over a double precision LU factorization is obtained only for the very largest of our test problems. We point out two key factors underlying the poor speedup. First, we find that single precision sparse LU factorization is prone to a severe loss of performance due to the intrusion of subnormal numbers. We identify a mechanism that allows cascading fill-ins to generate subnormal numbers and show that automatically flushing subnormals to zero avoids the performance penalties. The second factor is the lack of parallelism in the analysis and reordering phases of the solvers and the absence of floating-point arithmetic in these phases. For iterative solvers, we find that for the majority of the matrices computing or applying incomplete factorization preconditioners in single precision provides at best modest performance benefits compared with the use of double precision. We also find that using single precision for the matrix-vector product kernels provides an average speedup of 1.5 over double precision kernels. In both cases some form of refinement is needed to raise the single precision results to double precision accuracy, which will reduce performance gains.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 31</ns0:head><ns0:p>Ever since early versions of Fortran offered real and double precision data types, we have been able to 32 choose between single and double precision floating-point arithmetics. Although single precision was no 33 faster than double precision on most processors up to the early 2000s, on modern processors it executes 34 twice as fast as double precision and has the additional benefit of halving the data movement. As a result, 35 single precision (as well as half precision) is starting to be used in applications such as weather and 36 climate modelling <ns0:ref type='bibr' target='#b16'>(Dawson et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b37'>(V&#225;&#328;a et al., 2017)</ns0:ref> and seismic modeling <ns0:ref type='bibr' target='#b17'>(Fabien-Ouellet, 2020)</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_0'>37</ns0:formula><ns0:p>where traditionally double precision was used. Mixed precision algorithms, which use some combination 38 of half, single, double, and perhaps even quadruple precisions, are increasingly being developed and used 39 in high performance computing <ns0:ref type='bibr' target='#b0'>(Abdelfattah et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>40</ns0:head><ns0:p>In 2006, Langou et al. 
<ns0:ref type='bibr' target='#b30'>(Langou et al., 2006)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>(Buttari et al., 2007)</ns0:ref>, drew the attention of the HPC 41 community to the potential of mixed precision iterative refinement algorithms for solving dense linear the working precision is used. The resulting algorithms are now implemented in LAPACK <ns0:ref type='bibr' target='#b7'>(Anderson et al., 1999)</ns0:ref> (as DSGETRS, and DSPOTRS for general and symmetric positive definite problems, respectively), and are generally twice as fast as a full double precision solve for sufficiently well conditioned matrices.</ns0:p><ns0:p>A decade after the two-precision iterative refinement work by <ns0:ref type='bibr'>Buttari et al., Carson and</ns0:ref> Higham introduced a GMRES-based iterative refinement algorithm that uses up to three precisions for the solution of linear systems <ns0:ref type='bibr' target='#b11'>(Carson and Higham, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>(Carson and Higham, 2018)</ns0:ref>. This algorithm enabled <ns0:ref type='bibr' target='#b21'>Haidar et al. (Haidar et al., 2018a)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Haidar et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>(Haidar et al., 2018b)</ns0:ref> to successfully exploit the half-precision floating-point arithmetic units of NVIDIA tensor cores in the solution of linear systems.</ns0:p><ns0:p>Compared with linear solvers using exclusively double precision, their implementation shows up to a 4x-5x speedup while still delivering double precision accuracy <ns0:ref type='bibr' target='#b24'>(Haidar et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>(Haidar et al., 2018b)</ns0:ref>. This algorithm is now implemented in the MAGMA library <ns0:ref type='bibr' target='#b1'>(Agullo et al., 2009)</ns0:ref>, (Magma, 2021) (routine magma_dhgesv_iteref_gpu) and in cuSOLVER, the NVIDIA library that provides LAPACK-like routines (routine cusolverDnDHgesv). Most recently, a five-precision form of GMRES-based iterative refinement has been proposed by <ns0:ref type='bibr' target='#b3'>Amestoy et al. (2021)</ns0:ref>, which provides extra flexibility in exploiting multiple precisions.</ns0:p><ns0:p>Mixed precision iterative refinement algorithms can be straightforwardly applied to parallel sparse direct solvers. But the variability of sparse matrix patterns and the complexity of sparse direct solvers make the estimation of the performance speedup difficult to predict. The primary aim of this work is to provide insight into the speedup to expect from mixed precision parallel sparse linear solvers. It is important to note that it is not our objective to design a new mixed precision algorithm, but rather we focus on analysing whether using single precision arithmetic in parallel sparse linear solvers has enough performance benefit to motivate mixed precision implementations.</ns0:p><ns0:p>After discussing existing work and the need for new studies we describe our experimental settings, including details of the sparse matrices and the hardware selected for our benchmark and analysis. We then introduce the issue of subnormal numbers appearing in single precision sparse LU factorization, explain how the subnormal numbers can be generated, and propose different mitigation strategies. 
We present experimental performance results and show that by reducing the working precision from double precision to single precision for parallel sparse LU factorization, the expected speedup of 2 is only achieved for very large matrices. We provide a detailed performance profiling to explain the results and we present a similar analysis for iterative solvers by studying performance implications of precision reduction in sparse matrix-vector product and incomplete LU factorization preconditioner kernels.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION OF EXISTING STUDIES</ns0:head><ns0:p>The performance benefits of mixed precision iterative refinement have been widely demonstrated for dense linear systems. The few such performance studies for sparse linear systems are summarized below, with an emphasis on the performance metrics reported. <ns0:ref type='bibr' target='#b9'>Buttari et al. (2008)</ns0:ref> studied the performance of mixed precision iterative refinement algorithms for sparse linear systems. They used Algorithm 1, in which the precision in which each line should be executed is shown at the end of the line, with FP32 denoting single precision and FP64 double precision. To implement Algorithm 1 they selected two existing sparse direct solvers: a multifrontal sparse direct solver MUMPS, by <ns0:ref type='bibr' target='#b5'>Amestoy et al. (2000)</ns0:ref> and a supernodal sparse direct solver SuperLU, by <ns0:ref type='bibr' target='#b32'>Li and Demmel (2003)</ns0:ref>. Multifrontal and supernodal methods are the two main variants of sparse direct methods; for a full description and a performance comparison see <ns0:ref type='bibr' target='#b6'>Amestoy et al. (2001)</ns0:ref>. <ns0:ref type='bibr'>Buttari et al.</ns0:ref> showed that the version of SuperLU used in their study does not benefit from using low-precision arithmetic. Put differently, the time spent in matrix factorization, which is the most timeconsuming part of the algorithm, is hardly reduced when single precision arithmetic is used in place of double precision. They concluded that a mixed precision iterative refinement based on SuperLU would be no faster than the standard double precision algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Mixed Precision Iterative Refinement for Sparse Direct Solvers</ns0:head><ns0:p>For MUMPS, their experimental results showed that the mixed precision version can be up to two times faster than the standard double precision MUMPS. While this result is consistent with the performance observed for dense linear systems, there is an important difference to point out here: all the experimental results in <ns0:ref type='bibr' target='#b9'>Buttari et al. (2008)</ns0:ref> were obtained using a single core. Solve Ad = r using the LU factors. &#8882; (FP32) 7:</ns0:p><ns0:p>x &#8592; x + d &#8882; (FP64) 8: end while</ns0:p><ns0:p>In 2010, <ns0:ref type='bibr' target='#b28'>Hogg and Scott (2010)</ns0:ref> designed a mixed precision iterative solver for the solution of sparse symmetric linear systems. The algorithm is similar to Algorithm 1, except they perform LDL T factorization instead of LU factorization and they also considered flexible GMRES <ns0:ref type='bibr' target='#b35'>(Saad, 1993)</ns0:ref> for the refinement process. Their experimental results show that the advantage of mixed precision is limited to very large problems, where the computation time can be reduced by up to a factor of two. 
But the results of this study are again based on single core benchmarks and also involve out-of-core techniques.</ns0:p><ns0:p>As these existing works are limited to a single core, further study is required to evaluate how the performance will be affected in fully-featured parallel sparse direct solvers using many cores. The main objective of using single precision arithmetic in sparse direct solvers is to reduce the time to solution.</ns0:p><ns0:p>A safe way to improve performance without risking accuracy loss or inducing numerical stability is by exploiting the thread-level parallelism available in modern multicore processors. It is then sensible to first take advantage of core parallelism before using mixed precision algorithms for further performance enhancement. We aim to provide new insights into how far the exploitation of single precision arithmetic can advance the performance of parallel sparse solvers when computing a double precision accuracy solution.</ns0:p></ns0:div> <ns0:div><ns0:head>Mixed Precision Methods for Iterative Solvers</ns0:head><ns0:p>Here we summarize studies that use mixed precision arithmetic to improve the performance of iterative solvers. The existing works can be classified in three categories.</ns0:p><ns0:p>The first approach consists of using a single precision preconditioner or a few steps of a single precision iterative scheme as a preconditioner in a double precision iterative method. <ns0:ref type='bibr' target='#b9'>Buttari et al. (2008)</ns0:ref> have demonstrated the performance potential of this method using a collection of five sparse matrices, with a speedup ranging from 1.5x to 2.x. But the experiment has been performed on a single core using a diagonal preconditioner with an unvectorized sparse matrix-vector multiplication (SpMV) kernel.</ns0:p><ns0:p>The second approach, proposed in <ns0:ref type='bibr' target='#b8'>(Anzt et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>(Flegar et al., 2021)</ns0:ref> uses low precision data storage whenever possible to accelerate data movement while performing all the computation in high precision. This concept is appealing, but hard to implement in practice as it requires an optimized data conversion routine and knowledge of key numerical properties of the matrices, such as the condition number. To illustrate this idea the authors of <ns0:ref type='bibr' target='#b8'>Anzt et al. (2019)</ns0:ref> designed a mixed precision block-Jacobi preconditioning method where the explicit inversion of the block diagonals is required.</ns0:p><ns0:p>The third category consists of studies that focus on designing a mixed precision SpMV kernel for iterative solvers. This approach has been implemented by <ns0:ref type='bibr' target='#b2'>Ahmad et al. (2019)</ns0:ref> by proposing a new sparse matrix format that stores selected entries of the input matrix in single precision and the remainder in double precision. Their algorithm accelerates data movement and computation with a small accuracy loss compared with double precision SpMV. Their implementation demonstrates up to 2x speedup in the best case, but hardly achieves any speedup on most of the matrices due to data format conversion overhead. A similar approach has been implemented by <ns0:ref type='bibr' target='#b20'>Grigora&#351; et al. 
(2016)</ns0:ref> with a better speedup for FPGA architectures.</ns0:p><ns0:p>Our contribution is to assess from a practical point of view the benefit of using single precision arithmetic in iterative solvers for a double precision accuracy solution, by evaluating optimized vendor kernels used in applications. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL SETUP</ns0:head><ns0:p>The experimental results are reported using the Intel dual-socket Skylake with 40 cores and the NVIDIA V100 GPU. We have also performed experiments using the AMD dual-socket EPYC Naples system with 64 cores and the NVIDIA P100 GPU; and we obtained similar results. We note that the arithmetic properties of the NVIDIA GPUs are investigated in <ns0:ref type='bibr' target='#b18'>Fasi et al. (2021)</ns0:ref>. The sparse matrices selected for the benchmark are from various scientific and engineering applications and are summarized in Table <ns0:ref type='table'>1</ns0:ref>. The Intel Skylake node has 50 gigabytes of main memory, and consequently sparse matrices whose factors require more than 50 gigabytes storage are not included. The matrices are divided in two groups. The first 21 matrices are from the medium size group with 700, 000 to 5, 000, 000 nonzero elements. It takes a few seconds on average to factorize these matrices. The second group contains larger matrices with 7,000,000 to 64,000,000 nonzeros and it takes on average a few minutes to factorize most of the matrices in this group. For each matrix, the largest absolute value max i, j |a i j | and the smallest nonzero absolute value min i, j { |a i j | : a i j = 0 } of the elements are reported in Table <ns0:ref type='table'>1</ns0:ref>. For medium size matrices, an estimate for the 1-norm condition number, &#954; 1 (A) = A &#8722;1 1 A 1 , computed using the MATLAB condest routine, is also provided.</ns0:p><ns0:p>For each experiment, we consider the average time over 10 executions and we clear the L1 and L2 caches between consecutive runs.</ns0:p></ns0:div> <ns0:div><ns0:head>APPEARANCE OF SUBNORMAL NUMBERS IN SINGLE PRECISION SPARSE LU AND MITIGATION TECHNIQUES</ns0:head><ns0:p>From Table <ns0:ref type='table'>1</ns0:ref>, one can observe that the entries of the matrices fit in the range of single precision arithmetic, which from Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> we see comprises numbers of modulus roughly between 10 &#8722;45 and 10 38 . There is no risk of underflow or overflow in converting these matrices to single precision format. However, the smallest absolute value of matrix ASIC_320ks, 1.26 &#215; 10 &#8722;39 , is a subnormal number in single precision.</ns0:p><ns0:p>A subnormal floating-point number is a nonzero number with magnitude less than the absolute value of the smallest normalized number <ns0:ref type='bibr'>(Higham, 2002, Chap.</ns0:ref> 2), <ns0:ref type='bibr'>(Muller et al., 2018, Chap. 2)</ns0:ref>. Floating-point operations on subnormals can be very slow, because they often require extra clock cycles which introduces a high overhead.</ns0:p><ns0:p>The risk of underflow, overflow or generating subnormal numbers during the conversion from higher precision to lower precision can be reduced using scaling techniques proposed by <ns0:ref type='bibr' target='#b27'>Higham et al. 
(2019)</ns0:ref>.</ns0:p><ns0:p>However, even if matrices have been safely converted from double precision to normalized single precision numbers, subnormal numbers may still be generated during the computation. We first suspected this behavior in our benchmark when some single precision computations took significantly more time than the corresponding double precision computations. For example, the sparse direct solver MUMPS computed the double precision LU decomposition of the matrix Baumann (#3 in Table <ns0:ref type='table'>1</ns0:ref>) in 1.6251 seconds, while the single precision factorization took 3.586 seconds. Instead of being two times faster than the double precision computation, the single precision computation is two times slower. A further analysis reveals that the smallest magnitude entries of the single precision factors L and U are of the order of 10^{-88}, which is a subnormal number in single precision but a normalized number in double precision. The appearance of subnormal numbers in the single precision factors may be surprising, since the absolute values of the entries of this matrix range from 5 &#215; 10^{-2} to 1.29 &#215; 10^{4}, which appears to be innocuous for single precision.</ns0:p><ns0:p>This phenomenon of LU factorization generating subnormal numbers does not appear to have been observed before. How can it happen? The elements at the (k + 1)st stage of Gaussian elimination are generated from the formula</ns0:p><ns0:formula xml:id='formula_1'>a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik} a_{kj}^{(k)}, \qquad m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}},</ns0:formula><ns0:p>where m_{ik} is a multiplier. If A is a dense matrix of normalized floating-point numbers with norm of order 1, it is extremely unlikely that any of the a_{ij}^{(k)} will become subnormal. However, for sparse matrices we can identify a mechanism whereby fill-in cascades down a column and small multipliers combine to produce very small entries.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Selected matrices from the SuiteSparse Matrix Collection <ns0:ref type='bibr' target='#b14'>(Davis, 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>(Davis and Hu, 2011)</ns0:ref>. The first 21 matrices are of medium size and each can be factorized in a few seconds. Matrices 22 to 36 are larger and require more time and memory to solve.</ns0:p><ns0:p>Consider the matrix</ns0:p><ns0:formula xml:id='formula_2'>A = \begin{bmatrix} d_1 &amp; 0 &amp; \dots &amp; \dots &amp; 0 &amp; 1 \\ -a_1 &amp; d_2 &amp; 0 &amp; \dots &amp; 0 &amp; 0 \\ &amp; -a_2 &amp; d_3 &amp; 0 &amp; \dots &amp; \vdots \\ &amp; &amp; -a_3 &amp; d_4 &amp; &amp; \vdots \\ &amp; &amp; &amp; \ddots &amp; \ddots &amp; 0 \\ &amp; &amp; &amp; &amp; -a_{n-1} &amp; d_n \end{bmatrix}.</ns0:formula><ns0:p>LU factorization without row or column permutations produces the LU factorization</ns0:p><ns0:formula xml:id='formula_3'>LU \equiv \begin{bmatrix} 1 &amp; &amp; &amp; &amp; \\ -\frac{a_1}{d_1} &amp; 1 &amp; &amp; &amp; \\ &amp; -\frac{a_2}{d_2} &amp; \ddots &amp; &amp; \\ &amp; &amp; \ddots &amp; 1 &amp; \\ &amp; &amp; &amp; -\frac{a_{n-1}}{d_{n-1}} &amp; 1 \end{bmatrix} \begin{bmatrix} d_1 &amp; 0 &amp; \dots &amp; \dots &amp; 0 &amp; 1 \\ &amp; d_2 &amp; 0 &amp; \dots &amp; 0 &amp; \frac{a_1}{d_1} \\ &amp; &amp; d_3 &amp; 0 &amp; \dots &amp; \frac{a_1 a_2}{d_1 d_2} \\ &amp; &amp; &amp; \ddots &amp; &amp; \vdots \\ &amp; &amp; &amp; &amp; d_{n-1} &amp; \frac{a_1 a_2 \cdots a_{n-2}}{d_1 d_2 \cdots d_{n-2}} \\ &amp; &amp; &amp; &amp; &amp; d_n + \frac{a_1 a_2 \cdots a_{n-1}}{d_1 d_2 \cdots d_{n-1}} \end{bmatrix}.</ns0:formula><ns0:p>The elements -a_i/d_i on the subdiagonal of L are multipliers. The problem is in the last column of U: if each multiplier a_i/d_i is less than 1 in modulus, the magnitudes of the entries of that column decrease steadily down the column, and for large enough n they fall below the normalized range of single precision and become subnormal, even though every entry of A is a normalized number (a small numerical illustration of this mechanism is given below). The performance loss caused by arithmetic on subnormal numbers is often mitigated by two options: Flush to Zero (FTZ) and Denormals Are Zero (DAZ); subnormal numbers are also referred to as denormal numbers. With the FTZ option, when an operation results in a subnormal output, zero is returned instead, while with the DAZ option any subnormal input is replaced with zero. For the sake of simplicity we will refer to both options as FTZ in the rest of this paper. It may be possible to enable the FTZ option using compiler flags. For example, this is automatically activated by Intel's C and Fortran compilers whenever the optimization level is set higher than -O0. However, we have used the GNU Compiler Collection (GCC) in this study, and the only compiler option to flush subnormals to zero is the -ffast-math option. But the -ffast-math flag is dangerous, as it also disables checking for NaNs and +-Infs and does not maintain IEEE arithmetic compatibility, so it can result in incorrect output for programs that depend on an IEEE-compliant implementation. As a safe alternative to this flag, the FTZ and DAZ modes can be enabled programmatically by setting the corresponding bits of the floating-point control register (MXCSR on x86-64).</ns0:p></ns0:div> <ns0:div><ns0:head>SINGLE PRECISION SPEEDUP OVER DOUBLE PRECISION FOR SPARSE LU FACTORIZATION</ns0:head><ns0:p>The main performance gain of mixed precision iterative refinement algorithms comes from using low precision arithmetic to factorize the coefficient matrix associated with the linear system. The factorization stage dominates the cost of the algorithm, assuming that the refinement converges quickly. We therefore focus on the speedup achieved during the matrix factorization step to evaluate the potential of low precision arithmetic for solving sparse linear systems. For each problem from Table <ns0:ref type='table'>1</ns0:ref>, we report the speedup achieved during the factorization, and we use a threshold of 1.5x to decide whether low precision is beneficial. Note that in the case of dense linear systems, the factorization step speedup is usually close to 2x.</ns0:p><ns0:p>In addition to SuperLU and MUMPS, we have added PARDISO <ns0:ref type='bibr' target='#b36'>(Schenk et al., 2001)</ns0:ref>, which is available in the Intel Math Kernel Library (MKL), to the set of sparse direct solvers for the benchmarks. PARDISO combines left- and right-looking level 3 BLAS supernodal algorithms for better parallelism. The solvers also include the multithreaded version of SuperLU, called SuperLU_MT Li (<ns0:ref type='bibr'>2005</ns0:ref>). We will refer to both packages as SuperLU unless there is ambiguity. We also considered adding UMFPACK Davis (<ns0:ref type='bibr'>2004</ns0:ref>), but this package does not have support for single precision.</ns0:p><ns0:p>For each sparse direct solver, we report the factorization speedup for both sequential and parallel runs. Even though the Intel Skylake has 40 cores, we report parallel results with 10 cores as for most of the experiments the performance stagnates and sometimes declines beyond 10 cores.</ns0:p>
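To illustrate the mechanism identified above, the following sketch (an illustration we added, not the authors' benchmark code) builds the example matrix with d_i = 1 and a_i = 0.01 and runs Gaussian elimination without pivoting in single precision. The fill-in in the last column of U shrinks by a factor of 100 per step and drops below the smallest normalized single precision number (about 1.2e-38) within roughly 20 elimination steps, producing subnormal entries even though every entry of A is a modest normalized number.

```python
import numpy as np

def lu_no_pivot(A):
    """Plain Gaussian elimination without pivoting; returns the U factor."""
    U = A.copy()
    n = U.shape[0]
    for k in range(n - 1):
        m = U[k + 1:, k] / U[k, k]                      # multipliers -a_i/d_i
        U[k + 1:, k + 1:] -= np.outer(m, U[k, k + 1:])  # Schur complement update
        U[k + 1:, k] = 0.0
    return U

n, a, d = 40, 0.01, 1.0
A = np.zeros((n, n), dtype=np.float32)
np.fill_diagonal(A, d)
A[np.arange(1, n), np.arange(n - 1)] = -a               # subdiagonal entries -a_i
A[0, -1] = 1.0                                          # entry that triggers fill-in in the last column

U = lu_no_pivot(A)
tiny = np.finfo(np.float32).tiny                        # smallest normalized float32, ~1.18e-38
col = np.abs(U[:, -1])
print("nonzero but subnormal entries in the last column of U:",
      np.count_nonzero((col > 0) & (col < tiny)))
```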
To stress the performance penalty induced by subnormals in the single precision computations, the results with and without FTZ are reported.</ns0:p><ns0:p>The experimental results with serial PARDISO are summarized in Figure <ns0:ref type='figure'>2</ns0:ref>. For each matrix two bars are shown, which give the speedup for LU factorization with and without FTZ. Without FTZ, up to 15 matrices out of 36 show a speedup below 1. In other words, single precision decreases the performance for 42% of the problems compared with double precision. This anomaly is corrected by flushing subnormals to zero. By comparing the results with FTZ with results without FTZ, we see that more than half of the problems generated subnormals during the single precision computation. As for the performance benefit of using single precision for the matrix factorization, half of the matrices show a speedup above the 1.5x threshold. The matrices that did not exceed 1.5x speedup are predominately of medium size. The parallel results in Figure <ns0:ref type='figure'>3</ns0:ref> show that with 10 cores the proportion of problems that reach 1.5x speedup drops from 50% to 30%. The problems that still reach 1.5x speedup with 10 cores are exclusively from the large matrices and represent 65% of them.</ns0:p><ns0:p>The results for serial MUMPS are summarized in Figure <ns0:ref type='figure'>4</ns0:ref>. The matrices that suffered performance degradation due to subnormals in the PARDISO experiments exhibit similar behavior with MUMPS.</ns0:p><ns0:p>Similarly, half of the matrices did not reach the threshold of 1.5x, and the matrices beyond 1.5x are mainly the large ones. The parallel results in Figure <ns0:ref type='figure'>5</ns0:ref> are less attractive as only five matrices deliver a speedup beyond 1.5x. These matrices are from the large size group.</ns0:p><ns0:p>Unlike PARDISO and MUMPS, the multithreaded SuperLU ran out of memory for 15 problems out of the 36, predominantly the large size ones. Results are reported for only the 21 remaining matrices.</ns0:p><ns0:p>The serial results in Figure <ns0:ref type='figure'>6</ns0:ref> show that only 33% of the 21 problems, successfully solved exceed 1.5x speedup, against 24% for the parallel results in Figure <ns0:ref type='figure'>7</ns0:ref>.</ns0:p><ns0:p>These results show that mixed precision iterative refinement may only be beneficial for large sparse matrices. However, a large matrix size and higher density are not enough to predict the speedup, as matrix dielFilterV2real is much larger and denser than cage13 but its speedup is lower than cage13's speedup in all the experiments. We note the contrast with dense linear systems, where a 2x speedup is often achieved even for matrices of size as small as 200 &#215; 200.</ns0:p></ns0:div> <ns0:div><ns0:head>ANALYSIS OF RESULTS FOR SPARSE LU FACTORIZATION</ns0:head><ns0:p>Apart from the unforeseen high occurrence of subnormal numbers in single precision sparse LU factorization, two other unexpected observations require further explanation. These are the poor speedup of the matrices from the medium size group, and the fact that many matrices show better speedup in single core experiments than with parallel execution. This section aims to address these questions.</ns0:p><ns0:p>Sparse direct solvers employ more elaborate algorithms than dense solvers. 
Given a sparse linear system to solve, the rows and the columns of the sparse matrix are first reordered to reduce the number of nonzero elements in the factors, or such that the matrix has dense clusters to take advantage of BLAS 3 kernels. This pre-processing step is called reordering, and it is critical for the overall performance and the memory consumption. After the ordering, the resulting matrix is analyzed to determine the nonzero structures of the factors and allocate the required memory accordingly. This step is called symbolic factorization. It is followed by the numerical factorization step that computes the LU factors, and finally the solve step.</ns0:p><ns0:p>The reordering and the analysis steps do not involve floating-point arithmetic. Therefore, they do not benefit from lowering the arithmetic precision. If the reordering and the analysis represent 50%</ns0:p><ns0:p>of the overall factorization time, for example, then using single precision instead of double will only reduce the overall time by a quarter in the best case. This explains the poor speedup on average size matrices compared with the large size group. This is illustrated in Figure <ns0:ref type='figure' target='#fig_4'>8</ns0:ref> where one can observe that the majority of average size matrices spend more than 25% of the overall time in the reordering and analysis steps. The matrices for which the reordering and analysis time is negligible are the ones that reach up to 2x speedup with single precision. In general, the matrix sparsity pattern and the effectiveness of the reordering algorithms will impact the speedup observed. For example, for some small or moderate size matrices with a complex sparsity pattern, some reordering algorithms may suffer a large amount of fill-in, causing the cost of the numerical factorization to dominate and leading to a significant benefit from using single precision. Further analysis of how the speedup depends on the matrix characteristics and the fill-in rate observed during the symbolic and reordering steps is outside the scope of this work.</ns0:p><ns0:p>The second issue, the decrease of speedup in parallel experiments compared with single core executions, is due to the lack of parallelism in the reordering and analysis steps. For example in this work, all the sparse solvers except PARDISO use sequential reordering and analysis algorithms on shared memory multicore architectures. PARDISO provides the parallel version of the nested dissection algorithm for reordering, but compared with the sequential version, it reduces the reordering time only by a factor of 2 while the numerical factorization time decreases significantly, by up to a factor of 8 using 10 cores.</ns0:p><ns0:p>Consequently, by increasing the number of cores, the proportion of time spent in reordering and analysis steps increases as illustrated in Figure <ns0:ref type='figure'>9</ns0:ref>. One can observe that in the parallel experiment, half of the matrices spent more than 50% of the overall factorization time in reordering and analysis, which explains the limited acceleration from lowering the precision.</ns0:p></ns0:div> <ns0:div><ns0:head>SINGLE PRECISION SPEEDUP OVER DOUBLE PRECISION FOR SPARSE ITERATIVE SOLVERS</ns0:head><ns0:p>The performance of an iterative solver depends not only on the algorithm implemented but also on the eigenvalue distribution and condition number of the matrix, the choice of preconditioner, and the accuracy targeted. 
It is therefore hard to make general statements about how mixed precision techniques will affect the performance of an iterative solver. In this section we focus instead on analyzing the impact of low precision in SpMV kernels and preconditioners, as they are the building blocks of iterative solvers. The results in Figure <ns0:ref type='figure'>10</ns0:ref> illustrate the speedup from using single precision incomplete LU factorization (ILU0) from the cuSPARSE library on an NVIDIA V100 GPU. The cuSPARSE library provides an optimized implementation of a set of sparse linear algebra routines for NVIDIA GPUs. For the sake of readability, the matrices are sorted in decreasing order of the solve step speedup.</ns0:p><ns0:p>The most critical part of the preconditioner application is the forward and backward solve, because it is executed at each iteration and can easily become the most time consuming part of iterative solvers. The dark green bars in Figure <ns0:ref type='figure'>10</ns0:ref> represent the speedup of the single precision ILU0 preconditioner application.</ns0:p><ns0:p>The results show that lowering the precision in the preconditioner application did not enhance the performance. The same is true for the incomplete factorization itself, so there is no benefit to using single precision in place of double precision. The results from SuperLU ILU in Figure <ns0:ref type='figure'>11</ns0:ref> show a better speedup for the solve step compared with the results from cuSPARSE ILU0. However, the speedup is still under the 1.5x threshold, except for one matrix (Transport). For the incomplete LU factorization step itself, the performance gain from using single precision is insignificant. As the factorization step is more time-consuming than the solve steps, the overall speedup of the preconditioner computation and application remains very small and does not seem to present enough potential to accelerate parallel iterative solvers. Note that, of the libraries evaluated in this work, only SuperLU and cuSPARSE provide an incomplete LU factorization implementation.</ns0:p><ns0:p>To evaluate how low precision can accelerate SpMV kernels, we have considered the compressed row storage (CSR) format, as it is widely used in applications. In the CSR format, a double precision sparse matrix with nnz nonzero elements requires approximately 12nnz bytes for the storage (each nonzero element requires 8 bytes for its value and 4 bytes for its column index). In single precision the matrix will occupy approximately 8nnz bytes of memory. As SpMV kernels are memory bandwidth-bound, the use of single precision will only provide a 1.5x (12nnz divided by 8nnz) speedup in theory. Note that, for simplicity, we have ignored the 4n bytes for row pointers, where n is the number of rows, and the extra memory for left- and right-hand side vectors. The results in Figure <ns0:ref type='figure'>12</ns0:ref> for the optimized cuSPARSE SpMV on the NVIDIA V100 GPU show that the speedup oscillates around 1.5x. Similarly, the benchmark of the MKL SpMV in Figure <ns0:ref type='figure'>13</ns0:ref> shows that the single precision kernel has approximately 1.5x speedup over the double precision kernel.</ns0:p>
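To make the 12nnz versus 8nnz argument concrete, the following minimal C sketch computes the CSR memory footprint of one of the test matrices in double and in single precision and the resulting bandwidth-bound speedup estimate. The struct layout and the byte counts follow the description above; the example is illustrative and not part of the solvers' code.

    #include <stdio.h>

    /* Illustrative CSR container: values, column indices, and row pointers. */
    typedef struct {
        long    n;        /* number of rows                    */
        long    nnz;      /* number of stored nonzeros         */
        double *val;      /* nnz values, 8 bytes each in FP64  */
        int    *col_ind;  /* nnz column indices, 4 bytes each  */
        int    *row_ptr;  /* n+1 row pointers, 4 bytes each    */
    } csr_matrix;

    /* Bytes occupied by the matrix data for a given value size. */
    static long csr_bytes(long n, long nnz, long value_size) {
        return nnz * (value_size + (long)sizeof(int)) + (n + 1) * (long)sizeof(int);
    }

    int main(void) {
        /* Sizes of the Transport test matrix used in the experiments. */
        long n = 1602111, nnz = 23487281;
        long fp64 = csr_bytes(n, nnz, (long)sizeof(double)); /* ~12*nnz bytes */
        long fp32 = csr_bytes(n, nnz, (long)sizeof(float));  /* ~8*nnz bytes  */
        printf("FP64 CSR: %.1f MB, FP32 CSR: %.1f MB\n", fp64 / 1e6, fp32 / 1e6);
        printf("bandwidth-bound speedup estimate: %.2fx\n", (double)fp64 / fp32);
        return 0;
    }

Because the 4(n+1) bytes of row pointers are needed in both precisions, the printed estimate comes out slightly below the idealised 1.5x, which is consistent with the speedups oscillating around 1.5x in Figures 12 and 13.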
<ns0:p>This study shows that computing or applying the ILU preconditioner in single precision usually offers at best a modest speedup over double precision. Taking advantage of efficient single precision SpMV kernels typically gives a 1.5x speedup. However, in both cases the results will have at best single precision accuracy, so some form of refinement to double precision will be necessary, which will reduce the speedups.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The benefits of using single precision arithmetic to accelerate compute-intensive operations when solving double precision dense linear systems are well documented in the HPC community. Much less is known about the speedup to expect when using single precision arithmetic in parallel algorithms for double precision sparse linear systems, and existing work focuses on single core experiments. In this work, we have assessed the benefit of using single precision arithmetic in solving double precision sparse linear systems on multicore architectures. We have evaluated two classes of algorithms: iterative refinement based on single precision LU factorization, and iterative methods using single precision for the matrix-vector product kernels or preconditioning.</ns0:p><ns0:p>Our first finding is that a limiting factor in the performance of single precision sparse LU factorization is the generation of subnormal numbers, which occurs for the majority of our test matrices. We have identified a mechanism whereby fill-in can cascade down a column, creating and then propagating subnormal numbers with it. We have demonstrated the severe performance drop that can result and have shown how flushing subnormals to zero can mitigate it.</ns0:p><ns0:p>Our second finding is that the anticipated speedup of 2x from using single precision arithmetic is obtained only for the very largest of our test problems, where the analysis and reordering time is negligible compared with the numerical factorization time.</ns0:p><ns0:p>Our last finding concerns iterative solvers. Our results show that the performance gain in computing or applying incomplete factorization preconditioners in single precision is typically much less than a factor of 1.5, but we have observed a speedup of around 1.5x by evaluating matrix-vector product kernels in single precision. In future work, we will explore new approaches to efficiently integrate single precision matrix-vector product kernels and single precision preconditioners in double precision iterative solvers without accuracy loss.</ns0:p><ns0:p>Finally, we note that half precision arithmetic is of growing interest because of the further benefits it brings through faster arithmetic and reduced data movement. For dense systems, GMRES-based iterative refinement (discussed in the introduction) successfully exploits a half precision LU factorization to deliver double precision accuracy in the solution. We are not aware of any half precision implementations of sparse LU factorization, but if and when they become available we hope to extend our investigation to them.</ns0:p></ns0:div>
[Algorithm 1 (figure). Mixed-precision iterative refinement. Given a sparse matrix A ∈ R^(n×n) and a vector b ∈ R^n, this algorithm solves Ax = b using a single precision sparse LU factorization of A, then refines x to double precision accuracy. 1: Carry out the reordering and analysis for A. 2: LU ← sparse_lu(A) (FP32). 3: Solve Ax = b using the LU factors. (Remaining steps of the figure not recovered.)]

[Figure note, beginning not recovered: "... -fast-math flag, we use the x86 assembly code; see the listing in Fig. 1. Calling SetFTZ() before the factorization routines guarantees flushing subnormals to zero without compromising the numerical robustness of the software. Once the SetFTZ() routine is called at the beginning of a program, it is effective during the whole execution, unless it is explicitly deactivated by calling another x86 assembly code not listed in this paper."]

[Figure 8. Time spent by double precision sequential PARDISO LU in each step on a single Intel Skylake core. The bars are sorted by decreasing time associated with the reordering and analysis step.]

[Table of test matrices, flattened by the extraction: for each of the 36 matrices it lists the size, the number of nonzeros nnz, the condition number κ1(A), and the largest and smallest nonzero magnitudes max|a_ij| and min{|a_ij| : a_ij ≠ 0}; for example, 2cubes_sphere (101,492 rows, 1,647,264 nonzeros, κ1(A) ≈ 2.93e+09) among the small and medium problems, and Transport (1,602,111 rows, 23,487,281 nonzeros) among the large ones.]

Table 2. Parameters for IEEE single and double precision floating-point arithmetic. x_min,s is the smallest nonzero subnormal number and x_min and x_max are the smallest and largest normalized floating-point numbers.
    FP32: x_min,s = 1.4e-45,  x_min = 1.2e-38,  x_max = 3.4e+38,  unit roundoff = 6.0e-8
    FP64: x_min,s = 4.9e-324, x_min = 2.2e-308, x_max = 1.8e+308, unit roundoff = 1.1e-16
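The following self-contained C sketch illustrates the structure of the mixed-precision iterative refinement outlined in Algorithm 1 above: factorize once in FP32, then compute residuals and update the solution in FP64. It is only an illustration; for brevity it uses a small dense LU with partial pivoting in place of a sparse solver such as PARDISO, MUMPS or SuperLU, and the test matrix, tolerance and iteration limit are arbitrary.

    #include <stdio.h>
    #include <math.h>

    /* FP32 LU factorization with partial pivoting; A is n-by-n, row major.
       On exit A holds the LU factors and piv the pivot rows. */
    static int lu_fp32(int n, float *A, int *piv) {
        for (int k = 0; k < n; ++k) {
            int p = k;
            for (int i = k + 1; i < n; ++i)
                if (fabsf(A[i*n + k]) > fabsf(A[p*n + k])) p = i;
            piv[k] = p;
            if (A[p*n + k] == 0.0f) return -1;            /* singular */
            if (p != k)
                for (int j = 0; j < n; ++j) { float t = A[k*n+j]; A[k*n+j] = A[p*n+j]; A[p*n+j] = t; }
            for (int i = k + 1; i < n; ++i) {
                A[i*n + k] /= A[k*n + k];
                for (int j = k + 1; j < n; ++j) A[i*n + j] -= A[i*n + k] * A[k*n + j];
            }
        }
        return 0;
    }

    /* Solve with the FP32 factors; right-hand side and solution kept in FP64. */
    static void lu_solve_fp32(int n, const float *LU, const int *piv,
                              const double *b, double *x) {
        for (int i = 0; i < n; ++i) x[i] = b[i];
        for (int k = 0; k < n; ++k) { double t = x[k]; x[k] = x[piv[k]]; x[piv[k]] = t; }
        for (int i = 1; i < n; ++i)                         /* forward solve, unit L */
            for (int j = 0; j < i; ++j) x[i] -= (double)LU[i*n + j] * x[j];
        for (int i = n - 1; i >= 0; --i) {                  /* backward solve with U */
            for (int j = i + 1; j < n; ++j) x[i] -= (double)LU[i*n + j] * x[j];
            x[i] /= (double)LU[i*n + i];
        }
    }

    int main(void) {
        enum { N = 3 };
        double A[N*N] = { 4, -1, 0,   -1, 4, -1,   0, -1, 4 };
        double b[N]   = { 1, 2, 3 };
        float  LU[N*N];
        int    piv[N];
        double x[N], r[N], d[N];

        for (int i = 0; i < N*N; ++i) LU[i] = (float)A[i];  /* cast A to FP32 */
        if (lu_fp32(N, LU, piv) != 0) return 1;             /* steps 1-2      */
        lu_solve_fp32(N, LU, piv, b, x);                    /* step 3         */

        for (int it = 0; it < 10; ++it) {                   /* refinement loop */
            double rmax = 0.0;
            for (int i = 0; i < N; ++i) {                   /* r = b - A*x in FP64 */
                double s = b[i];
                for (int j = 0; j < N; ++j) s -= A[i*N + j] * x[j];
                r[i] = s;
                if (fabs(s) > rmax) rmax = fabs(s);
            }
            if (rmax < 1e-14) break;                        /* double precision level  */
            lu_solve_fp32(N, LU, piv, r, d);                /* correction with FP32 LU */
            for (int i = 0; i < N; ++i) x[i] += d[i];       /* update in FP64          */
        }
        printf("x = %.15f %.15f %.15f\n", x[0], x[1], x[2]);
        return 0;
    }

As in the paper's setting, the only FP64 work per iteration is a residual computation (an SpMV in the sparse case) and a vector update, while the expensive factorization is done once in FP32.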
Figure 1. x86 assembly code for flushing subnormals to zero, while maintaining IEEE arithmetic compatibility:

    void SetFTZ(void) {
        asm("stmxcsr -0x4(%rsp)\n\t"       /* store MXCSR register on stack  */
            "orl $0x8040,-0x4(%rsp)\n\t"   /* set bits 15(FTZ) and 7(DAZ)    */
            "ldmxcsr -0x4(%rsp)");         /* load MXCSR register from stack */
    }

Footnote 2: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

[Figure 2 (bar chart). Single precision speedup over double precision for sparse LU factorization using PARDISO on a single Intel Skylake core; for each matrix, two bars report the LU factorization speedup with and without FTZ.]
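As a portability note not taken from the paper: on compilers that ship the SSE intrinsics headers (GCC, Clang, Intel, MSVC), the same two MXCSR control bits can usually be set without inline assembly. The sketch below assumes an x86 target where <xmmintrin.h> and <pmmintrin.h> provide the flush-to-zero and denormals-are-zero mode macros.

    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON          */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE, _MM_DENORMALS_ZERO_ON  */

    /* Enable flush-to-zero for results and denormals-are-zero for inputs,
       mirroring the effect of the Figure 1 assembly listing. */
    static void set_ftz_daz(void)
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }

Since MXCSR is per-thread state, multithreaded runs may need to call such a routine in each worker thread, depending on how the threading runtime initialises the floating-point environment.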
[Figure 3 (bar chart). Single precision speedup over double precision for sparse LU factorization using PARDISO on 10 Intel Skylake cores; bars with and without FTZ.]

[Figure 4 (bar chart). Single precision speedup over double precision for sparse LU factorization using MUMPS on a single Intel Skylake core; bars with and without FTZ.]

[Figure 5 (bar chart). Single precision speedup over double precision for sparse LU factorization using MUMPS on 10 Intel Skylake cores; bars with and without FTZ.]

[Figure 6 (bar chart). Single precision speedup over double precision for sparse LU factorization using SuperLU on a single Intel Skylake core; bars with and without FTZ. SuperLU ran out of memory for 15 problems.]

[Figure 7 (bar chart). Single precision speedup over double precision for sparse LU factorization using SuperLU on 10 Intel Skylake cores; bars with and without FTZ. SuperLU ran out of memory for 15 problems.]

[Figure 9 (bar chart). Time spent by double precision parallel PARDISO LU in each step on 10 Intel Skylake cores. The bars are sorted by decreasing time associated with the reordering and analysis step.]

[Figure 10 (bar chart). Speedup of single precision versus double precision for sparse incomplete LU factorization (ILU0) using cuSPARSE on an NVIDIA V100 GPU; separate bars for the factorization and the solve step.]

[Figure 11 (bar chart). Speedup of single precision versus double precision for sparse incomplete LU factorization (ILU) using SuperLU on Intel Skylake; separate bars for the factorization and the solve step. The SuperLU ILU implementation is serial but it has been compiled against a multithreaded MKL BLAS and run with 10 cores.]

[Figure 12 (bar chart). Speedup of single precision versus double precision for SpMV using cuSPARSE on an NVIDIA V100 GPU.]

[Figure 13 (bar chart). Speedup of single precision versus double precision for SpMV using MKL on 10 Intel Skylake cores.]

Footnote 3: https://docs.nvidia.com/cuda/cusparse
</ns0:body> "
"Reply to Referees for PeerJ Computer Science submission “Performance impact of precision reduction in sparse linear systems solvers“ Mawussi Zounon, Nicholas J. Higham, Craig Lucas, and Françoise Tisseur September 7, 2021 We thank the reviewers for their insightful and constructive comments that helped us improve the overall quality of the paper. In the following, we provide answers or comments to the issues raised by the reviewers. In the revised manuscript, new or significantly modified text is typeset in blue. Reply to Reviewer #1 ▷ One thing that I had faced in the past is the poor implementation of singleprecision kernels (in the MKL especially). It might not be true anymore, but as the authors rely on the MKL/cuSPARSE as a ”black-box”, they are considering that the kernels are as optimized in both cases, which might not be true. I would like the authors to ensure that the differences they are seeing are not coming from a difference in the optimization level. With this aim, I would suggest the following. The authors could convert a small dense matrix that fit in the L1 cache in the CSR format and apply operations (such as SpMV) on it (hundreds or thousands of times) such that it will provide a good view of the raw performance of the kernels when the memory transfers are negligible. The authors could then simply add a sentence in the manuscript to state if there is a difference or not (and update the corresponding sections if needed). This is just one way to do it, and of course, the authors could use any other strategy to test that the MKL or cuSPARSE kernels are optimized similarly. Thanks for pointing this out. In fact, the sparse direct solvers used in this study rely heavily on BLAS kernels for performance and these kernels are highly optimized for both double and single precision. As for the CSR format implementation of SpMV kernels, the speedups observed are approximately 1.5x which is consistent with our theoretical analysis. ▷ I would appreciate in ”EXPERIMENTAL SETUP” to have an idea of how the numbers were extracted, for example, are they average of X executions? Or the median? Similarly, I imagine that the first data that will be used by the kernels are not in the L1 cache. Thanks for the remark. For each experiment, we consider the average time over 10 executions, and we also clear the L1 and L2 caches between consecutive runs. We clear the L1 and L2 caches by running a large dense matrix vector product with randomly generated matrix and vector. A short paragraph has been added at the end of the experimental setup section to cover this. ▷ I do not understand this sentence ”because they are usually processed at the software level,” 1 The sentence has been modified to emphasis the additional clock cycles required to process subnormal numbers as the main cause of performance loss. ▷ I would suggest replacing ”CSR” with ”MXCSR” in the comments of Line 200 (to ensure no ambiguity with CSR storage). I also suggest adding a number and a legend to this code in order to describe what it does (without the need to go in the main text). Thanks, the manuscript has been modified accordingly. ▷ - ”rpecision.” → ”precision.” Thanks, it is corrected. Reply to Reviewer #2 ▷ It would be better if the authors could add some detailed analysis on the characteristics of matrices. For example, the matrix ss1 is small but shows low ratio of ”Reordering and Analysis” ratio in Figure 7. This may caused the better speedup in Figure 1. 
So, the readers will be interested in why this matrix has shown different behavior.

Thanks for the suggestion. It would certainly be of interest to investigate how the characteristics of the matrices studied impact the observed speedup, but it is not the primary goal of this manuscript. A separate work will be required to cover in depth the influence of different sparse matrix characteristics on the speedup. However, we have added a new paragraph in the middle of the section “Analysis of Results for Sparse LU Factorization” in order to provide some insight into why small or moderate size matrices can sometimes exhibit a good speedup.

▷ Also, discussions on the results for iterative solvers are limited. It would be better to add the reasons why it has shown less speedup on GPU than CPU. In addition to that, there should be more description about the reason why the speedup of ILU is not significant.

The manuscript is mainly focused on sparse direct solvers, but we also give insight into the speedup to expect if one has to implement a single precision iterative solver based on SpMV or ILU kernels currently available in the MKL and cuSPARSE libraries. It is unclear why the speedups for the GPU (Figure 10) are smaller than those for the CPU (Figure 11). We are comparing different ILU implementations on different processors, without having access to the source code for the GPU implementation. Our main purpose, though, is to see whether the speedups are significant on our realistic test set for these two setups, and we find that they are not, in either case: they are close to 1x and well short of 1.5x. Our key finding, then, is that using single precision does not give a significant performance benefit with an ILU preconditioner, which we think some researchers may find surprising. The reasons for the poor speedups are not clear. "
Here is a paper. Please give your review comments after reading it.
284
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Microservices is an emerging paradigm for developing distributed systems. With their widespread adoption, more and more work investigated the relation between microservices and security. Alas, the literature on this subject does not form a well-defined corpus: it is spread over many venues and composed of contributions mainly addressing specific scenarios or needs. In this work, we conduct a systematic review of the field, gathering 290 relevant publications-at the time of writing, the largest curated dataset on the topic. We analyse our dataset along two lines: a) quantitatively, through publication metadata, which allows us to chart publication outlets, communities, approaches, and tackled issues; b) qualitatively, through 20 research questions used to provide an aggregated overview of the literature and to spot gaps left open. We summarise our analyses in the conclusion in the form of a call for action to address the main open challenges.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Microservices is an emerging development paradigm, where software is built as a composition of multiple services (the 'microservices'). Each microservice implements the business logic of a component of the application and is independently executable and deployable. Microservices interact with each other via message-passing APIs <ns0:ref type='bibr' target='#b15'>(Dragoni et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Over the last 6 years, microservices have become a popular topic and one of the go-to approaches for many cloud computing projects. According to Web of Science, more than 1000 articles about microservices have been published since 2014. The year 2020 accounts for more than 400 of them, which points out that interest in the topic is still rising. Microservices are popular because they bring substantial advantages with respect to scalability in cloud environments and flexibility in the process of software development. By separating application components as independent services, software designers can specialise each component by using a dedicated technology and then integrate all such heterogeneous components via technology-agnostic APIs.</ns0:p><ns0:p>Alas, the advantages of microservices come at a cost: distributed systems are hard to manage, and increasing the number of services of an application gives malicious actors a larger attack surface <ns0:ref type='bibr' target='#b15'>(Dragoni et al., 2017)</ns0:ref>. Several security concerns that are particularly relevant for microservices have been identified by <ns0:ref type='bibr' target='#b9'>Chandramouli (2019)</ns0:ref>, and early research has already shown that the application of standard patterns for system reliability needs to take new parameters into consideration-like the locations at which the patterns are deployed <ns0:ref type='bibr' target='#b29'>(Montesi and Weber, 2018)</ns0:ref>.</ns0:p><ns0:p>The importance of security in microservices creates the need for understanding and analysing the state of the art for securing this kind of architectures. It is particularly important to understand which problems are especially relevant for microservice systems, and how existing techniques can contribute to addressing them. 
However, there is still a lack of systematic investigations of studies at the intersection of security and microservice architectures.</ns0:p><ns0:p>Here, we aim to fill that gap by presenting a systematic review of the state of the art of microservice security. We followed a structured approach, which led us to select and gather 290 peer-reviewed publications. At the time of this writing, this constitutes the largest curated dataset on the topic. We first perform a quantitative analysis on the metadata of the publications, for example, publication outlets and keywords. This provides insight into the communities and key research concepts that currently characterise the field. We then map each publication to a vector of 20 different markers, corresponding to 20 research questions on microservices security that we formulated based on established security techniques and the field of microservices as a whole.</ns0:p><ns0:p>Our research questions focused on threat models, security approaches, infrastructure, and development approach. We perform correlation analysis to show that our questions are well-posed (independence), and also to confirm that some topics correlate positively (e.g., Intrusion Detection and Intrusion Prevention, as well as Agile Development and DevOps). Findings from our analysis include: issues with technology transfer from academia to industry on microservices security; lack of guidelines for adopting security by design in microservices; lack of appropriate threat models; lack of guidelines for addressing the attack surface given by technology heterogeneity; and security issues when migrating systems to microservices. Our data, findings, and discussions form a useful basis for orienting future developments of the field.</ns0:p><ns0:p>In summary, the main contributions of this work are:</ns0:p><ns0:p>• the characterisation of Microservices Security as an early-stage, growing research field in need of systematisation and more mature contributions (Section 5.1.1, Section 5.2.1);</ns0:p><ns0:p>• the identification of the main research communities in the Microservice Security field and the clustering of authors (Section 5.1.2);</ns0:p><ns0:p>• a presentation of the trends of the main security attacks involving microservice architectures, both from the points of view of threat model (Section 5.2.2) and mitigation (Section 5.2.3);</ns0:p><ns0:p>• a report on the current infrastructural security solutions for microservices <ns0:ref type='bibr'>(Section 5.2.4)</ns0:ref> as well as the interaction between the main microservices development approaches (such as DevOps and Agile) and security (Section 5.2.5);</ns0:p><ns0:p>• a correlation analysis of the answers to our research questions in papers, which sheds light on relationships among the different aspects of microservice security (Section 5.2.7);</ns0:p><ns0:p>• a summary of the main open challenges that emerged from our study, which form a call for action for the community of researchers and practitioners working in the field of microservice security (Section 6).</ns0:p></ns0:div> <ns0:div><ns0:head>Structure of the article</ns0:head><ns0:p>We start by providing a summary of related work in Section 2.
In Section 3 and Section 4 we detail the method we followed to conduct the systematic literature review and the research questions, respectively. We present our results in Section 5 and we conclude in Section 6 with a discussion on the outstanding challenges.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>To the best of our knowledge, the published works that are closest to ours are those by <ns0:ref type='bibr'>Vale et al. (2019)</ns0:ref> and <ns0:ref type='bibr'>Almeida et al. (2017)</ns0:ref>. <ns0:ref type='bibr'>Vale et al. (2019)</ns0:ref> present a systematic mapping that identifies the security mechanisms used in microservice-based systems. Contrary to our work, which provides a general overview on the state of the art of microservices security, the authors narrow their focus to cataloguing the security technologies and mechanisms adopted by developers of microservice-based systems-e.g., authentication and authorisation-leaving out other subjects related to security, like threat models and development methods. Similarly to <ns0:ref type='bibr'>Vale et al., Almeida et al. (2017)</ns0:ref> concentrate on surveying the technologies and standards for security, privacy, and communication used in the area of microservice architectures in the cloud.</ns0:p><ns0:p>Extending our view to articles that, at the time of this writing, are not available as peer-reviewed publications, we mention the work by <ns0:ref type='bibr' target='#b17'>Hannousse and Yahiouche (2020)</ns0:ref> and <ns0:ref type='bibr' target='#b35'>Ponce et al. (2021)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>Hannousse and Yahiouche (2020)</ns0:ref> present a systematic categorisation of threats on microservice architectures and propose a selection of possible mitigations. <ns0:ref type='bibr' target='#b35'>Ponce et al. (2021)</ns0:ref> look at how 'security smells' affect microservice-based applications and how to mitigate the effects of such smells through refactoring. As for the proposals by <ns0:ref type='bibr'>Vale et al. (2019)</ns0:ref> and <ns0:ref type='bibr'>Almeida et al. (2017)</ns0:ref>, Hannousse and Yahiouche narrow their investigation down to the threats identified in the literature. Similarly, the work of <ns0:ref type='bibr' target='#b35'>Ponce et al. (2021)</ns0:ref> focuses on the programming of microservices.</ns0:p><ns0:p>In addition to the related work discussed above, there are quite a few neighbouring surveys with respect to our work that are interesting to discuss: while these studies are not dedicated to the topic of microservice security, they explicitly mention security as an important concern for microservices in different contexts-software engineering, Internet of Things, containerisation, etc. The purpose of reviewing neighbouring related work is twofold:</ns0:p><ns0:p>1. It shows the multifaceted nature of microservice security, giving concrete evidence of the need for an investigation which is both wider and deeper, as we do in this work.</ns0:p><ns0:p>2. It provides a general overview of the challenges and possible uncovered research topics related to security in microservices-which inspired some of the questions presented in Section 3. <ns0:ref type='bibr' target='#b15'>Dragoni et al. (2017)</ns0:ref> present an overview of microservices, including a discussion of the origins of the paradigm, its state of the art, and future challenges. They identify a number of trust and security challenges posed by the paradigm. We mention a few examples.
Service reuse, one of the key benefits pushed for in the microservice paradigm, requires adopting secure mechanisms for service authentication and authorisation. The increased granularity and heterogeneity of microservice architectures extends considerably the attack surface of these systems. The sophisticated DevOps infrastructure required to operate microservices effectively is a new attack vector. <ns0:ref type='bibr' target='#b16'>Garriga (2017)</ns0:ref> conducted a preliminary analysis toward a taxonomy of microservices architectures. While not addressing in particular security concerns, Garriga reports that the security subject is not extensively addressed, highlighting how monitoring and microservice communication trust chains should receive particular attention. <ns0:ref type='bibr' target='#b20'>Joseph and Chandrasekaran (2019)</ns0:ref> reviewed approaches proposed in the literature to deal with the various concerns of microservice-based systems. The authors mention the large attack area offered by microservices subject to insider/privilege-escalation attacks and network security issues. <ns0:ref type='bibr'>Casale et al. (2016)</ns0:ref> surveyed the topics of European research projects in the area of software engineering. Regarding microservices security, they highlight four main challenges: increasing the usage of software validation and verification methods; improving the trust and interoperability of services through (self/federated)-certification of outputs based on standards; adopting a securityby-design approach on the whole software lifecycle; and helping developers with addressing discontinuities in the chain of compositionality between services and execution environments-e.g., due to data leakages derived from fragile container-host interactions. <ns0:ref type='bibr'>Lichtenth&#228;ler et al. (2019)</ns0:ref> investigate and discuss the challenges of migrating monoliths to microservices. They observe that security should be part of the migration planning phase to begin with, and that developers need models and frameworks to help them elicit, track, and manage the (frequently implicit) assumptions and invariants induced by the migration of the legacy system. These observations are shared with <ns0:ref type='bibr'>Di Francesco et al. (2017)</ns0:ref>, who suggest that the microservice architectural style has a direct impact on the design of a system and that researchers are still investigating how to leverage its characteristics with respect to system quality and security. Di Francesco et al. note that there exists uncertainty about the realisation of microservices, indicating the need for comprehensive references to help programmers in the multifaceted aspects of microservice development.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Summary table and comparison with related works. For each row/work in the table, we report: its reference; its publication year; its type (systematic literature review (SLR), survey, etc.); the number of publications it encompasses; whether it analyses white (peer reviewed) literature; whether it analyses grey (blog posts, etc) literature; the sources it used to search its dataset.</ns0:p><ns0:p>addressing the concerns of context-aware security (in IoT systems), especially for authentication and authorisation. Also <ns0:ref type='bibr'>Yu et al. (2019)</ns0:ref> surveyed the literature on microservice-based fog applications to elicit the security risks threatening them. 
The main threats highlighted include: kernel-level leakage vulnerabilities linked to containerised deployment; man-in-the-middle/insider attacks on data-transmission interception; the need to verify when services become compromised or misbehave; and network-level vulnerabilities on data-routing alteration.</ns0:p><ns0:p>Table 1 summarises the differences between these works in numerical and boolean terms. As the table shows, our work extends previous surveys by analysing a considerably larger number of publications, building on white literature, and following the established methods of systematic literature reviews.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>REVIEW METHOD</ns0:head><ns0:p>In this section, we describe and motivate the steps we followed to perform our systematic review.</ns0:p><ns0:p>Following the guidelines by <ns0:ref type='bibr' target='#b38'>Snyder (2019)</ns0:ref>, and as depicted in Fig. <ns0:ref type='figure'>1</ns0:ref>, we started by searching and retrieving the literature for relevant publications from several data sources by using the same keyword query. We then performed a manual revision process of the automatically selected publications to exclude publications out of the scope of this study and to perform snowballing, i.e., recursively adding to the dataset relevant publications cited by the already selected publications.</ns0:p><ns0:p>The resulting dataset consists of 290 publications. We analysed these publications to collect statistical and transparent answers to our research questions, which are detailed in Section 4. 1</ns0:p><ns0:p>1 The list of the publications and their bibliography information is publicly available at https://doi.org/10.5281/zenodo.4774894.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Schema of the method followed to gather the dataset for this review.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Selection Query and Collection of Publications</ns0:head><ns0:p>Security in microservices includes complex and heterogeneous topics, ranging from development to infrastructural concerns. In our choice of a selection query to gather an initial dataset, it was important to pick a sufficiently general query. For this reason, we adopted the query 'Microservice AND Security' for our initial search, capturing all the publications containing both terms in any of their title, abstract, or body. We restricted our search to white (peer-reviewed) literature, which allows us to rely on peer review. Thanks to the more uniform organisation of white literature, we are also more confident in the level of consistency of our choice and application of the selection criteria. This is not to say that grey literature is not worth investigating. Blog posts, personal websites, technical reports, white papers, etc., are often the preferred venues for practitioners to share ideas. However, as also pointed out in <ns0:ref type='bibr' target='#b39'>Soldani (2019)</ns0:ref>, 'it is very difficult to uniquely measure the quality of grey literature when conducting a systematic, controllable, and replicable secondary study' and we are not aware of a standard method for the evaluation of grey literature.</ns0:p><ns0:p>Analysing the grey literature was beyond the quality goal of this article and we leave it as future work.</ns0:p><ns0:p>Following this strategy, we collected publications from 6 different publishers, focusing on peer-reviewed publications.
We did not, for example, use Google Scholar or arXiv, since they also list resources that are not peer-reviewed. We list the publishers, reporting the respective numbers of publications that matched our query:</ns0:p><ns0:p>&#8226; ACM (https://dl.acm.org/), 478 publications;</ns0:p><ns0:p>&#8226; IEEE explore (https://ieeexplore.ieee.org/), 181 publications;</ns0:p><ns0:p>&#8226; Springer (https://link.springer.com/), 345 publications;</ns0:p><ns0:p>&#8226; Scopus (https://www.scopus.com/home.uri), 134 publications;</ns0:p><ns0:p>&#8226; Science Direct (https://www.sciencedirect.com/), 358 publications;</ns0:p><ns0:p>&#8226; Wiley (https://onlinelibrary.wiley.com/), 208 publications.</ns0:p><ns0:p>This gave us an initial dataset of 1704 publications in total. We collected publications published up to the 31st of December 2020, using the academic subscriptions provided by the affiliations of the authors-the University of Bologna and the University of Southern Denmark. To guarantee the same level of trustworthiness and authenticity, we retrieved the publications only from the official entries avoiding external sources such as the authors' personal websites.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Publications Triage</ns0:head><ns0:p>The publications retrieved from the publishers were processed in three steps to check if they should be excluded according to distinct exclusion criteria. Graphically, in Fig. <ns0:ref type='figure'>1</ns0:ref>, these steps are labelled as 2nd, 3rd, and 4th Step(s).</ns0:p><ns0:p>In the 2nd Step, we looked at whether the keywords 'Microservice' and 'Security' were used. We excluded a publication if the keywords appeared only in the bibliography. Moreover, we excluded the publication if it was too short (less than two pages), publications not written in English, and duplicate publications already listed in another publisher source.</ns0:p><ns0:p>In the 3rd Step, we looked at the title, abstract, and conclusion of each publication. Publications that do not treat or discuss topics related to microservices and security were excluded. In this step, we also excluded publications in which the security topic was orthogonal or incidental.</ns0:p><ns0:p>In this way, we excluded publications where 'microservices and security' was one of the possible application scenarios, but not the main subject of the study. We also excluded cases in which the work tangentially mentioned the satisfaction of some security aspects, without detailing the design/development of the security technologies to accomplish them. For example, we excluded publications focusing on blockchain technologies where the authors incidentally mention authentication and integrity protection as inherent security properties of blockchain-based implementations.</ns0:p><ns0:p>In the 4th Step, we performed an analysis of the publications, answering to the research questions (RQ) detailed in Section 4. 
No publications were excluded at this step.</ns0:p><ns0:p>At this point, the following publications remained in the dataset (268 in total):</ns0:p><ns0:p>&#8226; ACM, 67 publications;</ns0:p><ns0:p>&#8226; IEEE explore, 59 publications;</ns0:p><ns0:p>&#8226; Springer, 46 publications;</ns0:p><ns0:p>&#8226; Scopus, 28 publications;</ns0:p><ns0:p>&#8226; Science Direct, 53 publications;</ns0:p><ns0:p>&#8226; Wiley, 15 publications.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Snowballing</ns0:head><ns0:p>As the last (5th) step for the systematic literature review, we performed a backward snowballing process <ns0:ref type='bibr' target='#b52'>(Wohlin, 2014)</ns0:ref> with the objective of identifying additional relevant references for our study from the works cited by the already selected publications.</ns0:p><ns0:p>All references collected in this way underwent the triage by following the Steps 2, 3, and 4. Each referenced publication accepted for inclusion by these steps was then added to the dataset of selected publications. Snowballing was recursively performed on these newly-added publications until reaching a fixed point; i.e., until no new publications was added to the dataset.</ns0:p><ns0:p>The outcome of repeatedly applying the snowballing process led to the following results: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; 40 references in the first round, from which we selected 9 publications;</ns0:p><ns0:p>&#8226; 22 references in the second round, from which we selected 8 publications;</ns0:p><ns0:p>&#8226; 5 references in the third round, from which we selected 5 publications;</ns0:p><ns0:p>&#8226; 4 references in the fourth round, where we selected 0 publications.</ns0:p><ns0:p>The 4 cycles of snowballing yielded 22 additional publications that were included in the dataset to reach the final size of 290 publications.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>RESEARCH QUESTIONS</ns0:head><ns0:p>In this section, we detail the research questions that guided our systematic review.</ns0:p><ns0:p>Usually, the research questions for systematic literature reviews are fairly broad and do not amount to more than six. In our case, we chose to adopt more questions (20) but dichotomous (i.e., with yes-or-no answers), to favour precision and objectiveness. To define the questions and seek guidance in categorising the relevant security issues for microservices, we took inspiration from the related work presented Section 2, as well as from the state of the art in standards and methods, namely the NIST Special Publication 800-204 'Security Strategies for Microservicebased Application Systems' <ns0:ref type='bibr' target='#b9'>(Chandramouli, 2019)</ns0:ref>.</ns0:p><ns0:p>Our questions are collected in four macro groups (Gs), each covering a different concern.</ns0:p><ns0:p>&#8226; G1: Threat Model. Questions on threat modelling and how threats are dealt with.</ns0:p><ns0:p>&#8226; G2: Security Approach. Questions on the security approach, e.g., whether it is preventive, adaptive, proactive, or reactive.</ns0:p><ns0:p>&#8226; G3: Infrastructure. Questions on the infrastructure that microservices run on.</ns0:p><ns0:p>&#8226; G4: Development. 
Questions on the development process.</ns0:p><ns0:p>The questions in each group are reported in the remainder of this section.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>First group: Threat Model</ns0:head><ns0:p>Mapping the usage of threat models is important to see gaps when a security violation must be handled, or if known models are outdated and need to be adjusted. The NIST report, for instance, hints at the importance of identifying the threats looming over a microservices architecture <ns0:ref type='bibr' target='#b9'>(Chandramouli, 2019)</ns0:ref>. The usage of a formal threat model has proven to be extremely useful in the identification of attack types and their strategic countermeasures <ns0:ref type='bibr' target='#b12'>(Death, 2017)</ns0:ref>.</ns0:p><ns0:p>Several threat models exist in the literature. The most famous one is STRIDE <ns0:ref type='bibr' target='#b22'>(Kohnfelder and Garg, 1999)</ns0:ref>, named after the Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of privilege security threats. Other threat models exist, however, such as PASTA <ns0:ref type='bibr' target='#b45'>(UcedaVelez and Morana, 2015)</ns0:ref> or OWASP (OWASP Foundation, 2020).</ns0:p><ns0:p>In our review and with this first group of questions, we aimed to understand whether a publication followed a known model, strategy, or guideline. Alternatively, we wanted to know if new security models were proposed.</ns0:p><ns0:p>This group consists of the following questions.</ns0:p><ns0:p>&#8226; Q1: Does the publication mention STRIDE, or at least consider all of its aspects?</ns0:p><ns0:p>&#8226; Q2: Even without explicitly mentioning STRIDE, does the publication involve at least one of its aspects (Spoofing, Tampering, ...)?</ns0:p><ns0:p>&#8226; Q3: If STRIDE aspects or equivalent are considered, does the publication propose/discuss a concrete implementation/solution (either developed by the same author or one taken from the literature)?</ns0:p><ns0:p>&#8226; Q4: Does the publication consider or follow another threat model rather than STRIDE without introducing a new one?</ns0:p><ns0:p>&#8226; Q5: Does the publication mention policies, workflows, or guidelines to handle violations?</ns0:p><ns0:p>In particular, with questions Q1 and Q3 we looked for the adoption of STRIDE, being the most popular threat model. In the remaining questions, we investigate if the publication defined some threat model-either from the literature or a new one introduced in that publication-or at least discussed equivalent principles or guidelines without mentioning STRIDE.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Second Group: Security Approach</ns0:head><ns0:p>Many related works cite the usage of preventive measures to secure microservices <ns0:ref type='bibr'>(M&#225;rquez and Astudillo, 2019;</ns0:ref><ns0:ref type='bibr'>Vale et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b16'>Garriga, 2017;</ns0:ref><ns0:ref type='bibr'>Almeida et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b1'>Ahmed et al., 2019;</ns0:ref><ns0:ref type='bibr'>Soldani et al., 2018)</ns0:ref>, while some indicate the need for further research in the other directions of proaction, reaction, and adaptation <ns0:ref type='bibr'>(Vale et al., 2019;</ns0:ref><ns0:ref type='bibr'>M&#225;rquez and Astudillo, 2019)</ns0:ref>.
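To make the terminology used in this group and the previous one concrete, the short sketch below pairs a few common microservice-level mitigations with the STRIDE threat they target and the kind of approach (preventive, reactive, proactive, or adaptive) they embody. The examples are purely illustrative and are not drawn from the surveyed publications.

```python
# Illustrative mapping (hypothetical examples, not taken from the dataset):
# each mitigation is tagged with the STRIDE threat it targets and the kind
# of security approach it embodies.
STRIDE = {"S": "Spoofing", "T": "Tampering", "R": "Repudiation",
          "I": "Information Disclosure", "D": "Denial of Service",
          "E": "Elevation of privilege"}

MITIGATIONS = [
    # (mitigation, STRIDE letter, approach)
    ("Mutual TLS between services", "S", "preventive"),
    ("Signed request payloads", "T", "preventive"),
    ("Centralised audit logging", "R", "reactive"),
    ("Per-service rate limiting", "D", "preventive"),
    ("Anomaly-based intrusion detection", "D", "reactive"),
    ("Least-privilege service accounts", "E", "preventive"),
]

for mitigation, letter, approach in MITIGATIONS:
    print(f"{mitigation:35s} -> {STRIDE[letter]:25s} ({approach})")
```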
With this second block of questions, we wanted to go deeper into the security aspects, considering the specific security approaches, solutions, and also the role that microservices play.</ns0:p><ns0:p>This group consists of the following questions.</ns0:p><ns0:p>&#8226; Q6: Does the publication mention Intrusion Detection System (IDS) functionalities?</ns0:p><ns0:p>&#8226; Q7: Does the publication mention Intrusion Prevention Systems (IPS) functionalities?</ns0:p><ns0:p>&#8226; Q8: Does the publication mention Threat Intelligence?</ns0:p><ns0:p>&#8226; Q9: Does the publication mention Exfiltration Leaks?</ns0:p><ns0:p>&#8226; Q10: Does the publication address Insider Threats?</ns0:p><ns0:p>&#8226; Q11: Are microservices part of the solution?</ns0:p><ns0:p>&#8226; Q12: Are privacy and GDPR considered?</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Third Group: Infrastructure</ns0:head><ns0:p>The NIST report by <ns0:ref type='bibr' target='#b9'>Chandramouli (2019)</ns0:ref> dedicates a large part of its content to infrastructural security solutions for microservices. Similarly, the majority of the mentioned related work in Section 2 presents or at least cites infrastructural solutions for security, acknowledging that the infrastructure of microservice systems is typically complex, encompassing concerns that span from service deployment and service-to-service coordination <ns0:ref type='bibr'>(discovery, composition, consistency)</ns0:ref> to the definition of security-specific mechanisms (authorisation, authentication).</ns0:p><ns0:p>In this group of questions, we aimed at finding information on the infrastructure configurations considered in the publication. This group consists of the following questions.</ns0:p><ns0:p>&#8226; Q13: Does the publication specify how the proposed architecture is controlled or managed (e.g., in a centralised, decentralised, or hybrid way)?</ns0:p><ns0:p>&#8226; Q14: Does the publication mention Infrastructure-as-a-Service?</ns0:p><ns0:p>&#8226; Q15: Does the publication mention service discovery?</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Fourth Group: Development</ns0:head><ns0:p>Microservices are often associated with software development practices like DevOps and Agile <ns0:ref type='bibr' target='#b5'>(Balalaie et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b46'>Vadapalli, 2018)</ns0:ref> which, in turn, are heavily influenced by the inclusion of security-oriented practices <ns0:ref type='bibr'>(Casale et al., 2016;</ns0:ref><ns0:ref type='bibr'>Lichtenth&#228;ler et al., 2019;</ns0:ref><ns0:ref type='bibr'>Cerny and Donahoo, 2016;</ns0:ref><ns0:ref type='bibr'>Soldani et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In this last set of questions, we aimed at checking the extent to which these practices are used also in the setting of security, for example by verifying whether specific development processes and security standards are considered.</ns0:p><ns0:p>This group consists of the following questions. </ns0:p></ns0:div> <ns0:div><ns0:head n='5'>REVIEW RESULTS</ns0:head><ns0:p>In this section, we present the outcome of the literature review. We start by presenting quantitative results obtained from the metadata of the publications in our dataset. 
This is useful to map the trends over time and the current shape of the field, in terms of the number of contributions, type (proceedings, articles), communities, and keywords (and their relations).</ns0:p><ns0:p>Then, we present results derived from the analysis of the types of contributions (theoretical, applicative, etc.) and of the relation between the selected dataset and our research questions (cf. Section 4). This part is aimed at providing a detailed insight on existing research patterns, gaps, and uncovered areas of the field. We close the subsection with a correlation analysis of the questions, providing a quantitative look over the relationships between them. For reference, we also report our dataset in tabular form, where each entry is associated with the positive answers given to our research questions.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>In the following subsections, we highlight in boxes (like this one) the main insights that emerge from our analysis. Each insight motivates an open challenge, which we write in bold as the heading of the insight. We will use these challenges in Section 6 to structure our discussion about useful future directions for research on microservice security.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Metadata analysis</ns0:head><ns0:p>We start our quantitative analysis of the collected dataset by presenting in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>(a) the time distribution of the selected publications. As expected, security in microservice systems gained a 3 https://www.iso.org/isoiec-27001-information-security.html <ns0:ref type='table' target='#tab_6'>2021:05:61477:1:0:NEW 17 Sep 2021)</ns0:ref> Manuscript to be reviewed lot of academic interest in the latest years. This is reflected by the sharp increase in the number of publications since 2014. In Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>(a), we report the number of collected publications per year.</ns0:p><ns0:p>As a reference to indicate the degree of growth of the field, we report in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>(b) the yearly ratio (in parts per million) between the collected publications and the overall number of publications in computer science 5 .</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.1'>Publication Outlets</ns0:head><ns0:p>From the plot in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>(c) we see that conferences and journal venues are the most common outlets, while books/collections are underrepresented. This last fact indicates the early stage of the field, where established references are still lacking. However, conference proceedings are almost matched by journal articles, marking a maturing trend of results that are solid enough to constitute material for more structured contributions, as those found in peer-reviewed journals.</ns0:p><ns0:p>We now concentrate on the specific conferences and journals where the publications in our dataset have been published. In Figs. <ns0:ref type='figure'>3 and 4</ns0:ref>, we report this result in two versions: i) in tabular form, on the left-hand side of Figs. <ns0:ref type='figure'>3 and 4</ns0:ref>, with the acronym, the full name, and the number of contributions in our dataset of the venues with the most contributions and ii) on the right-hand side of Figs. 
<ns0:ref type='figure'>3 and 4</ns0:ref>, showing the data on the left as a pie chart.</ns0:p><ns0:p>Regarding the distribution of publications over the different categories of venues, we note how the audience of journals and conferences vary. In fact, there is no predominance of securityoriented or even software engineering venues, which could have been the most likely targets.</ns0:p><ns0:p>Instead, the analysed publications appear at venues addressing a broad range of topics, from networking to cloud computing, and on open journals such as IEEE Access and ACM Queue.</ns0:p><ns0:p>Furthermore, there is no clear preferred venue that dominates the others, but contributors are rather scattered over many neighbouring venues.</ns0:p><ns0:p>We give a twofold interpretation of the phenomenon. On the one hand, this fact can indicate that microservice security is perceived as of cross-disciplinary interest, each contribution seeing it from the lens of its specific area (whether it be software engineering, networks, sensors, cloud computing, etc.). On the other hand, we notice the lack of specific venues dedicated to microservices, and least of all, dedicated to microservice security. </ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Fragmentation of outlets: there are no reference venues for the area of microservice security (neither journals nor conferences). This makes it difficult for researchers and practitioners to keep up with the state of the art, as well as to find dedicated conventions where they can discuss this topic with the rest of the community interested in the area. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Our analysis extracted 16 clusters from our dataset. We report in Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref> the result of our analysis, labelling each cluster from A to P. For each Cluster, we report the name of the author, the number of publications (# pub.) in our dataset and their affiliation.</ns0:p><ns0:p>The measure gives some interesting insights. First, clusters F, G, J, and L are totally localised in one country or the same University/Institute, they are relatively small (compared to the others in the Table <ns0:ref type='table'>)</ns0:ref>, and include some of the most prolific authors (J and L in particular). &#8226; 'closed', localised clusters (F, L, , P) that tend to be small but whose core authors tend to be the most prolific (L).</ns0:p><ns0:p>Given their larger reach, semi-open and open clusters have a better chance to gather an impactful community around the topic. Our call to the authors in the field (particularly the closed clusters that tend to be prolific but rather localised) is to establish international collaborations and coordinate to foster the advancement and growth of the field.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.3'>Concepts and Keywords</ns0:head><ns0:p>We conclude our quantitative analysis by providing a graphical representation of the main keywords present in the abstract of the contributions in our dataset. To conduct our analysis, we used VOSviewer by Van Eck and Waltman (2010), a software that offers text mining functionalities for constructing and visualising co-occurrence networks of important terms extracted from a given corpus. Specifically, we ignored basic words and copyright statements and performed a full count of the words present in the text. 
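For intuition, the following minimal sketch shows the kind of term counting and pairwise co-occurrence extraction that underlies such an analysis; it is not the actual VOSviewer pipeline, and the abstracts and stop-word list are hypothetical.

```python
# Minimal sketch of term counting and pairwise co-occurrence over abstracts
# (hypothetical data; the real analysis was performed with VOSviewer).
from collections import Counter
from itertools import combinations
import re

abstracts = [
    "Microservice security with containers and intrusion detection",
    "A container-based microservice architecture for cloud security",
]
stopwords = {"a", "and", "with", "for", "the", "based"}

def terms(text):
    """Return the set of non-stopword terms appearing in a text."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in stopwords}

occurrences = Counter()
cooccurrences = Counter()
for abstract in abstracts:
    ts = terms(abstract)
    occurrences.update(ts)
    cooccurrences.update(combinations(sorted(ts), 2))

MIN_OCCURRENCES = 2  # our review used a threshold of 15 over the full corpus
kept = {t for t, n in occurrences.items() if n >= MIN_OCCURRENCES}
edges = {pair: n for pair, n in cooccurrences.items()
         if pair[0] in kept and pair[1] in kept}
print(kept, edges)
```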
We considered only words occurring more than fifteen times, sizing them by their relevance in terms of occurrences. The resulting graph, however, is still too large and dispersive to convey useful information: for the sake of clarity, we present here a visualisation including only the top 60% most-occurring words.</ns0:p><ns0:p>We report the visualisation of the analysis in Fig. <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>VOSviewer automatically clustered the words in 4 areas using its modularity-based clustering algorithm, which is a variant of the cluster algorithm developed by <ns0:ref type='bibr' target='#b10'>Clauset et al. (2004)</ns0:ref> to detect communities (clusters) in a network that also considers modularity.</ns0:p><ns0:p>We can interpret the clusters as follows:</ns0:p><ns0:p>&#8226; The blue area marks the main terms of this study, grouping words like microservice and system. The result does not surprise, since those words describe the design of the systematic selection we performed.</ns0:p><ns0:p>&#8226; The green area marks technical terms as container or attack.</ns0:p></ns0:div> <ns0:div><ns0:head>14/50</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>. Word-Net of the abstracts in our dataset.</ns0:p><ns0:p>&#8226; The red area identifies application terms, e.g., the targets or reasons of the research, if it is an industrial or research-focused article. We find for instance the word Internet-of-Things, as it is mainly cited with industry and research applications rather than along with terms like container and cloud.</ns0:p><ns0:p>&#8226; The yellow area includes words that identify the subject of a study, whether it be some tool, data (of the system, of the users), users, and they privacy. The word tool here is peculiar, as it acts as a bridge between the other areas. Also, this finding is somehow expected, as the field of microservice security is marked by a fairly practical orientation towards automatisation of processes and control.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Publication Context Analysis</ns0:head><ns0:p>In this section, we discuss trends and considerations derived from reading the selected publications and the research question detailed in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.1'>Types of Publications</ns0:head><ns0:p>In Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> we report the distribution of the type of research contribution-whether theoretical, practical, mixed or a review.</ns0:p><ns0:p>More precisely, regarding the type of research contribution, we mapped every publication in our dataset to one of the following types:</ns0:p><ns0:p>&#8226; Theoretical for publications that present an approach for a specific problem without any implementation artefact. &#8226; Applicative for publications that describe an implemented application possibly with its validation.</ns0:p><ns0:p>&#8226; Theoretical and Applicative for publications that develop a theory and provide a practical tool, framework, program, or application.</ns0:p><ns0:p>&#8226; Review for both literature reviews and social studies (e.g., on developers).</ns0:p><ns0:p>Reviews constitute 15% of the works, marking the fragmented shape of the field, which is in rapid expansion and in need of studies to map its research landscape. 
Besides reviews, the other contributions in the field are distributed among a 52% share that introduces new theoretical results, a 20% share that contributes by pairing new theoretical proposals with implementations, and the remaining 11% describing pure applications. The fact that the main publications in the field are theoretical is surprising, given the prominently applied nature of microservices. Indeed, excluding reviews, we have that for every 5 publications slightly more than 3 (64% of them) are purely theoretical. We attribute this figure to two phenomena. The first marks the current exploratory trend of the field, which is still engaged in proposing new ideas and in evaluating and maturing them into models amenable to implementation. The second phenomenon relates to the impact that microservices have at the processes/organisational level, with works that are intrinsically theoretical because their contribution can be hardly crystallised into automated implementations, e.g., for proposals of attack models or techniques for handling security within organisations and development teams. Notwithstanding the possible explanations above, it is worth noting the (quantitative) distance between contributions from academia and applications available to practitioners and the industry, which is an indicator of untapped potential for joint synergies between the two communities.</ns0:p><ns0:p>After having characterised the type of publications in the field, we proceed by exploring the results from the answer of the research questions following the 4 macro-groups presented in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Technology transfer: the field of microservice security is still in the early phase of new idea proposals. There are just a few implementations of these ideas, which hinders industrial adoption. those publications to adopt a threat model vary, from publications that use the model to motivate their proposed solutions to reviews that use the model to structure their overview of the state of the art. Interestingly, in ca. 80% of those publications that mention the usage of at least one known threat model, the model is tailored to work on a specific application scenario. This is an indication of the lack of usage of a generic threat model for microservice security. We conjecture that this lack of usage of generic threat models is due to the fact that the majority of research done on microservice security comes from the software (engineering, languages) side of the field, rather than from the side of security, which advocates for a security-by-design approach.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.2'>Threat</ns0:head><ns0:p>A complementary explanation of that phenomenon is that there is no affirmed threat model for microservices, e.g., due to the difficulty of making the model specific enough for microservices yet avoiding the infamous problem of threat explosion, where the effort required to prioritise and consider all threats starts exceeding the benefits of proposing methods to manage them Wuyts <ns0:ref type='bibr'>(2021)</ns0:ref> where the authors resorted to defining smaller, customised threat models rather than adopting standard ones, due to the problem of requiring conspicuous adaptation efforts to tailor them to such complex and multifaceted architectures.</ns0:p><ns0:p>Regarding the possible attacks addressed in the publications, Fig. 
<ns0:ref type='figure'>7</ns0:ref> categorises the publications based on the STRIDE threats, following up on question Q2, which asks whether the publication involves at least one of the threats of the STRIDE classification. The most commonly tackled attacks are of the 'spoofing' and 'denial of service' kinds. This is an effect of the push for fine granularity and independence of services advocated by microservices, where applications result from several small (in size), independent software components that communicate with each other. Such decentralised communication/coordination is one of the most important attack vectors for microservice applications, in particular, the possibility to disguise a communication from an unknown source as being from a known, trusted source, which matches the spoofing attack category. Such attacks, along with tampering and repudiation ones (which together represent more than half of the attack types found in our collection), entail the need for solutions to address attacks centred around exploits of data provenance.</ns0:p><ns0:p>A similar consideration can be made for denial-of-service attacks, where the flexible scalability of microservices allows malicious intruders to, e.g., scale up peripheral microservices and hit more central and well-protected components with (distributed) overpowering attacks.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Adoption of security-by-design: security in microservices frequently comes as an afterthought, whereas it should be one of the main concerns for their engineering. Data provenance: the quantity of spoofing, tampering, and repudiation attacks highlights the need to address the general problem of data provenance in microservices. Dedicated attack trees and threat models: while there are attacks that specifically pertain to microservices, such as those that leverage the scalability of microservice architectures to cause denial of service, there are no dedicated threat models to help developers become aware of those particular threats.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.3'>Security Approach (Mitigation)</ns0:head><ns0:p>In terms of mitigation solutions to security issues proposed by the publications (questions Q6-Q10), the most common approach (45 publications) is to address specific problems, such as authentication or exfiltration, rather than suggesting a general approach. Publications dealing with architectural aspects rarely address the overall picture (only 25 publications, roughly 8%, focus on IDS, IPS, exfiltration leaks, and threat intelligence). Rather, they focus on local threats like internal communications or authentication (question Q11). These observations suggest that there is a lack of security approaches that address applications across the full stack.</ns0:p><ns0:p>As far as privacy and the GDPR are concerned (question Q12), surprisingly, only 9 publications consider privacy protection as relevant or worthy of analysis. In particular, only one publication, <ns0:ref type='bibr' target='#b78'>Badii et al. (2019)</ns0:ref>, considers the GDPR as a guideline to follow in order to protect the privacy of users. Examples of this kind of guideline application are shown in <ns0:ref type='bibr' target='#b50'>Voigt and Von dem Bussche (2017)</ns0:ref>.
Considering that many of the solutions included in the dataset are Cloud-based solutions, it is surprising to note that only one publication claims to be GDPR compliant.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Global view/control: the distributed nature of microservices introduces the need for technologies that provide global yet decentralised observability and control, i.e., tools that aid in the enforcement of security policies over a whole architecture without single points of failure.</ns0:p><ns0:p>React &amp; recover techniques: while we found solution to prevent and detect attacks, there are only a few proposals about how microservice systems could react to and recover from them.</ns0:p><ns0:p>Comprehensive technological references: microservices use diverse sets of technology stacks, each characterised by peculiar exploits. To secure microservice architectures effectively, implementors need dedicated technological references to avoid known threats.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.4'>Infrastructure</ns0:head><ns0:p>We start the discussion by first focusing on the type of microservice infrastructure used by the various contributions. Specifically, we have 205 publications in our dataset that answer positively to question Q13. The breakdown of the answers is:</ns0:p><ns0:p>&#8226; 39% (80) describe a centralised approach;</ns0:p><ns0:p>&#8226; 24% (49) use a decentralised approach;</ns0:p><ns0:p>&#8226; 17% (35) resort to a hybrid approach;</ns0:p><ns0:p>&#8226; 20% (41) do not specify which approach they use.</ns0:p><ns0:p>The most widely adopted turns out to be the centralised one. We conjecture two explanations behind this observation. First, the centralised approach has the merit of simplifying the definition, deployment, monitoring, and evolution of policies holding over all the components in a given architecture-traded off with scalability issues and single-point-of-failure concerns. Second, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we note that, among the approaches that appeared early in the literature, many focused on converting monolithic applications into microservice applications. Clearly, having a centralised controller that manages the orchestration of microservices helps this process and is closer in spirit to the monolithic workflow. However, the advent of federated, multi-cloud solutions (that prevent the identification/deployment of a centralised authority over the whole peer network) as well as new distributed-consensus technologies (e.g., blockchains), has led to a decentralisation of control, making new decentralised or hybrid solutions emerge (in our dataset) starting from 2018.</ns0:p><ns0:p>As an example, in 2015 and 2016, we find publications such as <ns0:ref type='bibr' target='#b102'>Callegati et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b231'>Lysne et al. (2016)</ns0:ref> which presented centralised approaches to enable security in microservice platforms, while starting from 2018 hybrid and decentralised solutions appear like <ns0:ref type='bibr' target='#b269'>Pahl and Donini (2018)</ns0:ref> for certificate-based authentication or <ns0:ref type='bibr' target='#b73'>Andersen et al. (2018)</ns0:ref>, <ns0:ref type='bibr' target='#b72'>Andersen et al. 
(2017)</ns0:ref> where authors propose a decentralise high-fidelity city-scale emulation to verify the scalability of the authorisation tier.</ns0:p><ns0:p>We notice that the advent of new distributed-consensus technologies also affected the orchestration approach of microservice solutions. SECurity-as-a-Service (SECaaS) framework for elastic deployment and provisioning of security services. Another interesting work has been done in <ns0:ref type='bibr' target='#b152'>Falah et al. (2020)</ns0:ref> where authors brought the concept of a digital twin to show how a microservice infrastructure approach can speed up the process of deploy complex infrastructure components.</ns0:p><ns0:p>Infrastructure as a Service (IaaS), which is the focus of question Q14, is also a recurrent topic in our dataset, with 66 publications yielding a positive answer. IaaS include solutions that provide and manage low-level infrastructural components, like computing resources, data storage, network components, etc. We notice that IaaS is mentioned mainly as the modality used to deploy the solution but is not studied as a security subject/mechanism per se. Works such as <ns0:ref type='bibr'>Sultan et al. (2019)</ns0:ref> emerge as exceptions; their authors analysed the security benefits obtained using a container-based infrastructure exposed as a service.</ns0:p><ns0:p>Question Q15 investigates Service Discovery, i.e., the automatic detection of services and their functionalities available in a given architecture/network. 16 publications mention Service Discovery in the context of security. Mainly, they propose architectures that support reactive mechanisms for the detection of security issues. Of those, only 2 mention service registration procedures that include data for performing the preventive analysis of the composition, with the goal of statically finding and fixing possible vulnerabilities and misconfigurations: Callegati et al.</ns0:p><ns0:p>(2018) and <ns0:ref type='bibr' target='#b21'>Kamble and Sinha (2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Global view/control: while there is not a definitive approach to microservice security control (whether it be centralised, decentralised, or hybrid), there is a recognised need for applying security control policies in a consistent way across all microservices belonging in the same architecture.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.5'>Development</ns0:head><ns0:p>DevOps and Agile are recurring topics in our dataset. Based on the answer to question Q16, 76 publications used the DevOps approach, while, answering to Q17, 57 used Agile methods-of those 99 publications which represent the 40% of all publications in our dataset, 10 mention both approaches. There is a common consensus in these publications that Agile/DevOps is important in security because microservices seem to be the perfect match for this type of software development model <ns0:ref type='bibr' target='#b49'>(Vehent (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Hsu (2018)</ns0:ref>). In particular, microservices align with the tenet of both approaches: to assign dedicated, independent teams to the development of small and Migration is one of the main challenges faced in this context; migrating applications introduces important security concerns <ns0:ref type='bibr'>Lwakatare et al. 
(2019)</ns0:ref> that are difficult to track, due to the lack of appropriate devices (both organisational and linguistic) to elicit them from the source codebase and make sure they hold in the migrated one. Another major challenge is the coordination between development teams in the context of privacy-handling issues <ns0:ref type='bibr' target='#b167'>Gupta et al. (2019)</ns0:ref>. Also, security becomes a challenging aspect since the (small, independent) teams need to know many aspects of security <ns0:ref type='bibr' target='#b212'>Leite et al. (2019)</ns0:ref> and those DevOps criteria for testing, building, and deployment automation are often neither properly followed in industrial environments <ns0:ref type='bibr' target='#b91'>Bogner et al. (2019)</ns0:ref>, nor for automated scans <ns0:ref type='bibr' target='#b117'>Chondamrongkul et al. (2020)</ns0:ref>.</ns0:p><ns0:p>When considering domain-and model-driven approaches (questions Q18 and Q19), 16</ns0:p><ns0:p>publications consider domain-driven approaches and 26 consider model-driven ones, such as <ns0:ref type='bibr' target='#b196'>Kapferer and Zimmermann (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b75'>Avritzer et al. (2020)</ns0:ref>. These topics are therefore not as widespread as DevOps. Moreover, all citations in these cases are just brief references of the development approach, and lack a discussion on how one of the two approaches can be used in a security context on microservices.</ns0:p><ns0:p>The last question in this category, Q20, concerns security standards, i.e., curated sets of technologies, policies, concepts, safeguards, guidelines, assessments, procedures, training programmes that should be adopted to reduce security risks and mitigate attacks. The answers we gathered for this question surprised us. Indeed, security standards are a staple element of industries and organisations that want to impose and guarantee a certain level of security on their Finally, Yarygina (2018) performs a deep analysis on securing microservices, citing and analysing several know standards for both microservice management and security purposes.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Migration to microservices: there are no established techniques to help developers migrate legacy systems to microservice architectures, and in particular to identify the possible security threats that come from such a migration. DevSecOps: agile and DevOps practices are widely used when developing microservices, yet only a few publications address how security is addressed and combined in these practices.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.6'>Additional considerations</ns0:head><ns0:p>By analysing our dataset, we were surprised to find many citations to blockchain technologies (as reported above) as well as the lack of mainstream technologies like service mesh and serverless.</ns0:p><ns0:p>Regarding blockchain technologies, we found 31 publications mentioning or explicitly using blockchains. The decentralisation and independence of microservices constitute a good pairing for the usage of blockchain technologies. <ns0:ref type='bibr'>(2019b)</ns0:ref>, where the trust-chain of the blockchain is combined with a decentralised microservice architecture to create strong smart contract systems or <ns0:ref type='bibr'>Lu et al. 
(2021)</ns0:ref> where authors proposed a model-driven engineering approach for blockchain applications with microservice.</ns0:p><ns0:p>New approaches for microservices design and usage such as service mesh <ns0:ref type='bibr'>Li et al. (2019)</ns0:ref>, i.e., a dedicated infrastructure layer for facilitating service-to-service communications between microservices is just mentioned by 3 works: <ns0:ref type='bibr' target='#b269'>Pahl and Donini (2018)</ns0:ref>, where the authors indicate a service mesh architecture for authenticating services-securely adding information to their executables and validating the correct execution of distributed entities with such certificate-based approach-and <ns0:ref type='bibr' target='#b314'>Suneja et al. (2019)</ns0:ref>, which mentions the service-mesh sidecar pattern used to control security. Another interesting work regarding service mesh is <ns0:ref type='bibr' target='#b168'>Hahn et al. (2020)</ns0:ref> where authors analysed under several scenarios issues and challenges in Service Meshes Similarly, serverless <ns0:ref type='bibr' target='#b18'>Hendrickson et al. (2016)</ns0:ref> is mentioned only in 4 publications. We did not expect to find (50%) more citations of serverless than those regarding service mesh. Serverless is a cloud computing execution model in which the cloud provider dynamically manages the allocation/scaling of machine resources depending on inbound requests. Indeed, while the service mesh is a technology born within the (micro)service-oriented context, serverless is a more neighbouring concept to that of stateless microservice deployment.</ns0:p><ns0:p>In this context, the most relevant publication is <ns0:ref type='bibr' target='#b105'>Casale et al. (2019)</ns0:ref>, which presents the results of a European research project to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. Its main result consists in designing applications as fine-grained and independent microservices that can efficiently and optimally exploit the serverless paradigm. The serverless term, despite starting to get momentum, is still loosely related to microservices.</ns0:p><ns0:p>Given their increasing importance and impact in the industry and their close relation with microservices, we argue that both service mesh and serverless will attract the general attention of the research community in the near future, as well as that of security research.</ns0:p></ns0:div> <ns0:div><ns0:head>Insights</ns0:head><ns0:p>Comprehensive technological references: the progressive adoption of new technologies in the world of microservices (such as blockchains, service meshes, and serverless) calls for dedicated investigations and reports on their impact on the security of these systems.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.7'>Correlation between Research Questions</ns0:head><ns0:p>The amount of data collected in our dataset is large enough to represent a statistically-relevant sample. In this section, we leverage this to study correlations between our research questions, by We report in Table <ns0:ref type='table' target='#tab_11'>3</ns0:ref> the correlation matrix-excluding research question Q1, since no publication answered it. 
While the obtained matrix is symmetric and we could report just one half, in Table <ns0:ref type='table' target='#tab_11'>3</ns0:ref> we report the full matrix for convenience, to provide a more immediate view of how each question correlates with all of the other ones.</ns0:p><ns0:p>We conditionally colour the cells of the matrix to make the strongest correlations stand out. Most correlations are weak. This result can be interpreted as an indication that the research questions used in this work are mostly orthogonal, and thus suited to cover the reviewed subject with almost no wasteful overlap.</ns0:p><ns0:p>No anti-correlation was found, i.e., no negative correlations over the 30% threshold in absolute value. In the following, we comment on all positive correlations above 30%.</ns0:p><ns0:p>Q2-Q4 (32.80%) These questions relate the use of the STRIDE threat model with at least one of its identified specific threats. This seems to be an obvious correlation, since we are looking for a specific STRIDE path or at least one of its threats.</ns0:p></ns0:div> <ns0:div><ns0:head>Q7-Q6 (77.49%)</ns0:head><ns0:p>These questions concern Intrusion Prevention Systems (Q7) and Intrusion Detection Systems (Q6), which the publications frequently discuss together.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>Threats to validity</ns0:head><ns0:p>Our study is subject to limitations that can be categorised into construct validity, external validity, internal validity, and reliability, following the guidelines of <ns0:ref type='bibr' target='#b37'>Runeson et al. (2012)</ns0:ref>. To mitigate potential misinterpretations and to make sure that the constructs discussed in the research questions are not interpreted differently by the researchers, we adopted various triangulation rounds using online meetings and we designed a set of binary research questions to foster objectivity in answering them.</ns0:p><ns0:p>Another potential risk regards whether we were exhaustive during data collection, i.e., whether we may have missed any significant publication in our review. This risk cannot be completely eliminated, but to minimise it we deliberately chose simple and broad keywords, giving more initial hits that were later filtered out. Moreover, we conducted a snowballing process to extend our initial dataset, looking for potentially relevant publications that our query did not select.</ns0:p><ns0:p>External validity regards the applicability of a set of results in a more general context and is not a concern for this study, since we focus on the intersection of the fields of microservices and security without any attempt to generalise the findings to a broader context. We do not claim that either our qualitative or our quantitative findings should also hold for other, larger fields.</ns0:p><ns0:p>Internal validity is of concern when causal relations are examined and there is a risk that the investigated factor is also affected by a third factor. This threat is not a concern for this study because we presented only correlations between different factors but did not examine causal relations.</ns0:p><ns0:p>Reliability concerns to what extent the data collection and analysis depend on the actual researchers. This risk has been partially mitigated by selecting as many objective criteria as possible for the filtering and by requiring at least a two-person consensus in case of more subjective decisions. In particular, the retrieval of the publications was performed by using search engines. The first filtering of the results (Step 2, cf.
Section 3) was conducted by running a script that uses objective criteria such as counting the number of keywords present and the length of the publication. These automatically computed results were double-checked by at least one author to prevent problems due to the parsing of PDFs and to make sure that the language of the publication was English. The second filtering (Step 3, cf. Section 3) performed by reading the title, abstract, and (if needed) the body of the publication, was performed in parallel by two authors. Decision conflicts were solved by discussion involving at least two authors until a consensus was reached. For the publication analysis (Step 4, cf. Section 3), due to the binary nature and formulation of the questions, the 20 research questions were answered by the author assigned to the publication. To detect possible observer bias and errors, we selected a random subset of 15 papers and had a different author answer to the research questions. The calculation of the kappa index of agreement as proposed in <ns0:ref type='bibr' target='#b11'>Cohen (1960)</ns0:ref> over the two result sets yielded a value of &#954; = 0.99998, giving us statistical confidence over the perceived precision of questions and objectiveness of answers.</ns0:p><ns0:p>The reliability of the study is strengthened by being open and explicit about the process of data collection and analysis. For transparency, reproducibility, and reuse, we report the data used in this study at https://doi.org/10.5281/zenodo.4774894, which includes both the final dataset with the answers to all the research questions and also the set of rejected publications along with the reason for exclusion.</ns0:p><ns0:p>We also report in the Appendix each entry of our dataset and its answers to our research questions.</ns0:p></ns0:div> <ns0:div><ns0:head>23/50</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61477:1:0:NEW 17 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science 6 DISCUSSION AND FUTURE DIRECTIONS</ns0:note><ns0:p>In this article, we presented a systematic review of the literature regarding microservice security.</ns0:p><ns0:p>To conduct our research, we followed a structured approach that allowed us to gather 290 peer-reviewed publications, which, at the time of writing, constitutes the largest curated dataset on the topic.</ns0:p><ns0:p>To study our dataset, we conducted first an investigation on the metadata of the publications, which gave us some insight to map what are the publication outlets, the communities, and the key research concepts that characterise the field. Then, we performed an analysis, associating each element in our dataset to a vector of 20 different markers-presented in the form of 20 research questions.</ns0:p><ns0:p>Since our markers belong in four micro-groups (of threat-model, security, infrastructure, and development approaches), we used that partition to provide an overview of the literature through the lenses of each cluster. As a byproduct of our analysis on the content of each publication, we found concepts and topics that we did not include in our questions but that recur in multiple publications, e.g., the usage of blockchain or service-mesh technologies. 
To provide a more comprehensive picture of the field, we described and contextualised also these additional elements.</ns0:p><ns0:p>Since our dataset forms a statistically relevant vector field, we also performed a correlation study over the components of the vectors and reported the strongest correlations (e.g., between intrusion-detection (IDS) and intrusion-prevention (IPS) systems in microservice deployments) along with possible explanations of the identified phenomena.</ns0:p><ns0:p>In the following, we draw a summary of the main open challenges that emerged from our study, which forms a call for action for the community of researchers and practitioners working in the field of microservice security and its neighbouring areas.</ns0:p><ns0:p>Data provenance: the distributed nature of microservices calls for the certification of their outputs, which other federated services receive as input and need to trust. However, there is a lack of best practices and/or standards for such a task.</ns0:p><ns0:p>Technology transfer: there exists a sensible amount of research on microservices security, but transferring those results-e.g., viable methods and tools for validation and verification-to the industry is difficult and applications are almost non-existent.</ns0:p><ns0:p>Security-by-design adoption: while many advocate for adopting security-by-design at all stages of a microservice lifecycle (from design to monitoring), there are no established references nor guidelines on how these principles can be reliably adopted in practice.</ns0:p><ns0:p>Dedicated attack trees and threat models: threats in microservice systems can come from multiple sources, from the interaction of the layers of a chosen technology stack to how microservices interact with each other-e.g., in an exclusive network, on a federated basis, on the Web, etc. Practitioners lack dedicated attack trees and threat models to help them consider and tackle the multifaceted attack surface of microservice architectures.</ns0:p><ns0:p>Comprehensive technological references: microservice development entails the use of (heterogeneous) technology stacks, whose combinations and interactions give way to exploits at different levels. These include data leakage due to host-container interactions, threats to encryption reliability due to interacting heterogeneous standards and data-format conversions, as well as surreptitious attacks through software libraries hijacking. Besides the lack of dedicated threat models, there is also a need for concrete references to secure specific technology stacks.</ns0:p><ns0:p>Migration to microservices: several works provide structures and methods to migrate legacy systems to microservices architectures. However, there are no established techniques to elicit the assumptions and invariants (e.g., on shared-memory communication, runtime environment, concurrent/interleaved database accesses, etc.) of the legacy system that the developers of the microservices must deal with-least of all considering how those factors impact the security aspects of the migrated architecture. An additional step in this direction would benefit from following principled security-by-design disciplines. Fragmentation of outlets: researchers (and practitioners) working on microservices security do not have reference venues (neither journals nor conferences). This has at least two negative consequences. 
First, it makes it more difficult to gather the relevant work that constitutes the current state-of-the-art of their field-a need to which this study provides a partial solution, in the form of a snapshot of the current field landscape. Second, reference venues work also as gathering and exchange points for researchers to discuss current problems and new ideas, form interest groups, and concretise new contributions and projects to advance the knowledge in the field. Here, our call for action is at the community level, advocating for the establishment of a few reference, high-quality venues able to focus, inform, and orient the agenda of the field.</ns0:p><ns0:p>Regarding the future steps of the line of work of this contribution, we notice that here we focused our investigation on peer-reviewed publications. However, in the general field of microservices (and their security, by extension) the grey literature-which includes nonpeer-reviewed reports, working papers, government documents (e.g., those by NIST), white papers-constitutes a relevant body of knowledge that deserves separate studies. As future work, we intend to pursue an activity similar to what we presented in this work, but purposed to investigate the grey literature</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>al. and Almeida 2/50 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61477:1:0:NEW 17 Sep 2021) Manuscript to be reviewed Computer Science et al., the difference between our work and Hannousse and Yahiouche (2020) lies on generality:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Q3: If STRIDE aspects or equivalent are considered, does the publication propose/discuss a concrete implementation/solution (either developed by the same author or one taken from the literature)? &#8226; Q4: Does the publication consider or follow another threat model rather than STRIDE without introducing a new one? 8/50 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61477:1:0:NEW 17 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>&#8226;:</ns0:head><ns0:label /><ns0:figDesc>Q16Does the publication mention DevOps, Continuous Integration, Continuous Deployment, or Continuous Delivery?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Time and category distribution of publications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>4 https://tools.ietf.org/html/rfc5280 10/50 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Four</ns0:head><ns0:label /><ns0:figDesc>other clusters follow a different trend: C,H,P and I. They are big-size clusters (respectively 6,10,8 and 6), they count one core author (respectively with 3,3,4 and 3 publications) but they are rather homogeneous, the first mainly including authors from Brazil, Finland and the fourth one is from Portugal. Clusters A, B, D, K, M, N and O are the most varied. Cluster A, is the largest (22 authors) and most heterogeneous one: it includes 6 core authors from 5 different countries (Brazil, Germany, Italy, Switzerland, and the UK) and 12 co-authors from 4 countries different from those of the core authors (Australia, France, Portugal and the US). Cluster B includes 6 core authors over 24 members, distributed over just 5 countries (Brazil, Germany, Italy, Greece and Switzerland). 
Cluster D includes 8 authors, of which 6 are core and come both from either China or the US. Cluster K is another big cluster of 16 authors with include 3 core authors from the US and Germany. Clusters M, N and O follow the same trend of cluster D. This means that these clusters are built around 2 core authors which represent the main affiliation provenance, respectively Holland, Germany and Switzerland, US and UK. Overall, the communities of core authors in the dataset is distributed among three types of clusters: &#8226; 'open' clusters (A, B, D, K) of co-authors linked by a few (if not one) core authors and diverse affiliations; &#8226; 'semi-open' clusters (C, G, M, N and O ) of localised collaborators with sporadic, external collaborations;</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Type of publications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2018). Threat explosion is a known problem of neighbouring areas to microservices, like cloud, edge, and fog computing Di Francesco et al. (2017); Ibrahim et al. (2019); Guija and Siddiqui (2018); Lou et al. (2020); Flora (2020); Truong and Klein (2020); Russinovich et al.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>For example, works such as Xu et al. (2019b) propose a decentralised, blockchain-based data-access control for microservices. Recent contributions also tackled the problem of authentication and authorisation in decentralised settings, e.g., B&#225;n&#225;ti et al. (2018) develops a workflow-oriented authorisation framework to enforce authorisation policies in a decentralised manner, Taha et al. (2019) presents a new algorithm that distribute tasks on clusters of vehicular ad-hoc networks, Zhiyi et al. (2018) proposes a secure decentralised energy management framework, and Tourani et al. (2019) describes a decentralised data-centric</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61477:1:0:NEW 17 Sep 2021) Manuscript to be reviewed Computer Science independent components within the architecture Continuous Integration (CI) process. However, the majority of the selected publications provide no in-depth security analysis of any of the two development approaches, but rather indicate the inclusion of generic security measures in the steps of the development methods. Only three works, namely Mansfield-Devine (2018), Anisetti et al. (2019) and<ns0:ref type='bibr' target='#b204'>Kumar and Goyal (2020)</ns0:ref>, propose concrete and specific variants of the DevOps approach that tackle security issues-in particular Mansfield-Devine (2018) explicitly cites the guidelines of DevSecOps<ns0:ref type='bibr' target='#b19'>Hsu (2018)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>members and collaborators (often also for certification purposes<ns0:ref type='bibr' target='#b41'>-Stewart et al. (2012)</ns0:ref>,<ns0:ref type='bibr' target='#b220'>Lie et al. (2020)</ns0:ref>). Despite their widespread use in practice, only 7 publications mention security standards.In particular,<ns0:ref type='bibr' target='#b307'>Souppaya et al. (2017)</ns0:ref> mentions the usage of X.509 to verify a secure method for key exchange between microservice. In<ns0:ref type='bibr' target='#b94'>Brenner et al. 
(2017)</ns0:ref> the authors show a solution for securing microservices through the SGX Intel Standard. The authors of<ns0:ref type='bibr' target='#b343'>Vassilakis et al. (2016)</ns0:ref> analyse the concept of Small-Cell-as-a-Service, i.e., a technological paradigm for the development of Virtualised Mobile Edge Computing Environments, using several mobile standards for 5G and SDN networks (e.g., MobileFlow<ns0:ref type='bibr' target='#b33'>Pentikousis et al. (2013)</ns0:ref> and VNFs<ns0:ref type='bibr' target='#b0'>Agarwal et al. (2019)</ns0:ref>).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Blockchain trend.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>mentions Model-Driven Development as an alternative approach and vice versa; Q18-Q17 (33,08%) The questions relate Domain-Driven Development and Agile methods, indicating a correlation, mainly because often Agile methods employ Domain-Driven Development.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Construct validity 'reflects to what extent the operational measures that are studied really represent what the researcher has in mind and what is investigated according to the research questions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>control: the distributed nature of microservices makes it difficult to check the correct implementation of architecture-wide security policies, especially when each microservice has a dedicated security configuration. The issue is further exacerbated by the DevOps practice of having different teams deal separately with all aspects of the microservices they develop, including the implementations of their security policies. This fact highlights the need for tools that provide global overviews and guarantees on the security policies, protocols, and invariants of microservice systems.React &amp; recover techniques: while the literature on preventive and detective measures against attacks abound, little has been done on how microservices should react to attacks and, as a consequence, recover their normal behaviour.DevSecOps: Agile and DevOps practices are widely used when developing microservices, yet there is no established reference on how these approaches should integrate security in all their aspects (from team culture, management and communication to develop technologies and techniques) and into the lifecycle of microservices.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>7/50 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:05:61477:1:0:NEW 17 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Cluster Authors Correspondence.</ns0:figDesc><ns0:table /><ns0:note>13/50PeerJ Comput. Sci. 
</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>Pairwise correlation values (in percent) between the answers to research questions Q2-Q20; the full matrix is reported as Table 3. The individual cell values were garbled in this extraction and are omitted here; the strongest correlations recoverable from it include Q6-Q7 (77.49%), Q16-Q17 (57.34%), Q18-Q19 (40.00%), Q17-Q18 (33.08%), and Q8-Q14 (31.15%), which are discussed in the text.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Correlation matrix among research questions, computed from the answers that the publications in our dataset give to each of them. Correlations can be used to understand which of the different aspects of microservice security are most commonly in a positive correlation (paired) in the dataset, and which ones are negatively correlated (mutually exclusive).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head /><ns0:label /><ns0:figDesc>Looking at the Table, we notice the predominance of light-coloured cells. The colouring is obtained by, first, attributing colour intensity according to the correlation absolute value (maximal intensity for 100%, degrading towards 0%) and, second, setting a transition threshold above 30% (absolute value) from green to orange, to help to spot relevant correlations.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head /><ns0:label /><ns0:figDesc>). The questions ask if the publication mentions IPS or IDS functionalities respectively. The strong correlation indicates how IPS and IDS are strictly related. Indeed, in practice, IDS may exist without IPS, but not the opposite, because prevention mechanisms are typically built as a reaction to a detected attack; Q8-Q14 (31,15%) The questions relate Threat Intelligence functionalities with Infrastructure as a Service deployment, which can define a campaign strategy for a Threat Intelligence analysis. Q17-Q16 (57,34%) The questions relate the Agile development practice with DevOps and Continuous Integration. As also emphasised in other studies like Lwakatare et al. (2019), this correlation can be easily explained by the fact that DevOps is sometimes considered an Agile method or its evolution. Processes adopting DevOps, therefore, adopt also Agile; Q19-Q18 (40,00%) The questions relate Domain-Driven Development and Model-Driven Development. We conjecture that this correlation is present because mentions of Domain-</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci.
</ns0:note> </ns0:body> "
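As an aside to the agreement and correlation analyses reported in the study above (the Cohen's kappa check on a 15-paper subset and the Table 3 correlation matrix over the binary answers to Q2-Q20), the following is a minimal, illustrative Python sketch of how such statistics can be computed. The function names and the toy data are hypothetical; the paper does not state which correlation coefficient was used, so a standard Pearson correlation (equivalent to the phi coefficient for binary data) is assumed here.

```python
from math import sqrt

def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' binary (0/1) answer lists."""
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n          # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                        # marginal "yes" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)                   # agreement expected by chance
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

def phi(x, y):
    """Pearson correlation of two equal-length binary vectors (phi coefficient)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sy = sqrt(sum((yi - my) ** 2 for yi in y) / n)
    return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

# Toy example: answers of two annotators to the same question on a small subset.
annotator_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_2 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(round(cohen_kappa(annotator_1, annotator_2), 3))

# Toy example: per-question answer vectors over the whole dataset (one entry per publication).
q16 = [1, 1, 0, 0, 1, 1, 0, 1, 0, 1]
q17 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(100 * phi(q16, q17), 2), '%')
```

For dichotomous questions, a high kappa indicates that the answers are reproducible across annotators, while the pairwise phi values populate a matrix analogous to Table 3.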
"September 17, 2021 Dear Editors & Reviewers, we would like to thank the reviewers for their remarks and constructive criticisms. We thoroughly addressed all the comments leading to a refactoring of the previous version. In the following, we address in more detail the comments of each reviewer. Best regards, Davide Berardi Saverio Giallorenzo Jacopo Mauro Andrea Melis Fabrizio Montesi Marco Prandini Reviewer 1 Comment 1.1 In the introduction, the authors can draw a table showing the difference between the current survey and existing surveys. We included a table in the Related Work section to summarise the distinctive information of related studies (e.g., year of publication, number of surveyed artefacts, sources, ...) and to provide a compact comparison. The placement of the table there, rather than in the Introduction as suggested by the reviewer, was preferred because that is the section where we introduce the listed studies; we hope this decision can be considered in line with the spirit of the reviewer’s suggestion. Comment 1.2 Some of the recent papers related to security such as the following can be discussed in the paper: ˆ ”Iwendi, C., Jalil, Z., Javed, A. R., Reddy, T., Kaluri, R., Srivastava, G., & Jo, O. (2020). Keysplitwatermark: Zero watermarking algorithm for software protection against cyberattacks. IEEE Access, 8, 72650-72660., ˆ Bhardwaj, A., Shah, S. B. H., Shankar, A., Alazab, M., Kumar, M., & Gadekallu, T. R. (2020). Penetration testing framework for smart contract Blockchain. Peer-to-Peer Networking and Applications, 1-16.” After a thorough examination of the mentioned papers, we believe they do not fit the selection criteria for our study (they barely mention our keywords). We deem it inadvisable to extend the scope of this paper beyond the area of security in microservices. For this reason, in agreement with the editor, we decided not to include them. Comment 1.3 List out the main contributions of the survey. We have added a list of the main contributions at the end of the Introduction. Comment 1.4 The authors can add a section, lessons learnt and future directions to pave the way for researchers interested to work on this topic. We have restructured our concluding section (now named “Discussion and Future Directions”) to aggregate all the findings and the discussion on future directions. For each main topic emerging as a result of our review, we highlighted what aspects are left uncovered by existing literature (see the “Insight” boxes in section 5.2). In section 6, every insight is recalled and briefly discussed to motivate the need for further research. Reviewer 2 Comment 2.1 However, I would like to see an explicit motivation on the exclusion of the grey literature. (please see next comment) Comment 2.2 I would like to authors to explicitly motivate why they did not include grey literature in their survey. Especially in a field like microservices which is driven by practitioners this may provide complementary perspective. Perhaps there would be blog posts on ”React & recover Techniques” and ”DevSecOps” which makes the call for action even more worthwhile. We agree it was a necessary clarification. We added the motivation in the second paragraph of section 3.1, explaining our choice of focusing only on published and peer-reviewed articles to avoid the pitfalls also highlighted in the widely cited work by Soldani [1]: “it is very difficult to uniquely measure the quality of grey literature when conducting a systematic, controllable and replicable secondary study”. 
Comment 2.3 They way to authors list papers in the bibliography is very annoying to look up references. In the end I resorted to a search on the digital PDF document while reading the article in paper form. 2 Citations start with the family name of the first author followed by the year (e.g. Casale et al.[2016]) However in the bibliography the first name comes first, family name comes second followed by all other author names; title, misc and the year at the very end of the entry. (i.e. Giuliano Casale, Cristina Chesta, ... Cloud Forward, pages 34–42, 2016). The alphabetical sorting of the references is thus hard to infer and scanning the list is counterintuitive. Note that some authors have multiple first names (e.f. Mohammad Bany Taha) which makes it even more difficult to discern the alphabetical sorting of the references. The authors also split the reference section in two subsection (References + Publications from the Dataset). So scanning from the end of the bibliography was awkward. (Looking for the entry ”Wuyts 538 et al. [2018]” I started from the back and did not find it on the first attempt. Only the PDF search revealed what was happening) We acknowledge that navigating the large set of references can be sometimes not straightforward. There are two aspects to consider. We still believe that the set of papers we presented as related work, to put our paper in context, are better left in a separate list with respect to the papers we included as the subject of our review. We are grateful to the PeerJ CS editors, who allowed us to substantially exceed the page limit and permitted us to put all the references to the reviewed papers directly in the bibliography, rather than in the supplemental materials. As far as the bibliography presentation is concerned, we follow the journal guidelines and we will be happy to comply with the editor requests in case style changes are needed. Comment 2.4 ”Methodology.” Don’t use the term ”methodology” for what is a method, it is inflation of words. The postfix -OLOGY stands for ”study of”. (i.e. biology = the study of the living organisms; psychology = is the study of the human mind; geology = is the study of the earth). Thus methodology is ”the study of the methods”. We thank the reviewer for the useful insight, we have revised the paper replacing “methodology” with “method” (also double-checking that it was the most appropriate word for the concept) as suggested. Comment 2.5 lines 101-102 ”We analysed these publications to collect statistical and objective answers”. There is no such thing as an ”objective” answer. The very fact that you classified papers already implies a subjective interpretation. I would suggest to replace ”objective” by this by ”transparant”. Agreed, fixed as suggested. Comment 2.6 Line 360-361 ”This is reflected by the sharp increase in the number of publications since 2014.” I am not convinced by this argument. The overall number of publications in computers science is steadily increasing, so the absolute number of papers does not say much. 3 We have computed the fractions of papers published on microservices, using the data from https://dblp.org/statistics/publicationsperyear.html to estimate the number of publications in computer science per year. The fraction of the publications on microservices, year by year from 2014 until 2020 is: 0,0008%, 0,0015%, 0,0055%, 0,0111%, 0,0168%, 0,0252%, 0,0314%. The publications on microservices are therefore steadily increasing in percentage over all the publications in computer science. 
We have added this information in Section 5.1, providing in Figure 2(a-b) an illustration of this quantitive backing to our statement. Reviewer 3 Comment 3.1 As for the search for studies, some concerns should be clarified by the authors. Firstly, why did they choose to restrict their focus to “white literature” only? It is known that industry is heavily investing on microservices, with quite many solutions (also for security) being proposed and posted in industry-driven outlets (whitepapers, blog posts, YouTube channels, etc.). I am not saying that the authors should include “grey literature” as well, but they should at least clarify why they decided to keep it out from their study. (The authors actually notice this in their conclusions, hence raising the point of why they did not consider grey literature from the beginning) Agreed. We included an explanation of why we leave the grey literature out of this review at the beginning of Section 3. For a few more details, please see our answer to Comment 2.2, since the second reviewer also highlighted this issue. Comment 3.2 In addition, I am not sure on the “repeatability” of one of the exclusion criteria, viz., “We also excluded cases in which the work tangentially mentioned the satisfaction of some security aspects, without detailing the design/development of the security technologies to accomplish them”. Whilst all other exclusion criteria are objective, this criteria seem to potentially be threaten by observer bias (as an observer may a manuscript to satisfy this conditions and hence get excluded, while another may consider the treating of security “not so tangent”, so to say). The authors should clarify how they limited/avoided possible observer biases when evaluating this criteria, e.g., in a “Threats to Validity” section. We thank the reviewer for noting that we left out the description of an important methodological aspect, thus allowing us to improve the paper. We have now written a ”Threats to Validity” subsection where we detail all the threats to our study, and we outline the steps we took to mitigate them. In particular, to limit the risk of non-repeatability, the retrieval of the papers was performed by using the search engines of the selected publishers. The first filtering of the results (Step 2, cf. Section 3) was conducted by running a script by using objective criteria such as counting the number of keywords present and the length of the paper. These automatically computed results were double-checked by at least one author to prevent problems due to the parsing of PDFs and to make sure that the language of the paper was indeed English. The second filtering (Step 3, wrt. Section 3) was performed in parallel by at least two authors, by reading the title, abstract and, if needed, the body of the paper. Decision conflicts were solved by discussion by at least two authors until a consensus was reached. 4 Moreover, to allow the reproduction of our study, we now have released the dataset containing the reason for exclusion of all the papers that were not considered in the final dataset. Comment 3.3 Finally, the authors should conclude this section by listing all selected studies (e.g., in a table). This would help keeping the authors review self-contained, hence helping readers in better understanding their results. Our original draft had indeed such a table, linked to the complete bibliography of the reviewed papers. 
We initially placed those items as supplemental material to stay within the recommended page limits of the journal. As already noted as a reply to comment 2.3, we are grateful to the PeerJ CS editors, who allowed us to substantially exceed the page limit and permitted us to put all the references and tables regarding the reviewed papers directly in the paper. Comment 3.4 As for the research questions, the authors state that they adopted 20 dichotomous questions with the goal of favoring precision and objectiveness. It is however not clear whether/how they ensured such precision and objectiveness. The marking of a publication as “yes” or “no” for a research question seems indeed to be subject to observer biases and errors. How did the authors avoid/limit this? This should be discussed in a “Threats to Validity” section. We have expanded the Threat to validity section clarifying that for the publication analysis, due to the binary nature and formulation of the questions, the 20 research questions were answered by the main author assigned to the publication. To detect observer bias and errors, we selected a random subset of 15 papers and had a different author answer to the research questions. The calculation of the Cohen’s kappa index over the two result sets yielded a value κ = 0.99998, giving us some confidence over the perceived precision of questions and the objectiveness of answers. Comment 3.5 The presentation of results can and must be improved, by also expanding their discussion. For instance, the authors should consider presenting the publication outlets (viz., conferences and journals) as plots, so that readers can visually observe them (rather than finding a flat list of names). Much better it would be to cluster venues according to some criteria and to show aggregated results. For instance, the authors could expand their discussion on the communities where microservices’ security is most discussed (with some graphical support). We have made integrations to the text in section 5.1 as suggested, including also diagrams for an immediate graphical representation of the data related to venues. Comment 3.6 5 Also, I am not sure on whether what the authors call “qualitative results” actually provide “qualitative” information. For instance, the distribution of types of publication is still a “quantitative” information, with which the authors partition the different contributions in the field (here the authors use the word “survey” to denote “reviews”). This type of information is typically present in systematic literature reviews, and it is usually classified as a “quantitative result”. The same holds for the other aspects discussed in Sects. 5.2.2-5. The authors state “how many” publications were marked as “yes” for the research questions pertaining to each aspect, hence providing “quantitative information” on the distribution for such aspects. To make the things more “qualitative” the authors should better enter into the details of ”what” is discussed in each publication and ”how”. Whilst the “what” can be easily shown with tables showing the authors’ classification of considered works (e.g., with rows associated with works, columns with research questions, and checkmarks placed in cells to denote that a research question is treated in a research work), the “how” requires the authors to expand their discussion by discussing how (clusters of) works discuss/tackle the research questions. 
To avoid any confusion, we have revised the structure of the paper calling the section as “Metadata analysis” and “Publication analysis” and propagated the changes to all the text. We have also changed the label of survey in reviews in the “types of publications” as suggested by the reviewer. Comment 3.7 As a minor comment, in Section 5.2.6 the authors state that they “were surprised to find many citations to blockchain technologies (as reported above) as well as the lack of more and more mainstream technologies like service mesh and serverless”. I am not sure on whether “service mesh” are more mainstream than “blockchains”. The authors should consider rewording this, or at least cite a reference showing the higher recognition/usage of “service mesh” if compared with “blockchains”. We agree that the comparison was not sufficiently backed by data. In this case, we decided to revise the phrase withdrawing the comparison. Comment 3.8 In their concluding remarks (Sect. 6), the authors draw some concluding remarks on the open challenges that emerged from their study. A reader however misses the links between the results presented in Sect. 5 and the open challenges/research directions listed in Sect. 6. The authors should try to make such links more explicit. For instance, they could introduce open challenges (e.g., in a “highlighting box”) in Sect. 5, immediately after the discussion highlighting the need/openness of such challenge. They could then retake/recap the open challenges (as they currently do) in Sect. 6. As anticipated as a reply to comment 1.4, we have restructured our concluding section (now named “Discussion and Future Directions”) to aggregate all the findings and the discussion on future directions. 6 For each main topic emerging as a result of our review, we highlighted what aspects are left uncovered by existing literature (see the “Insight” boxes in section 5.2). In section 6, every insight is recalled and briefly discussed to motivate the need for further research. Comment 3.9 Last, but not least, systematic literature reviews, and systematic studies in general, are known to be prone to possible threats to their validity (like those I tried to highlight in my former comments above). The authors should discuss how they mitigated/avoided possible threats to the validity of their study in a devoted section. As previously stated, we now provide a section “Threats to Validity” where we discuss how we mitigated/avoided possible threats to the validity of our study. Comment 3.10 I am not sure on whether the word “survey” used by the authors in the manuscript’s title and throughout the text is correct. Snyder’s guidelines, used by the authors to design their research, speak about “systematic literature reviews”. I would hence recommend the authors to revise their wording in “systematic literature review”, both in the title and all over the manuscript. Such wording is that most commonly associated with the type of studies like that presented by the authors in this manuscript, hence helping potential readers to better grab the type of content they would find in this manuscript. We meant review since the beginning, we sincerely apologise for a mistake slipping through right in the paper’s title. Fixed. Comment 3.11 The related work discussion should be expanded to include other relevant related reviews on “microservices & security”, e.g., that on “microservices security smells” by Ponce et al. (https://arxiv.org/abs/2104.13303) . Thanks for the suggestion. 
This paper was not yet published when we wrote the first version, we are taking advantage of the revision process to include it. Comment 3.12 The authors should cite all papers they considered in their literature review, to give credit to the authors who published such papers. The authors present selected studies and their classification in a supplemental appendix. As noticed in my above comments, such information is not supplemental, but crucial for readers to understand which papers were considered and how they were classified to answer to the authors’ research questions. The information in the appendix should hence be included in the main text of the manuscript. 7 As previously stated, thanks to the additional page concession of the editor we are now able to list all the selected papers and move the table into the main part. Please note that the table, along with additional information that could be useful for reproducibility purposes is available via Zenodo. https://doi.org/10.5281/zenodo.4774894 Comment 3.13 - “Alas” -¿ “Unfortunately” or “At the same time”? - p5/28: “within the” -¿ “up to the” Fixed. References [1] Jacopo Soldani. Grey literature: A safe bridge between academy and industry? ACM SIGSOFT Softw. Eng. Notes, 44(3):11–12, 2019. 8 "
Here is a paper. Please give your review comments after reading it.
285
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Publication and archival of scientific results is still commonly considered the responsability of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies.</ns0:p><ns0:p>Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDFbased format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Modern science increasingly depends on datasets, which are however left out in the classical way of publishing, i.e. through narrative (printed or online) articles in journals or conference proceedings. This means that the publications describing scientific findings become disconnected from the data they are based on, which can seriously impair the verifiability and reproducibility of their results. Addressing this issue raises a number of practical problems: How should one publish scientific datasets and how can one refer to them in the respective scientific publications? How can we be sure that the data will remain available in the future and how can we be sure that data we find on the web have not been corrupted or tampered with? Moreover, how can we refer to specific entries or subsets from large datasets, for instance, to support a specific argument or hypothesis?</ns0:p><ns0:p>To address some of these problems, a number of scientific data repositories have appeared, such as Figshare and Dryad. 1 Furthermore, Digital Object Identifiers (DOI) have been advocated to be used not only for articles but also for scientific data <ns0:ref type='bibr' target='#b43'>(Paskin, 2005)</ns0:ref>. While these approaches certainly improve the situation of scientific data, in particular when combined with Semantic Web techniques, they have nevertheless a number of drawbacks: They have centralized architectures, they give us no possibility to check whether the data have been (deliberately or accidentally) modified, and they do not support access or referencing on a more granular level than entire datasets (such as individual data entries). We argue that the centralized nature of existing data repositories is inconsistent with the decentralized manner in which science is typically performed, and that it has serious consequences with respect to reliability and trust.</ns0:p><ns0:p>The organizations running these platforms might at some point go bankrupt, be acquired by investors who do not feel committed to the principles of science, or for other reasons become unable to keep their websites up and running. 
Even though the open licenses enforced by these data repositories will probably ensure that the datasets remain available at different places, there exist no standardized (i.e. automatable) procedures to find these alternative locations and to decide whether they are trustworthy or not.</ns0:p><ns0:p>Even if we put aside these worst-case scenarios, websites have typically not a perfect uptime and might be down for a few minutes or even hours every once in a while. This is certainly acceptable for most use cases involving a human user accessing data from these websites, but it can quickly become a problem in the case of automated access embedded in a larger service. Furthermore, it is possible that somebody gains access to the repository's database and silently modifies part of the data, or that the data get corrupted during the transfer from the server to the client. We can therefore never perfectly trust any data we get, which significantly complicates the work of scientists and impedes the potential of fully automatic analyses. Lastly, existing forms of data publishing have for the most part only one level at which data is addressed and accessed: the level of entire datasets (sometimes split into a small number of tables). It is in these cases not possible to refer to individual data entries or subsets in a way that is standardized and retains the relevant metadata and provenance information. To illustrate this problem, let us assume that we conduct an analysis using, say, 1000 individual data entries from each of three very large datasets (containing, say, millions of data entries each). How can we now refer to exactly these 3000 entries to justify whatever conclusion we draw from them? The best thing we can currently do is to republish these 3000 data entries as a new dataset and to refer to the large datasets as their origin. Apart from the practical disadvantages of being forced to republish data just to refer to subsets of larger datasets, other scientists need to either (blindly) trust us or go through the tedious process of semi-automatically verifying that each of these entries indeed appears in one of the large datasets. Instead of republishing the data, we could also try to describe the used subsets, e.g. in the form of SPARQL queries in the case of RDF data, but this doesn't make it less tedious, keeping in mind that older versions of datasets are typically not provided by public APIs such as SPARQL endpoints.</ns0:p><ns0:p>Below, we present an approach to tackle these problems, which builds upon existing Semantic Web technologies, in particular RDF and nanopublications, adheres to accepted web principles, such as decentralization and REST APIs, and supports the FAIR guiding principles of making scientific data Findable, Accessible, Interoperable, and Reusable <ns0:ref type='bibr' target='#b52'>Wilkinson et al. (2016)</ns0:ref>. Specifically, our research question is: Can we create a decentralized, reliable, and trustworthy system for publishing, retrieving, and archiving Linked Data in the form of sets of nanopublications based on existing web standards and infrastructure? It is important to note here that the word trustworthy has a broad meaning and there are different kinds of trust involved when it comes to retrieving and using datasets from some third party.</ns0:p><ns0:p>When exploring existing datasets, a certain kind of trust is needed to decide whether an encountered dataset is appropriate for the given purpose. 
A different kind of trust is needed to decide whether an obtained file correctly represents a specific version of a specific dataset that has been chosen to be used.</ns0:p><ns0:p>Only the second kind of trust can be achieved with a technological solution alone, and we use the word trustworthy in this paper in this narrow technical sense covering the second kind of trust. This article is an extended and revised version of a previous conference paper <ns0:ref type='bibr'>(Kuhn et al., 2015)</ns0:ref>. These extensions include, most importantly, a new evaluation on the retrieval of nanopublication datasets over an unreliable connection, a description of the new feature of surface patterns, the specific protocol applied by existing servers, a server network that is now three times as large as before (15 instead of 5 server instances), a much more detailed walk-through example, and five new Figures <ns0:ref type='bibr'>(2, 4, 7, 8, and 9</ns0:ref>). per year. The reasons for this problem are quite clear: SPARQL endpoints provide a very powerful query interface that causes heavy load in terms of memory and computing power on the side of the server.</ns0:p><ns0:p>Clients can request answers to very specific and complex queries they can freely define, all without paying a cent for the service. This contrasts with almost all other HTTP interfaces, in which the server imposes (in comparison to SPARQL) a highly limited interface, where the computational costs per request are minimal.</ns0:p><ns0:p>To solve these and other problems, more light-weight interfaces were suggested, such as the read-write Linked Data Platform interface <ns0:ref type='bibr' target='#b50'>(Speicher et al., 2015)</ns0:ref>, the Triple Pattern Fragments interface <ns0:ref type='bibr' target='#b51'>(Verborgh et al., 2014)</ns0:ref>, as well as infrastructures to implement them, such as CumulusRDF <ns0:ref type='bibr' target='#b31'>(Ladwig and Harth, 2011)</ns0:ref>. These interfaces deliberately allow less expressive requests, such that the maximal cost of each individual request can be bounded more strongly. More complex queries then need to be evaluated by clients, which decompose them in simpler subqueries that the interface supports <ns0:ref type='bibr' target='#b51'>(Verborgh et al., 2014)</ns0:ref>.</ns0:p><ns0:p>While this constitutes a scalability improvement (at the cost of, for instance, slower queries), it does not necessarily lead to perfect uptimes, as servers can be down for other reasons than excessive workload.</ns0:p><ns0:p>We propose here to go one step further by relying on a decentralized network and by supporting only identifier-based lookup of nanopublications. Such limited interfaces normally have the drawback that traversal-based querying does not allow for the efficient and complete evaluation of certain types of queries <ns0:ref type='bibr' target='#b19'>(Hartig, 2013)</ns0:ref>, but this is not a problem with the multi-layer architecture we propose below, because querying is only performed at a higher level where these limitations do not apply.</ns0:p><ns0:p>A well-known solution to the problem of individual servers being unreliable is the application of a decentralized architecture where the data is replicated on multiple servers. 
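Before reviewing specific proposals, a minimal client-side sketch illustrates why such replication helps in practice: a lookup can simply fall back to the next replica when one server is unreachable. This is an illustration only, not part of any of the cited systems; the mirror URLs and the URL path used below are purely hypothetical.

```python
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical mirrors that all serve the same immutable artifacts under /artifact/<id>.
MIRRORS = [
    'https://server1.example.org',
    'https://server2.example.org',
    'https://server3.example.org',
]

def fetch_replicated(artifact_id, mirrors=MIRRORS, timeout=5):
    """Try each replica in turn; return the content from the first one that responds."""
    last_error = None
    for base in mirrors:
        try:
            with urlopen(f'{base}/artifact/{artifact_id}', timeout=timeout) as response:
                return response.read()                  # raw bytes of the artifact
        except (URLError, OSError) as error:            # unreachable or failing replica
            last_error = error
    raise RuntimeError(f'no replica could serve {artifact_id}') from last_error
```

Because, in the approach described below, the content is identified by a cryptographic hash embedded in its URI, the client can additionally verify that whichever replica answered returned exactly the expected content.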
A number of such approaches related to data publishing have been proposed, for example in the form of distributed file systems based on cryptographic methods for data that are public <ns0:ref type='bibr' target='#b14'>(Fu et al., 2002)</ns0:ref> or private <ns0:ref type='bibr' target='#b9'>(Clarke et al., 2001)</ns0:ref>. In contrast to the design principles of the Semantic Web, these approaches implement their own internet protocols and follow the hierarchical organization of file systems. Other approaches build upon the existing BitTorrent protocol and apply it to data publishing <ns0:ref type='bibr' target='#b32'>(Markman and Zavras, 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cohen and Lo, 2014)</ns0:ref>, and there is interesting work on repurposing the proof-of-work tasks of Bitcoin for data preservation <ns0:ref type='bibr' target='#b34'>(Miller et al., 2014)</ns0:ref>. There exist furthermore a number of approaches to applying peer-to-peer networks for RDF data <ns0:ref type='bibr' target='#b12'>(Filali et al., 2011)</ns0:ref>, but they do not allow for the kind of permanent and provenance-aware publishing that we propose below. Moreover, only for the centralized and closed-world setting of database systems, approaches exist that allow for robust and granular references to subsets of dynamic datasets <ns0:ref type='bibr' target='#b45'>(Proell and Rauber, 2014)</ns0:ref>.</ns0:p><ns0:p>The approach that we present below is based on previous work, in which we proposed trusty URIs to make nanopublications and their entire reference trees verifiable and immutable by the use of cryptographic hash values <ns0:ref type='bibr'>(Kuhn and</ns0:ref><ns0:ref type='bibr'>Dumontier, 2014, 2015)</ns0:ref>. This is an example of such a trusty URI: http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70</ns0:p><ns0:p>The last 45 characters of this URI (i.e. everything after '.') is what we call the artifact code. It contains a hash value that is calculated on the RDF content it represents, such as the RDF graphs of a nanopublication.</ns0:p><ns0:p>Because this hash is part of the URI, any link to such an artifact comes with the possibility to verify its content, including other trusty URI links it might contain. In this way, the range of verifiability extends to the entire reference tree. Generating these trusty URIs does not come for free, in particular because the normalization of the content involves the sorting of the contained RDF statements. For small files such as nanopublications, however, the overhead is minimal, consisting only of about 1 millisecond per created nanopublication when the Java library is used <ns0:ref type='bibr'>(Kuhn and</ns0:ref><ns0:ref type='bibr'>Dumontier, 2014, 2015)</ns0:ref>. Furthermore, we argued in previous work that the assertion of a nanopublication need not be fully formalized, but we can allow for informal or underspecified assertions <ns0:ref type='bibr' target='#b26'>(Kuhn et al., 2013)</ns0:ref>, to deal with the fact that the creation of accurate semantic representations can be too challenging or too time-consuming for many scenarios and types of users. This is particularly the case for domains that lack ontologies and standardized terminologies with sufficient coverage. These structured but informal statements are supposed to provide a middle ground for the situations where fully formal statements are not feasible. 
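To illustrate the verifiability idea behind the trusty URI example above, the following is a simplified sketch of a content-derived identifier: the (normalized) RDF content is hashed and the hash is embedded in the reference itself, so that anyone holding the URI can check the content they retrieve. The normalization and encoding below are deliberately simplistic placeholders and do not reproduce the actual trusty URI algorithm defined in the cited work, which specifies its own normalization, module codes, and encoding.

```python
import base64
import hashlib

def toy_content_id(nquad_lines):
    """Hash a set of N-Quads-like statement strings into a URL-safe identifier.

    Sorting the statements gives a canonical order, so the same set of
    statements always yields the same identifier (a crude stand-in for the
    normalization step of the real trusty URI scheme).
    """
    canonical = '\n'.join(sorted(line.strip() for line in nquad_lines)) + '\n'
    digest = hashlib.sha256(canonical.encode('utf-8')).digest()
    return base64.urlsafe_b64encode(digest).decode('ascii').rstrip('=')

# Hypothetical statements of a tiny nanopublication-like graph set.
statements = [
    '<http://example.org/drugX> <http://example.org/treats> <http://example.org/diseaseY> <http://example.org/assertion> .',
    '<http://example.org/assertion> <http://example.org/wasAttributedTo> <http://example.org/researcherZ> <http://example.org/provenance> .',
]
artifact_code = toy_content_id(statements)
print('http://example.org/np1.' + artifact_code)
# A recipient can recompute the hash from the retrieved statements and compare
# it against the identifier to detect modification or corruption.
```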
For these informal statements, we proposed a controlled natural language <ns0:ref type='bibr' target='#b21'>(Kuhn, 2014)</ns0:ref>, which we called AIDA (standing for the introduced restriction on English sentences to be atomic, independent, declarative, and absolute), and we had shown before that controlled natural language can also serve in the fully formalized case as a user-friendly syntax for representing scientific facts <ns0:ref type='bibr' target='#b30'>(Kuhn et al., 2006)</ns0:ref>. We also sketched how 'science bots' could autonomously produce and publish nanopublications, and how algorithms could thereby be tightly linked to their generated data <ns0:ref type='bibr' target='#b24'>(Kuhn, 2015b)</ns0:ref>, which requires the existence of a reliable and trustworthy publishing system, such as the one we present here.</ns0:p></ns0:div> <ns0:div><ns0:head>APPROACH</ns0:head><ns0:p>Our approach to scientific data publishing builds upon the general Linked Data approach of lifting data on the web to linked RDF representations <ns0:ref type='bibr' target='#b2'>(Berners-Lee, 2006)</ns0:ref>. We only deal here with structured data and assume that it is already present in an RDF representation. Following the Linked Data principles, resolvable identifiers then allow one to discover things by navigating through this data space <ns0:ref type='bibr' target='#b2'>(Berners-Lee, 2006)</ns0:ref>. We argue that approaches following this principle can only be reliable and efficient if we have some sort of guarantee that the resolution of any single identifier will succeed within a short time frame in one way or another, and that the processing of the received representation will only take up a small amount of time and resources. This requires that (1) RDF representations are made available on several distributed servers, so the chance that they all happen to be inaccessible at the same time is negligible, and that (2) these representations are reasonably small, so that downloading them is a matter of fractions of a second, and so that one has to process only a reasonable amount of data to decide which links to follow. We address the first requirement by proposing a distributed server network and the second one by building upon the concept of nanopublications. Below we explain the general architecture, the functioning and the interaction of the nanopublication servers, and the concept of nanopublication indexes.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture</ns0:head><ns0:p>There are currently at least three possible architectures for Semantic Web applications (and mixtures thereof), as shown in a simplified manner in Figure <ns0:ref type='figure'>1</ns0:ref>. The first option is the use of plain HTTP GET requests to dereference a URI. Applying the follow-your-nose principle, resolvable URIs provide the data based on which the application performs the tasks of finding relevant resources, running queries, analyzing and aggregating the results, and using them for the purpose of the application. This approach aligns very well with the principles and the architecture of the web, but the traversal-based querying it entails comes with limitations on efficiency and completeness <ns0:ref type='bibr' target='#b19'>(Hartig, 2013)</ns0:ref>. If SPARQL endpoints are used, as a second option, most of the workload is shifted from the application to the server via the expressive power of the SPARQL query language. As explained above, this puts servers at risk of being overloaded. 
With a third option such as Triple Pattern Fragments, servers provide only limited query features and clients perform the remainder of the query execution. This leads to reduced server costs, at the expense of longer query times.</ns0:p><ns0:p>We can observe that all these current solutions are based on two-layer architectures, and have moreover no inherent replication mechanisms. A single point of failure can cause applications to be unable to complete their tasks: A single URI that does not resolve or a single server that does not respond can break the entire process. We argue here that we need distributed and decentralized services to allow for robust and reliable applications that consume Linked Data. In principle, this can be achieved for any of these two-layer architectures by simply setting up several identical servers that mirror the same content, but there is no standardized and generally accepted way of how to communicate these mirror servers and how to decide on the client side whether a supposed mirror server is trustworthy. Even putting aside these difficulties, two-layer architectures have further conceptual limitations. The most low-level task of providing Linked Data is essential for all other tasks at higher levels, and therefore needs to be the most stable and robust one. We argue that this can be best achieved if we free this lowest layer from all tasks except the provision and archiving of data entries (nanopublications in our case) and decouple it from the tasks of providing services for finding, querying, or analyzing the data. This makes us advocate a multi-layer architecture, a possible realization of which is shown at the bottom of Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Below we present a concrete proposal of such a low-level data provision infrastructure in the form of a nanopublication server network. Based on such an infrastructure, one can then build different kinds of services operating on a subset of the nanopublications they find in the underlying network. 'Core services' could involve things like resolving backwards references (i.e. 'which nanopublications refer to the given one?') and the retrieval of the nanopublications published by a given person or containing a particular URI. Based on such core services for finding nanopublications, one could then provide 'advanced services' that allow us to run queries on subsets of the data and ask for aggregated output. These higher layers can of course make use of existing techniques such as SPARQL endpoints and Triple Pattern Fragments or even classical relational databases, and they can cache large portions of the data from the layers below (as nanopublications are immutable, they are easy to cache). For example, an advanced service could allow users to query the latest versions of several drug-related datasets, by keeping a local triple store and providing users with a SPARQL interface. Such a service would regularly check for new data in the server network on the given topic, and replace outdated nanopublications in its triple store with new ones.</ns0:p><ns0:p>A query request to this service, however, would not involve an immediate query to the underlying server network, in the same way that a query to the Google search engine does not trigger a new crawl of the web.</ns0:p><ns0:p>While the lowest layer would necessarily be accessible to everybody, some of the services on the higher level could be private or limited to a small (possibly paying) user group. 
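As a sketch of how such an 'advanced service' could exploit the immutability of nanopublications for caching, the snippet below maintains a local store keyed by artifact code and refreshes it against a list of currently relevant identifiers (for example, obtained from the latest index of a followed dataset). The helper names and the fetch callable are illustrative assumptions, not part of the actual server protocol.

```python
def refresh_local_store(local_store, wanted_artifact_codes, fetch):
    """Synchronize a local cache of immutable nanopublications.

    local_store           -- dict mapping artifact code -> serialized nanopublication
    wanted_artifact_codes -- artifact codes of the currently relevant nanopublications,
                             e.g. taken from the newest index of a followed dataset
    fetch                 -- callable that retrieves one nanopublication by artifact code
                             (for instance from any server of the network)
    """
    wanted = set(wanted_artifact_codes)
    # Drop entries that are no longer referenced (e.g. superseded by newer versions).
    for code in set(local_store) - wanted:
        del local_store[code]
    # Fetch only what is missing: cached entries can never become stale,
    # because a nanopublication with a given identifier never changes.
    for code in wanted - set(local_store):
        local_store[code] = fetch(code)
    return local_store
```

A query layer (e.g. a local triple store exposing a SPARQL interface) would then be rebuilt or updated from this cache, while the retrieval itself can be delegated to a fallback routine like the one sketched earlier.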
We have in particular scientific data in mind, but we think that an architecture of this kind could also be used for Semantic Web content in general.</ns0:p></ns0:div> <ns0:div><ns0:head>Nanopublication Servers</ns0:head><ns0:p>As a concrete proposal of a low-level data provision layer, as explained above, we present here a decentralized nanopublication server network with a REST API to provide and distribute nanopublications.</ns0:p><ns0:p>To ensure the immutability of these nanopublications and to guarantee the reliability of the system, these nanopublications are required to come with trusty URI identifiers, i.e. they have to be transformed on the client side into such trusty nanopublications before they can be published to the network. The nanopublication servers of such a network connect to each other to retrieve and (partly) replicate their nanopublications, and they allow users to upload new nanopublications, which are then automatically distributed through the network. Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> shows a schematic depiction of this server network.</ns0:p><ns0:p>Basing the content of this network on nanopublications with trusty URIs has a number of positive consequences for its design: The first benefit is that nanopublications are always small (by definition), which makes it easy to estimate how much time is needed to process an entity (such as validating its hash) and how much space to store it (e.g. as a serialized RDF string in a database). Moreover, it ensures that these processing times remain mostly in the fraction-of-a-second range, guaranteeing quick responses, and that these entities are never too large to be analyzed in memory. The second benefit is that servers do not have to deal with identifier management, as the nanopublications already come with trusty URIs, which are guaranteed to be unique and universal. The third and possibly most important benefit is that nanopublications with trusty URIs are immutable and verifiable. This means that servers only have to deal with adding new entries but not with updating them, which eliminates the hard problems of concurrency control and data integrity in distributed systems. (As with classical publications, a nanopublication -once published to the network -cannot be deleted or 'unpublished,' but only marked retracted or superseded by the publication of a new nanopublication.) Together, these aspects significantly simplify the design of such a network and its synchronization protocol, and make it reliable and efficient even with limited resources.</ns0:p><ns0:p>Specifically, a nanopublication server of the current network has the following components:</ns0:p><ns0:p>&#8226; A key-value store of its nanopublications (with the artifact code from the trusty URI as the key)</ns0:p><ns0:p>&#8226; A long list of all stored nanopublications, in the order they were loaded at the given server.</ns0:p><ns0:p>We call this list the server's journal, and it consists of a journal identifier and the sequence of nanopublication identifiers, subdivided into pages of a fixed size.
(1000 elements is the default: page 1 containing the first 1000 nanopublications; page 2 the next 1000, etc.)</ns0:p><ns0:p>&#8226; A cache of gzipped packages containing all nanopublications for a given journal page</ns0:p><ns0:p>&#8226; Pattern definitions in the form of a URI pattern and a hash pattern, which define the surface features of the nanopublications stored on the given server</ns0:p><ns0:p>&#8226; A list of known peers, i.e. the URLs of other nanopublication servers</ns0:p><ns0:p>&#8226; Information about each known peer, including the journal identifier and the total number of nanopublications at the time it was last visited</ns0:p><ns0:p>The server network can be seen as an unstructured peer-to-peer network, where each node can freely decide which other nodes to connect to and which nanopublications to replicate.</ns0:p><ns0:p>The URI pattern and the hash pattern of a server define the surface features of the nanopublications that this server cares about. We called them surface features, because they can be determined by only looking at the URI of a nanopublication. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>a server replicate about 0.05% of all nanopublications. Nanopublication servers are thereby given the opportunity to declare which subset of nanopublications they replicate, and need to connect only to those other servers whose subsets overlap. To decide on whether a nanopublication belongs to a specified subset or not, the server only has to apply string matching at two given starting points of the nanopublication URI (i.e. the first position and position 43 from the end -as the hashes of the current version of trusty URIs are 43 bytes long), which is computationally cheap.</ns0:p><ns0:p>Based on the components introduced above, the servers respond to the following request (in the form of HTTP GET):</ns0:p><ns0:p>&#8226; Each server needs to return general server information, including the journal identifier and the number of stored nanopublications, the server's URI pattern and hash pattern, whether the server accepts POST requests for new nanopublications or servers (see below), and informative entries such as the name and email address of the maintainer and a general description. Additionally, some server-specific limits can be specified: the maximum number of triples per nanopublication (the default is 1200), the maximum size of a nanopublication (the default is 1 MB), and the maximum number of nanopublications to be stored on the given server (unlimited by default).</ns0:p><ns0:p>&#8226; Given an artifact code (i.e. 
the final part of a trusty URI) of a nanopublication that is stored by the server, it returns the given nanopublication in a format like TriG, TriX, N-Quads, or JSON-LD (depending on content negotiation).</ns0:p><ns0:p>&#8226; A journal page can be requested by page number as a list of trusty URIs.</ns0:p><ns0:p>&#8226; For every journal page (except for incomplete last pages), a gzipped package can be requested containing the respective nanopublications.</ns0:p><ns0:p>&#8226; The list of known peers can be requested as a list of URLs.</ns0:p><ns0:p>In addition, a server can optionally support the following two actions (in the form of HTTP POST requests):</ns0:p><ns0:p>&#8226; A server may accept requests to add a given individual nanopublication to its database.</ns0:p><ns0:p>&#8226; A server may also accept requests to add the URL of a new nanopublication server to its peer list.</ns0:p><ns0:p>Server administrators have the additional possibility to load nanopublications from the local file system, which can be used to publish large amounts of nanopublications, for which individual POST requests are not feasible.</ns0:p><ns0:p>Together, the server components and their possible interactions outlined above allow for efficient decentralized distribution of published nanopublications. Specifically, current nanopublication servers follow the following procedure. 3 Every server s keeps its own list of known peer P s . For each peer p on that list that has previously been visited, the server additionally keeps the number of nanopublications on that peer server n p and its journal identifier j p , as recorded during the last visit. At a regular interval, every peer server p on the list of known peers is visited by server s:</ns0:p><ns0:p>1. The latest server information is retrieved from p, which includes its list of known peers P p , the number of stored nanopublications n p , the journal identifier j p , the server's URI pattern U p , and its hash pattern H p .</ns0:p><ns0:p>2. All entries in P p that are not yet on the visiting server's own list of known peers P s are added to P s .</ns0:p><ns0:p>3. If the visiting server's URL is not in P p , the visiting server s makes itself known to server p with a POST request (if this is supported by p).</ns0:p><ns0:p>4. If the subset defined by the server's own URI/hash patterns U s and H s does not overlap with the subset defined by U p and H p , then there won't be any nanopublications on the peer server that this server is interested in, and we jump to step 9. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>5. The server will start at position n to look for new nanopublications at server p: n is set to the total number of nanopublications of the last visit n p , or to 0 if there was no last visit (nanopublication counting starts at 0).</ns0:p><ns0:p>6. If the retrieved journal identifier j p is different from j p (meaning that the server has been reset</ns0:p><ns0:p>since the last visit), n is set to 0.</ns0:p><ns0:p>7. If n = n p , meaning that there are no new nanopublications since the last visit, the server jumps to step 9. The current implementation is designed to be run on normal web servers alongside with other applications, with economic use of the server's resources in terms of memory and processing time. In order to avoid overload of the server or the network connection, we restrict outgoing connections to other servers to one at a time. 
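To make the retrieval part of this interface more concrete, a nanopublication can be requested from any server of the network by appending its artifact code to the server's base URL, and the desired serialization can be selected via standard HTTP content negotiation. The following is a minimal sketch with curl, using a nanopublication and a server that appear in the walk-through example later in this paper; the Accept header value is the standard TriG media type:

# Request a single nanopublication in TriG via content negotiation
curl -H 'Accept: application/trig' \
  http://np.inn.ac/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I
# The same artifact code appended to the base URL of any other server of the
# network returns the identical (and verifiable) nanopublication.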
Of course, sufficient storage space is needed to save the nanopublications (for which we currently use MongoDB), but storage space is typically much easier and cheaper to scale up than memory or processing capacities. The current system and its protocol are not set in stone but, if successful, will have to evolve in the future -in particular with respect to network topology and partial replication -to accommodate a network of possibly thousands of servers and billions of nanopublications.</ns0:p></ns0:div> <ns0:div><ns0:head>Nanopublication Indexes</ns0:head><ns0:p>To make the infrastructure described above practically useful, we have to introduce the concept of indexes.</ns0:p><ns0:p>One of the core ideas behind nanopublications is that each of them is a tiny atomic piece of data. This implies that analyses will mostly involve more than just one nanopublication and typically a large number of them. Similarly, most processes will generate more than just one nanopublication, possibly thousands or even millions of them. Therefore, we need to be able to group nanopublications and to identify and use large collections of them.</ns0:p><ns0:p>Given the versatility of the nanopublication standard, it seems straightforward to represent such collections as nanopublications themselves. However, if we let such 'collection nanopublications' contain other nanopublications, then the former would become very large for large collections and would quickly lose their property of being nano. We can solve part of that problem by applying a principle that we can call reference instead of containment: nanopublications cannot contain but only refer to other nanopublications, and trusty URIs allow us to make these reference links almost as strong as containment links. To emphasize this principle, we call them indexes and not collections.</ns0:p><ns0:p>However, even by only containing references and not the complete nanopublications, these indexes can still become quite large. To ensure that all such index nanopublications remain nano in size, we need to put some limit on the number of references, and to support sets of arbitrary size, we can allow indexes to be appended by other indexes. We set 1000 nanopublication references as the upper limit any single index can directly contain. This limit is admittedly arbitrary, but it seems to be a reasonable compromise between ensuring that nanopublications remain small on the one hand and limiting the number of nanopublications needed to define large indexes on the other. A set of 100,000 nanopublications, for example, can therefore be defined by a sequence of 100 indexes, where the first one stands for the first 1000 nanopublications, the second one appends to the first and adds another 1000 nanopublications (thereby representing 2000 of them), and so on up to the last index, which appends to the second to last and thereby stands for the entire set. In addition, to allow datasets to be organized in hierarchies, we define that the references of an index can also point to sub-indexes.
In this way we end up with three types of relations: an index can append to another index, it can contain other indexes as sub-indexes, and it can contain nanopublications as elements.</ns0:p><ns0:p>These relations defining the structure of nanopublication indexes are shown schematically in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.</ns0:p><ns0:p>Index (a) in the shown example contains five nanopublications, three of them via sub-index (c). The latter is also part of index (b), which additionally contains eight nanopublications via sub-index (f). Two of these eight nanopublications belong directly to (f), whereas the remaining six come from appending to index (e). Index (e) in turn gets half of its nanopublications by appending to index (d). We see that some nanopublications may not be referenced by any index at all, while others may belong to several indexes at the same time. The maximum number of direct nanopublications (or sub-indexes) is here set to three for illustration purposes, whereas in reality this limit is set to 1000.</ns0:p><ns0:p>In addition to describing sets of data entries, nanopublication indexes can also have additional metadata attached, such as labels, descriptions, further references, and other types of relations at the level of an entire dataset. Below we show how this general concept of indexes can be used to define sets of new or existing nanopublications, and how such index nanopublications can be published and their nanopublications retrieved.</ns0:p></ns0:div> <ns0:div><ns0:head>Trusty Publishing</ns0:head><ns0:p>Let us consider two simple exemplary scenarios to illustrate and motivate the general concepts. To demonstrate the procedure and the general interface of our implementation, we show here the individual steps on the command line in a tutorial-like fashion, using the np command from the nanopub-java library <ns0:ref type='bibr' target='#b23'>(Kuhn, 2015a)</ns0:ref>. Of course, users should eventually be supported by graphical interfaces, but command line tools are a good starting point for developers to build such tools. To make this example completely reproducible, these are the commands to download and compile the needed code from a Bash shell (requiring Git and Maven):</ns0:p><ns0:p>$ git clone https://github.com/Nanopublication/nanopub-java.git $ cd nanopub-java</ns0:p></ns0:div> <ns0:div><ns0:head>$ mvn package</ns0:head><ns0:p>And for convenience reasons, we can add the bin directory to the path variable:</ns0:p></ns0:div> <ns0:div><ns0:head>$ PATH=$(pwd)/bin:$PATH</ns0:head><ns0:p>To publish some new data, they have to be formatted as nanopublications. We use the TriG format here and define the following RDF prefixes: For each of these nanopublications, we can check their publication status with the following command (referring to the nanopublication by its URI or just its artifact code): Next, we can make an index pointing to these three nanopublications: Once published, we can check the status of this index and its contained nanopublications:</ns0:p><ns0:p>$ np status -r RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14 1 index nanopub; 3 content nanopubs Again, after just a few minutes this nanopublication will be distributed in the network and available on multiple servers. From this point on, everybody can conveniently and reliably retrieve the given set of nanopublications. 
The only thing one needs to know is the artifact code of the trusty URI of the index:</ns0:p></ns0:div> <ns0:div><ns0:head>$ np get -c RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14</ns0:head><ns0:p>This command downloads the nanopublications of the index we just created and published.</ns0:p><ns0:p>As another exemplary scenario, let us imagine a researcher in the biomedical domain who is interested in the protein CDKN2A and who has derived some conclusion based on the data found in existing nanopublications. Specifically, let us suppose this researcher analyzed the five nanopublications specified by the following artifact codes (they can be viewed online by appending the artifact code to the URL http://np.inn.ac/ or the URL of any other nanopublication server):</ns0:p></ns0:div> <ns0:div><ns0:head>RAEoxLTy4pEJYbZwA9FuBJ6ogSquJobFitoFMbUmkBJh0</ns0:head></ns0:div> <ns0:div><ns0:head>RAoMW0xMemwKEjCNWLFt8CgRmg_TGjfVSsh15hGfEmcz4</ns0:head></ns0:div> <ns0:div><ns0:head>RA3BH_GncwEK_UXFGTvHcMVZ1hW775eupAccDdho5Tiow</ns0:head><ns0:p>RA3HvJ69nO0mD5d4m4u-Oc4bpXlxIWYN6L3wvB9jntTXk</ns0:p></ns0:div> <ns0:div><ns0:head>RASx-fnzWJzluqRDe6GVMWFEyWLok8S6nTNkyElwapwno</ns0:head><ns0:p>These nanopublications about the same protein come from two different sources: The first one is from the BEL2nanopub dataset, whereas the remaining four are from neXtProt. 4 These nanopublications can be downloaded as above with the np get command and stored in a file, which we name here cdkn2a-nanopubs.trig.</ns0:p><ns0:p>In order to be able to refer to such a collection of nanopublications with a single identifier, a new index is needed that contains just these five nanopublications. This time we give the index a title (which is The generated index is stored in the file index.cdkn2a-nanopubs.trig, and our exemplary researcher can now publish this index to let others know about it: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There is no need to publish the five nanopublications this index is referring to, because they are already public (this is how we got them in the first place). The index URI can now be used to refer to this new collection of existing nanopublications in an unambiguous and reliable manner. This URI can be included in the scientific publication that explains the new finding, for example with a reference like the following:</ns0:p><ns0:p>[1] Data about CDKN2A from BEL2nanopub &amp; neXtProt. Nanopublication index http://np.i nn.ac/RA6jrrPL2NxxFWlo6HFWas1ufp0OdZzS XKwQDXpJg3CY, 14 April 2015.</ns0:p><ns0:p>In this case with just five nanopublications, one might as well refer to them individually, but this is obviously not an option for cases where we have hundreds or thousands of them. The given web link allows everybody to retrieve the respective nanopublications via the server np.inn.ac. The URL will not resolve should the server be temporarily or permanently down, but because it is a trusty URI we can retrieve the nanopublications from any other server of the network following a well-defined protocol (basically just extracting the artifact code, i.e. the last 45 characters, and appending it to the URL of another nanopublication server). This reference is therefore much more reliable and more robust than links to other types of data repositories. 
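As a sketch of this fallback procedure, a client could simply extract the artifact code and try one server after the other; the base URLs below are those listed by np status -a earlier, and the output file name and the chosen index are placeholders:

# Extract the artifact code (the last 45 characters) and try several servers in turn
ref=http://np.inn.ac/RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14
code=${ref: -45}
for base in http://np.inn.ac/ \
            http://ristretto.med.yale.edu:8080/nanopub-server/ \
            http://nanopubs.stanford.edu/nanopub-server/; do
  if curl -f -s -o index.trig "$base$code"; then
    echo "retrieved from $base"
    break
  fi
done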
In fact, we refer to the datasets we use in this publication for evaluation purposes, as described below in Section 4, in exactly this way (NP Index RAY lQruua, 2015;</ns0:p><ns0:p>NP Index RACy0I4f w, 2015; NP Index RAR5dwELYL, 2015; NP Index RAXy332hxq, 2015; NP Index RAVEKRW0m6, 2015; NP Index RAXFlG04YM, 2015; NP Index RA7SuQ0e66, 2015).</ns0:p><ns0:p>The new finding that was deduced from the given five nanopublications can, of course, also be published as a nanopublication, with a reference to the given index URI in the provenance part: </ns0:p></ns0:div> <ns0:div><ns0:head>}</ns0:head><ns0:p>We can again transform it to a trusty nanopublication, and then publish it as above.</ns0:p><ns0:p>Some of the features of the presented command-line interface are made available through a web interface for dealing with nanopublications that is shown in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. The supported features include the generation of trusty URIs, as well as the publication and retrieval of nanopublications. The interface allows us to retrieve, for example, the nanopublication we just generated and published above, even though we used an example.org URI, which is not directly resolvable. Unless it is just about toy examples, we should of course try to use resolvable URIs, but with our decentralized network we can retrieve the data even if the original link is no longer functioning or temporarily broken.</ns0:p></ns0:div> <ns0:div><ns0:head>EVALUATION</ns0:head><ns0:p>To evaluate our approach, we want to find out whether a small server network run on normal web servers, without dedicated infrastructure, is able to handle the amount of nanopublications we can expect to become publicly available in the next few years. Our evaluation consists of three parts focusing on the different aspects of dataset publication, server performance, and dataset retrieval, respectively. At the time the first part of the evaluation was performed, the server network consisted of three servers in Zurich, New Haven, and Ottawa. Seven new sites in Amsterdam, Stanford, Barcelona, Ghent, Athens, Leipzig, and Haverford have joined the network since. The current network of 15 server instances on 10 sites (in 8 countries) is shown in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, which is a screenshot of a nanopublication monitor that we have implemented 5 .</ns0:p><ns0:p>Such monitors regularly check the nanopublication server network, register changes (currently once per </ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Design</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref> shows seven existing large nanopublication datasets. Five of these datasets were used for the first part of the evaluation (the other two were not yet available at the time this part of the evaluation was conducted) , which tested the ability of the network to store and distribute new datasets. These five datasets consist of a total of more than 5 million nanopublications and close to 200 million RDF triples, including nanopublication indexes that we generated for each dataset. The total size of these five datasets when stored as uncompressed TriG files amounts to 15.6 GB. Each of the datasets is assigned to one of the three servers, where it is loaded from the local file systems. The first nanopublications start spreading to the other servers, while others are still being loaded from the file system. 
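As a side note, the response time of an individual server for a single nanopublication request, which the monitors measure systematically below, can also be checked by hand with standard tools. A minimal sketch with curl (any server base URL and artifact code from above can be substituted):

# Print the HTTP status and the total time for retrieving one nanopublication
curl -o /dev/null -s -w 'HTTP %{http_code} in %{time_total}s\n' \
  http://np.inn.ac/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I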
We therefore test the reliability and capacity of the network under constant streams of new nanopublications coming from different servers, and we use two nanopublication monitors (in Zurich and Ottawa) to evaluate the responsiveness of the network.</ns0:p><ns0:p>In the second part of the evaluation we expose a server to heavy load from clients to test its retrieval capacity. For this we use a service called Load Impact 6 to let up to 100 clients access a nanopublication server in parallel. We test the server in Zurich over a time of five minutes under the load from a linearly increasing number of clients (from 0 to 100) located in Dublin. These clients are programmed to request a randomly chosen journal page, then to go though the entries of that page one by one, requesting the respective nanopublication with a probability of 10%, and starting over again with a different page.</ns0:p><ns0:p>As a comparison, we run a second session, for which we load the same data into a Virtuoso SPARQL endpoint on the same server in Zurich (with 16 GB of memory given to Virtuoso and two 2.40 GHz Intel Xeon processors). Then, we perform exactly the same stress test on the SPARQL endpoint, requesting the nanopublications in the form of SPARQL queries instead of requests to the nanopublication server interface. This comparison is admittedly not a fair one, as SPARQL endpoints are much more powerful and are not tailor-made for the retrieval of nanopublications, but they provide nevertheless a valuable and well-established reference point to evaluate the performance of our system.</ns0:p><ns0:p>While the second part of the evaluation focuses on the server perspective, the third part considers the client side. In this last part, we want to test whether the retrieval of an entire dataset in a parallel fashion from the different servers of the network is indeed efficient and reliable. We decided to use a medium-sized dataset and chose LIDDI (NP Index RA7SuQ0e66, 2015), which consists of around 100,000 triples. We tested the retrieval of this dataset from a computer connected to the internet via a basic plan from a regular internet service provider (i.e. not via a fast university network) with a command like the following:</ns0:p><ns0:p>$ np get -c -o nanopubs.trig RA7SuQ0e661LJdKpt5EOS2DKykf1ht9LFmNaZtFSDMrXg</ns0:p><ns0:p>In addition, we wanted to test the retrieval in a situation where the internet connection and/or the nanopublication servers are highly unreliable. For that, we implemented a version of an input stream that introduces errors to simulate such unreliable connections or servers. With a given probability (set to 1% for this evaluation), each read attempt to the input stream (a single read attempt typically asking for about 8000 bytes) either leads to a randomly changed byte or to an exception being thrown after a delay of 5 seconds (both having an equal chance of occurring of 0.5%). This behavior can be achieved with the following command, which is obviously only useful for testing purposes:</ns0:p><ns0:p>$ np get -c -o nanopubs.trig --simulate-unreliable-connection \</ns0:p></ns0:div> <ns0:div><ns0:head>RA7SuQ0e661LJdKpt5EOS2DKykf1ht9LFmNaZtFSDMrXg</ns0:head><ns0:p>For the present study, we run each of these two commands 20 times. To evaluate the result, we can investigate whether the downloaded sets of nanopublications are equivalent, i.e. lead to identical files when normalized (such as transformed to a sorted N-Quads representation). 
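Such an equivalence check can be performed with off-the-shelf tools by converting each downloaded file to sorted N-Quads and comparing checksums. The following is a sketch, assuming for example that the riot serializer from Apache Jena is available; the file names stand for two downloaded copies of the dataset:

# Normalize two downloaded copies and compare their checksums
riot --output=nquads run1/nanopubs.trig | sort > run1.nq
riot --output=nquads run2/nanopubs.trig | sort > run2.nq
sha256sum run1.nq run2.nq    # equal hashes indicate equivalent datasets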
Furthermore, we can look into the amount of time this retrieval operation takes, and the number of times the retrieval of a single nanopublication from a server fails and has to be repeated.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Results</ns0:head><ns0:p>The first part of the evaluation lasted 13 hours and 21 minutes, at which point all nanopublications were replicated on all three servers, and therefore the nanopublication traffic came to an end. Figure <ns0:ref type='figure'>6</ns0:ref> shows the rate at which the nanopublications were loaded at their first, second, and third server, respectively.</ns0:p><ns0:p>The network was able to handle an average of about 400,000 new nanopublications per hour, which Figure <ns0:ref type='figure'>6</ns0:ref>. This diagram shows the rate at which nanopublications are loaded at their first, second, and third server, respectively, over the time of the evaluation. At the first server, nanopublications are loaded from the local file system, whereas at the second and third server they are retrieved via the server network.</ns0:p><ns0:p>corresponds to more than 100 new nanopublications per second. This includes the time needed for loading each nanopublication once from the local file system (at the first server), transferring it through the network two times (to the other two servers), and for verifying it three times (once when loaded and twice when received by the other two servers). Figure <ns0:ref type='figure'>7</ns0:ref> shows the response times of the three servers as measured by the two nanopublication monitors in Zurich (top) and Ottawa (bottom) during the time of the evaluation. We see that the observed latency is mostly due to the geographical distance between the servers and the monitors. The response time was always less than 0.21 seconds when the server was on the same continent as the measuring monitor. In 99.77% of all cases (including those across continents) the response time was below 0.5 seconds, and it was always below 1.1 seconds. Not a single one of the 4802 individual HTTP requests timed out, led to an error, or received a nanopublication that could not be successfully verified.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> shows the result of the second part of the evaluation. The nanopublication server was able to handle 113,178 requests in total (i.e. an average of 377 requests per second) with an average response time of 0.12 seconds. In contrast, the SPARQL endpoint answering the same kind of requests needed 100 times longer to process them (13 seconds on average), consequently handled about 100 times fewer requests (1267), and started to hit the timeout of 60 seconds for some requests when more than 40 client accessed it in parallel. In the case of the nanopublication server, the majority of the requests were answered within less than 0.1 seconds for up to around 50 parallel clients, and this value remained below 0.17 seconds all the way up to 100 clients. As the round-trip network latency alone between Ireland and Zurich amounts to around 0.03 to 0.04 seconds, further improvements can be achieved for a denser network due to the reduced distance to the nearest server.</ns0:p><ns0:p>For the third part of the evaluation, all forty retrieval attempts succeeded. After normalization of the downloaded datasets, they were all identical, also the ones that were downloaded through an input stream that was artificially made highly unreliable. 
Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref> shows the number of retrieval failures and the amount of time that was required for the retrieval. With the normal connection, the downloading of nanopublications from the network almost always succeeded on the first try. Of the 98,184 nanopublications that had to be downloaded (98,085 content nanopublications plus 99 nanopublication indexes), fewer than 10 such download attempts failed in 18 of the 20 test runs. In the remaining two runs, the connection happened to be temporarily unreliable for 'natural' reasons, and the number of download failures rose to 181 and 9458, respectively. This, however, had no effect on the success of the download in a timely manner. On average over the 20 test runs, the entire dataset was successfully downloaded in 235 Manuscript to be reviewed seconds, with a maximum of 279 seconds. Unsurprisingly, the unreliable connection leads a much larger average number of failures and retries, but these failures have no effect on the final downloaded dataset, as we have seen above. On average, 2486 download attempts failed and had to be retried in the unreliable setting. In particular because half of these failures included a delay of 5 seconds, the download times are more than doubled, but still in a very reasonable range with an average of 517 seconds and a maximum below 10 minutes.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>In summary, the first part of the evaluation shows that the overall replication capacity of the current server network is around 9.4 million new nanopublications per day or 3.4 billion per year. The results of the second part show that the load on a server when measured as response times is barely noticeable for up to 50 parallel clients, and therefore the network can easily handle 50 &#8226; x parallel client connections or more, where x is the number of independent physical servers in the network (currently x = 10). The second part thereby also shows that the restriction of avoiding parallel outgoing connections for the replication between servers is actually a very conservative measure that could be relaxed, if needed, to allow for a higher replication capacity. The third part of the evaluation shows that the client-side retrieval of entire datasets is indeed efficient and reliable, even if the used internet connection and/or some servers in the network are highly unreliable.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION AND CONCLUSION</ns0:head><ns0:p>We have presented here a low-level infrastructure for data sharing, which is just one piece of a bigger ecosystem to be established. The implementation of components that rely on this low-level data sharing infrastructure is ongoing and future work. This includes the development of 'core services' (see Section 3.1) on top of the server network to allow people to find nanopublications and 'advanced services' to query and analyze the content of nanopublications. In addition, we need to establish standards and best practices of how to use existing ontologies (and to define new ones where necessary) to describe properties and relations of nanopublications, such as referring to earlier versions, marking nanopublications as retracted, and reviewing of nanopublications.</ns0:p><ns0:p>Apart from that, we also have to scale up the current small network. As our protocol only allows for simple key-based lookup, the time complexity for all types of requests is sublinear and therefore scales up well. 
The main limiting factor is disk space, which is relatively cheap and easy to add. Still, the servers will have to specialize even more, i.e. replicate only a part of all nanopublications, in order to handle really large amounts of data. In addition to the current surface feature definitions via URI and hash patterns, a number of additional ways of specializing are possible in the future: Servers can restrict themselves to particular types of nanopublications, e.g. to specific topics or authors, and communicate this to the network in a similar way as they do it now with URI and hash patterns; inspired by the Bitcoin system, certain servers could only accept nanopublications whose hash starts with a given number of zero bits, which makes it costly to publish; and some servers could be specialized to new nanopublications, providing fast access but only for a restricted time, while others could take care of archiving old nanopublications, Manuscript to be reviewed Computer Science possibly on tape and with considerable delays between request and delivery. Lastly, there could also emerge interesting synergies with novel approaches to internet networking, such as Content-Centric Networking <ns0:ref type='bibr' target='#b20'>(Jacobson et al., 2012)</ns0:ref>, with which -consistent with our proposal -requests are based on content rather than hosts.</ns0:p><ns0:p>We argue that data publishing and archiving can and should be done in a decentralized manner. We believe that the presented server network can serve as a solid basis for semantic publishing, and possibly also for the Semantic Web in general. It could contribute to improve the availability and reproducibility of scientific results and put a reliable and trustworthy layer underneath the Semantic Web.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic representation of the decentralized server architecture. Nanopublications that have trusty URI identifiers can be uploaded to a server (or loaded from the local file system by the server administrator), and they are then distributed to the other servers of the network. They can then be retrieved from any of the servers, or from multiple servers simultaneously, even if the original server is not accessible.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>8.</ns0:head><ns0:label /><ns0:figDesc>All journal pages p starting from the one containing n until the end of the journal are downloaded one by one (considering the size of journal pages, which is by default 1000 nanopublications): (a) All nanopublication identifiers in p (excluding those before n) are checked with respect to whether (A) they are covered by the visiting server's patterns U s and H s and (B) they are not already contained in the local store. A list l is created of all nanopublication identifiers of the given page that satisfy both, (A) and (B). (b) If the number of new nanopublications |l| exceeds a certain threshold (currently set to 5), the nanopublications of p are downloaded as a gzipped package. Otherwise, the new nanopublications (if any) are requested individually.(c) The retrieved nanopublications that are in list l are validated using their trusty URIs, and all valid nanopublications are loaded to the server's nanopublication store and their identifiers are added to the end of the server's own journal. (Invalid nanopublications are ignored.) 9. 
The journal identifier j p and the total number of nanopublications n p for server p are remembered for the next visit, replacing the values of j p and n p .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Schematic example of nanopublication indexes, which are themselves nanopublications.Nanopublications can (but need not) be elements of one or more indexes. An index can have sub-indexes and can append to another index, in either case acquiring all nanopublications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>@</ns0:head><ns0:label /><ns0:figDesc>prefix : &lt;http://example.org/np1#&gt;. @prefix xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt;. @prefix dc: &lt;http://purl.org/dc/terms/&gt;. of three graphs plus the head graph. The latter defines the structure of the nanopublication by linking to the other graphs: or hypothesis of the nanopublication goes into the assertion graph: publication info graph provide meta-information about the assertion and the entire nanopublication, respectively: constitute a very simple but complete nanopublication. To make this example a bit more interesting, let us define two more nanopublications that have different assertions but are otherwise identical:We save these nanopublications in a file nanopubs.trig, and before we can publish them, we have to assign them trusty URIs: the file trusty.nanopubs.trig, which contains transformed versions of the three nanopublications that now have trusty URIs as identifiers, as shown by the output lines above. Looking into the file we can verify that nothing has changed with respect to the content, and now we are ready to publish them:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>$</ns0:head><ns0:label /><ns0:figDesc>np publish index.cdkn2a-nanopubs.trig 12/21 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>5Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The web interface of the nanopublication validator can load nanopublications by their trusty URI (or just their artifact code) from the nanopublication server network. It also allows users to directly publish uploaded nanopublications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. This screenshot of the nanopublication monitor interface (http://npmonitor.inn.ac) showing the current server network. It currently consists of 15 server instances on 10 physical servers in Zurich, New Haven, Ottawa, Amsterdam, Stanford, Barcelona, Ghent, Athens, Leipzig, and Haverford.</ns0:figDesc><ns0:graphic coords='16,141.73,63.78,413.57,229.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Server response times under heavy load, recorded by the monitors during the first evaluation</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9. The number of failures (above) and required time (below) when downloading the LIDDI dataset from the server network over a normal connection as well as a connection that has been artificially made unreliable.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,141.73,189.62,413.61,130.32' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>5/21 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>a RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I URL: http://np.inn.ac/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I URL: http://ristretto.med.yale.edu:8080/nanopub-server/RAQoZlp22LHIvtYqHCosPbU... URL: http://nanopubs.stanford.edu/nanopub-server/RAQoZlp22LHIvtYqHCosPbUtX8yeG... URL: http://nanopubs.semanticscience.org:8082/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y... URL: http://rdf.disgenet.org/nanopub-server/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5A... URL: http://app.tkuhn.eculture.labs.vu.nl/nanopub-server-2/RAQoZlp22LHIvtYqHCo... URL: http://nanopubs.restdesc.org/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I URL: http://nanopub.backend1.scify.org/nanopub-server/RAQoZlp22LHIvtYqHCosPbUt... URL: http://nanopub.exynize.com/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I Found on 9 nanopub servers.</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>$ np status -a RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I URL: http://np.inn.ac/RAQoZlp22LHIvtYqHCosPbUtX8yeGs1Y5AfqcjMneLQ2I Found on 1 nanopub server. This is what you see immediately after publication. Only one server knows about the new nanopublication. Some minutes later, however, the same command leads to something like this: 11/21 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016) Manuscript to be reviewed Computer Science $ np status -</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>This creates a local file index.nanopubs.trig containing the index, identified by the URI shown above. As this index is itself a nanopublication, we can publish it in the same way:</ns0:figDesc><ns0:table><ns0:row><ns0:cell>$ np mkindex -o index.nanopubs.trig trusty.nanopubs.trig</ns0:cell></ns0:row><ns0:row><ns0:cell>Index URI: http://np.inn.ac/RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14</ns0:cell></ns0:row></ns0:table><ns0:note>$ np publish index.nanopubs.trig 1 nanopub published at http://np.inn.ac/</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Existing datasets in the nanopublication format, five of which were used for the first part of the evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>15/21 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='1'>nanopub published at http://np.inn.ac/ 4 See https://github.com/tkuhn/bel2nanopub and http://nextprot2rdf.sourceforge.n et, respectively, and Table 1</ns0:note> <ns0:note place='foot' n='6'>https://loadimpact.com 14/21 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:1:1:REVIEW 11 May 2016)Manuscript to be reviewed</ns0:note> </ns0:body> "
"Below are our responses to the specific points raised by the editor and the reviewers. > Editor's comments > Two reviewers have flagged the originality of the submission with respect to > the previous ISWC'15 publication. It is critical that the authors clarify, in > their response and in the paper, the difference between this submission and > their previous paper. There is an expectation that there must be a significant > difference between the two papers. (1) We would like to point out that we clearly stated in the first submission (at the end of the Introduction section) that it is an extended version of the existing conference paper. We also submitted with the first submission a supplemental file called 'trustypublishingx-new-content-marked.pdf' where all the differences to the conference paper were highlighted. Having said that, we agree that we should have specified the differences also in the main paper, which we did for this revision. Specifically, we now write: 'These extensions include, most importantly, a new evaluation on the retrieval of nanopublication datasets over an unreliable connection, a description of the new feature of surface patterns, the specific protocol applied by existing servers, a server network that is now three times as large as before (15 instead of 5 server instances), a much more detailed walk-through example, and five new Figures (2, 4, 7, 8 and 9). We furthermore present more details and discussions on topics including applications in the Humanities, traversal-based querying, underspecified assertions, caching between architectural layers, and access of the server network via a web interface.' > Some claims are made about nano-publications, such as they are never edited or > they are of the same size. These claims need to be justified. (2) The passage saying that nanopublications are of the same size was indeed confusing. We only meant that they are all small, which is true by definition. They are also immutable by definition (and this definition can be enforced by techniques like Trusty URIs). We clarified the respective sentence in the beginning of the Approach section. > The authors claim some benefit of their approach, but these are not > necessarily backed by empirical evaluation. In particular, issues around > reliability and scalability have been flagged. I invite the authors to revisit > their claim and/or evaluation section and ensure that claims are suitably > evidenced by evaluation. (3) We agree that the reliability was not sufficiently tested and that we cannot present any strong conclusions with respect to the network's scalability. To resolve this, we conducted an additional small experiment to test the reliability aspect in a third part of our evaluation, and we removed the scalability aspect from our claims and research question. > Reviewer Comments > Reviewer 1 (Daniel Garijo) > The paper is well written and relevant for the Peerj journal, but I have > several concerns with its originality and novelty. When I was reading the > paper I had the feeling that I had already reviewed this work. By the time I > reached section 2 I realized that indeed I had, because I was a reviewer of > the ISWC paper from which this work is derived. I have thoroughly compared > both versions, and my conclusion is that the contribution of this paper is > exactly the same as the one described in the previous paper. 
More details are > provided on this version (e.g., the part on how the servers propagate the > nanopublication has been added), which is great (and I asked about this in my > original review, so it's fantastic to see that the authors have acknowledged > some of my comments), but I fail to see what does this work add to the > previous original paper: the approach remains the same, the introduction and > conclusions are almost a copy-paste of the original one and even the two > evaluations are almost identical: table 1 has 2 more datasets but are not used > in the evaluations, and figure 7 has been redrawn, but that's all. The web > interface of the nanopublication validator is new, but that is not a > significant contribution to become a new publication. (4) Yes, it is an extended version of a journal paper, as stated in the end of the Introduction section, and therefore naturally contains long passages of identical text. We now state explicitly the differences in the main paper. See our answer to this point (1) above. Specifically, these are the main additions: - description of surface pattern feature - specific protocol - larger server network - more detailed example - five new figures - added discussions on various topics - new evaluation on dataset retrieval (new for this revision) Given this novel added content, including the extended evaluation, and the fact that PeerJ explicitly supports extended versions of conference papers, we believe the paper has a substantial added value compared to the previous work. > The authors state that a nanopublication is never edited, just added. Then, > what would happen if an author wanted to delete or retract a given published > fact? How would the deletion affect the other copies among the server network? (5) That is a good and important question. We understand the term 'publishing' as 'making available in the future to third parties in a reliable fashion'. Therefore, there cannot be any 'unpublishing' operation that would delete something that is already published. In other words, nanopublications once published cannot be deleted. This is exactly the same as for classical publications: You cannot 'unpublish' an existing publication. You can only publish a *new* entity stating that the earlier publication should be considered retracted, or a new entity that is an erratum of an earlier publication by pointing to errors and their correction. Also with nanopublications and our server network, you can publish a new nanopublication stating that a previous one should be considered retracted or has to be corrected in a specified manner. The precise vocabulary and model to be used in such a case has still to be established, which is ongoing work. The structure of such a vocabulary and model is in any case independent of the technical solutions presented in this paper. We clarify this now in the paper in the section Nanopublication Servers. > The data used for the evaluations i.e., for building figure 7 and 8 doesn't > seem to be available. 
(6) They are available in the repository for the conference paper and the repository for this extended journal paper: - https://bitbucket.org/tkuhn/trustypublishing-study/ - https://bitbucket.org/tkuhn/trustypublishingx-study/ Specifically, the data for Figure 7 can be found here: - https://bitbucket.org/tkuhn/trustypublishingx-study/src/0a4bd777f246/resptimes/?at=master And the data for Figure 8 is here: - https://bitbucket.org/tkuhn/trustypublishing-study/src/fabb8e968964f5a0b4bf6479a60e1a18debf961d?at=master > Reviewer 2 (Anonymous) > C1. The authors stated two main requirements: Having a reliable mechanism for > hosting and referencing datasets, but also the ability to reference and > retrieve datasets at different granularity levels. With respect to the second > requirements, the authors did not discuss the process by which a datasets is > transformed into a set of nanopublications. The model proposed by the authors > support nano-publication and nano-publication indexes, which references other > nano-publication. The question the reader may ask is how nan-publication and > their container nano-publications indexes are obtained given a dataset. > > C2. A second aspect that is related to the first comment is the heterogeneity > of datasets. Scientific datasets are usually heterogeneous, and the most part > of it is not stored in the form of RDF, but instead in CSV, relational, etc. > Can the solution proposed by the authors cater for this kind of datasets? (7) Thank you for indicating this. Based on your comment, we included a clarification of this issues in the paper. We focus only on RDF datasets and assume that the scientific data to be published is already in such a Linked Data representation. We agree that this was confusing and we should have stated it explicitly. We now include the assumption of Linked Data in our research question, and we explain this in more detail and give references to existing approaches on transforming data from different formats into RDF: 'Our approach on scientific data publishing builds upon the general Linked Data approach of lifting data on the web to linked RDF representations (Berners-Lee, 2006). We only deal here with structured data and assume that is is already present in an RDF representation. The question of how to arrive at such a representation from other formats has been addressed by countless approaches — for example Sequeda et al. (2012); Han et al. (2008) — and is therefore outside of the scope of this paper.' > C3. Regarding reliability, it is not clear on how the dataset is replicated > among the server. In other words, what is the replication scheme used to > ensure the availability of the datasets. (8) The replication scheme is described in the second part of the subsection 'Nanopublication Servers', after 'Specifically, current nanopublication servers follow the following procedure ...'. > C4. I also note that in the evaluation section, the problem of reliability has > not been examined. (9) Yes, we agree. Thank you for raising this point. To resolve this issue, we added a third part to the evaluation to test the reliability from a client's perspective with a small additional experiment. See the third parts of the sections Evaluation Design and Evaluation Results. > C5. The publication of datasets should in general be accompanied with the > publication of metadata within catalogues that inform prospective users with > their availability. I think that this aspect is worth discussing in the paper. 
(10) Yes, it is in fact possible (and to a limited extent already implemented in the nanopub-java library) to attach additional metadata to nanopublication indexes. We now mention this possibility in the paper in the end of the section about Nanopublication Indexes. > C6. At the end of the introduction, the authors state that the article is an > extended version of a previous article. The authors need to state clearly what > is new in the paper submitted to the PeerJ. (11) Yes, we agree. We addressed this and added a description of the differences. See our answer (1) above. > C7. The URIs used by the authors contain the artifact code which obtained by > analyzing the data content. This raises the question as to the cost that this > operation may incur when building the URIs. This aspect was touched on in the > paper, but briefly, and need further discussion. (12) We agree. We added the following passage to explain this: 'Generating these trusty URIs does not come for free, in particular because the normalization of the content involves the sorting of the contained RDF statements. For small files such as nanopublications, however, the overhead is minimal, consisting only of about 1 millisecond per created nanopublication when the Java library is used (Kuhn and Dumontier, 2014, 2015).' > C8 In Figure 2 and the text that explain the Figure, the author use the > expression “propagation”. Later this term is explained, but until then it is > not clear for the reader what is meant by propagation. (13) Yes, 'propagation' is a confusing term. We call it now 'distribution'. Nanopublications are distributed in the network simply by servers retrieving nanopublications from other servers via the described interface. > C9. In page 5, 1st paragraph towards the end: reminder -> remainder (14) Fixed. > Reviewer 3 (Anonymous) > 1. Although past literature regarding trusty URIs was cited, I did not > understand how they fit into the nanopublication server network (section 3.2) > until I re-read the paper from ESWC. So the idea is that the proposed approach > prerequisites the ability of having trusty URIs for all the nanopublications > exchanged within the network. It is kind of what figure 2 shows and what > section 3.2 says. But could have been made clearer. (15) We clarified the respective passages and made it clear that the trusty URIs have to be generated on the client side before a nanopublication can be published to the network. > Also, does this mean that all the nanopublications exchange in this network > must be published using the nanopublication java library, or something > similar? (16) No, the nanopublication Java library makes this more convenient but is not required. Nanopublications can also be edited manually and can be published via a regular HTTP POST request. Only the generation of trusty URIs requires the use of one of the trusty URI libraries (see http://trustyuri.net/) or a library based on it (such as nanopub-java). > 2. On line 257 the authors claim that there is an assumption that all > nanopublications should be all similar in size, which makes it one of the > advantages of the proposed approach. As someone who do know about > nanopublications, I am not sure this is a universal true. Have the authors got > some citations to support this statement, or is this a kind of best practices > expected from nanopublication users? (17) The phrase 'similar in size' was indeed misleading. We only meant that nanopublications are always small, which is true by definition (e.g. 
Mons introduced nanopublications as 'the smallest unit of publication', and initially the assertion part consisted of only one triple by definition but this restriction was later relaxed). We now clarified the respective sentence in the paper. For our argument, moreover, it is only important that they are *reasonably* small, they don't have to be very small. We need to be sure a single entity is not in the range of GBs or TBs or more, but rather kBs or MBs at most, so a single entity fits into memory easily and can be handled without a large overhead (including normalization with sorting). > The hypothesis of the submission is: Can we create a decentralized,  reliable, > trustworthy, and scalable system for publishing, retrieving, and archiving > datasets in the form of  sets of nanopublications based on existing Web > standards and infrastructure? Therefore, we expect the experiment to evaluate > the following aspects of the proposed system: its reliability, > trustworthiness, and scalability. However, in my opinion the experiments are > not completely satisfactory. (18) We agree. To resolve this, we removed the scalability aspect from our claims and research question, and we added a third experiment to test the reliability aspect, see also point (9) above. Also, we only cover trustworthiness in a narrow technical sense, which we now clarify in the paper: 'It is important to note here that the word trustworthy has a broad meaning and there are different kinds of trust involved when it comes to retrieving and using datasets from some third party. When exploring existing datasets, a certain kind of trust is needed to decide whether an encountered dataset is appropriate for the given purpose. A different kind of trust is needed to decide whether an obtained file correctly represents a specific version of a specific dataset that has been chosen to be used. Only the second kind of trust can be achieved with a technological solution alone, and we use the word trustworthy in this paper in this narrow technical sense covering the second kind of trust.' > In terms of evaluation1, it does not seem ideal to take more than 13 hours to > load and populate the test datasets. Obviously it will probably take marginal > time to load and populate a singular nanopublication. But as the authors say, > it’s often that people would publish a collection of nanopublications as a > dataset. What would be the typical size then? Would make any sense to evaluate > the load time for different sized nanopublication datasets? I would say 13 > hours is a very long time even for 200 million triples. (19) It is true that 13 hours is a long time if the goal is just to send a dataset file from A to B. Our system, however, is not about sending datasets around, but about reliably publishing them, which is an entirely different problem. Considering that it can take weeks or months to get a scientific article published, 13 hours don't seem like a long time. Once you have a scientific article published, the whole system of publishers and libraries guarantees that it is going to stay available in the future (at least as long as the whole system survives), and in the same way our system guarantees that the published dataset remains available in the future (again at least as long as the whole system survives). 
Considering that our network allows for additional guarantees with respect to immutability and verifiability of the published datasets and considering the fact that the evaluation showed that we chose quite conservative settings that could be relaxed if we wanted to maximize publishing speed, we think that the publishing capacity of the network is more than reasonable. The maximum current publishing capacity corresponds to more than 100 trillion triples per year, even if we account for some degradation of the service under such high load, which is about three times as much as the entire current Linked Open Data cloud (around 38 trillion triples at the time of writing: http://lodlaundromat.org/). > In terms of the second evaluation, what is the SPARQL query used for testing > the Virtuoso server? The experiment could use more explanation. Currently, the > procedure seems to be very much tailored to the described system, for example, > going through the internal journal page etc. Is this a realistic > nanopublication retrieval scenario? Also, this did not demonstrate retrieval > in a decentralized setting, but simply a stress testing of a single server. (20) The used SPARQL query can be found in this folder of the repository for the conference paper: - https://bitbucket.org/tkuhn/trustypublishing-study/src/fabb8e968964f5a0b4bf6479a60e1a18debf961d/loadimpact/?at=master This is the content of the file (somehow the content doesn't show up in the BitBucket interface): - https://bitbucket.org/tkuhn/trustypublishing-study/raw/fabb8e968964f5a0b4bf6479a60e1a18debf961d/loadimpact/get-nanopub.sparql?at=master Yes, the procedure is tailored to the described system and therefore not a fair comparison, as we write in the paper, but this comparison provides nevertheless a valuable reference point to assess the relative performance of the system. The retrieval procedure basically requests random nanopublications, and the going through the internal journal page is only to get possible nanopublication URIs to ask for. Due to their implementation, our nanopublication servers cannot get any performance benefit out of the fact that two subsequent requests tend to be about nanopublications from the same internal page, and the requests for nanopublication pages make up only about 1% of all requests (for each page of 1000 nanopublications, on average 100 nanopublications are retrieved). We agree that this second part of the evaluation does not demonstrate the retrieval in a decentralized setting from the server perspective, for which we added in this revision a third evaluation part that tests exactly that. > The evaluation section seems a bit incomplete and weak. How about scalability? > What would be the scalability bottleneck of the proposed design, the indexes > or the propagation procedure? Could the system handle communication of 100 or > several hundred nodes? (21) This is a very good point and it is true that we cannot present any strong conclusion on scalability. We removed all claims about scalability from the revised paper. Scalability of a such a server network is difficult to test before the server network actually has a considerable size, which is not something we can achieve on our own. 
Moreover, the use of simulations is also difficult for such unsupervised peer-to-peer networks, because the structure of the network does not only depend on the number and location of servers, but also on the specific configurations the server owners choose to apply, in particular with respect to which subset of nanopublications the server is supposed to replicate (via the setting of surface patterns). For these reasons, we chose to evaluate only the current network. With the presented results, we can be reasonably confident that the network design scales to the next order of magnitude, though not necessarily beyond that. However, it seems highly unlikely that for this extremely simple case of monotonous key-based lookup no scalable solution should exist, even if that should require modifications of our current protocol in the future. It is in any case true that our protocol will have to evolve. > If evaluations are incomplete in terms of supporting the whole hypothesis > (such as trustworthiness), then the authors should explain whether the > evaluation may be incomplete. (22) We made sure in this revision that the hypothesis is better matched with the evaluation part. For that, we adjusted the research question, we made our specific definition of 'trustworthy' explicit, and we added a third part to the evaluation. See your answers (9) and (18) above. "
Here is a paper. Please give your review comments after reading it.
286
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Publication and archival of scientific results is still commonly considered the responsability of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies.</ns0:p><ns0:p>Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDFbased format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Modern science increasingly depends on datasets, which are however left out in the classical way of publishing, i.e. through narrative (printed or online) articles in journals or conference proceedings. This means that the publications describing scientific findings become disconnected from the data they are based on, which can seriously impair the verifiability and reproducibility of their results. Addressing this issue raises a number of practical problems: How should one publish scientific datasets and how can one refer to them in the respective scientific publications? How can we be sure that the data will remain available in the future and how can we be sure that data we find on the web have not been corrupted or tampered with? Moreover, how can we refer to specific entries or subsets from large datasets, for instance, PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:2:1:CHECK 22 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to support a specific argument or hypothesis?</ns0:p><ns0:p>To address some of these problems, a number of scientific data repositories have appeared, such as Figshare and Dryad. 1 Furthermore, Digital Object Identifiers (DOI) have been advocated to be used not only for articles but also for scientific data <ns0:ref type='bibr' target='#b44'>(Paskin, 2005)</ns0:ref>. While these approaches certainly improve the situation of scientific data, in particular when combined with Semantic Web techniques, they have nevertheless a number of drawbacks: They have centralized architectures, they give us no possibility to check whether the data have been (deliberately or accidentally) modified, and they do not support access or referencing on a more granular level than entire datasets (such as individual data entries). 
We argue that the centralized nature of existing data repositories is inconsistent with the decentralized manner in which science is typically performed, and that it has serious consequences with respect to reliability and trust.</ns0:p><ns0:p>The organizations running these platforms might at some point go bankrupt, be acquired by investors who do not feel committed to the principles of science, or for other reasons become unable to keep their websites up and running. Even though the open licenses enforced by these data repositories will probably ensure that the datasets remain available at different places, there exist no standardized (i.e. automatable) procedures to find these alternative locations and to decide whether they are trustworthy or not.</ns0:p><ns0:p>Even if we put aside these worst-case scenarios, websites have typically not a perfect uptime and might be down for a few minutes or even hours every once in a while. This is certainly acceptable for most use cases involving a human user accessing data from these websites, but it can quickly become a problem in the case of automated access embedded in a larger service. Furthermore, it is possible that somebody gains access to the repository's database and silently modifies part of the data, or that the data get corrupted during the transfer from the server to the client. We can therefore never perfectly trust any data we get, which significantly complicates the work of scientists and impedes the potential of fully automatic analyses. Lastly, existing forms of data publishing have for the most part only one level at which data is addressed and accessed: the level of entire datasets (sometimes split into a small number of tables). It is in these cases not possible to refer to individual data entries or subsets in a way that is standardized and retains the relevant metadata and provenance information. To illustrate this problem, let us assume that we conduct an analysis using, say, 1000 individual data entries from each of three very large datasets (containing, say, millions of data entries each). How can we now refer to exactly these 3000 entries to justify whatever conclusion we draw from them? The best thing we can currently do is to republish these 3000 data entries as a new dataset and to refer to the large datasets as their origin. Apart from the practical disadvantages of being forced to republish data just to refer to subsets of larger datasets, other scientists need to either (blindly) trust us or go through the tedious process of semi-automatically verifying that each of these entries indeed appears in one of the large datasets. Instead of republishing the data, we could also try to describe the used subsets, e.g. in the form of SPARQL queries in the case of RDF data, but this doesn't make it less tedious, keeping in mind that older versions of datasets are typically not provided by public APIs such as SPARQL endpoints.</ns0:p><ns0:p>Below, we present an approach to tackle these problems, which builds upon existing Semantic Web technologies, in particular RDF and nanopublications, adheres to accepted web principles, such as decentralization and REST APIs, and supports the FAIR guiding principles of making scientific data Findable, Accessible, Interoperable, and Reusable <ns0:ref type='bibr' target='#b53'>(Wilkinson et al., 2016)</ns0:ref>. 
Specifically, our research question is: Can we create a decentralized, reliable, and trustworthy system for publishing, retrieving, and archiving Linked Data in the form of sets of nanopublications based on existing web standards and infrastructure? It is important to note here that the word trustworthy has a broad meaning and there are different kinds of trust involved when it comes to retrieving and using datasets from some third party.</ns0:p><ns0:p>When exploring existing datasets, a certain kind of trust is needed to decide whether an encountered dataset is appropriate for the given purpose. A different kind of trust is needed to decide whether an obtained file correctly represents a specific version of a specific dataset that has been chosen to be used.</ns0:p><ns0:p>Only the second kind of trust can be achieved with a technical solution alone, and we use the word trustworthy in this paper in this narrow technical sense covering the second kind of trust. This article is an extended and revised version of a previous conference paper <ns0:ref type='bibr'>(Kuhn et al., 2015)</ns0:ref>.</ns0:p><ns0:p>These extensions include, most importantly, a new evaluation on the retrieval of nanopublication datasets over an unreliable connection, a description of the new feature of surface patterns, the specific protocol applied by existing servers, a server network that is now three times as large as before (15 instead of 5 server instances), a much more detailed walk-through example, and five new Figures <ns0:ref type='bibr'>(2, 4, 7, 8, and 9</ns0:ref>).</ns0:p><ns0:p>1 http://figshare.com, http://datadryad.org SPARQL endpoints. Assuming that each endpoint is available 95% of the time and their availabilities are independent from each other, this means at least one of them will be down during close to five months per year. The reasons for this problem are quite clear: SPARQL endpoints provide a very powerful query interface that causes heavy load in terms of memory and computing power on the side of the server.</ns0:p><ns0:p>Clients can request answers to very specific and complex queries they can freely define, all without paying a cent for the service. This contrasts with almost all other HTTP interfaces, in which the server imposes (in comparison to SPARQL) a highly limited interface, where the computational costs per request are minimal.</ns0:p><ns0:p>To solve these and other problems, more light-weight interfaces were suggested, such as the read-write Linked Data Platform interface <ns0:ref type='bibr' target='#b51'>(Speicher et al., 2015)</ns0:ref>, the Triple Pattern Fragments interface <ns0:ref type='bibr' target='#b52'>(Verborgh et al., 2014)</ns0:ref>, as well as infrastructures to implement them, such as CumulusRDF <ns0:ref type='bibr' target='#b32'>(Ladwig and Harth, 2011)</ns0:ref>. These interfaces deliberately allow less expressive requests, such that the maximal cost of each individual request can be bounded more strongly. 
More complex queries then need to be evaluated by clients, which decompose them in simpler subqueries that the interface supports <ns0:ref type='bibr' target='#b52'>(Verborgh et al., 2014)</ns0:ref>.</ns0:p><ns0:p>While this constitutes a scalability improvement (at the cost of, for instance, slower queries), it does not necessarily lead to perfect uptimes, as servers can be down for other reasons than excessive workload.</ns0:p><ns0:p>We propose here to go one step further by relying on a decentralized network and by supporting only identifier-based lookup of nanopublications. Such limited interfaces normally have the drawback that traversal-based querying does not allow for the efficient and complete evaluation of certain types of queries <ns0:ref type='bibr' target='#b21'>(Hartig, 2013)</ns0:ref>, but this is not a problem with the multi-layer architecture we propose below, because querying is only performed at a higher level where these limitations do not apply.</ns0:p><ns0:p>A well-known solution to the problem of individual servers being unreliable is the application of a decentralized architecture where the data is replicated on multiple servers. A number of such approaches related to data sharing have been proposed, for example in the form of distributed file systems based on cryptographic methods for data that are public <ns0:ref type='bibr' target='#b14'>(Fu et al., 2002)</ns0:ref> or private <ns0:ref type='bibr' target='#b9'>(Clarke et al., 2001)</ns0:ref>. In contrast to the design principles of the Semantic Web, these approaches implement their own internet protocols and follow the hierarchical organization of file systems. Other approaches build upon the existing BitTorrent protocol and apply it to data publishing <ns0:ref type='bibr' target='#b33'>(Markman and Zavras, 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cohen and Lo, 2014)</ns0:ref>, and there is interesting work on repurposing the proof-of-work tasks of Bitcoin for data preservation <ns0:ref type='bibr' target='#b35'>(Miller et al., 2014)</ns0:ref>. There exist furthermore a number of approaches to applying peer-to-peer networks for RDF data <ns0:ref type='bibr' target='#b12'>(Filali et al., 2011)</ns0:ref>, but they do not allow for the kind of permanent and provenance-aware publishing that we propose below. Moreover, only for the centralized and closed-world setting of database systems, approaches exist that allow for robust and granular references to subsets of dynamic datasets <ns0:ref type='bibr' target='#b47'>(Proell and Rauber, 2014)</ns0:ref>.</ns0:p><ns0:p>The approach that we present below is based on previous work, in which we proposed trusty URIs to make nanopublications and their entire reference trees verifiable and immutable by the use of cryptographic hash values <ns0:ref type='bibr'>(Kuhn and</ns0:ref><ns0:ref type='bibr'>Dumontier, 2014, 2015)</ns0:ref>. This is an example of such a trusty URI: http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70</ns0:p><ns0:p>The last 45 characters of this URI (i.e. everything after '.') is what we call the artifact code. It contains a hash value that is calculated on the RDF content it represents, such as the RDF graphs of a nanopublication.</ns0:p><ns0:p>Because this hash is part of the URI, any link to such an artifact comes with the possibility to verify its content, including other trusty URI links it might contain. In this way, the range of verifiability extends to the entire reference tree. 
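As a minimal illustration of how such identifiers are handled at the string level, the following sketch (not the trusty URI or nanopub-java libraries themselves) cuts off the artifact code and appends it to the base URL of a nanopublication server; np.inn.ac is one of the servers mentioned in this article, while the second base URL is a placeholder:

# Illustrative sketch (not the trusty URI or nanopub-java libraries): handling a
# trusty URI at the string level. The artifact code is the last 45 characters.

TRUSTY_URI = "http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70"

def artifact_code(trusty_uri):
    """Return the artifact code, i.e. everything after the final '.'."""
    code = trusty_uri.rsplit(".", 1)[-1]
    assert len(code) == 45, "current trusty URIs carry a 45-character artifact code"
    return code

def retrieval_urls(trusty_uri, servers):
    """Candidate download URLs: the artifact code appended to each server's base URL."""
    code = artifact_code(trusty_uri)
    return [server.rstrip("/") + "/" + code for server in servers]

# np.inn.ac is a server mentioned in this article; the second base URL is a placeholder.
for url in retrieval_urls(TRUSTY_URI, ["http://np.inn.ac/", "http://server.example.org/"]):
    print(url)

# Verifying the downloaded content (recomputing the hash over the normalized RDF)
# is what the trusty URI libraries referenced above provide; it is not re-implemented here.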
Generating these trusty URIs does not come for free, in particular because the normalization of the content involves the sorting of the contained RDF statements. For small files such as nanopublications, however, the overhead is minimal, consisting only of about 1 millisecond per created nanopublication when the Java library is used <ns0:ref type='bibr'>(Kuhn and</ns0:ref><ns0:ref type='bibr'>Dumontier, 2014, 2015)</ns0:ref>. Furthermore, we argued in previous work that the assertion of a nanopublication need not be fully formalized, but we can allow for informal or underspecified assertions <ns0:ref type='bibr' target='#b27'>(Kuhn et al., 2013)</ns0:ref>, to deal with the fact that the creation of accurate semantic representations can be too challenging or too time-consuming for many scenarios and types of users. This is particularly the case for domains that lack ontologies and standardized terminologies with sufficient coverage. These structured but informal statements are supposed to provide a middle ground for the situations where fully formal statements are not feasible. We proposed a controlled natural language <ns0:ref type='bibr' target='#b23'>(Kuhn, 2014)</ns0:ref> for these informal statements, which we called AIDA (standing for the introduced restriction on English sentences to be atomic, independent, declarative, and absolute), and we had shown before that controlled natural language can also serve in the fully formalized case as a user-friendly syntax for representing scientific facts <ns0:ref type='bibr' target='#b31'>(Kuhn et al., 2006)</ns0:ref>. We also sketched how 'science bots' could autonomously produce and publish nanopublications, and how algorithms could thereby be tightly linked to their generated data <ns0:ref type='bibr' target='#b25'>(Kuhn, 2015b)</ns0:ref>, which requires the existence of a reliable and trustworthy publishing system, such as the one we present here.</ns0:p></ns0:div> <ns0:div><ns0:head>APPROACH</ns0:head><ns0:p>Our approach on scientific data publishing builds upon the general Linked Data approach of lifting data on the web to linked RDF representations <ns0:ref type='bibr' target='#b2'>(Berners-Lee, 2006)</ns0:ref>. We only deal here with structured data and assume that is is already present in an RDF representation. The question of how to arrive at such a representation from other formats has been addressed by countless approaches -for example <ns0:ref type='bibr' target='#b50'>Sequeda et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Han et al. (2008)</ns0:ref> -and is therefore outside of the scope of this paper. We furthermore exploit the fact that datasets in RDF can be split into small pieces without any effects on their semantics.</ns0:p><ns0:p>After Skolemization of blank nodes, RDF triples are independent and can be separated and joined without restrictions. Best practices of how to define meaningful small groups of such triples still have to emerge, but an obvious and simple starting point is grouping them by the resource in subject position. We focus here on the technical questions and leave these practical issues for future research.</ns0:p><ns0:p>Specifically, our approach rests upon the existing concept of nanopublications and our previously introduced method of trusty URIs. 
It is a proposal of a reliable implementation of accepted Semantic Web principles, in particular of what has become known as the follow-your-nose principle: Looking up a URI should return relevant data and links to other URIs, which allows one (i.e. humans as well as machines)</ns0:p><ns0:p>to discover things by navigating through this data space <ns0:ref type='bibr' target='#b2'>(Berners-Lee, 2006)</ns0:ref>. We argue that approaches following this principle can only be reliable and efficient if we have some sort of guarantee that the resolution of any single identifier will succeed within a short time frame in one way or another, and that the processing of the received representation will only take up a small amount of time and resources. This requires that (1) RDF representations are made available on several distributed servers, so the chance that they all happen to be inaccessible at the same time is negligible, and that (2) these representations are reasonably small, so that downloading them is a matter of fractions of a second, and so that one has to process only a reasonable amount of data to decide which links to follow. We address the first requirement by proposing a distributed server network and the second one by building upon the concept of nanopublications. Below we explain the general architecture, the functioning and the interaction of the nanopublication servers, and the concept of nanopublication indexes.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture</ns0:head><ns0:p>There are currently at least three possible architectures for Semantic Web applications (and mixtures thereof), as shown in a simplified manner in Figure <ns0:ref type='figure'>1</ns0:ref>. The first option is the use of plain HTTP GET Manuscript to be reviewed</ns0:p><ns0:p>Computer Science requests to dereference a URI. Applying the follow-your-nose principle, resolvable URIs provide the data based on which the application performs the tasks of finding relevant resources, running queries, analyzing and aggregating the results, and using them for the purpose of the application. This approach aligns very well with the principles and the architecture of the web, but the traversal-based querying it entails comes with limitations on efficiency and completeness <ns0:ref type='bibr' target='#b21'>(Hartig, 2013)</ns0:ref>. If SPARQL endpoints are used, as a second option, most of the workload is shifted from the application to the server via the expressive power of the SPARQL query language. As explained above, this puts servers at risk of being overloaded. With a third option such as Triple Pattern Fragments, servers provide only limited query features and clients perform the remainder of the query execution. This leads to reduced server costs, at the expense of longer query times.</ns0:p><ns0:p>We can observe that all these current solutions are based on two-layer architectures, and have moreover no inherent replication mechanisms. A single point of failure can cause applications to be unable to complete their tasks: A single URI that does not resolve or a single server that does not respond can break the entire process. We argue here that we need distributed and decentralized services to allow for robust and reliable applications that consume Linked Data. 
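For illustration, the three access patterns can be contrasted in a few lines. This is a hedged sketch: all endpoint URLs are hypothetical placeholders rather than services defined in this article, and the exact request conventions of concrete SPARQL and Triple Pattern Fragments servers may differ.

# Hedged sketch of the three access patterns; all endpoint URLs are hypothetical.
from urllib.parse import urlencode
from urllib.request import Request

RESOURCE = "http://example.org/resource/r1"

# (1) Follow-your-nose: dereference the URI itself, asking for an RDF serialization;
#     the client then does all finding, querying, and aggregating on its own.
deref_request = Request(RESOURCE, headers={"Accept": "text/turtle"})

# (2) SPARQL endpoint: the server evaluates an arbitrarily complex query,
#     which is what puts such endpoints at risk of being overloaded.
query = "SELECT ?p ?o WHERE { <%s> ?p ?o } LIMIT 100" % RESOURCE
sparql_url = "http://sparql.example.org/endpoint?" + urlencode({"query": query})

# (3) Triple Pattern Fragments: the server only answers single triple patterns,
#     and the client decomposes more complex queries into such requests.
tpf_url = "http://fragments.example.org/dataset?" + urlencode({"subject": RESOURCE})

print(deref_request.full_url, sparql_url, tpf_url, sep="\n")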
In principle, this can be achieved for any of these two-layer architectures by simply setting up several identical servers that mirror the same content, but there is no standardized and generally accepted way of how to communicate these mirror servers and how to decide on the client side whether a supposed mirror server is trustworthy. Even putting aside these difficulties, two-layer architectures have further conceptual limitations. The most low-level task of providing Linked Data is essential for all other tasks at higher levels, and therefore needs to be the most stable and robust one. We argue that this can be best achieved if we free this lowest layer from all tasks except the provision and archiving of data entries (nanopublications in our case) and decouple it from the tasks of providing services for finding, querying, or analyzing the data. This makes us advocate a multi-layer architecture, a possible realization of which is shown at the bottom of Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Below we present a concrete proposal of such a low-level data provision infrastructure in the form of a nanopublication server network. Based on such an infrastructure, one can then build different kinds of services operating on a subset of the nanopublications they find in the underlying network. 'Core services' could involve things like resolving backwards references (i.e. 'which nanopublications refer to the given one?') and the retrieval of the nanopublications published by a given person or containing a particular URI. Based on such core services for finding nanopublications, one could then provide 'advanced services' that allow us to run queries on subsets of the data and ask for aggregated output. These higher layers can of course make use of existing techniques such as SPARQL endpoints and Triple Pattern Fragments or even classical relational databases, and they can cache large portions of the data from the layers below (as nanopublications are immutable, they are easy to cache). For example, an advanced service could allow users to query the latest versions of several drug-related datasets, by keeping a local triple store and providing users with a SPARQL interface. Such a service would regularly check for new data in the server network on the given topic, and replace outdated nanopublications in its triple store with new ones.</ns0:p><ns0:p>A query request to this service, however, would not involve an immediate query to the underlying server network, in the same way that a query to the Google search engine does not trigger a new crawl of the web.</ns0:p><ns0:p>While the lowest layer would necessarily be accessible to everybody, some of the services on the higher level could be private or limited to a small (possibly paying) user group. We have in particular scientific data in mind, but we think that an architecture of this kind could also be used for Semantic Web content in general.</ns0:p></ns0:div> <ns0:div><ns0:head>Nanopublication Servers</ns0:head><ns0:p>As a concrete proposal of a low-level data provision layer, as explained above, we present here a decentralized nanopublication server network with a REST API to provide and distribute nanopublications.</ns0:p><ns0:p>To ensure the immutability of these nanopublications and to guarantee the reliability of the system, these nanopublications are required to come with trusty URI identifiers, i.e. they have to be transformed on the client side into such trusty nanopublications before they can be published to the network. 
The nanopublication servers of such a network connect to each other to retrieve and (partly) replicate their nanopublications, and they allow users to upload new nanopublications, which are then automatically distributed through the network. Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> shows a schematic depiction of this server network.</ns0:p><ns0:p>Figure 2 caption: Nanopublications that have trusty URI identifiers can be uploaded to a server (or loaded from the local file system by the server administrator), and they are then distributed to the other servers of the network. They can then be retrieved from any of the servers, or from multiple servers simultaneously, even if the original server is not accessible.</ns0:p><ns0:p>Basing the content of this network on nanopublications with trusty URIs has a number of positive consequences for its design: The first benefit is that the fact that nanopublications are always small (by definition) makes it easy to estimate how much time is needed to process an entity (such as validating its hash) and how much space to store it (e.g. as a serialized RDF string in a database). Moreover it ensures that these processing times remain mostly in the fraction-of-a-second range, guaranteeing that responses are always quick, and that these entities are never too large to be analyzed in memory. The second benefit is that servers do not have to deal with identifier management, as the nanopublications already come with trusty URIs, which are guaranteed to be unique and universal. The third and possibly most important benefit is that nanopublications with trusty URIs are immutable and verifiable. This means that servers only have to deal with adding new entries but not with updating them, which eliminates the hard problems of concurrency control and data integrity in distributed systems. (As with classical publications, a nanopublication -once published to the network -cannot be deleted or 'unpublished,' but only marked retracted or superseded by the publication of a new nanopublication.) Together, these aspects significantly simplify the design of such a network and its synchronization protocol, and make it reliable and efficient even with limited resources.</ns0:p><ns0:p>Specifically, a nanopublication server of the current network has the following components:</ns0:p><ns0:p>&#8226; A key-value store of its nanopublications (with the artifact code from the trusty URI as the key)</ns0:p><ns0:p>&#8226; A long list of all stored nanopublications, in the order they were loaded at the given server.</ns0:p><ns0:p>We call this list the server's journal, and it consists of a journal identifier and the sequence of nanopublication identifiers, subdivided into pages of a fixed size. (1000 elements is the default: page 1 containing the first 1000 nanopublications; page 2 the next 1000, etc.)</ns0:p><ns0:p>&#8226; A cache of gzipped packages containing all nanopublications for a given journal page</ns0:p><ns0:p>&#8226; Pattern definitions in the form of a URI pattern and a hash pattern, which define the surface features of the nanopublications stored on the given server</ns0:p><ns0:p>&#8226; A list of known peers, i.e.
the URLs of other nanopublication servers</ns0:p><ns0:p>&#8226; Information about each known peer, including the journal identifier and the total number of nanopublications at the time it was last visited</ns0:p><ns0:p>The server network can be seen as an unstructured peer-to-peer network, where each node can freely decide which other nodes to connect to and which nanopublications to replicate.</ns0:p><ns0:p>The URI pattern and the hash pattern of a server define the surface features of the nanopublications that this server cares about. We called them surface features, because they can be determined by only looking at the URI of a nanopublication. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>in nanopublications whose hash in the trusty URI start with one of the specified character sequences (separated by blank spaces). As hashes are represented in Base64 notation, this particular hash pattern would let a server replicate about 0.05% of all nanopublications. Nanopublication servers are thereby given the opportunity to declare which subset of nanopublications they replicate, and need to connect only to those other servers whose subsets overlap. To decide on whether a nanopublication belongs to a specified subset or not, the server only has to apply string matching at two given starting points of the nanopublication URI (i.e. the first position and position 43 from the end -as the hashes of the current version of trusty URIs are 43 bytes long), which is computationally cheap.</ns0:p><ns0:p>Based on the components introduced above, the servers respond to the following request (in the form of HTTP GET):</ns0:p><ns0:p>&#8226; Each server needs to return general server information, including the journal identifier and the number of stored nanopublications, the server's URI pattern and hash pattern, whether the server accepts POST requests for new nanopublications or servers (see below), and informative entries such as the name and email address of the maintainer and a general description. Additionally, some server-specific limits can be specified: the maximum number of triples per nanopublication (the default is 1200), the maximum size of a nanopublication (the default is 1 MB), and the maximum number of nanopublications to be stored on the given server (unlimited by default).</ns0:p><ns0:p>&#8226; Given an artifact code (i.e. 
the final part of a trusty URI) of a nanopublication that is stored by the server, it returns the given nanopublication in a format like TriG, TriX, N-Quads, or JSON-LD (depending on content negotiation).</ns0:p><ns0:p>&#8226; A journal page can be requested by page number as a list of trusty URIs.</ns0:p><ns0:p>&#8226; For every journal page (except for incomplete last pages), a gzipped package can be requested containing the respective nanopublications.</ns0:p><ns0:p>&#8226; The list of known peers can be requested as a list of URLs.</ns0:p><ns0:p>In addition, a server can optionally support the following two actions (in the form of HTTP POST requests):</ns0:p><ns0:p>&#8226; A server may accept requests to add a given individual nanopublication to its database.</ns0:p><ns0:p>&#8226; A server may also accept requests to add the URL of a new nanopublication server to its peer list.</ns0:p><ns0:p>Server administrators have the additional possibility to load nanopublications from the local file system, which can be used to publish large amounts of nanopublications, for which individual POST requests are not feasible.</ns0:p><ns0:p>Together, the server components and their possible interactions outlined above allow for efficient decentralized distribution of published nanopublications. Specifically, current nanopublication servers follow the following procedure. 3 Every server s keeps its own list of known peer P s . For each peer p on that list that has previously been visited, the server additionally keeps the number of nanopublications on that peer server n p and its journal identifier j p , as recorded during the last visit. At a regular interval, every peer server p on the list of known peers is visited by server s:</ns0:p><ns0:p>1. The latest server information is retrieved from p, which includes its list of known peers P p , the number of stored nanopublications n p , the journal identifier j p , the server's URI pattern U p , and its hash pattern H p .</ns0:p><ns0:p>2. All entries in P p that are not yet on the visiting server's own list of known peers P s are added to P s .</ns0:p><ns0:p>3. If the visiting server's URL is not in P p , the visiting server s makes itself known to server p with a POST request (if this is supported by p).</ns0:p><ns0:p>4. If the subset defined by the server's own URI/hash patterns U s and H s does not overlap with the subset defined by U p and H p , then there won't be any nanopublications on the peer server that this server is interested in, and we jump to step 9. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>5. The server will start at position n to look for new nanopublications at server p: n is set to the total number of nanopublications of the last visit n p , or to 0 if there was no last visit (nanopublication counting starts at 0).</ns0:p><ns0:p>6. If the retrieved journal identifier j p is different from j p (meaning that the server has been reset</ns0:p><ns0:p>since the last visit), n is set to 0.</ns0:p><ns0:p>7. If n = n p , meaning that there are no new nanopublications since the last visit, the server jumps to step 9. The current implementation is designed to be run on normal web servers alongside with other applications, with economic use of the server's resources in terms of memory and processing time. In order to avoid overload of the server or the network connection, we restrict outgoing connections to other servers to one at a time. 
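The replication procedure above can be condensed into a short, self-contained sketch. The following toy simulation is meant to illustrate the control flow only, not the actual server implementation: in-memory Server objects stand in for HTTP calls, and the fetching of the new journal pages and their gzipped packages in the later steps is only hinted at in the comments.

# Self-contained toy simulation of the replication visit (the numbered steps above);
# real servers talk HTTP, verify trusty URIs, and fetch gzipped journal-page packages.

class Server:
    def __init__(self, url, hash_pattern=""):
        self.url = url
        self.hash_pattern = hash_pattern   # surface feature: prefix(es) of the 43-char hash
        self.journal_id = url + "journal-1"
        self.journal = []                  # ordered artifact codes, as in the server's journal
        self.known_peers = []
        self.last_count = {}               # number of nanopublications per peer at last visit
        self.last_journal = {}             # journal identifier per peer at last visit

    def wants(self, artifact_code):
        # String matching on the hash part (the last 43 characters of the artifact code).
        return artifact_code[-43:].startswith(self.hash_pattern)

    def visit(self, peer):
        # Step 1: retrieve the peer's current server information.
        peer_list, count, journal_id = peer.known_peers, len(peer.journal), peer.journal_id
        # Step 2: merge the peer's list of known peers into our own.
        for p in peer_list:
            if p is not self and p not in self.known_peers:
                self.known_peers.append(p)
        # Step 3: make ourselves known to the peer.
        if self not in peer_list:
            peer_list.append(self)
        # Step 4 (omitted here): skip the peer if its replicated subset cannot overlap ours.
        # Steps 5 and 6: start where we stopped last time, or at 0 if the peer was reset.
        n = self.last_count.get(peer.url, 0)
        if journal_id != self.last_journal.get(peer.url):
            n = 0
        # Step 7 and beyond: if there is anything new, walk the journal from position n
        # (in reality: request journal pages and their gzipped packages, verify each
        # nanopublication, and store those matching our own surface patterns).
        if n != count:
            for code in peer.journal[n:count]:
                if self.wants(code) and code not in self.journal:
                    self.journal.append(code)
        self.last_count[peer.url] = count
        self.last_journal[peer.url] = journal_id


a, b = Server("http://a.example.org/"), Server("http://b.example.org/")
b.journal.append("RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70")
a.visit(b)
print(len(a.journal), b.known_peers == [a])   # 1 True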
Of course, sufficient storage space is needed to save the nanopublications (for which we currently use MongoDB), but storage space is typically much easier and cheaper to scale up than memory or processing capacities. The current system and its protocol are not set in stone but, if successful, will have to evolve in the future, in particular with respect to network topology and partial replication, to accommodate a network of possibly thousands of servers and billions of nanopublications.</ns0:p></ns0:div> <ns0:div><ns0:head>Nanopublication Indexes</ns0:head><ns0:p>To make the infrastructure described above practically useful, we have to introduce the concept of indexes.</ns0:p><ns0:p>One of the core ideas behind nanopublications is that each of them is a tiny atomic piece of data. This implies that analyses will mostly involve more than just one nanopublication and typically a large number of them. Similarly, most processes will generate more than just one nanopublication, possibly thousands or even millions of them. Therefore, we need to be able to group nanopublications and to identify and use large collections of them.</ns0:p><ns0:p>Given the versatility of the nanopublication standard, it seems straightforward to represent such collections as nanopublications themselves. However, if we let such 'collection nanopublications' contain other nanopublications, then the former would become very large for large collections and would quickly lose their property of being nano. We can solve part of that problem by applying a principle that we can call reference instead of containment: nanopublications cannot contain but only refer to other nanopublications, and trusty URIs allow us to make these reference links almost as strong as containment links. To emphasize this principle, we call them indexes and not collections.</ns0:p><ns0:p>However, even by only containing references and not the complete nanopublications, these indexes can still become quite large. To ensure that all such index nanopublications remain nano in size, we need to put some limit on the number of references, and to support sets of arbitrary size, we can allow indexes to be appended by other indexes. We set 1000 nanopublication references as the upper limit any single index can directly contain. This limit is admittedly arbitrary, but it seems to be a reasonable compromise between ensuring that nanopublications remain small on the one hand and limiting the number of nanopublications needed to define large indexes on the other.</ns0:p><ns0:p>Figure 3 caption: Nanopublications can (but need not) be elements of one or more indexes. An index can have sub-indexes and can append to another index, in either case acquiring all nanopublications.</ns0:p><ns0:p>A set of 100,000 nanopublications, for example, can therefore be defined by a sequence of 100 indexes, where the first one stands for the first 1000 nanopublications, the second one appends to the first and adds another 1000 nanopublications (thereby representing 2000 of them), and so on up to the last index, which appends to the second to last and thereby stands for the entire set. In addition, to allow datasets to be organized in hierarchies, we define that the references of an index can also point to sub-indexes.
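To make the appending scheme concrete, the following small sketch splits a list of nanopublication URIs into such a chain of indexes; plain dictionaries stand in for the actual index nanopublications, which would of course be RDF with trusty URIs.

# Illustrative sketch of the appending scheme: each index holds at most 1000 direct
# references and appends to its predecessor; the last index stands for the entire set.

MAX_REFS = 1000   # upper limit of direct references per index, as defined above

def build_index_chain(nanopub_uris):
    chain = []
    for start in range(0, len(nanopub_uris), MAX_REFS):
        chain.append({
            "id": "index-%d" % (len(chain) + 1),              # placeholder identifiers
            "appends_to": chain[-1]["id"] if chain else None, # previous index in the chain
            "elements": nanopub_uris[start:start + MAX_REFS], # at most 1000 direct references
        })
    return chain

uris = ["http://example.org/np%d" % i for i in range(100000)]
chain = build_index_chain(uris)
print(len(chain))              # 100 indexes for 100,000 nanopublications
print(chain[-1]["appends_to"]) # the last one appends to the second to last ('index-99')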
In this way we end up with three types of relations: an index can append to another index, it can contain other indexes as sub-indexes, and it can contain nanopublications as elements.</ns0:p><ns0:p>These relations defining the structure of nanopublication indexes are shown schematically in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.</ns0:p><ns0:p>Index (a) in the shown example contains five nanopublications, three of them via sub-index (c). The latter is also part of index (b), which additionally contains eight nanopublications via sub-index (f). Two of these eight nanopublications belong directly to (f), whereas the remaining six come from appending to index (e). Index (e) in turn gets half of its nanopublications by appending to index (d). We see that some nanopublications may not be referenced by any index at all, while others may belong to several indexes at the same time. The maximum number of direct nanopublications (or sub-indexes) is here set to three for illustration purposes, whereas in reality this limit is set to 1000.</ns0:p><ns0:p>In addition to describing sets of data entries, nanopublication indexes can also have additional metadata attached, such as labels, descriptions, further references, and other types of relations at the level of an entire dataset. Below we show how this general concept of indexes can be used to define sets of new or existing nanopublications, and how such index nanopublications can be generated and published, and their nanopublications retrieved.</ns0:p><ns0:p>As a side note, dataset metadata can be captured and announced as nanopublications even for datasets that are not (yet) themselves available in the nanopublication format. The HCLS Community Profile of dataset descriptions <ns0:ref type='bibr' target='#b16'>(Gray et al., 2015)</ns0:ref> provides a good guideline of which of the existing RDF vocabularies to use for such metadata descriptions.</ns0:p></ns0:div> <ns0:div><ns0:head>Trusty Publishing</ns0:head><ns0:p>Let us consider two simple exemplary scenarios to illustrate and motivate the general concepts. To demonstrate the procedure and the general interface of our implementation, we show here the individual steps on the command line in a tutorial-like fashion, using the np command from the nanopub-java library <ns0:ref type='bibr' target='#b24'>(Kuhn, 2015a)</ns0:ref>. Of course, users should eventually be supported by graphical interfaces, but command line tools are a good starting point for developers to build such tools. To make this example completely reproducible, these are the commands to download and compile the needed code from a Bash shell (requiring Git and Maven): Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To publish some new data, they have to be formatted as nanopublications. We use the TriG format here and define the following RDF prefixes: This is what you see immediately after publication. Only one server knows about the new nanopublication.</ns0:p><ns0:p>Some minutes later, however, the same command leads to something like this: Next, we can make an index pointing to these three nanopublications: Once published, we can check the status of this index and its contained nanopublications:</ns0:p><ns0:p>$ np status -r RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14 1 index nanopub; 3 content nanopubs Again, after just a few minutes this nanopublication will be distributed in the network and available on multiple servers. 
From this point on, everybody can conveniently and reliably retrieve the given set of nanopublications. The only thing one needs to know is the artifact code of the trusty URI of the index:</ns0:p></ns0:div> <ns0:div><ns0:head>$ np get -c RAXsXUhY8iDbfDdY6sm64hRFPr7eAwYXRlSsqQAz1LE14</ns0:head><ns0:p>This command downloads the nanopublications of the index we just created and published.</ns0:p><ns0:p>As another exemplary scenario, let us imagine a researcher in the biomedical domain who is interested in the protein CDKN2A and who has derived some conclusion based on the data found in existing nanopublications. Specifically, let us suppose this researcher analyzed the five nanopublications specified by the following artifact codes (they can be viewed online by appending the artifact code to the URL http://np.inn.ac/ or the URL of any other nanopublication server):</ns0:p><ns0:formula xml:id='formula_0'>RAEoxLTy4pEJYbZwA9FuBJ6ogSquJobFitoFMbUmkBJh0 RAoMW0xMemwKEjCNWLFt8CgRmg_TGjfVSsh15hGfEmcz4 RA3BH_GncwEK_UXFGTvHcMVZ1hW775eupAccDdho5Tiow RA3HvJ69nO0mD5d4m4u-Oc4bpXlxIWYN6L3wvB9jntTXk</ns0:formula></ns0:div> <ns0:div><ns0:head>RASx-fnzWJzluqRDe6GVMWFEyWLok8S6nTNkyElwapwno</ns0:head><ns0:p>These nanopublications about the same protein come from two different sources: The first one is from the BEL2nanopub dataset, whereas the remaining four are from neXtProt. 4 These nanopublications can be downloaded as above with the np get command and stored in a file, which we name here cdkn2a-nanopubs.trig.</ns0:p><ns0:p>In order to be able to refer to such a collection of nanopublications with a single identifier, a new index is needed that contains just these five nanopublications. This time we give the index a title (which is optional): Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The generated index is stored in the file index.cdkn2a-nanopubs.trig, and our exemplary researcher can now publish this index to let others know about it:</ns0:p><ns0:p>$ np publish index.cdkn2a-nanopubs.trig 1 nanopub published at http://np.inn.ac/ There is no need to publish the five nanopublications this index is referring to, because they are already public (this is how we got them in the first place). The index URI can now be used to refer to this new collection of existing nanopublications in an unambiguous and reliable manner. This URI can be included in the scientific publication that explains the new finding, for example with a reference like the following:</ns0:p><ns0:p>[1] Data about CDKN2A from BEL2nanopub &amp; neXtProt. Nanopublication index http://np.i nn.ac/RA6jrrPL2NxxFWlo6HFWas1ufp0OdZzS XKwQDXpJg3CY, 14 April 2015.</ns0:p><ns0:p>In this case with just five nanopublications, one might as well refer to them individually, but this is obviously not an option for cases where we have hundreds or thousands of them. The given web link allows everybody to retrieve the respective nanopublications via the server np.inn.ac. The URL will not resolve should the server be temporarily or permanently down, but because it is a trusty URI we can retrieve the nanopublications from any other server of the network following a well-defined protocol (basically just extracting the artifact code, i.e. the last 45 characters, and appending it to the URL of another nanopublication server). This reference is therefore much more reliable and more robust than links to other types of data repositories. 
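Conceptually, this failover is roughly what a client such as np get has to do. The following is a minimal, illustrative sketch of it; only np.inn.ac is taken from the text, while the other base URLs are placeholders.

# Minimal, illustrative failover retrieval: extract the artifact code (the last 45
# characters of the trusty URI) and try one server after the other.
from urllib.error import URLError
from urllib.request import Request, urlopen

SERVERS = [
    "http://np.inn.ac/",
    "http://mirror-one.example.org/nanopub-server/",
    "http://mirror-two.example.org/nanopub-server/",
]

def fetch_nanopub(trusty_uri_or_code, servers=SERVERS, timeout=10):
    code = trusty_uri_or_code[-45:]    # the artifact code
    last_error = None
    for base in servers:
        try:
            request = Request(base.rstrip("/") + "/" + code,
                              headers={"Accept": "application/trig"})
            return urlopen(request, timeout=timeout).read()
        except (URLError, OSError) as error:
            last_error = error         # server down or unreachable: try the next one
    raise RuntimeError("not retrievable from any known server: %s" % last_error)

# Example (network access required):
# data = fetch_nanopub("<trusty URI or artifact code>")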
In fact, we refer to the datasets we use in this publication for evaluation purposes, as described below in Section 4, in exactly this way (NP Index RAY lQruua, 2015; NP Index RACy0I4f w, 2015; NP Index RAR5dwELYL, 2015; NP Index RAXy332hxq, 2015; NP Index RAVEKRW0m6, 2015; NP Index RAXFlG04YM, 2015; NP Index RA7SuQ0e66, 2015).</ns0:p><ns0:p>The new finding that was deduced from the given five nanopublications can, of course, also be published as a nanopublication, with a reference to the given index URI in the provenance part: </ns0:p></ns0:div> <ns0:div><ns0:head>}</ns0:head><ns0:p>We can again transform it to a trusty nanopublication, and then publish it as above. Some of the features of the presented command-line interface are made available through a web interface for dealing with nanopublications that is shown in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. The supported features include the generation of trusty URIs, as well as the publication and retrieval of nanopublications. The interface allows us to retrieve, for example, the nanopublication we just generated and published above, even though we used an example.org URI, which is not directly resolvable. Unless it is just about toy examples, we should of course try to use resolvable URIs, but with our decentralized network we can retrieve the data even if the original link is no longer functioning or temporarily broken.</ns0:p></ns0:div> <ns0:div><ns0:head>EVALUATION</ns0:head><ns0:p>To evaluate our approach, we want to find out whether a small server network run on normal web servers, without dedicated infrastructure, is able to handle the amount of nanopublications we can expect to become publicly available in the next few years. Our evaluation consists of three parts focusing on the different aspects of dataset publication, server performance, and dataset retrieval, respectively. At the time the first part of the evaluation was performed, the server network consisted of three servers in Zurich, New Haven, and Ottawa. Seven new sites in Amsterdam, Stanford, Barcelona, Ghent, Athens, Leipzig, and Haverford The files of the presented studies are available online in two repositories, one for the analyses of the original studies that have been previously published 6 and another one with the files for the additional analyses and diagrams for this extended article 7 .</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Design</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_9'>1</ns0:ref> shows seven existing large nanopublication datasets. Five of these datasets were used for the first part of the evaluation (the other two were not yet available at the time this part of the evaluation was conducted) , which tested the ability of the network to store and distribute new datasets. These five datasets consist of a total of more than 5 million nanopublications and close to 200 million RDF triples, including nanopublication indexes that we generated for each dataset. The total size of these five datasets when stored as uncompressed TriG files amounts to 15.6 GB. Each of the datasets is assigned to one of the three servers, where it is loaded from the local file systems. The first nanopublications start spreading to the other servers, while others are still being loaded from the file system. 
We therefore test the reliability and capacity of the network under constant streams of new nanopublications coming from different servers, and we use two nanopublication monitors (in Zurich and Ottawa) to evaluate the responsiveness of the network.</ns0:p><ns0:p>In the second part of the evaluation we expose a server to heavy load from clients to test its retrieval capacity. For this we use a service called Load Impact 8 to let up to 100 clients access a nanopublication server in parallel. We test the server in Zurich over a time of five minutes under the load from a linearly increasing number of clients (from 0 to 100) located in Dublin. These clients are programmed to request a randomly chosen journal page, then to go though the entries of that page one by one, requesting the respective nanopublication with a probability of 10%, and starting over again with a different page.</ns0:p><ns0:p>As a comparison, we run a second session, for which we load the same data into a Virtuoso SPARQL endpoint on the same server in Zurich (with 16 GB of memory given to Virtuoso and two 2.40 GHz Intel Xeon processors). Then, we perform exactly the same stress test on the SPARQL endpoint, requesting the nanopublications in the form of SPARQL queries instead of requests to the nanopublication server interface. This comparison is admittedly not a fair one, as SPARQL endpoints are much more powerful and are not tailor-made for the retrieval of nanopublications, but they provide nevertheless a valuable and well-established reference point to evaluate the performance of our system.</ns0:p><ns0:p>While the second part of the evaluation focuses on the server perspective, the third part considers the client side. In this last part, we want to test whether the retrieval of an entire dataset in a parallel fashion from the different servers of the network is indeed efficient and reliable. We decided to use a medium-sized dataset and chose LIDDI (NP Index RA7SuQ0e66, 2015), which consists of around 100,000 triples. We tested the retrieval of this dataset from a computer connected to the internet via a basic plan from a regular internet service provider (i.e. not via a fast university network) with a command like the following:</ns0:p></ns0:div> <ns0:div><ns0:head>$ np get -c -o nanopubs.trig RA7SuQ0e661LJdKpt5EOS2DKykf1ht9LFmNaZtFSDMrXg</ns0:head><ns0:p>In addition, we wanted to test the retrieval in a situation where the internet connection and/or the nanopublication servers are highly unreliable. For that, we implemented a version of an input stream that introduces errors to simulate such unreliable connections or servers. With a given probability (set to 1% for this evaluation), each read attempt to the input stream (a single read attempt typically asking for about 8000 bytes) either leads to a randomly changed byte or to an exception being thrown after a delay of 5 seconds (both having an equal chance of occurring of 0.5%). This behavior can be achieved with the following command, which is obviously only useful for testing purposes:</ns0:p><ns0:p>$ np get -c -o nanopubs.trig --simulate-unreliable-connection \</ns0:p></ns0:div> <ns0:div><ns0:head>RA7SuQ0e661LJdKpt5EOS2DKykf1ht9LFmNaZtFSDMrXg</ns0:head><ns0:p>For the present study, we run each of these two commands 20 times. To evaluate the result, we can investigate whether the downloaded sets of nanopublications are equivalent, i.e. lead to identical files when normalized (such as transformed to a sorted N-Quads representation). 
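The fault injection just described can be sketched as a small wrapper stream. The actual evaluation used the --simulate-unreliable-connection option of the np tool shown above, so the following Python version is only an illustration of the described behaviour: 1% of read attempts are affected, split evenly between a silently corrupted byte and a failure after a 5-second delay.

# Illustration of the described fault injection (not the tooling used in the evaluation).
import io
import random
import time

class UnreliableStream(io.RawIOBase):
    def __init__(self, wrapped, error_probability=0.01, delay=5.0):
        self.wrapped = wrapped
        self.error_probability = error_probability
        self.delay = delay

    def readable(self):
        return True

    def read(self, size=8000):   # a single read attempt typically asks for ~8000 bytes
        data = bytearray(self.wrapped.read(size))
        roll = random.random()
        if roll < self.error_probability / 2 and data:
            data[random.randrange(len(data))] = random.randrange(256)   # silent corruption
        elif roll < self.error_probability:
            time.sleep(self.delay)                                      # stalled connection
            raise IOError("simulated connection failure")
        return bytes(data)

stream = UnreliableStream(io.BytesIO(b"@prefix ex: <http://example.org/> .\n" * 2000))
print(len(stream.read()))   # usually 8000; occasionally corrupted or failing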
Furthermore, we can look into the amount of time this retrieval operation takes, and the number of times the retrieval of a single nanopublication from a server fails and has to be repeated.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>. This diagram shows the rate at which nanopublications are loaded at their first, second, and third server, respectively, over the time of the evaluation. At the first server, nanopublications are loaded from the local file system, whereas at the second and third server they are retrieved via the server network.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Results</ns0:head><ns0:p>The first part of the evaluation lasted 13 hours and 21 minutes, at which point all nanopublications were replicated on all three servers, and therefore the nanopublication traffic came to an end. Figure <ns0:ref type='figure'>6</ns0:ref> shows the rate at which the nanopublications were loaded at their first, second, and third server, respectively.</ns0:p><ns0:p>The network was able to handle an average of about 400,000 new nanopublications per hour, which corresponds to more than 100 new nanopublications per second. This includes the time needed for loading each nanopublication once from the local file system (at the first server), transferring it through the network two times (to the other two servers), and for verifying it three times (once when loaded and twice when received by the other two servers). Figure <ns0:ref type='figure'>7</ns0:ref> shows the response times of the three servers as measured by the two nanopublication monitors in Zurich (top) and Ottawa (bottom) during the time of the evaluation. We see that the observed latency is mostly due to the geographical distance between the servers and the monitors. The response time was always less than 0.21 seconds when the server was on the same continent as the measuring monitor. In 99.77% of all cases (including those across continents) Figure <ns0:ref type='figure'>8</ns0:ref> shows the result of the second part of the evaluation. The nanopublication server was able to handle 113,178 requests in total (i.e. an average of 377 requests per second) with an average response time of 0.12 seconds. In contrast, the SPARQL endpoint answering the same kind of requests needed 100 times longer to process them (13 seconds on average), consequently handled about 100 times fewer requests (1267), and started to hit the timeout of 60 seconds for some requests when more than 40 clients accessed it in parallel. In the case of the nanopublication server, the majority of the requests were answered within less than 0.1 seconds for up to around 50 parallel clients, and this value remained below 0.17 seconds all the way up to 100 clients. As the round-trip network latency alone between Ireland and Zurich amounts to around 0.03 to 0.04 seconds, further improvements can be achieved for a denser network due to the reduced distance to the nearest server.</ns0:p><ns0:p>For the third part of the evaluation, all forty retrieval attempts succeeded. After normalization of the downloaded datasets, they were all identical, also the ones that were downloaded through an input stream that was artificially made highly unreliable. Figure <ns0:ref type='figure' target='#fig_9'>9</ns0:ref> shows the number of retrieval failures and the amount of time that was required for the retrieval. With the normal connection, the downloading of nanopublications from the network almost always succeeded on the first try.
Of the 98,184 nanopublications that had to be downloaded (98,085 content nanopublications plus 99 nanopublication indexes), fewer than 10 such download attempts failed in 18 of the 20 test runs. In the remaining two runs, the connection happened to be temporarily unreliable for 'natural' reasons, and the number of download failures rose to 181 and 9458, respectively. This, however, had no effect on the success of the download in a timely manner. On average over the 20 test runs, the entire dataset was successfully downloaded in 235 seconds, with a maximum of 279 seconds. Unsurprisingly, the unreliable connection leads to a much larger average number of failures and retries, but these failures have no effect on the final downloaded dataset, as we have seen above. On average, 2486 download attempts failed and had to be retried in the unreliable setting. In particular because half of these failures included a delay of 5 seconds, the download times are more than doubled, but still in a very reasonable range with an average of 517 seconds and a maximum below 10 minutes.</ns0:p><ns0:p>In summary, the first part of the evaluation shows that the overall replication capacity of the current server network is around 9.4 million new nanopublications per day or 3.4 billion per year. The results of the second part show that the load on a server when measured as response times is barely noticeable for up to 50 parallel clients, and therefore the network can easily handle 50 • x parallel client connections or more, where x is the number of independent physical servers in the network (currently x = 10). The second part thereby also shows that the restriction of avoiding parallel outgoing connections for the replication between servers is actually a very conservative measure that could be relaxed, if needed, to allow for a higher replication capacity. The third part of the evaluation shows that the client-side retrieval of entire datasets is indeed efficient and reliable, even if the internet connection used or some servers in the network are highly unreliable.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION AND CONCLUSION</ns0:head><ns0:p>We have presented here a low-level infrastructure for data sharing, which is just one piece of a bigger ecosystem to be established. The implementation of components that rely on this low-level data sharing infrastructure is ongoing and future work. This includes the development of 'core services' (see Section 3.1) on top of the server network to allow people to find nanopublications and 'advanced services' to query and analyze the content of nanopublications. In addition, we need to establish standards and best practices of how to use existing ontologies (and to define new ones where necessary) to describe properties and relations of nanopublications, such as referring to earlier versions, marking nanopublications as retracted, and reviewing of nanopublications.</ns0:p><ns0:p>Apart from that, we also have to scale up the current small network. As our protocol only allows for simple key-based lookup, the time complexity for all types of requests is sublinear and therefore scales up well. The main limiting factor is disk space, which is relatively cheap and easy to add. Still, the servers will have to specialize even more, i.e. replicate only a part of all nanopublications, in order to handle really large amounts of data.
In addition to the current surface feature definitions via URI and hash patterns, a number of additional ways of specializing are possible in the future: Servers can restrict themselves to particular types of nanopublications, e.g. to specific topics or authors, and communicate this to the network in a similar way as they do it now with URI and hash patterns; inspired by the Bitcoin system, certain servers could only accept nanopublications whose hash starts with a given number of zero bits, which makes it costly to publish; and some servers could be specialized to new nanopublications, providing fast access but only for a restricted time, while others could take care of archiving old nanopublications, possibly on tape and with considerable delays between request and delivery. Lastly, there could also emerge interesting synergies with novel approaches to internet networking, such as Content-Centric Networking <ns0:ref type='bibr' target='#b22'>(Jacobson et al., 2012)</ns0:ref>, with which -consistent with our proposal -requests are based on content rather than hosts.</ns0:p><ns0:p>We argue that data publishing and archiving can and should be done in a decentralized manner. We believe that the presented server network can serve as a solid basis for semantic publishing, and possibly also for the Semantic Web in general. It could contribute to improve the availability and reproducibility of scientific results and put a reliable and trustworthy layer underneath the Semantic Web.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic representation of the decentralized server architecture. Nanopublications that have trusty URI identifiers can be uploaded to a server (or loaded from the local file system by the server administrator), and they are then distributed to the other servers of the network. They can then be retrieved from any of the servers, or from multiple servers simultaneously, even if the original server is not accessible.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>8.</ns0:head><ns0:label /><ns0:figDesc>All journal pages p starting from the one containing n until the end of the journal are downloaded one by one (considering the size of journal pages, which is by default 1000 nanopublications): (a) All nanopublication identifiers in p (excluding those before n) are checked with respect to whether (A) they are covered by the visiting server's patterns U s and H s and (B) they are not already contained in the local store. A list l is created of all nanopublication identifiers of the given page that satisfy both, (A) and (B). (b) If the number of new nanopublications |l| exceeds a certain threshold (currently set to 5), the nanopublications of p are downloaded as a gzipped package. Otherwise, the new nanopublications (if any) are requested individually.(c) The retrieved nanopublications that are in list l are validated using their trusty URIs, and all valid nanopublications are loaded to the server's nanopublication store and their identifiers are added to the end of the server's own journal. (Invalid nanopublications are ignored.) 9. The journal identifier j p and the total number of nanopublications n p for server p are remembered for the next visit, replacing the values of j p and n p .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
Schematic example of nanopublication indexes, which are themselves nanopublications.Nanopublications can (but need not) be elements of one or more indexes. An index can have sub-indexes and can append to another index, in either case acquiring all nanopublications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>And for convenience reasons, we can add the bin directory to the path variable:$ PATH=$(pwd)/bin:$PATH 10/21 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9112:2:1:CHECK 22 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>}</ns0:head><ns0:label /><ns0:figDesc>of three graphs plus the head graph. The latter defines the structure of the nanopublication by linking to the other graphs:The actual claim or hypothesis of the nanopublication goes into the assertion graph::assertion { ex:mosquito ex:transmits ex:malaria.}The provenance and publication info graph provide meta-information about the assertion and the entire nanopublication, respectively: constitute a very simple but complete nanopublication. To make this example a bit more interesting, let us define two more nanopublications that have different assertions but are otherwise identical:We save these nanopublications in a file nanopubs.trig, and before we can publish them, we have to assign them trusty URIs: the file trusty.nanopubs.trig, which contains transformed versions of the three nanopublications that now have trusty URIs as identifiers, as shown by the output lines above. Looking into the file we can verify that nothing has changed with respect to the content, and now we are ready to publish them: $ np publish trusty.nanopubs.trig 3 nanopubs published at http://np.inn.ac/ For each of these nanopublications, we can check their publication status with the following command (referring to the nanopublication by its URI or just its artifact code):</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The web interface of the nanopublication validator can load nanopublications by their trusty URI (or just their artifact code) from the nanopublication server network. It also allows users to directly publish uploaded nanopublications.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. This screenshot of the nanopublication monitor interface (http://npmonitor.inn.ac) showing the current server network. It currently consists of 15 server instances on 10 physical servers in Zurich, New Haven, Ottawa, Amsterdam, Stanford, Barcelona, Ghent, Athens, Leipzig, and Haverford.</ns0:figDesc><ns0:graphic coords='16,141.73,63.78,413.57,229.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Server response times under heavy load, recorded by the monitors during the first evaluation</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9. The number of failures (top) and required time (bottom) when downloading the LIDDI dataset from the server network over a normal connection as well as a connection that has been artificially made unreliable.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,141.73,189.62,413.61,130.32' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Existing datasets in the nanopublication format, five of which were used for the first part of the evaluation.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='5'>https://github.com/tkuhn/nanopub-monitor 6 https://bitbucket.org/tkuhn/trustypublishing-study/ 7 https://bitbucket.org/tkuhn/trustypublishingx-study/ 8 https://loadimpact.com</ns0:note> </ns0:body> "
"Below are our responses to the specific points raised by the editor and the reviewers. > Editor's comments > Please address the remaining comments raised by reviewers 1 and 2 Done. See below. > Reviewer 1 (Daniel Garijo) > > > > What I would like to recommend the authors is to include the link to the repository with the data from the evaluations (https://bitbucket.org/tkuhn/trustypublishing-study/src/). That way anyone interested in the data would be able to access it. That is a very good idea. The links to the two repositories are now included in the beginning of the evaluation section. > Reviewer 2 (Anonymous) > > > > > > > > > C1. The authors did not address my first comment in the previous round. “The authors stated two main requirements: Having a reliable mechanism for hosting and referencing datasets, but also the ability to reference and retrieve datasets at different granularity levels. With respect to the second requirements, the authors did not discuss the process by which a datasets is transformed into a set of nanopublications. The model proposed by the authors support nano-publication and nano-publication indexes, which references other nano-publication. The question the reader may ask is how nan-publication and their container nano-publications indexes are obtained given a dataset”. We now understand your point about data being represented in RDF but not in nanopublication format. We previously misunderstood it as being covered by our response to C2. To address it, we added a few sentences to the first paragraph of the approach section, sketching the problem and a starting point for its solution: 'We furthermore exploit the fact that datasets in RDF can be split into small pieces without any effects on their semantics. After Skolemization of blank nodes, RDF triples are independent and can be separated and joined without restrictions. Best practices of how to define meaningful small groups of such triples still have to emerge, but an obvious and simple starting point is grouping them by the resource in subject position. We focus here on the technical questions and leave these practical issues for future research.' With respect to the generation of nanopublication indexes, we now explain this explicitly in the end of the section Nanopublication Indexes, and refer to the subsequent section, which explains how the nanopublication library can be used for this. > > > > C2. In their response to C2 in the previous round, the authors suggest that heterogeneity of the data model used in datasets (e.g., CSV and relational) can be resolved by using existing state of the art technique to translate data in those models into RDF. This approach may be expensive, I was wondering if > > > > > > > lightweight approaches which do not attempt to translate the original data, but instead create metadata that describe them using nanopublication would be more realistic and cost effective as a solution. Of course the granularity of retrieval in this case would be a whole dataset, but there are scenarios where this solution would be acceptable. I think that a discussion in these lines that clariy the options to the reader would be helpful, specially that linked data form only a small proportion of available scientific data. This is a very good point. Thank you for raising it. We have in fact been thinking about such light-weight dataset announcements, in particular by using the HCLS Community Profile for dataset descriptions. 
We now include the following paragraph at the end of the section on nanopublication indexes: 'As a side note, dataset metadata can be captured and announced as nanopublications even for datasets that are not (yet) themselves available in the nanopublication format. The HCLS Community Profile of dataset descriptions (Gray et al., 2015) provides a good guideline of which of the existing RDF vocabularies to use for such metadata descriptions. "
Here is a paper. Please give your review comments after reading it.
287
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Wireless Sensor Networks connect a set of highly flexible wireless devices with small weight and size. They are used to monitor and control the environment by organizing the acquired data at a central device. Constructing fully connected networks using low power consumption sensors, devices, and protocols is one of the main challenges facing wireless Sensor Networks, especially in places where it is difficult to establish wireless networks in a normal way, such as military areas, archaeological sites, agricultural districts, construction sites, and so on. This paper proposes an approach for constructing and extending Bi-Directional mesh networks using low power consumption technologies inside various indoors and outdoors architectures called 'an adaptable Spider -Mesh topology.'.</ns0:p><ns0:p>The use of ESP-NOW protocol as a communication technology added an advantage of longer communication distance versus a slight increase of consumed power. It provides 15 times longer distance compared to BLE protocol while consuming only twice as much power. So according to our theoretical and experimental comparisons the proposed approach could provide higher network coverage while maintaining an acceptable level of power consumption.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Monitoring, measuring and controlling systems are rapidly evolving in today's world.</ns0:p><ns0:p>Embedded systems, the Internet of Things (IoT), remote environmental monitoring, and smart home automation are just a few of the applications that have motivated by the creation of Wireless Sensor Networks (WSNs) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Wireless sensor networks typically connect smart sensors to other IoT modules for collecting, monitoring, and remotely controlling real-time physical parameters such as air temperature, humidity, air pressure, soil moisture, and other environmental variables. The collected data is sent to a central base station for automatic decision-making or notification of decision-makers <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>.</ns0:p><ns0:p>Recently, the promising use of WSNs and IoT-based systems has prompted researchers to build wireless networks in places where it is difficult to do so using conventional low-power technologies, such as military zones, marine environment <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> construction sites <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, openair archaeological sites <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, agriculture districts <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> and others.</ns0:p><ns0:p>Implementing low-power consumption, low-cost, and high-flexibility wireless devices with small weight and size is required to build and scale up WSNs and IoT applications. Due to its technical specifications, performance properties, functionality, and affordability, the Esp8266 Tree and Mesh topologies. The Mesh topology is the best option for constructing, extending and controlling the Wireless Sensor Networks (WSNs).</ns0:p><ns0:p>Researchers have studied and evaluated the effects of star and mesh Wireless Sensor Network (WSNs) topologies on response time, throughput, traffic drop and delay using Zigbee communication protocols . 
The results demonstrated that the network performs better with the mesh topology than with the star topology <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>.</ns0:p><ns0:p>Other research measured and analyzed the average delay, throughput and packet loss of the star, mesh and tree topologies using Zigbee communication protocols. The results show that the star topology is stable in terms of throughput and packet loss and, in addition, has the smallest delay value. However, the mesh and tree topologies have the advantage of being able to send data over longer distances and to add more nodes than the star topology <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>.</ns0:p><ns0:p>Wireless sensor networks (WSNs) built with energy-efficient, low power consumption technologies can extend the network lifetime and enable efficient, reliable, and dependable wireless communications <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. Power consumption in a sensor node is divided into three parts: sensing, communication and data processing. Bluetooth Low Energy and ESP-NOW are two ultra-low-power communication protocols that can be used in a wide variety of WSNs and IoT-based systems.</ns0:p><ns0:p>According to the Bluetooth Low Energy Core Specification, Bluetooth Low Energy can be used to establish three kinds of network structure: star, mesh and tree networks <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>. The star topology is the simplest one. BLE mesh is the most flexible and reliable network structure and has the ability to extend the network coverage area; however, it is complicated and is not considered efficient in terms of power consumption and latency <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. A tree network consists of three different types of nodes: the root node, the intermediate nodes and the leaf nodes. It can connect more nodes than the star network and, moreover, its routing rules are significantly simpler than mesh routing rules <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>.</ns0:p><ns0:p>The goal of this article is to construct an efficient mesh-topology network that can be used to extend and control Wireless Sensor Networks using low-energy communication protocols, as well as to study and analyze the permissible distance between network nodes and the power consumed by each node. Selecting an energy-efficient routing mechanism for our suggested approach is a challenging task that necessitates many experiments and comparisons. As a result, the extended work of this paper will focus heavily on choosing the most appropriate routing protocol for the network topology and the communication protocol presented in this paper.</ns0:p></ns0:div> <ns0:div><ns0:head>III. WIRELESS SENSOR NETWORKS COMMUNICATION PROTOCOLS</ns0:head><ns0:p>Communication protocols are essential for connecting devices and sharing data in wireless sensor networks and Internet of Things devices. Specific communication protocols are required for building networks for monitoring and controlling construction sites, the marine environment, and archaeological sites. These protocols should be low-power and capable of sharing data across all network nodes. The following communication protocols will be used to test the proposed approach in this study.</ns0:p></ns0:div> <ns0:div><ns0:head>A. BLUETOOTH LOW ENERGY</ns0:head><ns0:p>Classic Bluetooth, Bluetooth Low Energy, Wi-Fi, and the ESP-NOW protocol are all supported by ESP32 boards.
Compared to Classic Bluetooth, Bluetooth Low Energy is designed to use significantly less power while maintaining a similar communication range.</ns0:p><ns0:p>To connect devices using the BLE protocol, devices can act as a Central / Master (smart phones or PCs) or Peripheral / Slave (small devices such as smart watches or ESP32 boards). Peripheral devices advertise their existence and wait for the central device to connect to them, whereas the central device scans nearby devices and connects them. Devices can be either a Client or a Server after establishing a BLE connection, as depicted in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. A server device has local resources such as 'profiles, services, and characteristics' that clients can read <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 1. POINT-TO-POINT CONNECTION METHODOLOGY USING BLE PROTOCOL</ns0:head><ns0:p>Recently, mesh networks can be configured utilizing the BLE protocol, although there are several limitations. According to our experiments, the maximum distance between the server and clients is six meters, and the maximum number of clients connected to the server at the same time is three devices.</ns0:p><ns0:p>Researchers have been able to overcome these challenges in a variety of ways, including switching some client devices into break mode to allow other devices to connect in their place, or using a time division system to switch Server / Client mode for some devices on the network, but these techniques are extremely complex <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>. The existing limitations of using the BLE protocol for establishing and expanding the Wireless Sensors Network pushed us to look for other low power consumption protocols that could address these difficulties.</ns0:p></ns0:div> <ns0:div><ns0:head>B. ESP-NOW</ns0:head><ns0:p>ESP-NOW is a fast wireless communication proprietary protocol developed by 'Espressif organization' that may be used to transfer small messages (up to 250 bytes) between ESP32 boards <ns0:ref type='bibr'>[17]</ns0:ref>. As demonstrated in Figure <ns0:ref type='figure'>2</ns0:ref>, data is encapsulated in a vendor-specific action frame and then sent from one device to another. The pairing between devices is required prior to their communication. After pairing, the connection becomes secure and peer-to-peer, with no need for handshake process <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 2. ESP-NOW VENDOR-SPECIFIC FRAME FORMAT (ADAPTED FROM [19])</ns0:head><ns0:p>ESP-NOW protocol is similar to the low-power 2.4GHz wireless connectivity. This protocol allows multiple low-power devices to communicated to each other and exchange data between ESP32 boards without the use of Wi-Fi or Bluetooth technologies <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> summarizes the essential differences between the Bluetooth Low Energy and ESP-NOW protocols. As demonstrated in Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>, the ESP-NOW protocol allows to configure One-Way or Two-Way communication methodologies between the connected ESP32 boards. </ns0:p></ns0:div> <ns0:div><ns0:head>ESP-NOW ONE-WAY COMMUNICATION</ns0:head><ns0:p>It's simple to set up one-way communication between ESP32 boards. One-Way communication methodology can be divided into two types: One-to-Many and Many-to-One. 
In this type of communication methodology, the sent data may be sensor readings or control commands (switching devices ON and OFF, moving a servo motor, changing RGB color values, or other commands) <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>.</ns0:p><ns0:p>As shown in Figure <ns0:ref type='figure'>4</ns0:ref> (a), one ESP32 board transfers the same or different data to other ESP32 boards in a One-to-Many communication methodology. This setup is suitable for building a remote control system <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>. As shown in Figure <ns0:ref type='figure'>4</ns0:ref> (b), one ESP32 board receives data from other ESP32 boards in a Many-to-One communication methodology. This setup is suitable for collecting data from multiple sensor nodes connected to other ESP32 boards <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 4. ESP32 ONE-WAY COMMUNICATION STYLES (ADAPTED FROM <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>)</ns0:head></ns0:div> <ns0:div><ns0:head>2) ESP-NOW TWO-WAY COMMUNICATION</ns0:head><ns0:p>Two-way communication between ESP32 boards is supported through the ESP-NOW protocol. In this communication style, each board can act as both a sender and a receiver, so ESP32 boards can actually work as transceivers.</ns0:p><ns0:p>As demonstrated in Figure <ns0:ref type='figure'>5</ns0:ref>, the ESP-NOW Two-Way communication methodology is suitable for creating a mesh network in which many ESP32 boards can transfer data to each other. This methodology can be used to create a network for sharing sensor readings and for monitoring systems in weather stations, construction sites and archaeological sites <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 5. ESP-NOW TWO-WAY COMMUNICATION MESH NETWORK</ns0:head></ns0:div> <ns0:div><ns0:head>3) INTEGRATING ESP-NOW WITH WI-FI SIMULTANEOUSLY</ns0:head><ns0:p>The ESP32 board can be used as a web server in Wi-Fi station mode, Access Point mode, or both. These capabilities enable us to develop a wide range of IoT applications and deploy diverse network architectures <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. As shown in Figure <ns0:ref type='figure'>6</ns0:ref>, in some applications we need to host an ESP32 board as a web server while also integrating it with the ESP-NOW communication protocol <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. Integrating WSNs with the Internet Protocol (IP) to develop Internet of Things (IoT) applications is one of the most important goals for WSNs. IoT systems enable things (e.g., a person with a heart monitor implant or a car with built-in sensors that inform the driver when tire pressure is low) to be monitored in real time, anytime and anywhere. Integrating ESP-NOW with Wi-Fi in places where Wi-Fi technology is available allows us to build these applications <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 6. USING ESP-NOW AND WI-FI SIMULTANEOUSLY</ns0:head></ns0:div> <ns0:div><ns0:head>IV. AN ADAPTIVE SPIDER-MESH TOPOLOGY</ns0:head><ns0:p>The proposed approach acquires its novelty from constructing an adaptive spider mesh topology using the ESP-NOW protocol, which is incorporated into ESP32 devices.
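To illustrate the two-way exchange that the proposed topology builds on, a minimal Arduino-style sketch for a pair of ESP32 boards could look as follows. This is our own illustrative sketch, not code from the paper; it assumes the Arduino-ESP32 core 2.x ESP-NOW API, and the peer MAC address and the message structure are placeholders.

#include <esp_now.h>
#include <WiFi.h>

// Assumes the Arduino-ESP32 core 2.x ESP-NOW callback signatures.
// Placeholder MAC address of the peer board (replace with the real one).
uint8_t peerAddress[] = {0x24, 0x6F, 0x28, 0xAA, 0xBB, 0xCC};

// Example payload; an ESP-NOW frame carries at most 250 bytes.
typedef struct {
  float temperature;   // dummy sensor reading
  int   command;       // dummy control command
} Message;

Message outgoing, incoming;

// Called after each transmission attempt.
void onDataSent(const uint8_t *mac, esp_now_send_status_t status) {
  Serial.println(status == ESP_NOW_SEND_SUCCESS ? "Delivery OK" : "Delivery failed");
}

// Called whenever a frame arrives; the same sketch runs on both boards,
// so each board acts as a transceiver.
void onDataRecv(const uint8_t *mac, const uint8_t *data, int len) {
  memcpy(&incoming, data, sizeof(incoming));
  Serial.printf("Received: %.2f / %d\n", incoming.temperature, incoming.command);
}

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                 // ESP-NOW requires station mode
  if (esp_now_init() != ESP_OK) return;
  esp_now_register_send_cb(onDataSent);
  esp_now_register_recv_cb(onDataRecv);

  esp_now_peer_info_t peer = {};       // zero-initialized peer descriptor
  memcpy(peer.peer_addr, peerAddress, 6);
  peer.channel = 0;                    // use the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  outgoing.temperature = 25.0;
  outgoing.command = 1;
  esp_now_send(peerAddress, (uint8_t *)&outgoing, sizeof(outgoing));
  delay(2000);
}

Because the same sketch runs on both boards, each of them both sends and receives over the paired link, which is exactly the transceiver behaviour the spider-mesh levels rely on.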
Instead of using the traditional tree and mesh topologies, the adaptive spider mesh topology is proposed for extending the network coverage. The ESP-NOW protocol can be used as a bi-directional communication protocol that can overcome the BLE protocol's connectivity constraints.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 7. FOUR LEVELS OF AN ADAPTIVE SPIDER MESH TOPOLOGY USING ESP32 BOARDS AND ESP-NOW PROTOCOL</ns0:head><ns0:p>As illustrated in Figure <ns0:ref type='figure'>7</ns0:ref>, the adaptive Spider-Mesh topology can have four levels, ranging from 0 to 3. Nodes are labeled with numbers like 1, 2, 3, etc. Nodes in this topology are organized into several levels, ranging from 0 to n, so by increasing the number of levels, the network coverage can be extended.</ns0:p><ns0:formula xml:id='formula_0'>N = 1 + Σ_{i=0}^{n-1} x · 2^i,  n ≥ 1  (1)</ns0:formula><ns0:p>The number of nodes can be easily determined using Equation (<ns0:ref type='formula'>1</ns0:ref>), as shown in Table <ns0:ref type='table'>3</ns0:ref>, where N is the total number of nodes that may be connected using the proposed approach, n is the number of levels, and x is the number of nodes in the first level. In the last level, each node is connected to just three other nodes, but in the other levels, each node is connected to five other nodes.</ns0:p><ns0:p>The use of the ESP-NOW protocol as a communication technology added many benefits, including the ability to exchange data between ESP32 devices without switching the network nodes' mode, and the ability to connect one board to seven other boards at once (with up to 20 nodes supported according to recent experiments reported in the literature <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>). Using the capabilities of the ESP-NOW protocol, an adaptive Spider-Mesh topology has been proposed for constructing Bi-Directional mesh networks.</ns0:p><ns0:p>The proposed approach studies the possibility of expanding network coverage in locations where traditional Wi-Fi networks or permanent energy sources are difficult to establish, as well as the maximum permissible distance between network nodes using the ESP-NOW protocol inside various indoor and outdoor architectures and the maximum network node lifetime.</ns0:p></ns0:div> <ns0:div><ns0:head>V. EXPERIMENTAL RESULTS</ns0:head><ns0:p>Two versions of ESP32 development boards were used in the experiments: the standard ESP-32S DEV KIT DOIT board with 30 GPIO pins and the Wemos D1 R32 UNO ESP32 board. A power supply was also employed, which consisted of Wemos 18650 rechargeable lithium batteries (3.7 V and 4800 mAh). The Arduino IDE version 1.8.12 was used to upload code to the ESP32 boards in our experiments.</ns0:p><ns0:p>In the first series of experiments, we tested the compatibility between ESP-32S DEV boards and Wemos D1 R32 UNO boards. The two ESP32 boards were utilized as transceivers to exchange various data types (up to 250 Mbit/s) with other boards. These experiments revealed that the two versions of ESP32 boards are highly compatible.</ns0:p><ns0:p>The target of the second series of experiments was exchanging data with the proposed approach through one-way and two-way communication methodologies in order to control a group of peripherals based on the received values. We were able to ensure the reliability and dependability of the proposed approach in exchanging data.
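Returning to the topology sizing above, Equation (1) can be made concrete with the following minimal C++ sketch (our own illustration, not code from the paper; the helper name countNodes is hypothetical), which reproduces the node counts of Table 3 for x = 5 nodes in the first level:

#include <cstdio>

// Total number of nodes N for a spider-mesh with n levels and x nodes in the
// first level, following Equation (1): N = 1 + sum_{i=0}^{n-1} x * 2^i (n >= 1),
// and N = 1 for the root-only case n = 0.
long countNodes(int n, int x) {
    long total = 1;                    // the single root node at level 0
    for (int i = 0; i < n; ++i) {
        total += (long)x * (1L << i);  // x * 2^i nodes added by level i + 1
    }
    return total;
}

int main() {
    const int x = 5;                   // five first-level nodes, as in Table 3
    for (int n = 0; n <= 3; ++n) {
        std::printf("levels = %d  ->  N = %ld\n", n, countNodes(n, x));
    }
    // Expected output: 1, 6, 16, 36 nodes, matching Table 3.
    return 0;
}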
Using the ESP-NOW protocol, we measured the maximum achievable distance between the transmitter and the receivers under various construction conditions, running these experiments on three different indoor architectures and a set of outdoor regions.</ns0:p><ns0:p>The proposed approach was shown to be capable of establishing a simple and fully connected network inside various indoor and outdoor structures in the second series of experiments. As shown in Figure <ns0:ref type='figure'>8</ns0:ref>, the maximum distance between network nodes inside various indoor structures is around 15.5 meters, while the maximum distance between network nodes in outdoor environments is roughly 90 meters, as shown in Figure <ns0:ref type='figure'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 8. THE DISTANCE BETWEEN SENDER AND RECEIVERS INSIDE THREE DIFFERENT INDOOR ARCHITECTURES</ns0:head></ns0:div> <ns0:div><ns0:head>FIGURE. 9. THE DISTANCE BETWEEN SENDER AND RECEIVERS IN OUTDOOR ARCHITECTURES</ns0:head><ns0:p>The target of the last series of experiments was to identify the network node lifetime. This target was achieved by measuring the energy consumed in sending and receiving data using the Bluetooth Low Energy and ESP-NOW protocols.</ns0:p></ns0:div> <ns0:div><ns0:formula xml:id='formula_1'>Battery Life (T) = Battery Capacity in mAh (Q) / Load Current in mA (I)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>The network node lifetime may be easily determined using Equation (2) <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>, where T is the battery lifetime (network node lifetime) measured in hours (h); Q is the battery charge capacity, which in our case is 4800 milliamp-hours (mAh); and I is the average load current drawn from the battery, measured in milliamps (mA), as illustrated in Figure <ns0:ref type='figure' target='#fig_1'>10</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 10. USING A DIGITAL MULTIMETER TO MEASURE THE AVERAGE LOAD CURRENT</ns0:head><ns0:p>According to our experiments, the lifetime of a server node using the Bluetooth Low Energy protocol is around 62 hours (4800 mAh / 76.4 mA), whereas the lifetime of client nodes is roughly 64 hours (4800 mAh / 74 mA). The lifetime of a sender node using the ESP-NOW protocol is 37 hours (4800 mAh / 129 mA), while the lifetime of a receiver node is 39 hours (4800 mAh / 121 mA). Note that all of the previous experimental results were obtained in active power mode. Table <ns0:ref type='table'>4</ns0:ref> summarizes our experimental results and demonstrates the benefit of using the Deep-sleep power mode to extend the battery life to thousands of hours.</ns0:p><ns0:p>A comparison between the low power consumption communication protocols proposed in this paper is summarized in Table <ns0:ref type='table'>5</ns0:ref>, while a comparison between existing short-range low power consumption communication protocols is summarized in Table <ns0:ref type='table'>6</ns0:ref>.
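As a cross-check of the lifetime figures reported above, the following short C++ sketch (our own illustrative sketch, not code from the paper; the function name batteryLifeHours is hypothetical) applies Equation (2) to the measured average load currents:

#include <cstdio>

// Equation (2): battery life T (hours) = capacity Q (mAh) / average load current I (mA).
double batteryLifeHours(double capacity_mAh, double load_mA) {
    return capacity_mAh / load_mA;
}

int main() {
    const double capacity = 4800.0;  // 18650 Li-ion cell used in the experiments (mAh)

    struct Case { const char *node; double current_mA; };
    const Case cases[] = {
        {"BLE server",       76.4},
        {"BLE client",       74.0},
        {"ESP-NOW sender",  129.0},
        {"ESP-NOW receiver", 121.0},
    };

    for (const Case &c : cases) {
        std::printf("%-18s %6.1f mA  ->  %.1f hours\n",
                    c.node, c.current_mA, batteryLifeHours(capacity, c.current_mA));
    }
    // Expected: roughly 62.8, 64.9, 37.2 and 39.7 hours, matching Table 4.
    return 0;
}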
Figure <ns0:ref type='figure' target='#fig_1'>11</ns0:ref> illustrates the difference between our experimental results and the results of existing research <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>FIGURE. 11. A COMPARISON BETWEEN OUR EXPERIMENTAL RESULTS AND EXISTING RESEARCH RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>VI. Conclusion and Future work</ns0:head><ns0:p>Recently, researchers have been working to build mesh networks with low-power sensors, devices, and protocols in military sites, archaeological sites, smart parking, farmlands, and construction sites. This paper proposes an adaptive low power consumption mesh approach called 'An Adaptive Spider-Mesh topology'. To fairly appraise this study, we must examine network characteristics such as the desired network topology, the communication protocol, the maximum number of simultaneously connected nodes, the communication methods, the power consumed, and the network node lifetime under the proposed approach.</ns0:p><ns0:p>Although the ESP-NOW protocol is a proprietary protocol that consumes twice as much power as the Bluetooth Low Energy protocol, the experimental results show that it is an efficient Bi-Directional communication protocol for developing the proposed approach. Inside various indoor structures, the maximum distance between sender and receiver is roughly 15 meters, whereas the maximum distance in outdoor environments is approximately 90 meters. According to our experiments, the transmitter node can simultaneously connect to up to seven receiving nodes. The maximum distance between server and client nodes in the case of the BLE protocol cannot exceed 6 meters, and the maximum number of client nodes connected to a server node cannot exceed three nodes.</ns0:p><ns0:p>The proposed approach can be used in future work to collect, analyze and monitor unexpected weather conditions that may have severe consequences for construction equipment and materials at construction sites. Choosing the most appropriate energy-efficient routing protocol for the proposed approach is one of the most important challenges we will attempt to address in future research. Machine learning techniques such as Neural Networks, Support Vector Machines, Decision Trees, and other approaches will be integrated with the proposed approach to assist decision makers or to take decisions automatically in many fields of our daily life. Developing efficient wireless network solutions using low-energy technologies in-body or underwater is an interesting research topic for many researchers. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>FIGURE. 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>FIGURE. 3.
ONE-WAY AND TWO-WAY COMMUNICATION (ADAPTED FROM<ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 point</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,178.87,525.00,383.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,199.12,525.00,329.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,270.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,282.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,289.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,199.12,525.00,297.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>TABLE I WIRELESS</ns0:head><ns0:label>I</ns0:label><ns0:figDesc>COMMUNICATION TECHNOLOGIES COMPARISON (ADAPTED FROM<ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Specification</ns0:cell><ns0:cell>Bluetooth</ns0:cell><ns0:cell>Z-Wave</ns0:cell><ns0:cell>ZigBee</ns0:cell><ns0:cell>Thread</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Point-to-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Network Type</ns0:cell><ns0:cell>point,</ns0:cell><ns0:cell>Mesh</ns0:cell><ns0:cell>Mesh</ns0:cell><ns0:cell>Mesh</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>scatternet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Maximum</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>232</ns0:cell><ns0:cell>65,536</ns0:cell><ns0:cell>250</ns0:cell></ns0:row><ns0:row><ns0:cell>connected</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>Approximatel y 10-100 meters</ns0:cell><ns0:cell>100 meters with no obstructi ons</ns0:cell><ns0:cell>Approxi mately 10-20 meters</ns0:cell><ns0:cell>Normall y 20-30 meters</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>110 kbps</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Throughput</ns0:cell><ns0:cell>24 Mbit/s</ns0:cell><ns0:cell>40 kbps</ns0:cell><ns0:cell>maximu</ns0:cell><ns0:cell>250 kbps</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>m</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Spread Spectrum</ns0:cell><ns0:cell>AFH</ns0:cell><ns0:cell>DSSS</ns0:cell><ns0:cell>DSSS</ns0:cell><ns0:cell>DSSS</ns0:cell></ns0:row><ns0:row><ns0:cell>Modulation</ns0:cell><ns0:cell>GFSK</ns0:cell><ns0:cell>GFSK</ns0:cell><ns0:cell>OQPSK</ns0:cell><ns0:cell>OQPSK</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Monitori</ns0:cell><ns0:cell>Monitori</ns0:cell><ns0:cell>Monitori</ns0:cell></ns0:row><ns0:row><ns0:cell>Data</ns0:cell><ns0:cell>Exchanging data</ns0:cell><ns0:cell>ng and control</ns0:cell><ns0:cell>ng and control</ns0:cell><ns0:cell>ng and control</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>data</ns0:cell><ns0:cell>data</ns0:cell><ns0:cell>data</ns0:cell></ns0:row><ns0:row><ns0:cell>Power 
Consumption</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Low</ns0:cell></ns0:row><ns0:row><ns0:cell>Voice Capable</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Banking</ns0:cell></ns0:row><ns0:row><ns0:cell>Security</ns0:cell><ns0:cell>56-128 bit key derivation</ns0:cell><ns0:cell>AES-128</ns0:cell><ns0:cell>AES-128</ns0:cell><ns0:cell>-class, public-key cryptogr</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>aphy</ns0:cell></ns0:row><ns0:row><ns0:cell>Cost</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Low</ns0:cell></ns0:row><ns0:row><ns0:cell>Backwards Compatibility</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>TABLE II DEDUCTIVE</ns0:head><ns0:label>II</ns0:label><ns0:figDesc>COMPARISON BETWEEN BLE AND ESP-NOW</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>PROTOCOLS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Specification</ns0:cell><ns0:cell>BLE</ns0:cell><ns0:cell>ESP-NOW</ns0:cell></ns0:row><ns0:row><ns0:cell>Protocol</ns0:cell><ns0:cell>Standard</ns0:cell><ns0:cell>Proprietary</ns0:cell></ns0:row><ns0:row><ns0:cell>Communication methodology</ns0:cell><ns0:cell>Master / Slave</ns0:cell><ns0:cell>Sender / Receiver</ns0:cell></ns0:row><ns0:row><ns0:cell>Communication mode</ns0:cell><ns0:cell>Unidirectional</ns0:cell><ns0:cell>Bidirectional</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum connected Slave Nodes</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>7 'up to our experiments'</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum distance between nodes</ns0:cell><ns0:cell>6 meter</ns0:cell><ns0:cell>15.5 meter 'indoor' 90 meter 'outdoor' according to our experiments</ns0:cell></ns0:row><ns0:row><ns0:cell>Power consumption</ns0:cell><ns0:cell>Very Low</ns0:cell><ns0:cell>Low</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>TABLE III NUMBER</ns0:head><ns0:label>III</ns0:label><ns0:figDesc>OF LEVELS AND NODES FOR THE PROPOSED</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>APPROACH</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Levels</ns0:cell><ns0:cell cols='3'>Number of Nodes</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell cols='2'>1 + 5 = 6</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell cols='4'>1+ 5(2) 0 + 5(2) 1 = 16</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='4'>1+ 5(2) 0 + 5(2) 1 + 5(2) 2 = 36</ns0:cell></ns0:row><ns0:row><ns0:cell>n</ns0:cell><ns0:cell>&#119873; = 1 +</ns0:cell><ns0:cell>&#119899; -1 &#119894; = 0 &#8721;</ns0:cell><ns0:cell>&#119909; * (2) &#119894;</ns0:cell><ns0:cell>&#119899; &#8805; 1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>TABLE IV EXPERIMENTAL</ns0:head><ns0:label>IV</ns0:label><ns0:figDesc>RESULTS 
SUMMERY USING 18650 4800 mAh 3.7V LI-ION BATTERY</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Power mode</ns0:cell><ns0:cell cols='2'>Protocol</ns0:cell><ns0:cell>Node State</ns0:cell><ns0:cell>Power Consumption</ns0:cell><ns0:cell>Node Lifetime</ns0:cell><ns0:cell>Response Time</ns0:cell></ns0:row><ns0:row><ns0:cell>Active</ns0:cell><ns0:cell cols='2'>BLE BLE</ns0:cell><ns0:cell>Server Client</ns0:cell><ns0:cell>76.4 mA 74 mA</ns0:cell><ns0:cell>62 HRS 49 MIN 64 HRS 51 MIN</ns0:cell><ns0:cell>15 milliseconds</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ESP-NOW ESP-NOW</ns0:cell><ns0:cell>Sender Receiver</ns0:cell><ns0:cell>129 mA 121 mA</ns0:cell><ns0:cell>37 HRS 12 MIN 39 HRS 40 MIN</ns0:cell><ns0:cell>4 milliseconds</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ULP co-processor is powered on</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>150 &#181;A</ns0:cell><ns0:cell>32000 HRS</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ULP</ns0:cell><ns0:cell>sensor-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Deep-sleep</ns0:cell><ns0:cell cols='2'>monitored pattern</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>100 &#181;A</ns0:cell><ns0:cell>48000 HRS</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>RTC timer +RTC</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>memory</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>10 &#181;A</ns0:cell><ns0:cell>480000 HRS</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>TABLE VI COMPARISON</ns0:head><ns0:label>VI</ns0:label><ns0:figDesc>BETWEEN EXISTING LOW POWER CONSUMPTION COMMUNICATION PROTOCOLS<ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> </ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Module/ Communication protocol IEEE Protocol Designed for network protocol V DD (Volt) I TX (mA) I RX (mA) I sleep (&#181;A) Max. 
Bit Rate</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>ANY900</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>ZigBee</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>&lt;6</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>MRF24J40MA</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>ZigBee</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>RC2400</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>ZigBee + 6lowpan</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>CC2430</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>ZigBee</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>deRFmega128-22M00</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>Zigbee + 6lowpan</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>12.7</ns0:cell><ns0:cell>17.6</ns0:cell><ns0:cell>&lt;1</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>deRFsam3 23M10-2</ns0:cell><ns0:cell>802.15.4</ns0:cell><ns0:cell>ZigBee + 6lowpan</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>&lt;2</ns0:cell><ns0:cell>250 (Kb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>RN171</ns0:cell><ns0:cell>802.11 b/g</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>190</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>(Mb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>QCA4004</ns0:cell><ns0:cell>802.11 n</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>250</ns0:cell><ns0:cell>75</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>(Mb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>GS1011M</ns0:cell><ns0:cell>802.11 b</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>(Mb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>G2M5477</ns0:cell><ns0:cell>802.11 b/g</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>212</ns0:cell><ns0:cell>37.8</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>(Mb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>RS9110-N-11-02</ns0:cell><ns0:cell>802.11 b/g/n</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>520</ns0:cell><ns0:cell>(Mb/S)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Rebuttal Letter Dear Editors We appreciate the reviewers for their generous comments on the manuscript , and we had modified the manuscript to address their concerns. Reviewer 1 (Anonymous) Additional comments The authors proposed proposes an approach for constructing and extending Bi-Directional mesh networks using low power consumption technologies inside various indoors and outdoors architectures called 'an adaptable Spider - Mesh topology.'. The use of ESP-NOW protocol as a communication technology added the advantage of longer communication distance versus a slight increase of consumed power. It provides 15 times longer distance compared to the BLE protocol while consuming only twice as much power. In my opinion, this work is interesting and should be published after minor revisions. I consider that the authors should clarify several aspects of the manuscript, for improving the clarity of the presentation. These are listed below. 1. pg 7 line 236, In particular, the term running current is used in Electric Motor Testing, why authors have used the term in the manuscript? Agreed, the term running current is very confusing and requires more explanation. So, we reformulate equation 2 to Illustrate the network life time 'Buttery life time' can be calculated using battery capacity (Q) and load Current (I) Which the chip consumes to transfer data using different protocols. We have re-explained this equation in page 7, lines 244 -262, In addition to; adding new figure 'Figure 10' to illustrate how to measure the average of load current in our experiments, page 7 , line 250. 2. The authors can be to consider show their lifetime experimental results in a table. Indeed, this is a very important note. So we have added a new table (Table 4) which can summarize our experimental results to increase the readability for readers. 3. Can the authors measure the time of the response responses time between the different topologies? We have measured the response time using ESP-NOW and BLE protocols, It turns out that the response time using ESP-NOW protocol is approximately 4 milliseconds, while the response time using BLE protocol is approximately 15 milliseconds. We presented these results in (Table 4). page 7, lines 256-257. Reviewer 2 (N Saeed) Basic reporting This paper proposes an energy-efficient routing protocol for wireless sensor networks. The paper is well-written and easy to follow. Following are my comments: 1. There has been a great amount of work done on developing energy-efficient routing protocols, it would be better to distinguish your work from the existing literature or clearly mention the novel contributions. Actually, Developing energy-efficient routing protocols is an important topic. The main objective of this research is to focus on constructing an efficient network with a appropriate mesh topology that can be used to extend and control Wireless Sensor Networks using low-energy communication protocols, In addition to studying the main characteristics of the proposed approach such as: • Measuring the maximum distance between sender and receiver inside different indoor and outdoor architectures using two different low power consumption communication protocols. 
• Measuring the power consumption for the two proposed low power consumption protocols in this paper 'Bluetooth Low Energy protocol and ESP-Now Protocol' Selecting an energy-efficient routing protocol for our proposed approach is an challenging task that necessitates a lot of experiments and comparisons which requires a separate research to clarify its importance. The extended work of this paper will focus heavily on choosing the most appropriate routing protocol for the proposed approach in this paper. We have mentioned the importance of this topic in the future work section, page 8, lines 281-283. We have thought of adding the following paragraphs to our paper that talk about the importance of developing energy-efficient routing protocols, but this may mislead readers about the main purpose of this article. What is your thoughts about it? 'Developing energy-efficient routing protocols is the most challenging task in Wireless Sensor networks(WSNs) and Internet of Things (IoT) applications comparable with ad hoc and cellular networks [1]. In the last decade, different energy-efficient routing protocols have received a lot of attention. Routing protocols for WSNs and IoT based applications are often classified into two main categories: Non-geographic-based routing protocols and Geographic-based routing protocols [2]. In non-geographic-based routing protocols, network nodes can send their data directly to the base station using clustering algorithms. Cluster-based routing mechanism has been proposed to select the optimal Cluster Heads to form the clusters, such as the improved version of the genetic algorithm [3] and a neuro-fuzzy system [4]. Other protocols have recently been devised that use a multi-hop communication mechanism instead of geographic routing algorithms to determine the next-hop relay nodes [5]. In geographic-based routing protocols, each network node must be aware of its neighbors' locations as well as the data packets' destinations. Each network node selects the best nearby node nearest to the destination as its next-hop relay node. The next hop of each node can be chosen either by measuring the distance and energy [6] or with the assistance of a grid structure [7]. When the target destination is static, the geographic-based routing protocol is energy-efficient because the shortest path for data transmission is chosen [2]. The proposed approach in this paper belongs to the geographic-based routing protocol category.' [1] Yarinezhad, R. (2019). Reducing delay and prolonging the lifetime of wireless sensor network using efficient routing protocol based on mobile sink and virtual infrastructure. Ad Hoc Networks, 84, 42-55.‏ [2] Yarinezhad, R., & Azizi, S. (2021). An energy-efficient routing protocol for the Internet of Things networks based on geographical location and link quality. Computer Networks, 193, 108116.‏ [3] Oladimeji, M. O., Turkey, M., & Dudley, S. (2017). HACH: Heuristic Algorithm for Clustering Hierarchy protocol in wireless sensor networks. Applied Soft Computing, 55, 452-461.‏ [4] Thangaramya, K., Kulothungan, K., Logambigai, R., Selvi, M., Ganapathy, S., & Kannan, A. (2019). Energy aware cluster and neuro-fuzzy based routing algorithm for wireless sensor networks in IoT. Computer Networks, 151, 211-223. [5] Chowdhury, S., & Giri, C. (2019). EETC: Energy efficient tree-clustering in delay constrained wireless sensor network. Wireless Personal Communications, 109(1), 189-210.‏ [6]‏ Maurya, S., Jain, V. K., & Chowdhury, D. R. (2019). 
Delay aware energy efficient reliable routing for data transmission in heterogeneous mobile sink wireless sensor network. Journal of Network and Computer Applications, 144, 118-137.‏ [7] Yarinezhad, R., & Hashemi, S. N. (2019). A routing algorithm for wireless sensor networks based on clustering and an fpt-approximation algorithm. Journal of Systems and Software, 155, 145-161.‏ 2. I wonder if the proposed work can be used in extreme environments such as in-body or underwater {https://repository.kaust.edu.sa/handle/10754/664913, https://ieeexplore.ieee.org/abstract/document/8891506} Agreed, this is very confusing question, our approach relies on radio frequency communication technology which is different from wireless technologies 'such as sonar and optical wireless communications ' used underwater . Recently, researchers were able to decode underwater sonar signals vibrations by airborne receivers. This is an important research topic in my opinion, and I believe it warrants further investigation. We have mentioned the importance of this topic in the future work section, page 8, lines 285-287. 3. The results do not show a comparison to existing works to show its effectiveness. Indeed, this is a very important note. So we have added two tables (Table 5 and Table 6) and one Figure (Figure 11) to present a comparison between the proposed low power consumption communication protocols in our paper (Table 5) to existing short range low power consumption communication protocols (Table 6). we mentioned that in page 7, lines 258 -262. We believe that the manuscript is now ready to be published in PeerJ. Thank you for your Time and response. Eng. Mostafa Ibrahim Labib PhD Researcher, Department of Computer Science on behalf of all authors. "
Here is a paper. Please give your review comments after reading it.
288
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. The principal component analysis (PCA) is known as a multivariate statistical model for reducing dimensions into a representation of principal components.</ns0:p><ns0:p>Thus, the PCA is commonly adopted for establishing psychometric properties, i.e., the construct validity. Autoencoder is a neural network model, which has also been shown to perform well in dimensionality reduction. Although there are several ways the PCA and autoencoders could be compared for their differences, most of the recent literature focused on differences in image reconstruction, which are often sufficient for training data.</ns0:p><ns0:p>In the current study, we looked at details of each autoencoder classifier and how they may provide neural network superiority that can better generalize non-normally distributed small datasets. Methodology. A Monte Carlo simulation was conducted, varying the levels of non-normality, sample sizes, and levels of communality. The performances of autoencoders and a PCA were compared using the mean square error, mean absolute value, and Euclidian distance. The feasibility of autoencoders with small sample sizes was examined. Conclusions. With extreme flexibility in decoding representation using linear and non-linear mapping, this study demonstrated that the autoencoder can robustly reduce dimensions, and hence was effective in building the construct validity with a sample size as small as 100. The autoencoders could obtain a smaller mean square error and small Euclidian distance between original dataset and predictions for a small nonnormal dataset. Hence, when behavioral scientists attempt to explore the construct validity of a newly designed questionnaire, an autoencoder could also be considered an alternative to a PCA.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Selecting a proper sample size is critical when planning an empirical study. The minimum sample size is often calculated based on selected statistical procedures. For example, inferential statistics based on an independent-sample t-test can apply the formula, , to achieve the minimum sample</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Construct Validity</ns0:head><ns0:p>Construct validity is a psychological property defined as the degree to which a measure assesses the theoretical construct intended to be measured <ns0:ref type='bibr' target='#b6'>(Cronbach &amp; Meehl, 1955)</ns0:ref>. One cannot assess confounding influences of random error without estimating the construct validity when designing a questionnaire. Evaluation of what defines a psychological construct is the determinant of the test performance. A better representation of the latent psychological construct is desirable for almost all psychological tests. Thus, construct validity, in general, is considered the most fundamental aspect of psychometrics. <ns0:ref type='bibr' target='#b5'>Campbell and Fiske (1959)</ns0:ref> proposed two views of construct validity: convergent validity and discriminant validity. Convergent validity refers to the degree of agreement among measurements of the same constructs that should be related based on theory. Discriminant validity refers to the distinction of concepts that are not supposed to be related and are in fact, unrelated. 
Campbell and Fiske developed four steps based on inspecting the multitrait-multimethod (MTMM) matrix to operationally define convergent validity and discriminant validity. Since the concept of construct validity was introduced, an extensive effort has been made ever since to seek numerical representations of the construct validity. Starting with Douglas Jackson who employed a component analysis as an integral part of the development of psychological measures, the PCA has become a standard method for questionnaire development <ns0:ref type='bibr' target='#b19'>(Jackson, 1970)</ns0:ref>. Traditionally, the PCA and FA are two of the most often employed statistical procedures in the social behavioral sciences commonly used to suggest factor profiles as latent constructs <ns0:ref type='bibr' target='#b37'>(Sherman, 1986;</ns0:ref><ns0:ref type='bibr' target='#b49'>Yoon et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Fontes et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In FAs, the confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) are two methods that facilitate the transition from many observed variables to a smaller number of latent variables. Both FA models are commonly used to address the construct validity. The CFA is a tool that researchers can adopt to test the validity by comparing alternatively proposed a priori models at the latent factor level. Advantages of the CFA for being more informative than Campbell &amp; Fiske's criteria are that it provides a statistical justification of the model fit and the degree of fit for the convergent validity and divergent validity <ns0:ref type='bibr' target='#b2'>(Bagozzi, R.P., Yi &amp; Phillips, 1991)</ns0:ref> In addition to the CFA, there are several statistical models based upon which the factors explored by a questionnaire are validated. For example, the PCA is a mathematical algorithm in which observations are described by several inter-correlated quantitatively dependent variables. A PCA is the default datareduction technique in SPSS software and was adopted by many researchers, including <ns0:ref type='bibr' target='#b31'>Mohammadbeigi et al. (2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b33'>Parker et al. (2010)</ns0:ref>. A PCA can be conducted to examine the construct validity due to the PCA's ability to integrate the full bivariate cross-correlation matrix of all item-wise measurements through dimension reduction. Its goal is to extract important information from the total number of observed variables, and represent it as a set of new orthogonal variables called principal components. These principal components, or latent variables, summarize the observed data table and display the pattern of similarity of the observations <ns0:ref type='bibr' target='#b0'>(Abdi &amp; Williams, 2010)</ns0:ref>.</ns0:p><ns0:p>Most educational researchers, behavioral scientists, and social science researchers treat uncorrelated principal components as independent identities. However, the property of principal components being independent of each other only holds when the principal components are uncorrelated, and the multivariate items are normally distributed <ns0:ref type='bibr' target='#b22'>(Kim &amp; Kim, 2012)</ns0:ref>. If the input data are not normally distributed, the variance explained by one of the traits will overlap that of another trait. 
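To make the PCA-based workflow described above concrete, a short scikit-learn sketch is given below; the simulated 100 x 15 item matrix, the choice of three components, and all variable names are illustrative placeholders rather than material from the studies cited here.

```python
# Illustrative sketch of the usual PCA-based check of a questionnaire's latent
# structure. The simulated item matrix and the choice of three components are
# hypothetical placeholders, not data from any study cited above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
items = rng.normal(size=(100, 15))              # 100 respondents, 15 questionnaire items

z = StandardScaler().fit_transform(items)       # standardize so the PCA works on the correlation scale
pca = PCA(n_components=3)
scores = pca.fit_transform(z)                   # principal component scores (100 x 3)

print(pca.explained_variance_ratio_)            # variance explained by each component
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.round(2))                        # item loadings used to interpret the components
```

In applied work it is the loading pattern, often after rotation, that is inspected to judge whether items group onto the intended constructs.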
PCAs have also been criticized for their limited linear mapping representation.</ns0:p><ns0:p>Based on the underlying definition of dimensional reduction, it is less informative to aggregate all scales into a single latent variable score. A set of items or scales may share similar conceptual underpinnings but not necessarily be identical (Stangor, 2014). Using a PCA, a large number of items can be reduced to fewer components, with possibly more variance explained than with other methods of factoring (Hamzah, Othman & Hassan, 2016).</ns0:p></ns0:div> <ns0:div><ns0:head n='1.3'>Autoencoder</ns0:head><ns0:p>Autoencoders are characterized by their function of extracting important information and representing it in another space. Such a network consists of three symmetrical layers: input, hidden, and output layers (Hinton & Zemel, 1994). An autoencoder attempts to approximate the original data so that the output is similar to the input after feed-forward propagation. The input is projected to the hidden layer, which is commonly designed to be of a lower dimensionality. After information is passed through the hidden layer, the output of the network should ideally resemble the original input as closely as possible. As a result, the latent space contains all of the necessary information to describe the data (Ladjal, Newson & Pham, 2019). In a simple autoencoder framework, each neuron is fully connected to all neurons in the previous layer, while neurons within a single layer function completely independently and share no connections. As illustrated by Meng, Ding & Xue (2017), an autoencoder tries to learn a function S(⋅) such that:</ns0:p><ns0:formula>Eq(1) S_{W,W′,b1,b2}(X) ≈ X</ns0:formula><ns0:p>W is the weight matrix connecting the input layer and the hidden layer, while W′ is the weight matrix connecting the hidden layer and the output layer; b1 and b2 are the bias vectors of the hidden layer and the output layer. S(⋅) can be divided into two phases: the mapping from the input layer to the hidden layer is the encoding phase (Eq. 2), and the mapping from the hidden layer to the output layer is the decoding phase (Eq. 3).</ns0:p><ns0:formula xml:id='formula_0'>Eq(2) h = f(W × X + b1)</ns0:formula><ns0:formula>Eq(3) Y = g(W′ × h + b2)</ns0:formula></ns0:div> <ns0:div><ns0:head n='1.4'>Autoencoder versus PCA</ns0:head><ns0:p>When one is interested in establishing the construct validity, it is often intuitive to apply a PCA to extract latent factors. It quickly becomes apparent that the PCA shares many similarities with an autoencoder. Both methods can serve as tools for feature generation and selection through their ability to reduce dimensions. Despite autoencoder neural networks bearing a significant resemblance to PCAs, there is one major difference between these two networks. In contrast to a PCA, an autoencoder applies a non-linear transformation to the input, and so the autoencoder can be more flexible. That is, although the PCA can effectively reduce the linear dimensionality, it still suffers when relationships among the variables are not linear. Aside from the linearity restriction, a PCA may also fall short because of its loose assumptions about the input data distribution.
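Before returning to the distributional issue, the following minimal Keras sketch makes the encoder-decoder mapping of Eqs. (2) and (3) concrete; the 15-item input, the 3-unit bottleneck, the linear activations, and the placeholder data are illustrative assumptions and not the exact models evaluated later in this study.

```python
# Minimal sketch of the mapping in Eqs. (2)-(3): h = f(W*X + b1), Y = g(W'*h + b2).
# Assumes TensorFlow/Keras; sizes, activations, and data are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_items, n_latent = 15, 3
inputs = tf.keras.Input(shape=(n_items,))
h = layers.Dense(n_latent, activation="linear", name="encoder")(inputs)   # encoding phase, Eq. (2)
outputs = layers.Dense(n_items, activation="linear", name="decoder")(h)   # decoding phase, Eq. (3)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="sgd", loss="mse")             # minimize the reconstruction error

X = np.random.normal(size=(100, n_items)).astype("float32")  # placeholder item responses
autoencoder.fit(X, X, epochs=50, batch_size=16, verbose=0)   # the target is the input itself
X_hat = autoencoder.predict(X, verbose=0)                    # reconstructions used for MSE/MAE
```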
As Shlens (2014) pointed out, even though the PCA algorithm is in essence completely nonparametric, because a PCA is unconcerned with the source of the data, it might not capture key features of the data variation. That is, a PCA makes no assumptions about the distribution of the data. However, only when the data are assumed to be multivariate normal will the joint distribution of the principal components be multivariate normal. Then, the principal components have an obvious geometrical interpretation, where the first component can be determined by locating the chord of maximum distance in the ellipsoid</ns0:p><ns0:formula xml:id='formula_1'>(x − μ)ᵀ Σ⁻¹ (x − μ) = constant</ns0:formula><ns0:p>(Chatfield & Collins, 1981). As a result, we can directly compare various forms of autoencoders to a PCA when we attempt to build the construct validity from a small sample whose data are not normally distributed. Four different forms of autoencoders are considered in this study: a simple, single-layer autoencoder and three other candidates briefly described as follows.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.5'>Tie-weighted autoencoder</ns0:head><ns0:p>An autoencoder is a neural network with a symmetrical structure. Although the input is compressed and the output is reconstructed through its latent-space representation, there is no guarantee that the weights of the encoder and decoder are identical. Thus, we can impose an additional optimization restriction so that the weights of the decoder layer are tied to the weights of the encoder layer. By tying the weights, the number of parameters that need to be trained and the risk of overfitting are both reduced. A tie-weighted autoencoder is one in which we set W = W′ in Eq. (2) and Eq. (3) (Meng, Ding & Xue, 2017).</ns0:p></ns0:div> <ns0:div><ns0:head n='1.6'>Deep Autoencoder</ns0:head><ns0:p>In the autoencoder framework, there is no limitation on the number of layers in the encoder or decoder. That is, the autoencoder can go deep and can be implemented with a stack of layers. Theoretically, the more hidden layers there are, the more features can be learned from the hidden layers. Although the layers can be stacked, they are often designed to remain symmetrical with respect to the central layers.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.7'>Independent Encoded Autoencoder</ns0:head><ns0:p>A preferable feature of a PCA is that the weight vectors are independent of each other. If orthogonality is imposed, each encoded feature explains unique information, and a smaller number of encoder layers can be achieved. Thus, we also adopted an orthogonal autoencoder (COAE) for comparison, which is capable of simultaneously extracting a latent embedding and predicting the clustering assignment (Wang et al., 2019).</ns0:p></ns0:div> <ns0:div><ns0:head n='1.8'>Sample size</ns0:head><ns0:p>There are only a few studies concerning the effect of the sample size on the dimension-reduction performance of a PCA, so there is no consensus as to how large is large enough for conducting a PCA. Forcino (2012) found that a too-small sample size is more likely to lead to erroneous conclusions. Manjarres-Martinez et al.
(2012) tested the stability performance of three ordination methods in terms of their bootstrap-generated sampling variances. Bootstrap resampling techniques are used to generate larger samples, which may provide more precise evaluations of the sampling error. Some researchers recommend sample sizes in relation to the number of variables or the correlation structure. For example, Hatcher and O'Rourke (2013) suggested that the sample size should be larger than five times the number of variables. Hutcheson and Sofroniou (1999) recommended that a minimum of n=150 is required for a high-communality correlation structure. Mundfrom et al. (2005) found that n>100 was required for medium communality, while MacCallum et al. (2001) achieved satisfactory results even for data with numbers of items greater than the sample size. In contrast, Yeung and Ruzzo (2001) showed that a PCA is not suitable for dimensionality-reduction tasks when p is greater than n. The performance of a PCA worsens when a nonlinear relationship is present with limited samples.</ns0:p><ns0:p>A deep neural network is competitive in solving nonlinear dimension reductions for high-dimensional data. Although it may seem legitimate that a massive amount of data is required to train a deep neural network, some researchers claim that deep learning can still be adopted even if n is small. Seyfioğlu & Gürbüz (2017) compared a convolutional autoencoder and two convolutional neural networks, VGGNet and GoogleNet, in terms of the training sample sizes. They found that when the sample size exceeded 650, the convolutional autoencoder outperformed transfer learning and random initialization.</ns0:p><ns0:p>Although a great number of dimensionality-reduction algorithms have been developed, the feasibility of their use with small-sample, non-normal data is still unknown. A limited sample size and a non-linear data distribution may also increase the likelihood of overfitting and decrease the accuracy. To overcome the pitfalls of the sample size issue, the principal objective of the current study was to examine the influence of sample size on the latent structure obtained by PCAs and autoencoders using a Monte Carlo simulation. The performances of the PCA and various transformations by autoencoders were evaluated using both simulated data and a real dataset pertaining to quantifying the concept of curiosity.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>The detailed design of the simulation consisted of three major stages (Figure 1): data generation, dimensionality-reduction algorithms, and performance evaluation. The input dataset was first divided into two sub-datasets: a training set and a testing set. Then various forms of autoencoders, along with a PCA, were applied to select desirable encoded dimensions of attributes. Finally, for the latent dimensionality classification of the obtained reduced-dimensional data, a reconstruction error was applied to evaluate the algorithms.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Data Generation</ns0:head><ns0:p>A Monte Carlo simulation was used for this study. To avoid the overfitting issue, separate datasets were used for the dimension-reduction algorithms.
Each simulated dataset was divided into two parts: 80% of the samples were used as a training set, and the remaining 20% were used as a test set. The dimension-reduction techniques, including various types of autoencoders and a PCA, were trained on the training dataset. After the classifier was built, the testing data were used to test its effectiveness. Each dataset was simulated based on its degree of non-normality, the correlation among items, and the sample size. Correlation matrices of continuous variables, each representing a questionnaire correlation structure, were generated for each condition defined by three manipulated variables: the degree of communality, the degree of non-normality, and the sample size. The data-generation Python code was stored on GitHub and can be accessed at https://github.com/robbinlin/data-generation-/blob/e94206b6a16751961c3db57fbe93017dc050d746/data_generation_20211005.ipynb</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Non-normality</ns0:head><ns0:p>A variety of mathematical algorithms have been developed over the years to simulate conditions of non-normal distributions (Fan et al., 2002; Fleishman, 1978; Ramberg et al., 1979; Schmeiser & Deutsch, 1977). Fleishman (1978) introduced a method for generating sample data from a population with desired degrees of skewness and kurtosis. That method uses a cubic transformation of a standard univariate normally distributed variable to obtain a non-normal variable with specified degrees of skewness and kurtosis. The transformation developed by Fleishman takes the form</ns0:p><ns0:formula>Eq(4) Y = a + bZ + cZ² + dZ³</ns0:formula><ns0:p>where Y is the transformed non-normal variable, Z is a standard normal random variable, and a, b, c, and d are the coefficients needed to transform the unit normal variable into a non-normal variable with specified degrees of population skewness and kurtosis (Byrd, 2008). These coefficients were tabulated in Fleishman (1978) for selected combinations of degrees of skewness and kurtosis. Fleishman (1978) derived a system of nonlinear equations that, given the target distribution mean, variance, skewness, and kurtosis, could be solved for the coefficients to produce a third-order polynomial approximation to the desired distribution (Fan et al., 2002).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Correlation structure</ns0:head><ns0:p>In order to generate non-normal correlated observations, the interaction between the inter-variable correlations and the degree of non-normality needs to be considered, since different combinations of inter-variable correlations and non-normality conditions would cause the sample data to deviate from the specified correlation pattern. The abovementioned Fleishman method can be extended to multivariate non-normal data with a specified correlation (Wicklin, 2013).
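As a concrete illustration of the univariate building block in Eq. (4), before turning to the multivariate case described next, the following sketch applies the Fleishman cubic transformation; the coefficient values are placeholders, since in practice b, c, and d are taken from Fleishman's table (or solved for numerically) for the target skewness and kurtosis, with a = -c so that the transformed variable stays centered at zero.

```python
# Sketch of the Fleishman power transformation in Eq. (4): Y = a + b*Z + c*Z^2 + d*Z^3.
# The coefficients below are placeholders, not values used in this study.
import numpy as np
from scipy import stats

def fleishman_transform(z, b, c, d):
    a = -c                                      # keeps the transformed variable centered at zero
    return a + b * z + c * z**2 + d * z**3

rng = np.random.default_rng(1)
z = rng.standard_normal(10_000)                 # standard normal input variable
y = fleishman_transform(z, b=0.90, c=0.20, d=0.03)

print(stats.skew(y), stats.kurtosis(y))         # resulting skewness and excess kurtosis
```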
For example, two non-normal variables, Y1 and Y2, can be generated with specified skewness and kurtosis from Equation 4, i.e.,</ns0:p><ns0:formula xml:id='formula_2'>Eq(5) Y1 = a1 + b1·Z + c1·Z² + d1·Z³ Eq(6) Y2 = a2 + b2·Z + c2·Z² + d2·Z³</ns0:formula><ns0:p>The coefficients a1, b1, c1, d1, a2, b2, c2, and d2 can be derived from Fleishman's table once the degrees of skewness and kurtosis are known. After these coefficients (ai, bi, ci, di) are found, the intermediate correlation can be derived by specifying R_x1x2, the population correlation between the two non-normal variables Y1 and Y2. Vale and Maurelli (1983) demonstrated that the intermediate correlation, ρ, can be derived through the following relationship:</ns0:p><ns0:formula xml:id='formula_3'>Eq(7) R_x1x2 = ρ(b1·b2 + 3·b1·d2 + 3·d1·b2 + 9·d1·d2) + ρ²(2·c1·c2) + ρ³(6·d1·d2)</ns0:formula><ns0:p>The coefficients of the Fleishman power transformation above are thus required to derive the intermediate correlations ρ. After all of the intermediate correlation coefficients are assembled into an intermediate correlation matrix, this matrix is used to extract factor patterns that transform uncorrelated items into correlated items (Vale & Maurelli, 1983). Kaiser and Dickman (1962) presented a matrix decomposition procedure that imposes a specified population correlation matrix R on a set of uncorrelated random normal variables. The basic matrix decomposition procedure takes the following form (Kaiser & Dickman, 1962):</ns0:p><ns0:formula xml:id='formula_4'>Eq(8) R_{k×N} = F_{k×k} × X_{k×N}</ns0:formula><ns0:p>where k is the number of variables, N is the number of samples, R_{k×N} is the resulting data matrix, F is the matrix of principal component factor pattern coefficients obtained by principal component factorization of the desired population matrix R, and X contains the k uncorrelated random variables with N observations. After the intermediate correlations are derived from Vale and Maurelli's formula using the iterative Newton-Raphson method, we then apply the assembled intermediate correlation matrix as the R in Kaiser and Dickman's method in order to generate the N samples.</ns0:p>
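A compact sketch of this two-step procedure (impose the intermediate correlation matrix via Eq. (8), then apply the item-wise Fleishman polynomials of Eqs. (5)-(6)) is given below. It is an illustration under simplified assumptions, not the authors' published generation code; the intermediate correlation matrix and the Fleishman coefficients used here are placeholders.

```python
# Sketch: impose a target (intermediate) correlation matrix on uncorrelated standard
# normal variables (Eq. 8), then apply the Fleishman polynomials item by item.
import numpy as np

def impose_correlation(R_int, n_obs, rng):
    # Principal component factorization of R_int: F such that F @ F.T = R_int
    eigval, eigvec = np.linalg.eigh(R_int)
    F = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0.0, None)))
    Z = rng.standard_normal((R_int.shape[0], n_obs))   # k uncorrelated variables, N observations
    return F @ Z                                       # correlated standard normal scores (k x N)

k, n_obs = 15, 200
R_int = np.full((k, k), 0.2)
np.fill_diagonal(R_int, 1.0)                           # placeholder intermediate correlation matrix
rng = np.random.default_rng(123)
Z_corr = impose_correlation(R_int, n_obs, rng)

b, c, d = 0.90, 0.20, 0.03                             # placeholder Fleishman coefficients
Y = -c + b * Z_corr + c * Z_corr**2 + d * Z_corr**3    # correlated non-normal items (k x N)
```

In a full implementation, each item would receive its own Fleishman coefficients and the intermediate matrix would be solved from Eq. (7) so that the final, transformed items reach the intended population correlations.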
<ns0:p>In the current study, three correlation structures for 15 items were simulated to approximate three levels of communality: high, in which communalities were assigned values of 0.8; wide, in which they could take values from 0.6 to 0.9; and low, in which they could take values from 0.3 to 0.5. The communality estimate is the estimated proportion of variance in each variable that is accounted for (Mamdoohi et al., 2016). These estimates reflect the proportion of variation in that variable explained by the latent factors (Youn & Pearce, 2013). In other words, a high communality means that if we perform a multiple regression of curiosity against the three common factors, we obtain a satisfactory proportion of the variation in curiosity explained by the factor model. These estimates reflect the variance of a variable in common with all of the others together. The communality is denoted by h² and is the summation of the squared correlations of the variable with the factors (Barton, Cattell & Curran, 1973). The formula for deriving the communalities is</ns0:p><ns0:formula xml:id='formula_5'>h²_j = a²_j1 + a²_j2 + … + a²_jm</ns0:formula><ns0:p>where a_jk denotes the loading of variable j on factor k. The communality levels correspond inversely to the levels of importance of the unique factors (MacCallum, Widaman, Preacher & Hong, 2001). These three correlation structures were designed to mimic a three-factor solution; the high-communality matrix is presented as follows.</ns0:p><ns0:p>Correlation matrix for high communality (15 × 15, block structure): [ A B B ; B A B ; B B A ], where A is a 5 × 5 block with 1 on the diagonal and 0.7 off the diagonal, and B is a 5 × 5 block with all entries equal to 0.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Sample Size</ns0:head><ns0:p>The sample size was chosen based on the recommendations of Mundfrom et al. (2005). Sample size increments followed this scheme: • when n < 200, the sample size was increased in steps of 10; • when n < 500, it was increased in steps of 50.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.5'>Dimensionality-reduction algorithms</ns0:head><ns0:p>• Simple autoencoder: A simple autoencoder is an autoencoder with two main components, the encoder and the decoder, in addition to the latent-space representation layer, also known as the bottleneck layer. A linear activation function and the SGD optimizer were adopted. The encoding and decoding algorithms are chosen to be parametric functions that are differentiable with respect to the loss function (Chollet, 2016). By minimizing the reconstruction loss, the parameters of the encoder and decoder can be optimized. The basic autoencoder, which refers to the simple autoencoder in Section 1.4, is an autoencoder with a single fully-connected neural layer as the encoder and as the decoder (Figure 2a). • Tie-weighted autoencoder: A tie-weighted autoencoder is a single-layer autoencoder with three neurons in the bottleneck, in addition to an encoder and a decoder layer. A restriction is imposed so that the weights of the encoder and decoder are identical. • Deep autoencoder: In order to demonstrate that autoencoder algorithms are not limited to a single layer for the encoder or decoder, we can instead use a stack of layers, the so-called 'deep autoencoder' of Section 1.6.
A deep autoencoder is a seven-layer autoencoder. The first layer is a dense layer with 11 neurons, followed by two layers each with six neurons. Before the decoder, a bottleneck layer with three neurons is included. The architecture is depicted in Figure 2b. • Independent encoded autoencoder: With this custom layer, we impose a penalty on the sum of the off-diagonal elements of the covariance of the encoded features to create uncorrelated features, as well as applying orthogonality to both the encoder and decoder weights.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.6'>Real Dataset</ns0:head><ns0:p>In this subsection, we implemented the PCA and the autoencoders with a real dataset (a Sports Fan Curiosity dataset) to see how the autoencoders differed from the PCA and to provide visualization results. More specifically, we compared the ability to build psychometric properties between the autoencoders and the PCA on a Sports Fan Curiosity questionnaire. Curiosity is considered a fundamental intrinsic motivational subdomain for initiating human exploratory behaviors in many fields of study, such as psychology, education, and sports. The Sports Fan Curiosity questionnaire is commonly used in the field of sports management and is a subset of questionnaires for measuring behavioral intentions. This subcategory was designed to measure and quantify the construct of curiosity. Although leisure management has received greater attention, little is known about how the curiosity construct can be realized with a structured questionnaire. We deployed the autoencoders for this questionnaire and evaluated their ability to identify the latent construct. Items of the Sports Fan Curiosity questionnaire are listed in Table 1. The dataset, along with the Python code, was stored in Google Colab and Google Drive and can be accessed at https://colab.research.google.com/drive/1pC8A10sRVUHDttkLF2ATb51CcpnIzD6u?usp=sharing</ns0:p></ns0:div> <ns0:div><ns0:head n='2.7'>Performance Metrics</ns0:head></ns0:div> <ns0:div><ns0:head n='2.7.1'>Reconstruction Error</ns0:head><ns0:p>The performance of the establishment of construct validity was evaluated using the mean square error (MSE), the mean absolute error (MAE), and the average Euclidean distance. Denote the inputs to the network by X and the outputs of the network by y. Then the network can be described by a mapping from inputs to outputs, y = f(X), and the reconstruction of the input is a mapping from outputs back to inputs, X̂ = g(y). It is then reasonable to measure the reconstruction error X − X̂ with a given error function ϵ(x). In this study, the reconstruction error is defined as the average of the squared errors over n subjects with dimension m, i.e.,</ns0:p><ns0:formula xml:id='formula_6'>MSE = (1 / (n·m)) Σ_{i=1}^{n} Σ_{j=1}^{m} (X_{i,j} − X̂_{i,j})²</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.7.2'>Mean Absolute Error (MAE)</ns0:head><ns0:p>The MAE is defined as the average deviation between the paired predicted value (X̂) and the original value (X).
That is, it is the average of the absolute errors over n subjects with dimension m:</ns0:p><ns0:formula xml:id='formula_7'>MAE = (1 / (n·m)) Σ_{i=1}^{n} Σ_{j=1}^{m} |X_{i,j} − X̂_{i,j}|</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.7.3'>Normalized Euclidean Distance (NED)</ns0:head><ns0:p>The average of the Euclidean distance between each pair of samples in X and X̂ was also calculated, to accommodate possible missing values, over n subjects with dimension m:</ns0:p><ns0:formula xml:id='formula_8'>NED = Σ_{i=1}^{n} Σ_{j=1}^{m} ( X_{i,j}/|X_{i,j}| − X̂_{i,j}/|X̂_{i,j}| )</ns0:formula><ns0:p>In summary, a 3 × 3 × 27 factorial design was implemented according to the manipulated variables of communality level, non-normality level, and sample size, resulting in a total of 243 population conditions (Table 2). Each scenario was simulated 10 times.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>The efficacy of the autoencoders was identified using a Monte Carlo simulation that manipulated five population parameters. The correlation structure was determined with three levels of communality (high, wide, and low), three levels of normality (normal, slightly un-normal, and un-normal), and nine levels of hidden-layer neurons. One should be aware that the current study focused on the feasibility of analyzing small samples. As a result, sample sizes beyond 1000 were not considered.</ns0:p><ns0:p>Overall, the autoencoders had smaller reconstruction errors than their PCA counterpart. Specifically, the tied-weight autoencoder, deep autoencoder, and independent-feature autoencoder outperformed the PCA in all three communality conditions. The tied-weight autoencoder was the most stable algorithm among all candidates, as its SD for reconstruction was the smallest (Table 3). In general, the autoencoder family produced smaller MAEs and NEDs compared to the PCA, except for the simple encoder. The simple autoencoder generated larger MSEs and MAEs for all three communality conditions. When the communality was low, the performances of the deep autoencoder, tied-weight autoencoder, and independent autoencoder were less affected, in contrast to the PCA and the simple encoder.</ns0:p><ns0:p>With respect to the input data distributions, it was observed that the performances of the autoencoder family and the PCA were not affected by the non-normality conditions in this small-sample-size simulation. However, it was interesting to observe that for data that were extremely un-normal, the tied-weight encoder, deep autoencoder, and independent-feature encoder outperformed the PCA in terms of the MSE and MAE. Of all the autoencoder variations, the tied-weight encoder generated the smallest MSE (Table 4). We also evaluated the reconstruction errors under different combinations of communality conditions and non-normality conditions. When the input data were normally distributed, the resulting reconstruction errors were similar among the three communalities.
However, if the correlation structure was low, the MSE increased for non-normal data, even if it only slightly deviated from a normal distribution (Figure 3).</ns0:p><ns0:p>Regarding the sample size, the MSE for the autoencoder family decreased as the sample size increased. The MSEs for the deep autoencoder, tied-weight encoder, and independent encoder were smaller than those for the PCA at small sample sizes. The MSE appeared to decrease to a local minimum when the sample size reached 200 (Figure 4). The simple encoder had the largest MSE, while the tied-weight encoder had the smallest MSE for all of the different sample sizes considered. On the other hand, the PCA had the largest Euclidean distance between the original data and its predictions, while the NEDs were smaller for the tied-weight encoder and the deep autoencoder. We further analyzed the reconstruction errors for the entire autoencoder family and the PCA under different combinations of normality conditions and communality. The MSE for the tied-weight encoder continued to decrease until the sample size reached 200, regardless of the deviation from a normal distribution and the communality conditions (Figures 4 and 5). A similar trend was also found for the deep autoencoder. The MSE was negatively associated with the sample size, and the MSE stabilized for values of n greater than 400. In contrast, the reconstruction error for the PCA was strongly affected by the weak communality condition. If the correlation structure had low communality, the MSE was always higher than under the wide and high communality conditions. A similar pattern was also observed for the MAE, as the PCA had a larger MAE when the correlation structure had low communality. In contrast, the deep autoencoder seemed to be insensitive to the communality conditions and deviations from normality in terms of the MAE, compared to that obtained from the PCA (Figure 6).</ns0:p><ns0:p>There are several algorithms available for determining the number of factors to retain in a PCA, e.g., a scree plot and parallel analysis. However, there are no guidelines for choosing the size of the bottleneck layer in the autoencoder. From the simulation results, the autoencoder seemed to perform better compared to the PCA, especially when k was small (Figure 7). Even when the number of components was correctly specified for the PCA (which was k=3 in our simulation), all of the autoencoder variations besides the simple encoder outperformed the PCA in terms of the MSE.</ns0:p></ns0:div> <ns0:div><ns0:head>With a Real Dataset</ns0:head><ns0:p>We directly compared the reconstruction errors of both the deep autoencoder and the PCA on the Sports Fan Curiosity questionnaire. Based on the previous simulation results, a random sample of 100 subjects was chosen from the original data of 400 subjects to examine the performance with small data. When the PCA was applied to the curiosity data, three components were extracted with an R² of 0.53 and an MSE of 0.46, whereas the R² reached 60.1% and the MSE decreased to 0.36 for the autoencoder. We adopted t-distributed stochastic neighbor embedding (t-SNE) to visualize the reduced dimensions.
It was observed that the three latent components could separate the clusters using the PCA, but there was clearly some information that overlapped the extracted components. In contrast to the PCA, the autoencoder could better separate the three underlying constructs, as we saw that there was a significant improvement over the PCA (Figure <ns0:ref type='figure' target='#fig_3'>8</ns0:ref>). Regarding the sport fan curiosity questionnaire, if the questionnaire is construct valid, all items together well represent the underlying construct. Based on the weights estimated from the autoencoder's bottleneck layers, each item could be classified into one of the three latent dimensions by its weights in absolute values. For example, the item 'I enjoy collecting and calculating statistics of my favorite basketball team' has the highest correlation with construct 1. Similarly, we see that item 'Watching basketball games with my friends is joyful' has the highest correlation with construct 2. As such, autoencoders can elucidate how different items and constructs relate to one another and help develop new theories. For example, in Table <ns0:ref type='table' target='#tab_2'>5b</ns0:ref>, the items 'I enjoy collecting and calculating statistics of my favorite basketball team,' 'I enjoy reading articles about basketball players, teams, events, and games,' 'I am eager to learn more about basketball' and 'I enjoy any movement that occurs during a basketball game' appear to have large coefficient on one latent construct, which we assign as 'Learning Motivation' factor. 'Watching basketball games with my friends is joyful,' 'I enjoy probing deeply into basketball,' and 'I often imagine how my favorite basketball team is playing to defeat their opponent' depict the 'Social Interaction 'factor. The three extracted constructs were knowledge, social interaction, and facility. From the weighted estimate from the bottleneck layer in the autoencoder, each item was classified into one of three latent constructs based on the largest weights among the three clusters. As a result, the first component consisted of items 1, 5, 10, and 11. Items 2, 3, and 6 were classified into cluster 2, while items 4, 7, 8, and 9 belonged to cluster 3. Based on the items in each cluster, three latent constructs were identified, i.e., learning motivation, social interaction, and facility (Table <ns0:ref type='table' target='#tab_2'>5b</ns0:ref>).</ns0:p><ns0:p>The results from the current study were conceptually equivalent to the three constructs of sport fan curiosity scale developed by <ns0:ref type='bibr' target='#b32'>Park, Ha and Mahony (2014)</ns0:ref>, i.e., specific information, general information and sport facility information. The slight differences between findings from the study and the work of Park et al may result from different research contexts. More specifically, the real dataset used in the study was collected from a specific sports context (i.e., basketball) whereas Park et al developed the sport fan curiosity scale in a general sport context. However, the findings from the study with small sample size (n=100) yielded psychometrically similar factor structure to the work of Park et al with a much larger sample size (n=407). Consequently, the effectiveness and efficiency of the proposed methodology in the study enrich the relevant literature theoretically and practically.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The PCA and autoencoders are two common ways of reducing the dimensionality of a high-dimensional feature space. The PCA is a linear-transformation algorithm which projects features onto an orthogonal basis and thus, generates uncorrelated features. An autoencoder is an unsupervised learning technique that can be adopted to tackle the task of representative learning. That is, a bottleneck layer in the network is imposed so that knowledge is compressed through the bottleneck layer, and the output is a representation of the original input. A correlation structure that exists in the data can be learned and consequently leveraged when forcing the input through the network's bottleneck. This characteristic of an autoencoder allows the encoder to compress information into a low-dimensional space to model complicated nonlinear associations.</ns0:p><ns0:p>One of the major objectives of the current study was to provide a feasibility analysis for adopting a neural network model for establishing construct validity as a psychometric property. Our simulation results indicated that if a low communality correlation structure existed, the reconstruction error for the autoencoder would increase for the non-normal data scenario when the sample size was small. This result showed that although the computational resources for the neural network were generally more expensive, besides the simple encoder, the reconstruction error for the autoencoder was uniformly smaller across three different communality conditions as well as three different non-normality conditions.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A common practical question in conducting factor analyses and PCAs is how many subjects are sufficient to obtain a reliable estimate. There are many rules of thumb proposed suggesting a certain absolute sample size, such as a minimum of 250 or 500. Some suggested a size-to-time ratio to be as high as 10 times as many subjects as variables. There is no universally applicable answer, but the answer depends upon the clarity of the structure being examined, i.e., the communality of the variables. If there is well-structured communality among the variables, the size of the sample required goes down. As the number of items for a factor increase, the sample size needed decreases. That is, if the number of variables is high relative to the number of factors (e.g.,15:3), and the communality is high, then sample sizes as small as 60~100 are adequate. In FA models, more subjects are always better, but what is more important is to have good markers for each factor (high communality) as well as many markers (a high item-to-factor ratio) than it is to increase the number of subjects. Unfortunately, although it will never be wrong advice to have many markers of high communality, if using an autoencoder to analyze the structure of items rather than tests, the communality requirement will tend to be low. In cases where communality is low and data are not linearly distributed, increasing the number of subjects is advised. 
Based on this simulation study, the autoencoder tended to perform relatively better when the neurons of the bottleneck, k, were small, which means that the same reconstruction accuracy of the original scale could be achieved with fewer components and hence a smaller dataset. This is important when dealing with many items or variables in a questionnaire. For all sample sizes being considered, the tied-weight encoder had relatively small reconstruction errors. Based on this Monte Carlo simulation, we demonstrated that it is feasible for an autoencoder to be used in psychometric research to establish the construct validity with a small sample size. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:p>Flow chart for the Monte Carlo (MCMC) simulations. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 8</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>&#119895;&#119898;</ns0:head><ns0:label /><ns0:figDesc>levels correspond inversely to levels of importance of unique factors. High communalities imply several variables load highly on the same factor and the model error is low.<ns0:ref type='bibr' target='#b25'>(MacCallum, Widaman, Preacher &amp; Hong, 2001)</ns0:ref>. These three correlation structures were designed to mimic a three-factor solution and are presented as follows.PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Factor patterns were estimated with one of these three correlation matrices. These pattern matrices were then adopted to generate 15 correlated normal variables with specified population correlation coefficients, variable means, standard deviations (SDs), skewness, and kurtosis.PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>.</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 tSNE visualizations for PCA and Autoencoder</ns0:figDesc><ns0:graphic coords='43,42.52,178.87,525.00,195.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,183.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Performance metrics for three communality conditions</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>.PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance metrics for three normality conditions</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>Slightly un-normal</ns0:cell><ns0:cell>Un-normal</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. 
Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Absolute values of bottleneck weights on the Curiosity dataset</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>Table 5. Absolute values of bottleneck weights on the Curiosity dataset 2 a. Bottleneck weights estimates from the population</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Learning</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Facility</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Motivation</ns0:cell><ns0:cell>Interaction</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Learning</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Facility</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Motivation 0.431</ns0:cell><ns0:cell>Interaction 0.090</ns0:cell><ns0:cell>0.138</ns0:cell><ns0:cell>I enjoy collecting and calculating statistics of my favorite basketball team.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.329 0.532</ns0:cell><ns0:cell>0.029 0.067</ns0:cell><ns0:cell>0.237 0.299</ns0:cell><ns0:cell>I enjoy collecting and calculating statistics of my favorite basketball team. I enjoy reading articles about basketball players, teams, events, and games.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.474 0.322</ns0:cell><ns0:cell>0.178 0.239</ns0:cell><ns0:cell>0.293 0.312</ns0:cell><ns0:cell>I enjoy reading articles about basketball players, teams, events, and games. I am eager to learn more about basketball.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.335 0.541</ns0:cell><ns0:cell>0.158 0.051</ns0:cell><ns0:cell>0.261 0.170</ns0:cell><ns0:cell>I am eager to learn more about basketball. I enjoy any movement that occurs during a basketball game.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.68 0.308</ns0:cell><ns0:cell>0.003 0.767</ns0:cell><ns0:cell>0.227 0.124</ns0:cell><ns0:cell>I enjoy any movement that occurs during a basketball game. Watching basketball games with my friends is joyful.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.385 0.089</ns0:cell><ns0:cell>0.433 0.440</ns0:cell><ns0:cell>0.415 0.100</ns0:cell><ns0:cell>Watching basketball games with my friends is joyful. I enjoy probing deeply into basketball.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.351 0.355</ns0:cell><ns0:cell>0.575 0.685</ns0:cell><ns0:cell>0.114 0.077</ns0:cell><ns0:cell>I enjoy probing deeply into basketball. I often imagine how my favorite basketball team is playing to defeat their</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>opponent.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.314</ns0:cell><ns0:cell>0.662</ns0:cell><ns0:cell>0.039</ns0:cell><ns0:cell>I often imagine how my favorite basketball team is playing to defeat their</ns0:cell></ns0:row><ns0:row><ns0:cell>0.431</ns0:cell><ns0:cell>0.175</ns0:cell><ns0:cell>0.431</ns0:cell><ns0:cell>opponent. I enjoy exploring my favorite basketball stadiums or facilities.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.125 0.117</ns0:cell><ns0:cell>0.519 0.203</ns0:cell><ns0:cell>0.469 0.615</ns0:cell><ns0:cell>I enjoy exploring my favorite basketball stadiums or facilities. 
I am interested in learning how much it costs to build a brand new</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>basketball stadium.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.011</ns0:cell><ns0:cell>0.385</ns0:cell><ns0:cell>0.023</ns0:cell><ns0:cell>I am interested in learning how much it costs to build a brand new</ns0:cell></ns0:row><ns0:row><ns0:cell>0.126</ns0:cell><ns0:cell>0.344</ns0:cell><ns0:cell>0.383</ns0:cell><ns0:cell>basketball stadium. I am interested in learning how large a basketball court is.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.104 0.031</ns0:cell><ns0:cell>0.19 0.078</ns0:cell><ns0:cell>0.581 0.224</ns0:cell><ns0:cell>I am interested in learning how large a basketball court is. When I miss a game, I often look for information on television, the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>internet, or newspaper to catch the game results.</ns0:cell></ns0:row><ns0:row><ns0:cell>0.099</ns0:cell><ns0:cell>0.127</ns0:cell><ns0:cell>0.231</ns0:cell><ns0:cell>When I miss a game, I often look for information on television, the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>internet, or newspaper to catch the game results.</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021) Manuscript to be reviewed Computer Science 5 b. Bottleneck weights estimates from sample of n=100 6 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63569:1:1:NEW 13 Oct 2021)</ns0:note></ns0:figure> </ns0:body> "
"PeerJ Computer Science October 14, 2021 Dear Editors, Thank you for the opportunity to further revise and re-submit our manuscript entitled “Robustness of autoencoders for establishing psychometric properties based on small sample sizes (#CS-2021:07:63569:0:2:REVIEW)”. The reviewers’ comments are very instructive and helpful for revising our manuscript. We have provided a list of the reviewer’s comments and an explanation of the edits made in response to these comments point by point. We hope the revised manuscript meets the standards for publication in journal of PeerJ Computer Science. Respectfully, Dr. Yen-Kuang, Lin Research Fellow at Biostatistics Center On behalf of all authors. #1 Response to Reviewer 1 Dear Reviewer, Thank you for your comments. Your comments were highly helpful in revising this manuscript and for our future research. We have provided an explanation of the edits made in response to these comments point by point. We hope the revised manuscript meets the standards for publication in PeerJ Computer Science. Comments and Suggestions for Authors Although the work seems serious and well done. I have some reservations about the general form of the article that made its reading difficult. Plus the exact definitions of the ento-encoders (see bellow). Response: Thank you for your suggestions. We have described the definition of autoencoder in page 4-5, line 129-189. The exact definitions of the autoencoders in the colab is depicted in section 2.5 (page 10, line 322-340). Figure 1a and Figure 1b were also included to help visualize the auto-encoder structures. Q1. Some definitions are not clear and a mathematical clear definition: - What is communality and non-cummunality (see references problem bellow) - The formula of line 325 (and in general all the metrics used) should be clarified as one wonders if the sum is running across the data-set or across the dimensions of the data. - In section 2.4: what does the term 'increased by' refer to? n is the number of samples, what is there to increase. In general a clearer mathamtical presentation is needed... Response: Thank you for your valuable suggestion. The communality for a given variable can be interpreted as the proportion of variation in that variable explained by the three factors. In other words, a high communality means that if we perform multiple regression of curiosity items against the three common factors, we obtain a satisfactory proportion of the variation in curiosity explained by the factor model. These estimates reflect the variance of a variable in common with all others together. The communality is denoted by h2 and is the summation of the squared correlations of the variable with the factors (Barton, Cattell & Curran, 1973). The formula for deriving the communalities is where a denotes the loadings for j variables. The communality levels correspond inversely to levels of importance of unique factors. High communalities imply low unique variances and vice versa. (MacCallum, Widaman, Preacher & Hong, 2001). We have provided this information in our revised manuscript (page 8, line 292-303). The performance metrics used in this study was summed across the dimensions of the data. We have included a clearer definition of these metrics in section 2.7 (page 11, line 368-378). In section 2.1-2.4, we described scenarios that are used to generated data in order to test the effectiveness of the dimension-reduction algorithm. 
Each simulated dataset was generated based on three parameters: the degree of non-normality, the correlation among items, and the sample size setting. For the sample size setting, we generated various sample sizes with unequal increments. For example, for sample sizes between 50 and 200, the increment was 10, and it was 50 for sample sizes between 200 and 500. That is, we have considered different simulated datasets with sample sizes of n=50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, and 1000. Q2. I'm also concerned by the exact definitions of the autoencoders in the colab. Response: Thank you for your comments. We appreciate these comments very much. The autoencoder in the colab has three elements: an encoding function, a decoding function, and a distance function that measures the information loss between the compressed representation of the data and the decompressed representation. The encoder and decoder are chosen to be parametric functions that are differentiable with respect to the distance function, so the parameters of the encoding/decoding functions can be optimized to minimize the reconstruction loss using Stochastic Gradient Descent. The basic autoencoder, referred to as the simple autoencoder in section 1.4, is an autoencoder with a single fully-connected neural layer as encoder and as decoder (see figure below). In order to demonstrate that the autoencoder algorithms do not have to be limited to a single layer as encoder or decoder, we could instead use a stack of layers, the so-called “deep autoencoder.” The corresponding code is provided in the colab. Q3. First, many references are not listed in the reference section (for example, Forcino (2012) and Manjarres-Martinez et al. (2012) cannot be found in the reference section). Some citations follow the form [number] and others follow the form 'Author (Year)'. It is important to fix this citation problem. Likewise, some figures are mis-referenced (Figure 8 in line 403 does not exist, and surely the authors meant Figure 6 in line 390). There is a bullet point in line 295 that has no text. Response: Thank you for your comments. We have fixed the citation problem in the manuscript. The citation form has also been corrected to follow the form 'Author (Year)'. Figure 8 in line 403 has been corrected, and the bullet point in line 295 was deleted. Q4. My main concern is about the definition of the auto-encoders: in the code submitted the activations are 'linear'; this means that the autoencoder boils down to an affine transform. How can an affine transform do better than the PCA in terms of MSE? I think this is because non-linear auto-encoders overfit on the small number of samples used for training. Judging by the code submitted, what is called the deep autoencoder has only one layer more than the autoencoder. This is not a really deep architecture. In the code submitted, the deep autoencoder has two layers of size 3, while the text mentions one layer with dimension 6. I also encourage the authors to disclose the data generation code (for synthetic data). Response: Thank you for your valuable comments. We understand your concern that a single-layer autoencoder with a linear transfer function is nearly equivalent to PCA, where 'nearly' means that the weights found by the autoencoder and PCA won't necessarily be the same. However, the subspace spanned by the respective global map W will be the same.
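To make this comparison concrete, the following minimal sketch (assuming TensorFlow/Keras and scikit-learn; it illustrates the construction discussed here and is not the exact colab listing) fits a single-layer linear autoencoder and a three-component PCA to the same data:

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras

def linear_autoencoder(n_items=15, n_latent=3):
    """Single fully-connected encoder and decoder, both with linear activations."""
    inputs = keras.Input(shape=(n_items,))
    code = keras.layers.Dense(n_latent, activation="linear", name="bottleneck")(inputs)
    outputs = keras.layers.Dense(n_items, activation="linear")(code)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")   # reconstruction loss
    return model

X = np.random.default_rng(1).normal(size=(100, 15))   # placeholder item data

ae = linear_autoencoder()
ae.fit(X, X, epochs=200, batch_size=16, verbose=0)     # input is also the target
mse_ae = float(ae.evaluate(X, X, verbose=0))

pca = PCA(n_components=3).fit(X)
mse_pca = float(np.mean((X - pca.inverse_transform(pca.transform(X))) ** 2))
print(mse_ae, mse_pca)   # with linear activations the reconstruction errors should be close
```

With linear activations and sufficient training, the bottleneck spans approximately the same subspace as the leading principal components, which is the point elaborated below.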
In the auto-encoder case, the unique locally and globally optimal map W is the orthogonal projection onto the space spanned by the first p eigenvectors of the covariance matrix of the input data, and this matrix is not exactly the correlation matrix (Baldi & Hornik, 1989). As a result, we tested different structures of autoencoders, such as multi-layer autoencoders and tied-weights autoencoders, using “RELU”, a non-linear activation function. The results show a similar trend compared with using linear activations, but the errors are slightly larger (see the table below).
Table. MSE under different communality conditions using the RELU activation function.
Algorithm (metric: MSE) | High Communality Mean (SD) | Wide Communality Mean (SD) | Low Communality Mean (SD)
Simple encoder | 0.677 (0.168) | 0.725 (0.167) | 0.744 (0.144)
Tied encoder | 0.040 (0.009) | 0.040 (0.007) | 0.040 (0.008)
PCA | 0.217 (0.157) | 0.241 (0.192) | 0.319 (0.152)
Deep autoencoder | 0.049 (0.021) | 0.055 (0.021) | 0.060 (0.017)
Independent autoencoder | 0.045 (0.020) | 0.048 (0.021) | 0.055 (0.017)
In order to examine whether the autoencoder's good performance was due to overfitting, we resampled 10 times from the basketball dataset; each replication was conducted with n=100, and the averages of the weights over these 10 replications were calculated (Table 5b). The result is consistent with that based on the full sample, showing that 7 out of 10 items (70%) are correctly identified. One of our research objectives is to demonstrate that the autoencoder, as a particular type of neural network, can compress the data into a latent vector. Because autoencoders learn representations instead of labels, they belong to machine learning but not necessarily to deep learning. An autoencoder can be similar to PCA, but it can also perform relatively well on small, non-normal, and low-communality datasets depending on its specific architecture.
Table 5b. Bottleneck weight estimates from a sample of n=100
Learning Motivation | Social Interaction | Facility | Item
0.431 | 0.090 | 0.138 | I enjoy collecting and calculating statistics of my favorite basketball team.
0.532 | 0.067 | 0.299 | I enjoy reading articles about basketball players, teams, events, and games.
0.322 | 0.239 | 0.312 | I am eager to learn more about basketball.
0.541 | 0.051 | 0.170 | I enjoy any movement that occurs during a basketball game.
0.308 | 0.767 | 0.124 | Watching basketball games with my friends is joyful.
0.089 | 0.440 | 0.100 | I enjoy probing deeply into basketball.
0.355 | 0.685 | 0.077 | I often imagine how my favorite basketball team is playing to defeat their opponent.
0.431 | 0.175 | 0.431 | I enjoy exploring my favorite basketball stadiums or facilities.
0.117 | 0.203 | 0.615 | I am interested in learning how much it costs to build a brand new basketball stadium.
0.126 | 0.344 | 0.383 | I am interested in learning how large a basketball court is.
0.031 | 0.078 | 0.224 | When I miss a game, I often look for information on television, the internet, or newspaper to catch the game results.
Furthermore, in order to demonstrate that the autoencoder algorithms do not have to be limited to a single layer as encoder or decoder, we could instead use a stack of layers, the so-called “deep autoencoder” in the colab. In this revision, we added two additional layers at both the encoder phase and the decoder phase, resulting in a total of four additional layers compared to the simple autoencoder architecture. Thanks for the valuable suggestion on providing the data generation code for the synthetic data. We have provided the data generation code on Google Colab and described it on page 7, lines 240-242. #2 Response to Reviewer 2 Dear Reviewer, Thank you for your comments. Your comments were highly helpful in revising this manuscript and for our future research.
We have provided an explanation of the edits made in response to these comments point by point. We hope the revised manuscript meets the standards for publication in PeerJ Computer Science. Comments and Suggestions for Authors Q1: The paper compares PCA and the autoencoder as dimensionality reduction techniques, aiming at showing the superiority of autoencoders in the case of small sample sizes (and departures from normality) in the context of psychometric survey data. As a preliminary observation, the title of the paper does not seem appropriate, since its main content is an MC simulation study and only an application to psychometric data is presented. Moreover, this real data example is based on a single random sample drawn from the original data set. The authors should either consider more real data sets or resample several times from the single one considered, averaging the results; also, the results on the whole data set (400 obs) should be reported. Response: Thank you very much for reviewing our manuscript and providing suggestions. We agree with you that a clearer title should be given to this study. We have renamed our study “Robustness of autoencoders for establishing psychometric properties based on small sample sizes: Results from a Monte Carlo Simulation Study and a Sports Fan Curiosity Study”. Thank you for your suggestions. We resampled 10 times from the sport fan curiosity dataset, whereby each replication was conducted with n=100, and the averages of the weights over these 10 replications were calculated (Table 5b). The results show that 7 out of 10 items (70%) are correctly identified. The resampling results are provided in Table 5b along with the results from the whole dataset (n=420) (Table 5a).
Table 5b. Bottleneck weight estimates from a sample of n=100
Learning Motivation | Social Interaction | Facility | Item
0.431 | 0.090 | 0.138 | I enjoy collecting and calculating statistics of my favorite basketball team.
0.532 | 0.067 | 0.299 | I enjoy reading articles about basketball players, teams, events, and games.
0.322 | 0.239 | 0.312 | I am eager to learn more about basketball.
0.541 | 0.051 | 0.170 | I enjoy any movement that occurs during a basketball game.
0.308 | 0.767 | 0.124 | Watching basketball games with my friends is joyful.
0.089 | 0.440 | 0.100 | I enjoy probing deeply into basketball.
0.355 | 0.685 | 0.077 | I often imagine how my favorite basketball team is playing to defeat their opponent.
0.431 | 0.175 | 0.431 | I enjoy exploring my favorite basketball stadiums or facilities.
0.117 | 0.203 | 0.615 | I am interested in learning how much it costs to build a brand new basketball stadium.
0.126 | 0.344 | 0.383 | I am interested in learning how large a basketball court is.
0.031 | 0.078 | 0.224 | When I miss a game, I often look for information on television, the internet, or newspaper to catch the game results.
Q2. When applying the PCA, since the items are ordinal (the indication of the scale is missing), I suggest employing polychoric correlations. Response: Thanks for the insightful suggestion. The sport fan curiosity dataset is composed of Likert-type items ranging from 1 to 5. Using the Pearson correlation as an estimate of the correlation matrix on ordinal-scale items could generate principal components with additional errors. Thus, we have employed PCA on the polychoric correlation matrix and updated Figure 7. Q3: In general, the paper lacks definitions and details.
While PCA is a very well-known multivariate method, NNs are in general considered black boxes; several structures are possible, and none of the considered architectures are displayed in the paper, making it very difficult to understand them and their differences. Response: Thank you for your comments. The autoencoder has three elements: an encoding function, a decoding function, and a distance function that measures the information loss between the compressed representation of the data and the decompressed representation. The encoder and decoder are chosen to be parametric functions (typically neural networks) that are differentiable with respect to the distance function, so the parameters of the encoding/decoding functions can be optimized to minimize the reconstruction loss using Stochastic Gradient Descent. The basic autoencoder, referred to as the simple autoencoder in section 1.4, is an autoencoder with a single fully-connected neural layer as encoder and as decoder (see figure below). In order to demonstrate that the autoencoder algorithms do not have to be limited to a single layer as encoder or decoder, we could instead use a stack of layers, the so-called “deep autoencoder” in section 1.6. Both simple autoencoders and deep autoencoders are neural networks with a symmetrical structure. Although the input is compressed and the output is reconstructed through its latent-space representation, there is no guarantee that the weights of the encoder and the decoder are identical. Thus, we can impose an additional optimization restriction so that the weights of the decoder layer are tied to the weights of the encoder layer. We refer to this type of autoencoders as Tie-weight autoencoders. In addition, if the restriction is imposed so that each encoded feature explains unique information, the generated weights compose the Independent Encoded autoencoders. We have provided this explanation on pages 4-5, lines 129-189, and page 10, lines 322-340. Q4. Note that there’s no Figure 8 in the paper and that citations need to be fixed, as they follow different citation standards. Also, on line 295 there’s an extra bullet point. Response: Thank you for your comments. We have corrected Figure 8 in line 403, and the bullet point in line 295 was deleted. Q5. The three correlation structures are designed to mimic a 3-factor solution organizing 15 items into blocks of 5 showing the same correlation. For high communality, for example, the value is 0.8 within blocks and then always 0.1 for cross-block items. I think that this design is too extreme and far from any real situation. In section 2.4 (sample size) I don’t understand what the sample size increments are meant for. Response: Thank you for your suggestions. The high communality condition was designed to imply very low unique factor loadings. However, we agree that the correlation structures for high communality could be too extreme. Thus, we have modified the correlation matrix. The simulation results in Table 2, Table 3, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 are updated accordingly. In general, the results are consistent with the previous version. That is, the reconstruction error for PCA was heavily affected by the weak communality condition. Under the low communality structure, the MSE was always higher than under the wide and high communality structures. In sections 2.1-2.4, we described the scenarios that are used to generate data in order to test the effectiveness of the dimension-reduction algorithm.
Each simulated dataset was simulated based on three parameters: degree of non-normality, correlation among items, and various sample size setting. In section 2.4, for the sample size setting, we generated various sample size with unequal increments. For example, for sample size between 50 and 200, the increment was 10 and it was 50 for sample size between 200 and 500. That is, we have considered different simulated dataset with sample size of n=50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, and 1000. Q6. Architectures of the NN should be displayed and suggestions about the real data application followed to facilitate the reading and the evaluation. Response: Thanks for your comments. We have provided the details of the autoencoder at page 4 & 5 line 129-189 and page 10 line 322-340. The architecture of the simple and deep autoencoders can be visualized at Figure 8a & 8b. Regarding the sport fan curiosity questionnaire, if the questionnaire is construct valid, all items together well represent the underlying construct. Based on the weights estimated from the autoencoder’s bottleneck layers, each item could be classified into one of the three latent dimensions by its weights in absolute values. For example, the item “I enjoy collecting and calculating statistics of my favorite basketball team” has the highest correlation with construct 1. Similarly, we see that item “Watching basketball games with my friends is joyful.” has the highest correlation with construct 2. As such, autoencoders can elucidate how different items and constructs relate to one another and help develop new theories. For example, in Table 5b, the items “I enjoy collecting and calculating statistics of my favorite basketball team.”, “I enjoy reading articles about basketball players, teams, events, and games.”, “I am eager to learn more about basketball.”, and “I enjoy any movement that occurs during a basketball game.” appear to have large coefficient on one latent construct, which we assign as “Learning Motivation” factor. “Watching basketball games with my friends is joyful.”, “I enjoy probing deeply into basketball.”, and “I often imagine how my favorite basketball team is playing to defeat their opponent.” depict the “Social Interaction “factor. We have provided the explanation to facilitate the reading in page 13 line 439-446. The results from the current study were conceptually equivalent to the three constructs of sport fan curiosity scale developed by Park, Ha and Mahony (2014), i.e., specific information, general information and sport facility information. The slight differences between findings from the study and the work of Park et al may result from different research contexts. More specifically, the real dataset used in the study was collected from a specific sports context (i.e., basketball) whereas Park et al developed the sport fan curiosity scale in a general sport context. However, the findings from the study with small sample size (n=100) yielded psychometrically similar factor structure to the work of Park et al with a much larger sample size (n=407). Consequently, the effectiveness and efficiency of the proposed methodology in the study enrich the relevant literature theoretically and practically. References Baldi, P., & Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without local minima. Neural networks, 2(1), 53-58. Barton, K., Cattell, R. 
B., & Curran, J. (1973). Psychological states: Their definition through P-technique and differential R (dR) technique factor analysis. Journal of Behavioural Science. Park, S. H., Ha, J. P, and Mahony, D. Development and validation of a measure of sport fans’ specific curiosity. Journal of Sport Management, 2014. 28: 621-632 MacCallum, R. C., Widaman, K. F., Preacher, K. J., & Hong, S. (2001). Sample size in factor analysis: The role of model error. Multivariate behavioral research, 36(4), 611-637. "
Here is a paper. Please give your review comments after reading it.
289
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Melody and lyrics, reflecting two unique human cognitive abilities, are usually combined in music to convey emotions. Although psychologists and computer scientists have made considerable progress in revealing the association between musical structure and the perceived emotions of music, the features of lyrics are relatively less discussed. Using linguistic inquiry and word count (LIWC) technology to extract lyric features in 2372 Chinese songs, this study investigated the effects of LIWC-based lyric features on the perceived arousal and valence of music. First, correlation analysis shows that, for example, the perceived arousal of music was positively correlated with the total number of lyric words and the mean number of words per sentence and was negatively correlated with the proportion of words related to the past and insight. The perceived valence of music was negatively correlated with the proportion of negative emotion words. Second, we used audio and lyric features as inputs to construct music emotion recognition (MER) models.</ns0:p><ns0:p>The performance of random forest regressions reveals that, for the recognition models of perceived valence, adding lyric features can significantly improve the prediction effect of the model using audio features only; for the recognition models of perceived arousal, lyric features are almost useless. Finally, by calculating the feature importance to interpret the MER models, we observed that the audio features played a decisive role in the recognition models of both perceived arousal and perceived valence. Unlike the uselessness of the lyric features in the arousal recognition model, several lyric features, such as the usage frequency of words related to sadness, positive emotions, and tentativeness, played important roles in the valence recognition model.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The pursuit of emotional experience is a vital motivation for listening to music <ns0:ref type='bibr' target='#b34'>(Juslin &amp; Sloboda, 2001;</ns0:ref><ns0:ref type='bibr' target='#b33'>Juslin &amp; Laukka, 2004)</ns0:ref>, and the ability to convey emotions ensures the important role of music in human life <ns0:ref type='bibr' target='#b86'>(Yang, Dong, &amp; Li, 2018)</ns0:ref>. Therefore, the relationship between music and perceived emotional expression has attracted increasing academic attention in recent decades <ns0:ref type='bibr' target='#b76'>(Swaminathan &amp; Schellenberg, 2015)</ns0:ref>. Most of the related studies have focused on investigating the association between musical structure and perceived emotions. For example, psychologists have made considerable progress in revealing structural factors (e.g., tempo, pitch, and timbre), indicating different emotional expressions <ns0:ref type='bibr' target='#b20'>(Gabrielsson, 2016)</ns0:ref>, and computer scientists have focused on extracting features from audio (audio most commonly refers to sound, as it is transmitted in signal form; e.g., Mel-frequency cepstrum coefficients and Daubechies wavelet coefficient histograms) to automatically identify music emotion <ns0:ref type='bibr' target='#b86'>(Yang, Dong, &amp; Li, 2018)</ns0:ref>. 
Previous works have shown that sound features were highly correlated with music emotions <ns0:ref type='bibr' target='#b20'>(Gabrielsson, 2016;</ns0:ref><ns0:ref type='bibr' target='#b86'>Yang, Dong, &amp; Li, 2018)</ns0:ref>, but the lyric features have been relatively less discussed. <ns0:ref type='bibr' target='#b7'>Besson et al. (1998)</ns0:ref> proved that melodic and lyrical components in music are processed independently. Although melodic information may be more dominant than lyrics in conveying emotions <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006)</ns0:ref>, investigating the relationship between lyrical structure and the perceived emotion of music in detail is still necessary.</ns0:p><ns0:p>Music emotion studies related to lyrics have often focused on investigating the differences between the presence and absence of lyrics <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006;</ns0:ref><ns0:ref type='bibr' target='#b9'>Brattico et al., 2011;</ns0:ref><ns0:ref type='bibr'>Yuet al., 2019)</ns0:ref>, the effects of consistency or differences in melodic and lyrical information <ns0:ref type='bibr' target='#b51'>(Morton &amp; Trehub, 2001;</ns0:ref><ns0:ref type='bibr'>Vidas et al., 2019)</ns0:ref>, or the effects of lyrics with different meanings <ns0:ref type='bibr' target='#b5'>(Batcho et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b74'>Stratton &amp; Zalanowski, 1994)</ns0:ref>. While lyric structures and features have been rarely studied, melodic information has been processed in previous psychology studies <ns0:ref type='bibr' target='#b20'>(Gabrielsson, 2016;</ns0:ref><ns0:ref type='bibr' target='#b76'>Swaminathan &amp; Schellenberg, 2015;</ns0:ref><ns0:ref type='bibr'>Xu, Wen et al., 2020)</ns0:ref>. On the other hand, with the development of natural language processing (NLP) technology, different lyric features have been widely extracted and analyzed in music emotion recognition (MER) studies (e.g., <ns0:ref type='bibr'>Malheiro et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b13'>Delbouys et al., 2018)</ns0:ref>, a field investigating computational models for detecting music emotion <ns0:ref type='bibr' target='#b1'>(Aljanaki, Yang, &amp; Soleymani, 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Chen, Lee et al., 2015)</ns0:ref>. These MER studies have typically focused on improving the prediction effect of constructed models but have not interpreted the model and variables. Thus, can the structural factors of lyrics be analyzed in more detail by combining NLP technology? If so, this may facilitate the understanding of the relationship between lyrics and perceived emotions. Therefore, the present study investigated the effects of various lyric features on the perceived emotions in music.</ns0:p><ns0:p>As the soul of music, perceived emotion has been widely discussed in recent decades <ns0:ref type='bibr' target='#b76'>(Swaminathan &amp; Schellenberg, 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b0'>Ali and Peynircio&#287;lu (2006)</ns0:ref> investigated differences in melodies and lyrics conveying the same and mismatched emotions and confirmed the dominance of melody in music emotional information processing. Additionally, they observed that lyrics can strengthen the perception of negative emotions but weaken the perceived positive emotions. 
A computational study <ns0:ref type='bibr' target='#b29'>(Hu, Downie, &amp; Ehmann, 2009)</ns0:ref> found that negative emotion classification accuracy was improved by adding lyric information, while the opposite effect was obtained for the classification of positive emotions. In contrast, the results of <ns0:ref type='bibr' target='#b39'>Laurier, Grivolla, and Herrera (2008)</ns0:ref> showed that lyrics can facilitate the recognition of happy and sad musical emotions but not angry and violent emotions. Explanatory studies were also conducted instantly following the observed phenomena. <ns0:ref type='bibr' target='#b61'>Pieschl and Fegers (2016)</ns0:ref> advocated using short-term effects on cognition and affect to explain the power of lyrics. By comparing music with and without lyrics, evidence from functional magnetic resonance imaging also indicated the importance of lyrics for negative musical emotions <ns0:ref type='bibr' target='#b9'>(Brattico et al., 2011)</ns0:ref>. Following the work of <ns0:ref type='bibr' target='#b9'>Brattico et al. (2011)</ns0:ref>, neural mechanisms have been continually studied in recent years (e.g., <ns0:ref type='bibr' target='#b25'>Greer et al., 2019;</ns0:ref><ns0:ref type='bibr'>Proverbio, De Benedetto, &amp; Guazzone, 2020)</ns0:ref>. In sum, although subtle conflicts exist in different studies, the substantial role of lyrics in music emotion perception is consistent.</ns0:p><ns0:p>The aforementioned theoretical findings have also been supplemented or utilized in other fields. For instance, developmental psychology studies have proven that lyrical information dominates children's judgment of music emotion <ns0:ref type='bibr' target='#b51'>(Morton &amp; Trehub, 2001;</ns0:ref><ns0:ref type='bibr' target='#b52'>Morton &amp; Trehub, 2007)</ns0:ref>, but adults rely on melody <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006;</ns0:ref><ns0:ref type='bibr'>Vidas et al., 2019)</ns0:ref>; music therapy studies have widely conducted lyric analyses to extend the understanding of clients' emotional and health states <ns0:ref type='bibr' target='#b6'>(Baker et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b53'>O'Callaghan &amp; Grocke, 2009;</ns0:ref><ns0:ref type='bibr' target='#b70'>Silverman, 2020;</ns0:ref><ns0:ref type='bibr'>Viega &amp; Baker, 2016)</ns0:ref>; and in computational studies, lyrical information used as additional inputs can significantly improve the predictive effects of MER models <ns0:ref type='bibr' target='#b39'>(Laurier, Grivolla, &amp; Herrera, 2008;</ns0:ref><ns0:ref type='bibr' target='#b46'>Malheiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b87'>Yu, Tang et al., 2019)</ns0:ref>. These studies provide a development direction and practical value of basic lyrics research and encourage the optimization of basic research.</ns0:p><ns0:p>One of the limitations in previous behavioral research is that the lyric was always treated as a complete object. Studies usually investigated the differences between the presence and absence of lyrics <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006;</ns0:ref><ns0:ref type='bibr' target='#b88'>Yu, Wu et al, 2019)</ns0:ref> or the effects of lyrics with a certain meaning (e.g., lyrics expressing homesickness in <ns0:ref type='bibr'>Batcho et al., 2008, or happy and</ns0:ref><ns0:ref type='bibr'>sad lyrics in Brattico et al., 2011)</ns0:ref> but rarely analyzed the elements extracted from lyrics. 
In melody-related research, various musical structural factors (e.g., mode, timbre, tempo, and pitch) have been studied <ns0:ref type='bibr' target='#b33'>(Juslin &amp; Laukka, 2004)</ns0:ref>, and the association between perceived emotion and structural factors has been repeatedly verified in recent decades <ns0:ref type='bibr' target='#b20'>(Gabrielsson, 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Hunter &amp; Schellenberg, 2010)</ns0:ref>. Therefore, to better understand the relationship between lyrics and the perceived emotions of music, can similar methods be used to analyze the structural factors of lyrics? In addition, unlike musical structural factors, which have been summarized in musicology, analyzing lyrical factor types remains a challenge. We noticed that recent linguistic and NLP-based computational studies can provide inspiration.</ns0:p></ns0:div> <ns0:div><ns0:head>NLP-based Lyric Features</ns0:head><ns0:p>NLP technology has been widely used to analyze the emotions expressed or perceived in texts, such as book reviews <ns0:ref type='bibr' target='#b91'>(Zhang, Tong, &amp; Bu, 2019)</ns0:ref>, opinions on social media platforms <ns0:ref type='bibr' target='#b83'>(Xu, Li et al., 2020)</ns0:ref>, movie reviews <ns0:ref type='bibr' target='#b36'>(Kaur &amp; Verma, 2017;</ns0:ref><ns0:ref type='bibr' target='#b43'>Lu &amp; Wu, 2019)</ns0:ref>, party statements <ns0:ref type='bibr' target='#b26'>(Haselmayer &amp; Jenny, 2017)</ns0:ref>, and song lyrics <ns0:ref type='bibr' target='#b64'>(Rachman, Samo, &amp; Fatichah, 2019)</ns0:ref>. Knowledge-based approaches and machine learning-based approaches are two common approaches used for emotion analysis <ns0:ref type='bibr' target='#b42'>(Liu &amp; Chen, 2015)</ns0:ref>. The knowledge-based approach is an unsupervised approach that uses an emotional dictionary or lexicon (a list of words or expressions used to express human emotions) to label emotional words in text <ns0:ref type='bibr' target='#b41'>(Liu, 2012)</ns0:ref>. Thus, a high-quality emotional dictionary is the basis of this approach. In contrast, the machine learning approach is usually a supervised approach that requires a labeled dataset to construct emotion recognition models <ns0:ref type='bibr' target='#b55'>(Peng, Cambria, &amp; Hussain, 2017)</ns0:ref>. It usually comprises a process of (a) extracting text features (including lexical, syntactic, and semantic features), (b) using machine learning algorithms to construct the relationship between the extracted features and the labeled emotions, and (c) predicting the emotions of untagged texts.</ns0:p><ns0:p>When conducting emotion analyses of song lyrics, machine learning-based approaches have been more prevalent in the past two decades. <ns0:ref type='bibr' target='#b39'>Laurier, Grivolla, and Herrera (2008)</ns0:ref> used lyric feature vectors based on latent semantic analysis (LSA) dimensional reduction and audio features to conduct music mood classification. They found that standard distance-based methods and LSA were effective for lyric feature extraction, although the performance of lyric features was inferior to that of audio features. <ns0:ref type='bibr' target='#b58'>Petrie, Pennebaker, and Sivertsen (2008)</ns0:ref> conducted linguistic inquiry and word count (LIWC) analyses to explore the emotional changes in Beatles' lyrics over time. <ns0:ref type='bibr' target='#b54'>Panda et al.
(2013)</ns0:ref> used support vector machines, K-nearest neighbors, and na&#239;ve Bayes algorithms to map the relationship between music emotion and extracted audio and lyric features. In a recent study, <ns0:ref type='bibr' target='#b93'>Zhou, Chen, and Yang (2019)</ns0:ref> applied unsupervised deep neural networks to perform feature learning and found that this method performed well for audio and lyric data and could model the relationships between features and music emotions effectively. Notably, traditional MER research focuses on improving the prediction effect of the MER models, while our study attempted to use an interpretable way to investigate the relationship between lyrics features and music emotions.</ns0:p><ns0:p>Previous studies have shown a variety of methods for extracting lyric features. Although the lyric feature vectors and the deep learning-based features performed well in MER studies, the meaning of these features is often difficult to understand. Thus, considering the interpretability of lyric features, this study selected the LIWC-based method to extract lyric features. The LIWC software package was first developed to analyze text for more than 70 language dimensions by <ns0:ref type='bibr' target='#b57'>Pennebaker, Francis, and Booth (2001)</ns0:ref>. It has been applied for text analysis in psychological health <ns0:ref type='bibr' target='#b71'>(Slatcher &amp; Pennebaker, 2006)</ns0:ref>, physical health <ns0:ref type='bibr' target='#b56'>(Pennebaker, 2004)</ns0:ref>, and lyric studies <ns0:ref type='bibr' target='#b58'>(Petrie, Pennebaker, &amp; Sivertsen, 2008;</ns0:ref><ns0:ref type='bibr'>Pettijohn &amp; Sacco Jr, 2009)</ns0:ref>. The simplified Chinese version of LIWC (SC-LIWC; <ns0:ref type='bibr' target='#b21'>Gao et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b92'>Zhao et al., 2016)</ns0:ref>, which expanded text features for more than 100 dimensions, was also developed in recent years. This technology was considered for lyric feature extraction in this study.</ns0:p></ns0:div> <ns0:div><ns0:head>The Present Research</ns0:head><ns0:p>In sum, the present study investigates the association between LIWC-based lyric features and the perceived emotions of Chinese songs. First, the direct relationships between the independent variables (lyric features) and the dependent variables (perceived emotions of music) are investigated through correlation analysis. Then, a computational modeling method is considered to examine the effects of lyric features on music emotion perception. Since melody and lyrics are inseparable in music, we use the audio and lyric features extracted in music to predict the perceived emotions. By comparing the prediction effects of the models that use lyric features as input and that lack lyric features, we can intuitively witness the effect of lyrics. Moreover, using interpretable and nonlinear machine learning methods to construct prediction models, different forms of association between lyrics and perceived emotions can be observed <ns0:ref type='bibr' target='#b78'>(Vempala &amp; Russo, 2018;</ns0:ref><ns0:ref type='bibr'>Xu, Wen et al., 2020)</ns0:ref>. 
Finally, the constructed MER models are also of practical value because the recognized music emotion information can be used in various fields, such as music recommendation <ns0:ref type='bibr' target='#b14'>(Deng et al., 2015)</ns0:ref>, music information retrieval <ns0:ref type='bibr' target='#b16'>(Downie, 2008)</ns0:ref>, and music therapy <ns0:ref type='bibr' target='#b15'>(Dingle et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>The original music files and emotion annotation results were obtained from the PSIC3839 dataset, a freely available public dataset for MER studies <ns0:ref type='bibr'>(Xu, Yun et al., 2020, unpublished data)</ns0:ref>. In this dataset, the arousal and valence scores of 3839 songs popular in China were manually annotated by 87 university students using 5-point Likert scales. Based on the multi-dimensional emotion space model, <ns0:ref type='bibr' target='#b38'>Lang (1995)</ns0:ref> suggested that emotions can be categorized in a two-dimensional space by valence and arousal; valence ranges from negative to positive, and arousal ranges from passive (low) to active (high). In the PSIC3839 dataset, valence was evaluated from -2 (negative) to 2 (positive), and arousal was evaluated from -2 (not at all) to 2 (very much). We then downloaded the lyrics of the songs from NetEase Cloud Music (https://music.163.com/), a popular music site in China. Considering that the annotators of the PSIC3839 dataset are all native Chinese, only 2372 songs with Chinese lyrics were retained for subsequent analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>Lyric Feature Extraction</ns0:head><ns0:p>To extract the lyric features, the raw data need to be preprocessed. First, since the raw lyric data were downloaded online, they contained a large amount of unwanted information, such as singer, composer, and lyricist names and the words 'man' and 'woman' in duet songs. Thus, we manually filtered out these unwanted information elements. Second, unlike English texts, which are directly composed of separated words, Chinese texts require special tools to divide them into separate words for analysis. For example, the sentence 'he feels happy' in English text is 'hefeelshappy' in Chinese text. Therefore, this study used the Chinese word segmentation tool in the Language Technology Platform <ns0:ref type='bibr' target='#b10'>(Che, Li, &amp; Liu, 2010</ns0:ref>) for text segmentation. Through the above steps, the raw lyrics of each song were processed into Chinese words arranged in order.</ns0:p><ns0:p>After the above data preprocessing, we used SC-LIWC <ns0:ref type='bibr' target='#b21'>(Gao et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b92'>Zhao et al., 2016)</ns0:ref> to extract the lyric features. A total of 98 types of lyric features were calculated for each song, such as the total number of words (WordCount), the proportion of positive emotion words (PosEmo), and the proportion of swear words (Swear). For example, the lyric feature PosEmo, reflecting the usage frequency of positive emotion words in each song, is calculated by dividing the number of positive emotion words in SC-LIWC by the total number of words.
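As a rough illustration of this counting scheme (the mini-lexicon and example words below are hypothetical stand-ins, not the actual SC-LIWC category lists), such a proportion feature can be computed from the segmented words of one song as follows:

```python
# Minimal sketch of a dictionary-based proportion feature; segmented_words
# represents the word-segmentation output for one song's lyrics.
POS_EMO = {"快乐", "幸福", "美丽", "爱"}      # hypothetical positive-emotion words
NEG_EMO = {"孤单", "伤心", "眼泪", "难过"}    # hypothetical negative-emotion words

def liwc_proportions(segmented_words):
    total = len(segmented_words)
    if total == 0:
        return {"WordCount": 0, "PosEmo": 0.0, "NegEmo": 0.0}
    return {
        "WordCount": total,
        "PosEmo": sum(w in POS_EMO for w in segmented_words) / total,
        "NegEmo": sum(w in NEG_EMO for w in segmented_words) / total,
    }

print(liwc_proportions(["我", "的", "爱", "带", "给", "你", "快乐"]))
# {'WordCount': 7, 'PosEmo': 0.2857..., 'NegEmo': 0.0}
```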
All the extracted features are listed and introduced in Supplemental Materials Table <ns0:ref type='table' target='#tab_2'>S1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Audio Feature Extraction</ns0:head><ns0:p>For audio features, this study considered both rhythmic features (by beat and tempo detection) and spectral features related to timbre, pitch, harmony, and so forth (e.g., Mel-frequency cepstrum coefficients, MFCCs). Audio signal preprocessing was first conducted by (a) using a 22050 Hz sampling rate to sample each song and (b) using a short-term Fourier transform to obtain the power spectrogram. Then, using the librosa toolkit <ns0:ref type='bibr' target='#b48'>(McFee et al., 2015)</ns0:ref>, a total of nine low- or mid-level features were extracted, including MFCCs, spectral centroid, spectral bandwidth, spectral roll-off, spectral flatness, spectral contrast, tonal centroid features (tonnetz), chromagram, and tempo. Different spectral features were calculated in different ways. For instance, MFCCs were calculated by selecting the lower cepstral coefficients of a Mel-scaled spectrogram, which was generated by using a Mel filter bank to filter the spectrogram <ns0:ref type='bibr' target='#b50'>(Meyer &amp; Kollmeier, 2009)</ns0:ref>.</ns0:p><ns0:p>Since the extracted features of each song were represented in a subspace of high dimensionality, we conducted feature reduction on each type of feature to reduce the storage and computational space. Principal component analysis (PCA), widely used in MER studies <ns0:ref type='bibr'>(Xu, Wen et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b86'>Yang, Dong, &amp; Li, 2018)</ns0:ref>, was applied to reduce the dimensionality of the features. After conducting the PCA, we selected and combined the top 50 dimensions of each type of audio feature as model inputs. Finally, we scaled each continuous audio feature to a value of 0 to 1 via min-max scaling <ns0:ref type='bibr' target='#b35'>(Kahng, Mantik, &amp; Markov, 2002)</ns0:ref>. The above processed features were used as the final audio inputs of the computational models.</ns0:p></ns0:div> <ns0:div><ns0:head>Construction of Computational Models</ns0:head><ns0:p>The proposed modeling method is shown in Figure <ns0:ref type='figure'>1</ns0:ref>. Since the annotation results of the perceived valence and arousal values are continuous variables, this study formulated the construction of computational models as a regression problem, which predicts a real value from observed features <ns0:ref type='bibr' target='#b67'>(Sen &amp; Srivastava, 2012)</ns0:ref>. We used the audio and lyric features as model inputs and the perceived emotion values as ground truth. To explore the effects of lyric features, three types of input sets were considered: (a) audio features only; (b) lyric features only; and (c) combined audio and lyric features. In addition, the ground truth values were also scaled to a value of 0 to 1 via min-max scaling before modeling.</ns0:p><ns0:p>Two machine learning algorithms were then considered to map the inputs to the perceived emotion values (ground truth data). Multiple linear regression (MLR) was used as the baseline algorithm. Random forest regression (RFR), which has shown good performance in MER tasks (e.g., <ns0:ref type='bibr'>Xu, Wen et al., 2020;</ns0:ref><ns0:ref type='bibr'>Xu et al., 2021)</ns0:ref>, was used as the main algorithm. For each RFR model, we used a grid parameter search to obtain the best modeling parameters. The performances of our models were evaluated by the tenfold cross-validation technique.
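A minimal sketch of this modeling setup, assuming scikit-learn (the feature matrix, ground truth values, and parameter grid below are placeholders rather than the exact configuration used in the study), is:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# X: concatenated audio (+ lyric) features scaled to [0, 1]; y: scaled arousal or valence.
X = np.random.default_rng(0).random((200, 60))     # placeholder feature matrix
y = np.random.default_rng(1).random(200)           # placeholder ground truth

cv = KFold(n_splits=10, shuffle=True, random_state=42)

# Baseline: multiple linear regression evaluated with tenfold cross-validation.
mlr_r2 = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")

# Main model: random forest regression with a grid parameter search.
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=cv, scoring="r2",
)
grid.fit(X, y)
rfr_r2 = cross_val_score(grid.best_estimator_, X, y, cv=cv, scoring="r2")
rfr_rmse = -cross_val_score(grid.best_estimator_, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error")
print(mlr_r2.mean(), rfr_r2.mean(), rfr_rmse.mean())
```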
The prediction accuracy of each regressor was measured by the R 2 statistic and the root mean-squared error (RMSE) as follows: </ns0:p><ns0:formula xml:id='formula_0'>R^2 = 1 - \frac{\sum_{i=1}^{N}(X_i - Y_i)^2}{\sum_{i=1}^{N}(X_i - \bar{X})^2}, \qquad RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(X_i - Y_i)^2}</ns0:formula><ns0:p>where X_i is the annotated (ground truth) value of song i, Y_i is the predicted value, and N is the number of songs.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Data Distribution</ns0:head><ns0:p>As the first step in exploring the data, we created a preliminary description of the emotional annotation results of all songs and the content of the lyrics. Figure <ns0:ref type='figure'>2a</ns0:ref> shows the distribution of the annotated emotions of the songs in the valence-arousal emotion space. We observed that a large proportion of the songs fell in the third quadrant (37.94%; low arousal and negative), followed by the first (34.49%; high arousal and positive) and fourth quadrants (17.07%; high arousal and negative). Pearson correlation analysis shows that the perceived arousal values are positively correlated with valence (r(2371) = 0.537, p &lt; .001).</ns0:p><ns0:p>We then calculated the usage frequency of different words in song sets with different emotions. Excluding commonly used personal pronouns (e.g., 'I' and 'you') and verbs (e.g., 'is' and 'are'), the most frequently used words are presented in the word clouds (see Figure <ns0:ref type='figure'>2b</ns0:ref>). We observed that the words 'love' and 'world' frequently appear in every quadrant of the valence-arousal emotion space, meaning that these are two popular song themes. The words 'happy' and 'beautiful' frequently appear in positive songs, whereas the words 'lonely' and 'recall' frequently appear in negative songs. The above results allow us to intuitively see the difference in word usage in songs with different emotions.</ns0:p></ns0:div> <ns0:div><ns0:head>Correlation Analysis between Perceived Emotions and Lyric Features</ns0:head><ns0:p>In this part, we analyzed how well the independent variables (lyric features) accounted for the dependent variables (perceived arousal and valence values). The lyric features most relevant to arousal and valence are shown in Table <ns0:ref type='table' target='#tab_4'>1 and Table 2</ns0:ref>, respectively. For instance, using Pearson correlation analysis, we found that the perceived arousal values were positively correlated with the total number of words in songs (WordCount, r(2371) = 0.206, p &lt;.01), the mean number of words per sentence (WordPerSentence, r(2371) = 0.179, p &lt;.01), the ratio of Latin words (RateLatinWord, r(2371) = 0.183, p &lt;.01), and the proportion of words related to achievement (Achieve, r(2371) = 0.111, p &lt;.01) and were negatively correlated with the proportion of words related to the past (tPast, r(2371) = -0.124, p &lt;.01) and the proportion of words related to insight (Insight, r(2371) = -0.122, p &lt;.01). For valence, we observed that the perceived valence values were negatively correlated with the proportion of negative emotion words (NegEmo, r(2371) = -0.364, p &lt;.01) and the proportion of words related to sadness (Sad, r(2371) = -0.299, p &lt;.01). The entire correlation results are presented in Supplemental Materials Table S2. The correlation results only reveal the linear relationships between perceived emotions and lyric features. Thus, we then used machine learning methods to investigate other types of relationships.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Prediction Results</ns0:head>
<ns0:p>In this section, we used the MLR and RFR algorithms, different input sets (audio features, lyric features, and combined features), and different ground truth data (arousal and valence) to construct MER models. To obtain relatively good models, a grid parameter search was first conducted to obtain the best performing parameters for each RFR model (the results are shown in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>).</ns0:p><ns0:p>After parameter searching, the performances of the constructed models, evaluated by the tenfold cross-validation technique, are presented in Figure <ns0:ref type='figure'>3</ns0:ref>. Since the tenfold cross-validation technique (which uses 10% of the data as the testing set and the remaining 90% of the instances as training data to train the regressor) used the same folds in the evaluation of different models, a tenfold paired-sample t-test can be applied to compare the model results. For the algorithms, the RFR algorithm performed better than the MLR algorithm in all the constructed models. For example, a tenfold paired-sample t-test showed that, using combined features to predict perceived arousal values, the RFR-based model reached a mean R 2 value of 0.631 and a mean RMSE value of 0.147, significantly better than the MLR-based model (R 2 = 0.532, t(9) = 4.206, p &lt; .01, d = 1.883; RMSE = 0.165, t(9) = -4.012, p &lt; .01, d = -1.960). Therefore, the subsequent analysis was only conducted on the models based on the RFR algorithm.</ns0:p><ns0:p>For the recognition models of perceived arousal values, the paired-sample t-test showed that the model using audio features as inputs performed significantly better than the model using lyric features (R 2 : t(9) = 36.335, p &lt; .001, d = 15.219; RMSE: t(9) = -34.693, p &lt; .001, d = -12.167). Although the model using combined features (R 2 = 0.631, RMSE = 0.147) performed slightly better than the model using audio features (R 2 = 0.629, RMSE = 0.147), there was no significant effect (R 2 : t(9) = 0.134, p = .896, d = 0.063; RMSE: t(9) = -0.231, p = .822, d = -0.114). These results revealed that the perceived arousal of music mainly depends on audio information, while lyric information contributes little.</ns0:p><ns0:p>For valence, the paired-sample t-test showed that, although the best performing recognition model of perceived valence values performed worse than the arousal model (R 2 : t(9) = -6.700, p &lt; .001, d = -3.271; RMSE: t(9) = 11.539, p &lt; .001, d = 6.094), both lyric and audio features played important roles. The RFR-based model using audio features only reached a mean R 2 value of 0.371 and a mean RMSE value of 0.214, and the model using lyric features as inputs reached a mean R 2 value of 0.370 and a mean RMSE value of 0.214. When the audio and lyric features were combined as inputs, the new model achieved a mean R 2 value of 0.481 and a mean RMSE value of 0.194, significantly better than the previous two models. These results indicated that the perceived valence of music was influenced by both audio and lyric information.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Interpretability</ns0:head><ns0:p>In the last step, we attempted to explain the best performing RFR-based models by examining the information gain of the features. Quiroz, Geangu, &amp; Yong (2018) noted that models constructed using the RFR algorithm can be interpreted by calculating feature importance.
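A minimal sketch of this interpretation step, assuming scikit-learn (the data, fold setup, and feature names below are placeholders), averages the impurity-based importances over the ten training folds:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# X, y, and feature_names stand in for the modelling inputs described above.
rng = np.random.default_rng(0)
X, y = rng.random((200, 60)), rng.random(200)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]

cv = KFold(n_splits=10, shuffle=True, random_state=42)
importances = []
for train_idx, _ in cv.split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=42)
    model.fit(X[train_idx], y[train_idx])
    importances.append(model.feature_importances_)   # impurity-based importance per fold

mean_importance = np.mean(importances, axis=0)
top = np.argsort(mean_importance)[::-1][:30]          # top-30 features by mean importance
for i in top[:5]:
    print(feature_names[i], round(mean_importance[i], 4))
```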
Thus, the feature importance of the best performing recognition models of the perceived arousal and valence was calculated and is presented in Figure <ns0:ref type='figure'>4</ns0:ref>. Since tenfold cross-validation was used to evaluate the models, the coefficients of feature importance might differ when predicting different test sets <ns0:ref type='bibr'>(Xu, Wen et al., 2020)</ns0:ref>. Thus, the distribution of feature importance was arranged in descending order of the mean value, and only the top 30 features were included for visibility.</ns0:p><ns0:p>In the arousal recognition model, the audio features played a decisive role, which accounted for 95.01% of the model. The first PCA components of spectral flatness, spectral contrast, chromagram, MFCCs, and spectral bandwidth are the five most contributing features, accounting for 30.71%, 12.38%, 8.77%, 8.62%, and 4.26%, respectively. For the lyric features, the feature importance results are similar to the results of the correlation analysis. The total number of words in songs (WordCount) and the mean number of words per sentence (WordPerSentence) were the top two contributing lyric features, accounting for 0.19% and 0.15% of the model.</ns0:p><ns0:p>For valence, the audio features explained 73.44% of the model. The first PCA components of spectral contrast and spectral flatness also showed good predictive effects on the perceived valence, accounting for 24.82% and 10.42% of the model, respectively. The proportion of negative emotion words (NegEmo) was the most important lyric feature (accounting for 5.32%), followed by the proportion of words related to sadness (Sad, 0.56%), positive emotions (PosEmo, 0.39%), tentativeness (Tentat, 0.23%), and so on. These findings also support the opinion that lyric features can provide more information for recognition models of valence than for recognition models of arousal.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study investigated the effects of LIWC-based lyric features on the perceived arousal and valence of music. We first explored the data distribution, and several interesting results were found. First, the emotional distribution (in the valence-arousal emotion space) of music in this study is similar to previous works. Various studies have shown that the perceived valence and arousal of music were positively correlated (e.g., <ns0:ref type='bibr' target='#b12'>Chen, Yang et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b24'>Greenberg et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b73'>Speck et al., 2011)</ns0:ref>. This reveals that the relationship between valence and arousal in music is relatively constant. Second, analyzing the songs' word usage frequency in different quadrants of the valence-arousal emotion space, we then found that the word 'love' frequently appears in each quadrant. The cross-cultural study of <ns0:ref type='bibr' target='#b19'>Freeman (2012)</ns0:ref> has shown that 'romantic love' is the top topic category of Chinese-language pop songs (82.5%), while Western pop songs with the topic of 'romantic love' accounted for only 40%. The high usage frequency of the word 'love' in this study further confirms that love-themed songs are the mainstream in Chinese pop music.</ns0:p><ns0:p>In addition, we observed that the words 'happy' and 'beautiful' frequently appear in positive songs, whereas the words 'lonely' and 'recall' frequently appear in negative songs. 
This intuitive phenomenon shows that, in general, perceived music emotions are related to lyrics, which encourages us to further explore lyric features.</ns0:p><ns0:p>Although LIWC-based lyric features have been considered in computational studies to improve the model effect (e.g., <ns0:ref type='bibr' target='#b31'>Hu, Chen, &amp; Yang, 2009;</ns0:ref><ns0:ref type='bibr'>Malheiro et al., 2016)</ns0:ref>, the role of LIWC-based lyric features has never been analyzed and discussed individually. Thus, this study then investigated the linear relationship between each lyric feature and the perceived emotion of music. In general, valence is more correlated with the features reflecting the meaning of the lyric text, while arousal is more correlated with the features reflecting the structure of the lyrics. For example, we observed that the perceived valence values were negatively correlated with the usage frequency of words related to negative emotions (e.g., 'sad', 'horrible', 'angry', and 'bitter'), tentativeness (e.g., 'seem', 'dim', 'guess', and 'dubitation'), insight (e.g., 'understanding', 'notice', 'analyze', and 'know'), and exclusiveness (e.g., 'exclude', 'forget', 'ignore', and 'cancel'). It is obvious that the usage of emotional words is related to perceived emotions because emotional words shape emotional percepts <ns0:ref type='bibr' target='#b22'>(Gendron et al., 2012)</ns0:ref>. Words related to tentativeness are often used in sad love songs to express doubts about love (e.g., 'In the next few days, I guess you won't show up either'), and insight words are often used with negative words to portray the sad atmosphere (e.g., 'no one notices me, only a small raindrop accompanies me to wait for dawn'). We believe that some words in lyrics are often used to describe certain behaviors, feelings or scenes, which are related to negative emotions. For instance, nostalgia, characterized by sadness, insomnia, loss of appetite, pessimism, and anxiety <ns0:ref type='bibr' target='#b5'>(Batcho et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b27'>Havlena &amp; Holak, 1991)</ns0:ref>, is one of the themes of songs. Thus, words that describe nostalgia in lyrics are often related to negative emotions. This phenomenon may appear in songs with various themes, such as farewell, war, and tragic love stories.</ns0:p><ns0:p>For arousal, we observed that the perceived arousal values were positively correlated with the total number of lyric words (WordCount) and the mean number of words per sentence (WordPerSentence). We wonder that the size of music may play an important role, because the duration of music is positively correlated with the total number of lyric words (r(2371) = 0.219, p &lt; .01); that is, the longer the music, the more words are in the lyrics. Unfortunately, as shown in Supplemental Materials Figure <ns0:ref type='figure'>S1</ns0:ref>, the duration of music was negatively correlated with the arousal values (r(2371) = -0.111, p &lt; .01). In addition, <ns0:ref type='bibr' target='#b28'>Holbrook and Anand (1990)</ns0:ref> found that the tempo of music is positively correlated with listeners' perceived arousal. Thus, another assumption is that fast-paced songs tend to match more lyric words. Unfortunately, the above hypothesis was not confirmed in the current data set (r(2371) = -0.039, p = .058). This result reminds us to analyze the relationship between audio features and the number of words. 
We found that the total number of lyric words was positively correlated with chromagram and negatively correlated with MFCCs (see Supplemental Materials Table <ns0:ref type='table' target='#tab_6'>S3</ns0:ref>). Chromagrams reflect the pitch components of music over a short time interval <ns0:ref type='bibr' target='#b65'>(Schmidt, Turnbull, &amp; Kim, 2010)</ns0:ref>. Previous work on screams has found a significant tendency to perceive higher-pitched screams Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>as more emotionally arousing than lower-pitched screams <ns0:ref type='bibr' target='#b66'>(Schwartz &amp; Gouzoules, 2019)</ns0:ref>. However, whether the above phenomenon holds in music is still unknown, and it is worthy of further research. While MFCCs reflect the nonlinear frequency sensitivity of the human auditory system <ns0:ref type='bibr' target='#b81'>(Wang et al., 2012)</ns0:ref>, it is difficult to map well-known music features in conventional musical writing. The low-level audio features in this study may not directly explain the relationship between melody features and lyrics. In fact, <ns0:ref type='bibr' target='#b49'>McVicar, Freeman, and De Bie (2011)</ns0:ref> found it hard to interpret the correlations between arousal and lyric features. Therefore, how to map the relationships among arousal, melody, and lyrics still needs further investigation.</ns0:p><ns0:p>The above correlation analysis reflects the direct connection between lyric features and perceived emotions. We then used audio features and lyric features to construct MER models. By comparing the results of models and calculating feature importance to interpret the constructed models, we investigated the role of lyric features and obtained two major discoveries. First, we found that, compared with lyric features, audio features played a decisive role in the MER models for both perceived arousal and perceived valence. From the perspective of computational modeling, this finding confirms previous conclusions that melodic information may be more dominant than lyrics in conveying emotions <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006)</ns0:ref>. However, previous works used individual moods affected by music to evaluate the ability of music to convey emotions <ns0:ref type='bibr' target='#b0'>(Ali &amp; Peynircio&#287;lu, 2006;</ns0:ref><ns0:ref type='bibr' target='#b72'>Sousou, 1997)</ns0:ref>, which is not equivalent to the perceived emotions of music. Thus, our study provided more direct evidence that melody information plays a decisive role in the perception of music emotions, and we believe that this result can be generalized to all countries. The second major finding was that, unlike the uselessness of the lyric features in the arousal recognition model, lyric features can significantly improve the prediction effect of the valence recognition model. Feature importance analysis also shows that lyric features, such as the proportion of words related to sadness (Sad), positive emotions (PosEmo), and tentativeness (Tentat), played important roles in the valence recognition model. This finding was consistent with that of <ns0:ref type='bibr' target='#b29'>Hu, Downie, &amp; Ehmann (2009)</ns0:ref>, which showed that lyrics can express the valence dimension of emotion but usually do not express much about the arousal dimension of emotion, rather than the opposite finding shown by <ns0:ref type='bibr'>Malheiro et al. 
(2016)</ns0:ref>.</ns0:p><ns0:p>We hypothesize that the main reason for the difference in results is that our study and the study of <ns0:ref type='bibr' target='#b29'>Hu, Downie, &amp; Ehmann (2009)</ns0:ref> both focused on Chinese music and participants, but the study of <ns0:ref type='bibr'>Malheiro et al. (2016)</ns0:ref> was conducted in Portugal. Cross-cultural studies have shown that although listeners are similarly sensitive to musically expressed emotion (which is facilitated by psychophysical cues; <ns0:ref type='bibr' target='#b2'>Argstatter, 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Balkwill &amp; Thompson, 1999)</ns0:ref>, differences still exist <ns0:ref type='bibr' target='#b90'>(Zacharopoulou &amp; Kyriakidou, 2009)</ns0:ref>. Therefore, we believe that in the Chinese environment, perceived music valence is affected by lyrics, although its influence is not as strong as that of melody information.</ns0:p><ns0:p>As mentioned before, this study is also of practical value. The computational modeling method was first proposed in the field of MER, which aims to automatically recognize the perceptual Manuscript to be reviewed</ns0:p><ns0:p>Computer Science emotion of music <ns0:ref type='bibr' target='#b86'>(Yang, Dong, &amp; Li, 2018)</ns0:ref>. There are many existing songs, but it is difficult for people to manually annotate all emotional information. Thus, MER technology is urgently needed and has made great progress in the past two decades. Recognized emotion information can be used in various application scenarios <ns0:ref type='bibr' target='#b14'>(Deng et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b15'>Dingle et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b16'>Downie, 2008)</ns0:ref>. The collected data and proposed methods in this study can also provide references for future MER research. Notably, the computational modeling methods in MER studies pursue model effects and prediction accuracy, but when they are applied in music psychology research, the interpretability of the model should be taken into account <ns0:ref type='bibr' target='#b78'>(Vempala &amp; Russo, 2018;</ns0:ref><ns0:ref type='bibr'>Xu, Wen et al., 2020)</ns0:ref>. Therefore, we chose MLR and RFR to construct MER models. How to integrate machine learning methods into music psychology research more effectively still needs more exploration.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The present work investigated the effects of LIWC-based lyric features on the perceived arousal and valence of music by analyzing 2372 Chinese songs. Correlation analysis shows that, for example, the perceived arousal of music was positively correlated with the total number of lyric words (WordCount, r(2371) = 0.206, p &lt;.01) and the mean number of words per sentence (WordPerSentence, r(2371) = 0.179, p &lt;.01) and was negatively correlated with the proportion of words related to the past (tPast, r(2371) = -0.124, p &lt;.01) and insight (Insight, r(2371) = -0.122, p &lt;.01). The perceived valence of music was negatively correlated with the proportion of negative emotion words (NegEmo, r(2371) = -0.364, p &lt;.01) and the proportion of words related to sadness (Sad, r(2371) = -0.299, p &lt;.01). We then used audio and lyric features as inputs to construct MER models. 
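To illustrate this modeling step, a minimal sketch of a tenfold cross-validated RFR model comparing audio-only and combined (audio plus lyric) inputs is given below, assuming scikit-learn. The hyperparameter values follow the combined-features valence row of Table 3; the variables X_audio, X_lyric and y_valence are placeholders and the scoring setup is only one reasonable way to obtain R2 and RMSE estimates.

# Sketch: tenfold cross-validation of an RFR-based recognition model,
# comparing audio-only against audio + lyric feature inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

def evaluate_rfr(X, y, label):
    model = RandomForestRegressor(n_estimators=191, max_depth=43,
                                  min_samples_leaf=5, min_samples_split=25,
                                  max_features=0.6, random_state=0)  # values as in Table 3 (CF, valence)
    scores = cross_validate(model, X, y, cv=10,
                            scoring=("r2", "neg_root_mean_squared_error"))
    print(f"{label}: R2 = {scores['test_r2'].mean():.3f}, "
          f"RMSE = {-scores['test_neg_root_mean_squared_error'].mean():.3f}")

# Example usage (feature matrices prepared beforehand):
# evaluate_rfr(X_audio, y_valence, "audio features only")
# evaluate_rfr(np.hstack([X_audio, X_lyric]), y_valence, "audio + lyric features")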
The performance of RFR-based models shows that, for the recognition models of perceived valence, adding lyric features can significantly improve the prediction effect of the model using audio features only; for the recognition models of perceived arousal, lyric features are almost useless. Calculating the importance of features to interpret the MER models, we observed that the audio features played a decisive role in the recognition models of both perceived arousal and perceived valence. Unlike the uselessness of the lyric features in the arousal recognition model, several lyric features, such as the proportion of words related to sadness (Sad), positive emotions (PosEmo), and tentativeness (Tentat), played important roles in the valence recognition model. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:p>The proposed modeling method.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>Data distribution of the 2372 Chinese songs in this study. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:p>Performance of constructed MER models with different inputs and algorithms. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:p>Distribution of feature importance for RFR-based recognition models of perceived arousal and perceived valence. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>(&#119935; &#119946; -&#119936; &#119946; ) &#120784; PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021) Manuscript to be reviewed Computer Science where is the perceived emotion value (ground truth) of each song, is the mean value of the &#119935; &#119946; &#119935; perceived emotion values, is the predicted result of each song, and is the number of testing &#119936; &#119946; &#119925; samples in the tenfold cross-validation technique.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>(a) Distribution of annotated emotions in the valence-arousal emotion space. (b) Word clouds of the top words used in each quadrant of the valence-arousal emotion space. The font size depends on the usage frequency of the word (positive correlation). PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(a) Prediction results of perceived arousal recognition models, measured by R 2 statistics. (b) Prediction results of perceived arousal recognition models, measured by RMSE. (c) Prediction results of perceived valence recognition models, measured by R 2 statistics. 
(d) Prediction results of perceived valence recognition models, measured by RMSE. Error bars indicate the standard deviations. PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Arranged in descending order of the mean value, the top 30 features were included for visibility, and the trend of the remaining features was approximately the same. Error bars indicate the standard deviations.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,260.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Correlation between lyric features and perceived arousal in music</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PastM: proportion of past tense markers; Insight: proportion of words related to insight; Time:</ns0:cell></ns0:row><ns0:row><ns0:cell>proportion of words related to time; Achieve: proportion of words related to achievement;</ns0:cell></ns0:row><ns0:row><ns0:cell>tPast: proportion of words related to the past; WordCount: the total number of words;</ns0:cell></ns0:row><ns0:row><ns0:cell>WordPerSentence: average number of words per sentence; RateLatinWord: the ratio of Latin</ns0:cell></ns0:row><ns0:row><ns0:cell>words. **Correlation is significant at the 0.01 level (2-tailed).</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Correlation between lyric features and perceived arousal in music</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Arousal</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2. PastM</ns0:cell><ns0:cell>-.121**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3. Insight</ns0:cell><ns0:cell>-.122**</ns0:cell><ns0:cell>.168**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4. Time</ns0:cell><ns0:cell>-.115**</ns0:cell><ns0:cell>.299**</ns0:cell><ns0:cell>.139**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5. Achieve</ns0:cell><ns0:cell>.111**</ns0:cell><ns0:cell>0.023</ns0:cell><ns0:cell>.146**</ns0:cell><ns0:cell>-0.006</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6. tPast</ns0:cell><ns0:cell>-.124**</ns0:cell><ns0:cell>.517**</ns0:cell><ns0:cell>.153**</ns0:cell><ns0:cell>.322**</ns0:cell><ns0:cell>0.014</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>7. 
WordCount</ns0:cell><ns0:cell>.206**</ns0:cell><ns0:cell>.056**</ns0:cell><ns0:cell>.080**</ns0:cell><ns0:cell>0.004</ns0:cell><ns0:cell>.073**</ns0:cell><ns0:cell>0.014</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>8. WordPerSentence .179**</ns0:cell><ns0:cell>.062**</ns0:cell><ns0:cell>.105**</ns0:cell><ns0:cell>0.021</ns0:cell><ns0:cell>.088**</ns0:cell><ns0:cell>0.025</ns0:cell><ns0:cell>.873**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9. RateLatinWord</ns0:cell><ns0:cell>.183**</ns0:cell><ns0:cell>-0.029</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>-0.032</ns0:cell><ns0:cell>.103**</ns0:cell><ns0:cell>-0.031</ns0:cell><ns0:cell>.174**</ns0:cell><ns0:cell>.114**</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>Notes: **Correlation is significant at the 0.01 level (2-tailed).</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>Abbreviations: PastM: proportion of past tense markers; Insight: proportion of words related to insight; Time: proportion of words related to time; Achieve: proportion of words related to achievement; tPast: proportion of words related to the past; WordCount: the total number of words; WordPerSentence: average number of words per sentence; RateLatinWord: the ratio of Latin words.PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Correlation between lyric features and perceived valence in music</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Adverb: proportion of adverbs; TenseM: proportion of tense markers; PastM: proportion of</ns0:cell></ns0:row><ns0:row><ns0:cell>past tense markers; NegEmo: proportion of negative emotion words; Sad: proportion of</ns0:cell></ns0:row><ns0:row><ns0:cell>words related to sadness; CogMech: proportion of words related to cognition; Tentat:</ns0:cell></ns0:row><ns0:row><ns0:cell>proportion of words related to tentativeness; tPast: proportion of words related to the past.</ns0:cell></ns0:row><ns0:row><ns0:cell>**Correlation is significant at the 0.01 level (2-tailed).</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Correlation between lyric features and perceived valence in musicNotes: **Correlation is significant at the 0.01 level (2-tailed).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Arousal</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2. PastM</ns0:cell><ns0:cell>-.121**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3. Insight</ns0:cell><ns0:cell>-.122**</ns0:cell><ns0:cell>.168**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4. 
Time</ns0:cell><ns0:cell>-.115**</ns0:cell><ns0:cell>.299**</ns0:cell><ns0:cell>.139**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5. Achieve</ns0:cell><ns0:cell>.111**</ns0:cell><ns0:cell>0.023</ns0:cell><ns0:cell>.146**</ns0:cell><ns0:cell>-0.006</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6. tPast</ns0:cell><ns0:cell>-.124**</ns0:cell><ns0:cell>.517**</ns0:cell><ns0:cell>.153**</ns0:cell><ns0:cell>.322**</ns0:cell><ns0:cell>0.014</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>7. WordCount</ns0:cell><ns0:cell>.206**</ns0:cell><ns0:cell>.056**</ns0:cell><ns0:cell>.080**</ns0:cell><ns0:cell>0.004</ns0:cell><ns0:cell>.073**</ns0:cell><ns0:cell>0.014</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>8. WordPerSentence</ns0:cell><ns0:cell>.179**</ns0:cell><ns0:cell>.062**</ns0:cell><ns0:cell>.105**</ns0:cell><ns0:cell>0.021</ns0:cell><ns0:cell>.088**</ns0:cell><ns0:cell>0.025</ns0:cell><ns0:cell>.873**</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9. RateLatinWord</ns0:cell><ns0:cell>.183**</ns0:cell><ns0:cell>-0.029</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>-0.032</ns0:cell><ns0:cell>.103**</ns0:cell><ns0:cell>-0.031</ns0:cell><ns0:cell>.174**</ns0:cell><ns0:cell>.114**</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table><ns0:note>Abbreviations: Adverb: proportion of adverbs; TenseM: proportion of tense markers; PastM: proportion of past tense markers; NegEmo: proportion of negative emotion words; Sad: proportion of words related to sadness; CogMech: proportion of words related to cognition; Tentat: proportion of words related to tentativeness; tPast: proportion of words related to the past. PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The best performing parameters for each random forest regression AF indicates audio features; LF indicates lyric features; and CF indicates combined features.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The best performing parameters for each random forest regression Abbreviations: AF indicates audio features; LF indicates lyric features; and CF indicates combined features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Ground</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Parameters</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Inputs</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>truth</ns0:cell><ns0:cell /><ns0:cell cols='5'>n_estimators max_depth min_samples_leaf min_samples_split max_features</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>AF</ns0:cell><ns0:cell>156</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Arousal</ns0:cell><ns0:cell>LF</ns0:cell><ns0:cell>196</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CF</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>AF</ns0:cell><ns0:cell>179</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Valenc</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LF</ns0:cell><ns0:cell>193</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0.8</ns0:cell></ns0:row><ns0:row><ns0:cell>e</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>CF</ns0:cell><ns0:cell>191</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>0.6</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63905:1:0:NEW 29 Sep 2021)Manuscript to be reviewed</ns0:note> </ns0:body> "
"Dear editor and reviewers, Thank you for your helpful comments about our manuscript “Using machine learning analysis to interpret the relationship between music emotion and lyric features.” Following the insightful comments and helpful suggestions, we have revised the manuscript, and the revised parts have been highlighted as yellow. Our point-by-point responses to the reviewers’ comments are as follows. To Reviewer 1#, 1. The authors could stress the definitions for the 'musical' terms used in the paper, such as: melody, lyrics, audio, valence, arousal; Response: Thanks for your helpful suggestion! We have added the definitions of the 'musical' terms in our manuscript. The added text is as follows: (a) lines 81-83, melody and lyrics: “A melody is a linear succession of musical tones that the listener perceives as a single entity (van Waesberghe, 1955); and lyrics is the composition in verse which is sung to a melody to constitute a song”; (b) lines 51-53, audio: “audio most commonly refers to sound, as it is transmitted in signal form”; and (c) lines 201-204, valence and arousal: “Based on the multi-dimensional emotion space model, Lang (1995) suggested that emotions can be categorized in a two-dimensional space by valence and arousal; valence ranges from negative to positive, and arousal ranges from passive (low) to active (high).” 2. How can someone tell if a music is positive or negative without lyric? In the literature, is there a study relating sound features to the emotional response? Response: We apologize for not clearly introducing the relationship between sound features and emotional response. In fact, various studies have proved that sound features and music emotions are highly correlated, and some reviews (e.g., Gabrielsson, 2016; Yang, Dong, & Li, 2018) have summarized relevant researches. Considering this reviewer’s comment, we have added a brief mention about the relationship between sound features and emotional response as follows (lines 54-55): “Previous works have shown that sound features were highly correlated with music emotions (Gabrielsson, 2016; Yang, Dong, & Li, 2018), but the lyric features have been relatively less discussed.” References Gabrielsson A. 2016. The relationship between musical structure and perceived expression. In: Susan H, Ian C, Michael Thaut T, eds. The Oxford Handbook of Music Psychology. Oxford: Oxford University Press, 215–232. Yang X, Dong Y, Li J. 2018. Review of data features-based music emotion recognition methods. Multimedia Systems 24:365–389. DOI: 10.1007/s00530-017-0559-4. 3. The authors mention that the results may depend on the country where the study is conducted since the local culture may affect the way an individual interprets the music. In which extent the results presented here may be generalized to all countries? Response: Thanks for the insightful comment! As mentioned in Discussion, one of our findings, that lyric features can improve the prediction effect of the valence recognition model, was consistent with the finding of a music emotion recognition (MER) study conducted in China, but contrary to that of a study conducted in Portugal. Therefore, we guess that, for people in different countries, lyric features have different effects on music valence perception. However, we also found that the audio features (sound features) played a decisive role in MER models. We believe that this result can be generalized to all countries, because various studies have proved that sound features are related to music emotions. 
And the manuscript has been revised as follows (lines 450-452): “Thus, our study provided more direct evidence that melody information plays a decisive role in the perception of music emotions, and we believe that this result can be generalized to all countries.” 4. What is the difference between music emotion (Panda et al. 2013) and the idea of emotion perception in music? I think the authors could emphasize how and why this work is different from Panda et al. 2013. Response: In MER studies (including Panda et al., 2013), music emotion refers to the perceived emotion of music, which is the same as this work. However, traditional MER research focuses on improving the prediction effect of the MER models, while our study used an interpretable machine learning model to investigate the relationship between lyrics features and music emotions. The difference in research purpose is the biggest difference between this research and previous MER works. Considering this comment, we have added a brief mention about the difference between previous work and this study as follows (lines 163-166): “Notably, traditional MER research focuses on improving the prediction effect of the MER models, while our study attempted to use an interpretable way to investigate the relationship between lyrics features and music emotions.” 5. Is there an explanation for the correlation between arousal and the total number of words of a song? Do you think that the duration of the music play a significant role? In other words, is there a threshold (minimum number of words / duration) that has to be reached so a music can trigger an emotional response? Response: Thanks for your helpful comments! First, although we found a correlation between arousal and the total number of words of a song, we think this may only be a superficial phenomenon. We guess that there may be some intermediary factors. For example, a song with a larger number of words in lyrics may have a faster rhythm, leading to higher perceptual arousal. However, this hypothesis still needs further experimental verification. Second, although the duration was not considered in this work, we agree with this reviewer that the duration of the music may also play a significant role in music emotion perception. In fact, previous MER studies and music psychology researches have discussed this issue. For perceived emotion of music (emotion conveyed by music), Xiao et al. (2008) found that the minimum desirable duration was 8 seconds. For felt emotion of music (human emotion aroused by music), music psychology studies tend to use longer musical stimuli because it takes a longer time for human emotions to be aroused (e.g., Juslin, Harmat, & Eerola, 2014). References Xiao, Z., Dellandrea, E., Dou, W., & Chen, L. (2008, June). What is the best segment duration for music mood analysis?. In 2008 International Workshop on Content-Based Multimedia Indexing (pp. 17-24). IEEE. Juslin, P. N., Harmat, L., & Eerola, T. (2014). What makes music emotionally significant? Exploring the underlying mechanisms. Psychology of Music, 42(4), 599-623. 6. The authors could explain how the numbers 2371 and 18 (in r(2371) and t(18), respectively) were obtained. Response: The numbers 2371 and 18 indicate the degrees of freedom (DF), which are the numbers of values in the final calculation of a statistic that are free to vary [1]. 
DF is usually reported together with statistical results, and it is usually calculated as follows: DF = n – k, where n is the number of samples, and k is the number of restricted conditions or variables. In Pearson correlation analysis, we calculated the correlation between two variables based on the sample of 2372 songs, so the sample size n is 2372. The k is 1 in Pearson correlation analysis, so DF is 2371 here (reported together with correlation coefficient r(2371)). In paired sample t-test, we compared the predictive results of the tenfold cross-validation, so the sample size n is 10. We apologize for using the method for calculating the DF in independent sample t-test (DF = n1 +n2 -2) to calculate the DF in paired sample t-test (DF = n -1). Thus, the number should be 9 in t(18). We have replaced “t(18)” with “t(9)” in our manuscript (lines 326-341). [1] 'Degrees of Freedom'. Glossary of Statistical Terms. Animated Software. Retrieved 2008-08-21. 7. The authors could discuss about the use of 'extensive' features (that may depend on the song size) instead of 'intensive' ones (normalized, that may not depend on the song size attributes such as the number of words). If an emotional response (like arousal) is triggered by the song size, does it mean that a longer song will trigger a more intense emotional response? I wonder this relationship is always linear, in the sense that a song that is too long (too short) will trigger the same kind of response that a medium size song. It would be nice to see a two dimensional histogram showing the intensity of the emotional response and the size of the music. Response: Thanks for your helpful suggestions! Following your suggestion, we have added a discussion about the relationship between the size of the music and the emotional response. And a relevant two-dimensional histogram has been added in the Supplemental Material Figure S1. The revised text is as follows (lines 418-424): “For arousal, we observed that the perceived arousal values were positively correlated with the total number of lyric words (WordCount) and the mean number of words per sentence (WordPerSentence). We wonder that the size of music may play an important role, because the duration of music is positively correlated with the total number of lyric words (r(2371) = 0.219, p < .01); that is, the longer the music, the more words are in the lyrics. Unfortunately, as shown in Supplemental Materials Figure S1, the duration of music was negatively correlated with the arousal values (r(2371) = -0.111, p < .01).” Figure S1. The relationship between music duration and arousal. Error bars indicate standard errors. To Reviewer 2#, 1. The manuscript seems somewhat with grammatical/syntax and typographical problems. I leave it to the authors to resolve these copyediting problems by actually thoroughly reading the manuscript. Problems of this sort should definitely not appear in print. Response: Thanks for your suggestion! Following your suggestion, we have thoroughly read our manuscript, and have revised some grammatical problems. In addition, this manuscript has been edited by Language Editing Services. "
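To illustrate the degrees-of-freedom point made in the response above, the following small sketch runs a paired t-test over ten cross-validation fold scores, giving df = n − 1 = 9 and hence a statistic reported as t(9). The fold scores here are invented for illustration only.

# Illustration only: paired t-test over ten CV folds has df = 9.
from scipy.stats import ttest_rel

r2_audio    = [0.41, 0.39, 0.44, 0.40, 0.42, 0.38, 0.43, 0.41, 0.40, 0.42]  # invented
r2_combined = [0.45, 0.43, 0.47, 0.44, 0.46, 0.42, 0.47, 0.45, 0.44, 0.46]  # invented

t, p = ttest_rel(r2_combined, r2_audio)
print(f"t({len(r2_audio) - 1}) = {t:.2f}, p = {p:.4f}")   # reported as t(9)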
Here is a paper. Please give your review comments after reading it.
290
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The Internet of Things (IoT) paradigm keeps growing, and many different IoT devices, such as smartphones and smart appliances, are extensively used in smart industries and smart cities. The benefits of this paradigm are obvious, but these IoT environments have brought with them new challenges, such as detecting and combating cybersecurity attacks against cyber-physical systems. This paper addresses the real-time detection of security attacks in these IoT systems through the combined used of Machine Learning (ML) techniques and Complex Event Processing (CEP). In this regard, in the past we proposed an intelligent architecture that integrates ML with CEP, and which permits the definition of event patterns for the real-time detection of not only specific IoT security attacks, but also novel attacks that have not previously been defined. Our current concern, and the main objective of this paper, is to ensure that the architecture is not necessarily linked to specific vendor technologies and that it can be implemented with other vendor technologies while maintaining its correct functionality. We also set out to evaluate and compare the performance and benefits of alternative implementations. This is why the proposed architecture has been implemented by using technologies from different vendors: firstly, the Mule Enterprise Service Bus (ESB) together with the Esper CEP engine; and secondly, the WSO2 ESB with the Siddhi CEP engine. Both implementations have been tested in terms of performance and stress, and they are compared and discussed in this paper. The results obtained demonstrate that both implementations are suitable and effective, but also that there are notable differences between them: the Mule-based architecture is faster when the architecture makes use of two message broker topics and compares different types of events, while the WSO2-based one is faster when there is a single topic and one event type, and the system has a heavy workload.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 34</ns0:head><ns0:p>Over the past few years, expectations regarding the use of IoT devices have risen significantly. According 35 to data published by the IoT Analytics company, since 2015 there has been a significant increase in the 36 use of IoT devices, with 7000 million of them being registered in 2018, and this figure is estimated to 37 reach 21500 million in 2025 <ns0:ref type='bibr' target='#b20'>(Lueth, 2018)</ns0:ref>. With this increase in the use of such devices, new security 38 challenges also arise, such as ensuring the security of IoT devices <ns0:ref type='bibr' target='#b2'>(Bertino et al., 2016)</ns0:ref>. Although there 39 implementation of the architecture analogous to the one presented in <ns0:ref type='bibr' target='#b30'>Rold&#225;n et al. (2020)</ns0:ref> and replacing the technologies by the ones in the WSO2 suite. 
It also requires the implementation of a realistic security attack environment in an IoT network by carrying out various attacks against the TCP, UDP and MQTT protocols, as well as analyzing the response of the architectures in terms of performance and stress tests.</ns0:p><ns0:p>Therefore, the main aim of this paper is twofold: firstly, we aim to demonstrate that our intelligent architecture, which integrates CEP and ML in order to detect IoT security attacks in real time, can be implemented with different integration platforms such as Mule and WSO2, different CEP engines such as Esper and Siddhi and different ML algorithms such as linear regression <ns0:ref type='bibr' target='#b21'>(Montgomery et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Secondly we aim to provide a comprehensive analysis of the performance and benefits of the architecture depending on the different vendor technologies used for its implementation; in particular, a comparison of the architecture implementation with Mule and Esper versus WSO2 and Siddhi is included. In this way, we provide a comparative analysis that can be very useful for the developer when choosing between one technology and another for the implementation of the architecture, depending on the requirements of the specific application domain and case study.</ns0:p><ns0:p>In addition to the research questions and the objectives to be achieved, in this work we rely on a series of assumptions that can be extracted from different works, These are:</ns0:p><ns0:p>&#8226; CEP works successfully in IoT environments. There are different works in which CEP architectures are sucessfully deployed in IoT environments <ns0:ref type='bibr' target='#b30'>(Rold&#225;n et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Corral-Plaza et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; CEP engines and ESBs from different vendors can be integrated with our architecture to detect cybersecurity threats in real time: this architecture has already been deployed with Mule <ns0:ref type='bibr' target='#b30'>(Rold&#225;n et al., 2020)</ns0:ref> and there are works describing how to deploy WSO2 in an IoT environment <ns0:ref type='bibr' target='#b13'>(Fremantle, 2015)</ns0:ref>.</ns0:p><ns0:p>The rest of the paper is organized as follows. The Background section describes the background to the paradigms and technologies used in this work. The Related work section describes the most relevant works in the literature, and the Architecture for IoT security section presents the architecture we propose for detecting attacks on IoT devices and how the implementation with the WSO2 suite differs from that of Esper CEP and Mule ESB. The Comparing architecture performance and stress section explains the comparison of the performance and stress tests conducted for these architectures, which have been implemented with Esper/Mule and WSO2. Then, the Results section presents the experiments and results obtained, the Discussion section discuss and answer the four research questions. Finally, the Conclusions and future work section contains our conclusions and some lines for future research.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>This section describes the background to security in the IoT, ML, SOA 2.0 and CEP.</ns0:p></ns0:div> <ns0:div><ns0:head>Security in the Internet of Things</ns0:head><ns0:p>The IoT and cyber-physical system devices are increasingly present in our lives. 
The features offered by these devices are very attractive and they can be used for many different purposes, among which, we can highlight domotics, the automation and control of production processes, video surveillance and security, and medicine and health care. The various uses that have been given to these devices and the ability to access them via the Internet have attracted the interest of hackers. Unfortunately, the approach followed by developers in the design of security measures for IoT devices has not been as successful as their growth, and this is made evident by the number of cyber-attacks detected in the first half of 2019, which surpassed a hundred million, which is seven times higher than the previous year <ns0:ref type='bibr' target='#b9'>(Demeter et al., 2019)</ns0:ref>. The vector used by attackers in those attacks was mainly brute force, taking advantage of the weak default configuration of the devices and gaining access to them with the default credentials <ns0:ref type='bibr' target='#b9'>(Demeter et al., 2019)</ns0:ref>. These attacks took advantage of the vulnerabilities of the IoT devices to infect them with malicious code and then manipulate them to achieve their goal. The idea behind that malware focused on the creation of bots to be marketed for the carrying out of Denial of Service(DoS) attacks. One of the most widely-spread (and also the first) pieces of malware specially designed for these devices was called Mirai, which is a botnet that inserts malicious code into IoT devices so that they initiate a DoS attack against a certain target. This caused shock and aroused the interest of hackers in these devices.</ns0:p></ns0:div> <ns0:div><ns0:head>3/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Another weakness of IoT and cyber-physical system devices is the use of unsafe network services and protocols, due mainly to these devices having several constraints, such as a small memory and a limited battery, which prevent developers from using a usual security setup. These vulnerabilities have been exploited to carry out several attacks that could have been prevented if the necessary measures had been taken. A lack of security in the storage and transfer of data that allows the observation and analysis of the information transmitted by these devices is another critical weakness in the security of IoT devices. In this regard, Message Queuing Telemetry Transport (MQTT) is a very common protocol in the IoT <ns0:ref type='bibr'>(OASIS, 2019)</ns0:ref>. MQTT is a binary protocol that reduces the overhead compared with other application layer protocols. It is a publish/subscribe-based protocol in which a server (there can be more than one), known as the message broker, manages the flow of information, which is organized as a hierarchy of topics. Each client can be a subscriber and a publisher simultaneously. This protocol is similar to MQTT-SN and has several weaknesses, such as allowing the sending of many MQTT packets of a massive size, which overloads the broker. This attack causes a DoS in the MQTT network. Furthermore, an MQTT subscription fuzzing attack could gain information about the available topics because nodes are not authenticated and the information is not ciphered. Moreover, an MQTT disc-wave attack can exploit a failing in several implementations of the MQTT protocol. 
The specification of MQTT establishes that each client has a unique ID, so if a new client tries to register this ID again, the broker should reject it.</ns0:p><ns0:p>However, many implementations allow a new client to connect with a registered ID, causing the existing client with that ID to be ejected from the previously-created connection.</ns0:p><ns0:p>Finally, a very common attack that can appear in an IoT-based network is scanning. Attackers can perform this procedure to discover devices and open ports in the network. By extending the scanning, attackers can cause a DoS in the network by sending large numbers of reconnaissance packets and congesting the network. The attack generates a large volume of traffic to try to saturate the network and so prevent users from accessing the system. The attack can also take advantage of flaws in the code of an application or part of the open-source code that uses the application. Two of the most common attacks of this type are TCP and UDP flood attacks <ns0:ref type='bibr' target='#b34'>(Warburton, 2021)</ns0:ref>. When the connection is established through the TCP protocol, the client and the server exchange flags to initiate, close or restart the connection, or indicate that the request is urgent; the attacker sends several SYN flags asking to initiate a connection with the server, which is blocked when there are too many ACK requests waiting and the server runs out of resources to serve legitimate clients. A UDP port scan attack consists of sending a UDP packet to multiple ports on the same destination system, then analyzing the response and determining service and host availability. The attacker can determine whether the port is open, closed or filtered through a firewall or packet filter.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning</ns0:head><ns0:p>Machine Learning (ML) can be described as a set of techniques, technologies, algorithms and methodologies used to predict, cluster and classify entities, which can be events, objects, or anything else that can be described with attributes, also known as features, and entity behaviors. Broadly speaking, the best way to obtain these predictions is to model the behavior and attributes of these entities. There are many different algorithms to model these entities using functions which are plotted with these algorithms, and datasets of entities. The behavior, features and context of each entity are different. Therefore, the best algorithm does not exist, as each entity type has its correct algorithm or algorithms, if they even exist. For this reason, it is necessary to analyze these entities and their contexts, preprocess the datasets to allow them to be managed by these algorithms, and perform a feature selection (if it is necessary) to discover the most descriptive set of features. Sometimes, once the feature selection has been made, we can easily obtain the distribution of the entities, which is very useful for choosing the algorithm in a more precise way.</ns0:p><ns0:p>There are different types of machine learning techniques and algorithms, which can be classified as follows:</ns0:p><ns0:p>&#8226; Supervised learning. In this approach, the model is trained with labelled entities, i.e. the model knows the type of each entity in the training dataset. Also, it is possible to find regression techniques that aim to predict a numeric value.</ns0:p><ns0:p>&#8226; Unsupervised learning. 
This set of techniques does not require labelled entities, so the model learns how to group or classify them with similarity measures.</ns0:p></ns0:div> <ns0:div><ns0:head>4/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Reinforcement learning. This kind of ML uses a prize/penalty approach. When our model performs a correct action, we can provide it with good feedback. When it fails, then it receives a penalty.</ns0:p><ns0:p>In this paper, we have used linear regression <ns0:ref type='bibr' target='#b21'>(Montgomery et al., 2021)</ns0:ref> because our dataset has a linear distribution. We would like to highlight that our approach can be adapted to other mathematical models, if needed.</ns0:p></ns0:div> <ns0:div><ns0:head>Event-Driven Service-Oriented Architectures</ns0:head><ns0:p>Service-Oriented Architecture (SOA) is a paradigm for the design and implementation of loosely-coupled distributed system architectures whose implementation is fundamentally based on services. SOA services offer a well-defined interface in accordance with standards and facilitate communications between the service provider and the consumer in a decoupled way by using standard protocols. Thus, these architectures provide easy interoperability between third party systems in a flexible way, and therefore facilitate system maintenance and evolution when changes are required <ns0:ref type='bibr' target='#b29'>(Papazoglou, 2012)</ns0:ref>.</ns0:p><ns0:p>ED-SOA, or SOA 2.0, has evolved from the traditional SOA. The distinguishing feature of SOA 2.0 is that it facilitates communication between users, applications and services through events, instead of using remote procedure calls <ns0:ref type='bibr' target='#b19'>(Luckham, 2012)</ns0:ref>. With the growth of service components and processes, and the inclusion of events in event-driven service-oriented applications, a new infrastructure is required to support the decoupled communications and to maintain applications flexibly. These requirements are fulfilled by an ESB, which permits interoperability among several communication protocols and heterogeneous data sources and targets <ns0:ref type='bibr' target='#b29'>(Papazoglou, 2012)</ns0:ref>. In this way, an ESB provides and supports interoperability among diverse applications and components through standard interfaces and messaging protocols, also reinforcing the reliability of the communication as well as ensuring their scalability. There are several ESBs available, and in this paper we have selected two well-known ones for their evaluation, namely Mule and WSO2.</ns0:p><ns0:p>The Anypoint platform offers support for the design, implementation and management of APIs and integration (MuleSoft, 2021a). It includes Mule <ns0:ref type='bibr' target='#b27'>(MuleSoft, 2021b)</ns0:ref>, an integration and ESB platform that provides assistance to developers in interconnecting applications, and provides support for various transport protocols, as well as for the transformation of different data formats. It delivers message routing as well as IoT and cloud integration. 
In addition, it provides a graphical interface for the development of business-to-business integration applications.</ns0:p><ns0:p>WSO2 is an open-source decentralized approach which provides support for building decoupled digital products that are ready to market, with a main focus on APIs and microservices, and a wide range of complementary products and solutions (WSO2, 2021c). WSO2 offers WSO2 Enterprise Integrator, an integration platform which consists of a centralized integration ESB with capabilities for data, process and business-to-business integration. WSO2 ESB (WSO2, 2021d) provides support for multiple transport protocols, data formats and flow integration, as well as IoT and cloud service integration. The product also includes an analysis system for comprehensive monitoring.</ns0:p><ns0:p>As we can see, both ESBs provide similar features and can be used in conjunction with their integration platform with many plugins and solutions for further functionalities, such as stream and event processing.</ns0:p></ns0:div> <ns0:div><ns0:head>Complex Event Processing</ns0:head><ns0:p>Despite all the advantages of SOA 2.0 mentioned in the previous subsection, this type of architecture requires the use of an additional technology that makes it possible to analyze and correlate the vast amounts of data that are present in the field of the IoT in real time. CEP <ns0:ref type='bibr' target='#b19'>(Luckham, 2012)</ns0:ref> fulfills this functionality appropriately as it is a technology that allows the analysis and correlation of heterogeneous data streams in real time in order to detect situations of interest in the domain in question. In particular, the software that is capable of analyzing the data in real time is known as the CEP engine. In order to detect situations of interest, a series of event patterns are defined in the CEP engine <ns0:ref type='bibr' target='#b32'>(Valero et al., 2021)</ns0:ref>.</ns0:p><ns0:p>These patterns represent the conditions that allow us to detect that such a situation has occurred. These rules are applied to the engine's incoming data, which are known as simple events, while the situations of interest detected by the pattern are named complex events. Thus, with CEP we can improve and speed up the decision-making process <ns0:ref type='bibr' target='#b3'>(Boubeta-Puig et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b1'>Benito-Parejo et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Corral-Plaza et al., 2021)</ns0:ref>.</ns0:p><ns0:p>There are several CEP engines available, and in this paper we have selected two well-known ones to be evaluated, namely Esper and WSO2 CEP. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Esper <ns0:ref type='bibr'>(EsperTech, 2021)</ns0:ref> is an open-source Java-based software engine for CEP, which can quickly process and analyze large volumes of incoming IoT data. Esper comes with the Esper Event Processing Language (EPL), which extends the SQL standard and permits the precise definition of the complex event patterns to be detected. The Esper compiler compiles EPL into byte code in a JAR file for its deployment, and at runtime this byte code is loaded and executed. Esper performs real-time streaming data processing, using parallelization and multithreading when necessary, and it is highly scalable. In addition, it provides the option of implementing distributed stream processing over several machines as well as horizontal scalability, should it be necessary. 
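To make the notion of an event pattern over a time window more concrete, the following minimal Python sketch (not Esper EPL or SiddhiQL, which are the languages the engines discussed here actually use) emulates one such pattern: simple connection events are counted per source over a sliding window, and a complex "possible flood" event is produced when a threshold is exceeded. The window length, threshold and event names are illustrative assumptions.

# Minimal illustration of a CEP-style time-window pattern: simple events are
# connection attempts; a complex event is emitted when one source exceeds a
# threshold within the window. Names and values are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 100                      # connection attempts per source within the window

recent = defaultdict(deque)          # source IP -> timestamps of recent simple events

def on_simple_event(src_ip, timestamp=None):
    """Feed one simple event; return a complex event dict if the pattern fires."""
    now = timestamp if timestamp is not None else time.time()
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # slide the window
        q.popleft()
    if len(q) > THRESHOLD:
        return {"type": "PossibleFlood", "src": src_ip,
                "count": len(q), "window_s": WINDOW_SECONDS}
    return None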
According to its documentation, Esper 8.1.0 can process around 7.1 million events per second (EsperTech, 2019).</ns0:p><ns0:p>WSO2 CEP is provided within the WSO2 Stream Processor. WSO2 CEP is an open-source CEP engine that facilitates the detection and correlation of events in real time, as well as the notification of alerts, and it additionally offers enriched dashboard tools for monitoring. It can be deployed in standalone or distributed modes, and is highly scalable. It uses a streaming processing engine with memory optimization and is able to find event patterns in real time within milliseconds. According to its specification, a single WSO2 CEP node can handle more than 100K events per second on a regular 4-core machine with 4 GB of RAM and several million events within the JVM (WSO2, 2021c). The cornerstone of WSO2 CEP is Siddhi (WSO2, 2021b). It uses a language similar to SQL that allows complex queries involving time windows, as well as pattern and sequence detection. In addition, CEP queries can be changed at runtime through the use of templates.</ns0:p><ns0:p>The ESB has been used in our system as a tool for transport and information management. This use is quite simple to implement, but if the parameters are not specified properly, it could cause problems.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>There is an interesting comparison between Mule, WSO2 and Talend conducted by <ns0:ref type='bibr' target='#b17'>G&#243;rski and Pietrasik (G&#243;rski and Pietrasik, 2017)</ns0:ref>. Note that Talend is beyond the scope of this work. The authors implemented 7 different use cases and tested them with 5, 20 and 50 users simultaneously. Moreover, their work provides measurements of throughput, standard deviation and CPU usage for each experiment, and their results are closely aligned with ours, i.e., WSO2 is always faster except when the output message is enormous (a 221,000-byte output message in their case). This is not a problem for our proposal because an IDS does not need big output messages. Moreover, WSO2 obtains a better throughput, whereas the CPU usage is similar in both cases. On the other hand, Mule provides a lower standard deviation, i.e., Mule is more constant than WSO2 when processing different types of events.</ns0:p><ns0:p>Bamhdi's work <ns0:ref type='bibr' target='#b0'>(Bamhdi, 2021)</ns0:ref> is also interesting. In contrast to our work, his paper does not show an active performance comparison between WSO2 and Mule, but instead provides a feature comparison between four ESB platforms (WSO2 and Mule are included among them). Although Bamhdi's work is focused on comparing open-source platforms against proprietary ones, it allows us to compare specific features of Mule and WSO2. This comparison, which analyzes 15 features, shows that WSO2 supports all 15 listed capabilities, whereas Mule supports 14 of them. The only feature which Mule cannot provide is web migration from 5.0 to 6.0; note that WSO2 is the only one that satisfies this feature. <ns0:ref type='bibr' target='#b8'>Dayarathna and Perera (Dayarathna and Perera, 2018)</ns0:ref> compare WSO2 with other ESBs, but Mule is not considered in their work, which provides a brief feature comparison between the Esper (basic version) and Siddhi CEP engines. According to the authors, each language provided by a CEP engine has its pros and cons.
On the one hand, Esper (basic version) provides nested queries and debugging support, while Siddhi registered a higher performance than Esper: 8.55 million events/second versus 500,000 events/second.</ns0:p><ns0:p>Another work which is focused on CEP engines is that of <ns0:ref type='bibr' target='#b15'>Giatrakos et al. (Giatrakos et al., 2020)</ns0:ref>. It does not directly compare WSO2 against Mule or Siddhi against Esper, but instead describes different CEP paradigms. In particular, it explains different selection policies, consumption policies and windows.</ns0:p><ns0:p>Moreover, the paper describes the scalability and parallelization of several CEP engines. Although it is quite different from our work, it can be useful in order to understand our work and learn about other CEP engines.</ns0:p><ns0:p>Freire et al. <ns0:ref type='bibr' target='#b12'>(Freire et al., 2019)</ns0:ref> <ns0:ref type='table' target='#tab_4'>2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:ref> Manuscript to be reviewed Computer Science which allows them to obtain a score for each ESB. According to their paper, Mule should be faster than WSO2, but the problem is that this is not demonstrated through experiments. This approach is useful because it allows the measuring of different ESB platforms without implementing experiments; however, it would have been more useful if they had carried out experiments to support their results.</ns0:p><ns0:p>Our paper provides a real performance comparison between Mule and WSO2, following a similar methodology to the one proposed in the papers mentioned, i.e., executing and deploying the proposed platforms under equal conditions and measuring events in relation to time. More specifically, we have analyzed different pattern types, namely time-window-based patterns and prediction patterns. The latter are a novelty with respect to other works, as each network event is compared with a prediction event, and this acts as an anomaly detector.</ns0:p></ns0:div> <ns0:div><ns0:head>ARCHITECTURE FOR IOT SECURITY</ns0:head><ns0:p>This section describes our proposed SOA 2.0 architecture, which integrates CEP and ML paradigms in order to detect attacks on IoT devices. Then, two implementations of this architecture, one using Mule and the other WSO2, are presented with the aim of comparing them under the same conditions in order to find the strengths and benefits of each, which is the novelty and contribution of this research.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture Proposal</ns0:head><ns0:p>Our proposed architecture for detecting attacks on IoT devices is described below. This architecture, which is an improved version of the architecture we presented in <ns0:ref type='bibr' target='#b30'>Rold&#225;n et al. (2020)</ns0:ref>, is composed of three different parts.</ns0:p><ns0:p>The first module of the architecture, the data sources, consists of the data obtained from the network and the pre-trained model, if available. Otherwise, the model would have to be trained. As shown in Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>, this module may be detached from the rest of the architecture, because it can be replaced by any computer network with an MQTT broker as collector. However, we consider it useful to analyze the whole system to understand its behavior. Note that in this new version of our architecture, an MQTT broker can be used with different topic numbers, with the aim of managing data grouped by type. 
Additionally, this new version permits the use of pre-trained models as data sources, which allows us to migrate our model from our architecture to other deployments. In addition, pre-trained models provide greater flexibility because they allow training the model outside (or inside) our deployed architecture.</ns0:p><ns0:p>The second module of the architecture, which is in fact the main module, receives raw network data and, optionally, a pre-trained model. This module is responsible for making decisions on the basis of the network data analyzed in real time. This new version of our architecture is more flexible since different CEP engines can be used according to the user's needs.</ns0:p><ns0:p>At this point, the pipeline of the second module should be explained in detail. Through an MQTT inbound endpoint, the raw network data produced by data sources can reach the ESB. These data are preprocessed to make them consumable for the network event generator. The event generator provides network events which can be received and processed in real time by the CEP engine. Moreover, our architecture needs a trained model to predict the network event values. In particular, this model can be used to predict the type of network packet via a predicted value and a threshold, which is computed using the training data. In this case, our model has been built using a linear regression, and is used to predict values and a threshold from a key feature, or features, which is the packet length in our case. These features will vary with each case.</ns0:p><ns0:p>The last module is composed of data sinks which receive the notifications about the decision-making process conducted by the second module. Databases, event systems, emails, logs, or any other system required by end users to receive such notifications are examples of data sinks. Due to its simplicity, an explanatory diagram is not included.</ns0:p><ns0:p>We would like to point out that our architecture allows us to fit the model with raw sensor data; this traffic should be isolated and without any security attacks. There are two ways to obtain prediction patterns: the first is to set a pre-trained model, while the second is to train the model with the isolated network traffic. Regardless of the method which is selected, the architecture uses this model to predict the expected value of each incoming network packet. This prediction is used to create a prediction event which is compared with its corresponding network event. In this way, our architecture is able to obtain patterns which can detect anomalous packets by using the real value, the predicted value and a calculated threshold, since the absolute value of the subtraction of the real value and the predicted value must be smaller than the threshold; otherwise, the packet is anomalous.</ns0:p><ns0:p>Equation 1 describes our predictor in a formal way, where the number 1 means that the network packet belongs to the category used to train the model and obtain the ERROR.</ns0:p><ns0:formula xml:id='formula_0'>f (x) = &#63729; &#63730; &#63731; 1 i f (abs(realValue &#8722; predictedValue) &#8804; ERROR) 0 i f (abs(realValue &#8722; predictedValue) &gt; ERROR) (1)</ns0:formula><ns0:p>It is important to note that we can fit the model with more attacks; for example, if we have traffic from a DoS attack, we can refit our model to detect this attack. The best way to generate patterns is to attack the architecture or obtain traffic from attacks. 
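To make Equation 1 concrete, a minimal Python sketch of the predictor is given below. It assumes a scikit-learn linear regression and uses the mean square error obtained on the training data as the ERROR band, matching the threshold computed from the training data described above; the function and variable names (fit_predictor, classify_packet, and so on) are illustrative and are not taken from the paper's actual implementation.

# Minimal sketch of the Equation 1 decision rule; names are illustrative.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import numpy as np

def fit_predictor(X_train, y_train):
    # Fit on legitimate (attack-free) traffic and derive the ERROR band from the training error.
    model = LinearRegression().fit(X_train, y_train)
    error = mean_squared_error(y_train, model.predict(X_train))  # mean square error used as threshold, as in the text
    return model, error

def classify_packet(model, error, features, real_value):
    # Returns 1 if the packet matches the trained category, 0 if it is anomalous (Equation 1).
    predicted_value = model.predict(np.asarray(features, dtype=float).reshape(1, -1))[0]
    return 1 if abs(real_value - predicted_value) <= error else 0

In the deployed architecture this comparison is not executed as Python code but expressed as a CEP pattern over the NetworkPacket and NetworkPrediction streams (see the FeatureAnomaly pattern later in the paper); the sketch only illustrates the arithmetic of Equation 1.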
As mentioned above, we have improved the architecture to accept pre-trained models.</ns0:p><ns0:p>An initial deployment of the architecture could be composed of a few patterns that can be proposed Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and designed by the domain expert, and the anomaly detector, which uses legitimate traffic. When an anomalous pattern is triggered, anomalous packets can be used to generate a new pattern to detect this kind of anomaly again. This means that our architecture can improve and gradually become more accurate over time.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture Implementation with Mule</ns0:head><ns0:p>In this subsection we explain how our architecture for IoT security has been implemented by using the Mule ESB together with the Esper CEP engine.</ns0:p><ns0:p>The Mule-based architecture is composed of three data flows: DataReceptionAndManagement, ComplexEventReceptionAndDecisionMaking and EventPatternAdditionToEsper (see Fig. <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>).</ns0:p><ns0:p>The DataReceptionAndManagement flow is responsible for receiving data from IoT data sources, transforming them into an event format and then sending them to the Esper CEP engine. Specifically, this flow is implemented with an MQTT inbound endpoint in which a topic is defined to receive the data obtained from data sources. Then, a Java transformer allows the transformation of the received JSON data into Java Map events, which are sent to an Esper CEP engine through a customized message component.</ns0:p><ns0:p>The ComplexEventReceptionAndDecisionMaking flow receives the complex events that are automatically generated by the CEP engine upon detection of previously deployed patterns, and transforms these complex events into JSON format. These are then saved in log files, which are a type of data sink for the architecture.</ns0:p><ns0:p>Finally, the EventPatternAdditionToEsper flow allows the runtime deployment of new event patterns in the CEP engine. To this end, a file input endpoint frequently checks whether there is a new file with an EPL extension, and if there is the event pattern code contained in this file is transformed into a string, which is then deployed in the Esper CEP engine. As shown in Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, the architecture receives the data obtained from data sources (NetworkPacket and NetworkPrediction) by using an MQTT broker with two topics. Then, these data are matched through the different event patterns (queries) implemented with SiddhiQL and previously deployed in the Siddhi CEP engine. 
When a complex event is automatically created upon a pattern detection, it is saved in a log file, which is a data sink for the architecture.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture Implementation with WSO2</ns0:head></ns0:div> <ns0:div><ns0:head>COMPARING ARCHITECTURE PERFORMANCE AND STRESS</ns0:head><ns0:p>This section presents our comparison of the performance and stress tests conducted for the two architectures implemented with Mule and WSO2.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed approach</ns0:head><ns0:p>Before analyzing each architecture component in depth, a schematic overview of the steps followed to address this comparison are explained below:</ns0:p><ns0:p>&#8226; First, a virtualized MQTT network, in which clients publish periodically, is deployed.</ns0:p><ns0:p>&#8226; Then, packets are collected from that network to define a normal scenario, in which the system is not under attack.</ns0:p><ns0:p>&#8226; Afterwards, a malicious client is introduced into the network and this client launches the attacks.</ns0:p><ns0:p>Packets that generate attacks are collected to perform the experiments.</ns0:p><ns0:p>&#8226; A number of these packets are preprocessed and used to train the linear regression model. The mean square error for each category to be predicted with the regressor is also extracted.</ns0:p><ns0:p>&#8226; The values of the packets that were not used for training are predicted and saved. They will be used to perform the experiments. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science &#8226; Then, event patterns are defined. To perform a complete comparison, we create a pattern per attack that will be able to detect the attack, as a domain expert would do, except for DoS, which is detected with a regressor because in practice it is difficult to establish a specific pattern for this type of attack. In addition, we create the FeatureAnomaly pattern, which is able to detect anomalies using the linear regressor. This pattern is used to detect unknown attacks, such as Subfuzzing, DoS or</ns0:p><ns0:p>Discwave. And then there is the ProtocolAnomaly pattern, which detects any unknown protocol that should not be present in the network.</ns0:p><ns0:p>&#8226; Both platforms, Mule and WSO2, are deployed with their corresponding patterns.</ns0:p><ns0:p>&#8226; The simulator is used to perform the experiments (see next subsection) in such a way that these experiments are reproducible.</ns0:p><ns0:p>&#8226; Finally, the metrics of the experiments are extracted for comparison. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Simulator</ns0:head><ns0:p>To ensure the reproducibility of the experiments, we implemented an MQTT network simulator. We chose an MQTT simulator because, as mentioned above, it is a widely-used protocol in the IoT paradigm.</ns0:p><ns0:p>Moreover, MQTT networks, by the nature of the protocol, are usually centralized because the broker acts as a centralizer, so that all MQTT packets pass through it. This makes it very easy to set up a network-based IDS in the broker, because there is no need to redirect traffic to another device. This simulator is capable of sending network packets to an MQTT broker, taking as data source different CSV files which contain real network traffic that was previously generated and stored. This is essential because it allows us to use real traffic and to combine the reproducibility of the experiments with data that have been generated in a real MQTT network. 
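A minimal Python sketch of this replay idea is shown below. It assumes the Eclipse paho-mqtt client and a hypothetical CSV layout with a delta column holding the recorded inter-packet delay in seconds; the column names, the replay function and its parameters are illustrative assumptions rather than the simulator's actual code.

# Minimal sketch of the CSV-to-MQTT replay loop; CSV layout and names are assumptions.
import csv
import json
import time
import paho.mqtt.client as mqtt

def replay(csv_path, broker_host, topic, use_delay=True, max_packets=None):
    # Read the whole capture first so file I/O does not add delay while sending.
    with open(csv_path, newline='') as f:
        rows = list(csv.DictReader(f))
    client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x additionally takes a callback API version
    client.connect(broker_host, 1883)
    for i, row in enumerate(rows):
        if max_packets is not None and i >= max_packets:
            break  # number of packets used as the stop threshold
        if use_delay:
            time.sleep(float(row.get('delta', 0)))  # respect the recorded delay between packets
        client.publish(topic, json.dumps(row))  # packets are published as JSON
    client.disconnect()

In the experiments described below, two captures (legitimate traffic plus one specific attack) are replayed per run; extending this loop to read both files is straightforward.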
The main advantages of our simulator are that it can reliably send such network packets while taking the delay between packets into account, and it allows us to generate several scenarios to test both the proposed architectures.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>4a</ns0:ref> outlines this MQTT network simulator. Note that when we wish to generate heavy workloads, we can use the sum of deltas from the packets or the number of packets as the threshold which is used to stop the generation of packets.</ns0:p><ns0:p>The behavior of the simulator is quite simple. First, it reads the CSV files, which first allows us to avoid the delay that is due to reading each row of the CSV while we are sending them.</ns0:p><ns0:p>When the simulator has read both CSV files (legitimate traffic, and the specific attack), it starts to send packets with MQTT, these being sent using JSON format. The number of packets is defined as described above.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>4b</ns0:ref> shows an MQTT network diagram where there are a certain number of legitimate devices, 4 in our case, and 1 malicious device which attacks the network in different ways. This network is similar to the network used to obtain the network traffic.</ns0:p></ns0:div> <ns0:div><ns0:head>Event Patterns</ns0:head><ns0:p>In our previous work <ns0:ref type='bibr' target='#b30'>(Rold&#225;n et al., 2020)</ns0:ref>, we defined and implemented twelve event patterns in Esper EPL for detecting the following security attacks:</ns0:p><ns0:p>&#8226; TCP/SYN port scan: the malicious device sends a round of 10 or more TCP packets with the SYN flag to 3 or more different ports of the broker in 1 s. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science &#8226; TELNET Mirai: the malicious device simulates the first stage of Mirai, sending connect packets with different tuples (username/password). The pattern can be detected if the attacker sends more than 5 TELNET packets in 1 minute. The TELNET Mirai pattern implements this attack by making use of an intermediate pattern called Src TELNET 1m Batch.</ns0:p><ns0:p>&#8226; MQTT disconnect wave: the malicious device sends many MQTT packets with the connect command. As sending more than 1 connect command is strange, the pattern can be detected if the broker receives more than 5 MQTT connect commands in one minute from a single IP address.</ns0:p><ns0:p>The MQTT Disconnect Wave pattern implements this attack by making use of an intermediate pattern called Src MQTT 1m Batch.</ns0:p><ns0:p>&#8226; MQTT subscription fuzzing: the malicious device tries to subscribe to all topics, so the pattern can be detected if an MQTT client subscribes to more than 20 topics in 5 minutes. The Listing 1 shows the FeatureAnomaly pattern implemented in Esper EPL, while Listing 2 contains the implementation of the same pattern but using the SiddhiQL language. This pattern implements Equation <ns0:ref type='formula'>1</ns0:ref>and allows us to detect unmodeled attacks, such as the DoS with big messages. Moreover, it will detect other attacks, such as disconnect wave or subscription fuzzing, even if we do not define specific patterns to detect them. The ProtocolAnomaly pattern implemented in Esper EPL is shown in Listing 3, while Listing 4 contains the same pattern using the SiddhiQL language.</ns0:p></ns0:div> <ns0:div><ns0:head>MQTT Subscription</ns0:head><ns0:p>Listing 1. 
FeatureAnomaly pattern implemented in Esper EPL.</ns0:p><ns0:p>@Name( ' F e a t u r e A n o m a l y ' ) @Tag ( name =' domainName ' , v a l u e =' I o T S e c u r i t y A t t a c k s ' ) We have implemented two types of event patterns to detect such attacks. The first type uses a time batch window (SrcDst TCP 1s Batch, SrcDst UDP 5s Batch, SrcDst Xmas 1s Batch, Src TELNET 1m Batch, Src MQTT 1m Batch and Src MQTT 5m Batch) to trigger a complex event when a condition is met.</ns0:p><ns0:p>The second type of pattern allows the comparison of messages coming from 2 broker topics, one that manages prediction and threshold data while the other topic manages real packet information. In this case, the pattern is activated when the difference between the prediction and the real values is higher than a certain threshold; this is useful because we can compare the performance for different attacks but also with different types of patterns.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Model</ns0:head><ns0:p>Selecting a machine learning model is a very important step in effectively deploying the architecture.</ns0:p><ns0:p>Although this paper does not focus on ML processes, it is important to give a brief explanation of the model we have used.</ns0:p><ns0:p>The first step in defining our ML model was to select the most important features. For this purpose we applied the criteria proposed by KDD99, which are adaptable to our MQTT dataset. In addition, we also added features obtained from MQTT.</ns0:p><ns0:p>Once the features have been selected, they are normalized and binarized when necessary. Then we used Extremely Randomized Trees with our dataset to arrange the features by importance. After that, we selected the most important features <ns0:ref type='bibr' target='#b14'>(Geurts et al., 2006)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> show the importance of the binarized features. One or several features are chosen to be the key feature/s, and these are predicted with the rest of the features obtained. Furthermore, this prediction will be compared with the real value for each event, with the error threshold being defined using the mean square error obtained when we train the model. </ns0:p></ns0:div> <ns0:div><ns0:head>Tests</ns0:head><ns0:p>By implementing a network simulator, we were able to measure the performance of our proposed architecture implemented with WSO2 and Mule, and compare them. We designed 14 experiments with 7 different attacks against MQTT, and each test was composed of legitimate traffic and 1 specific attack.</ns0:p><ns0:p>Specifically, we carried out 7 experiments (1 per attack) which used the delay of each packet in order to simulate a network realistically, and 7 experiments without a delay, which allowed us to measure the performance with heavy workloads. 
Thus, the proposed tests were as follows:</ns0:p><ns0:p>&#8226; TCP-SYN scan (with delay/without delay)</ns0:p><ns0:p>&#8226; UDP port scan (with delay/without delay)</ns0:p><ns0:p>&#8226; XMAS port scan (with delay/without delay)</ns0:p><ns0:p>&#8226; Mirai first stage (with delay/without delay)</ns0:p><ns0:p>&#8226; MQTT disconnect wave (with delay/without delay)</ns0:p><ns0:p>&#8226; MQTT subscription fuzzing (with delay/without delay)</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>This section presents and discusses the experiments and the results obtained when comparing the performance of our architecture implemented with WSO2 and Mule, as well as the limitations of each implementation.</ns0:p><ns0:p>These experiments were carried out under similar conditions for the WSO2-based architecture, composed of the WSO2 ESB and the WSO2 CEP engine, and the Mule-based architecture that integrates Mule ESB with the Esper CEP engine. We would like to point out that WSO2 provides some extra performance features such as multiworkers and PMML models(WSO2, 2021a, 2020), which could enhance the architecture's performance. However, we did not integrate these features in our proposed architecture in order to create similar conditions for both systems.</ns0:p><ns0:p>The results obtained for the two types of tests conducted in this work (performance and stress tests) are discussed below. The implementation code can be accessed in the <ns0:ref type='bibr' target='#b31'>Rold&#225;n-G&#243;mez et al. (2021)</ns0:ref> repository.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Tests</ns0:head><ns0:p>The results for the performance tests are presented in the following subsections. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Estimated computational complexity</ns0:head><ns0:p>Although it is difficult to give an exact figure for computational complexity because of the internal operations performed by the CEP engines, we estimate the computational complexity on the basis of the steps that we can calculate. Note that this estimation assumes that the model and preprocessing steps are as mentioned. Obviously, this will change if another model or steps are used during the preprocessing step.</ns0:p><ns0:p>To define the computational complexity, we consider the following variables: n, which defines the number of packets, which F is the number of variables where each step is applied (this value will be constant for each step); and v, which defines the different values of each category and is used only for the binarization of categorical attributes. First, we calculate the computational complexity of each step, then the total for the training stage, and then the total at runtime. The estimated computational complexities are as follows: min-max scaler O(2nF 1 ), fill empty values O(nF 2 ), binarization of categorical attributes O(2nF 3 v), and training linear regression model O(nF 2 4 + F 3 4 ). All these steps only have to be carried out once. In addition: predict a value with the regressor and create n events O(F 5 n). 
In summary, the estimated computational complexity in training is as follows:</ns0:p><ns0:formula xml:id='formula_1'>O(2nF 1 + nF 2 + 2nF 3 v + nF 2 4 + F 3 4 ).</ns0:formula><ns0:p>And the estimated computational complexity at runtime is:</ns0:p><ns0:formula xml:id='formula_2'>O(F 5 n).</ns0:formula><ns0:p>Since we do not know the exact inner workings of CEP engines, it is difficult to calculate the remaining steps. That is why performance experiments, such as those carried out in this paper, are so important.</ns0:p></ns0:div> <ns0:div><ns0:head>TCP SYN Scan</ns0:head><ns0:p>The first experiment performed was composed of legitimate traffic (a simple MQTT network) and a TCP SYN scan. We used our architecture as an IDS to detect attacks or scans. As we can see, the WSO2 implementation triggers the TCP SYN complex event first. Therefore, we can conclude that the WSO2 achieves an earlier detection than the Mule-based one. In this case, TCP SYN starts sooner in the WSO2 scenario, but this delay is shorter than the detection time difference.</ns0:p></ns0:div> <ns0:div><ns0:head>16/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>UDP Port Scan</ns0:head><ns0:p>The UDP scan is slower than the TCP one, and it is useful to study the performance in a different way.</ns0:p><ns0:p>This experiment allowed us to compare the performance when the attack has a low packet sending ratio.</ns0:p><ns0:p>As in the case of the TCP SYN Scan experiment, there was normal traffic and a UDP port scan.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 6. UDP scan attack comparison</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> shows the results obtained for this UDP port scan experiment. In conclusion, we can say that WSO2 was faster than Mule again. Mule generated a null window, not being able to detect the third UDP Port Scan complex event. It is important to note that the difference is smaller than in Fig. <ns0:ref type='figure' target='#fig_12'>5</ns0:ref>; this may be because the attacks, in both cases, started at the same time.</ns0:p></ns0:div> <ns0:div><ns0:head>Xmas Port Scan</ns0:head><ns0:p>This scan is not very common and shows how our architecture is able to detect more unusual attacks.</ns0:p><ns0:p>From the point of view of the experiment, it should be like the TCP SYN scan, as both have similar packet sending ratios and event generation characteristics. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Mirai First Stage</ns0:head><ns0:p>This scenario simulates the first stage of Mirai. This attack tries to connect with Telnet using a username/password list. The main aim of this experiment was to check the behavior of our system under common IoT attacks. Figure <ns0:ref type='figure' target='#fig_14'>8</ns0:ref> shows a comparison of the results for the Mirai scenario, executing it on </ns0:p></ns0:div> <ns0:div><ns0:head>DoS Big Message</ns0:head><ns0:p>This scenario simulates a common DoS attack in which the attacker sends big messages quickly to the broker.</ns0:p><ns0:p>This experiment is different to the other ones because time windows are not used. Instead of time windows, each packet is matched with its prediction. As we mentioned above, there are two different ways to detect attacks using our predictor. 
In this case, the system trains the model with legitimate and isolated traffic, allowing us to detect anomalous packets. Note that each packet which does not match with its prediction, and whose difference exceeds the threshold, can be classified as anomalous. Additionally, we could have fitted a model to detect each specific attack. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_15'>9</ns0:ref> illustrates that, in this case, Mule was faster than WSO2, since WSO2 needed more time to detect all the malicious packets. Therefore, we can conclude that Mule offers better performance when we need to compare different events (network events and prediction events in this case).</ns0:p></ns0:div> <ns0:div><ns0:head>MQTT Disconnect Wave</ns0:head><ns0:p>This scenario provides useful knowledge about both platforms. Here there are time windows and an anomalous packet detector, which works by matching each packet with its prediction, as we did in the DoS experiment. The advantage of this experiment is that it allowed us to check the behavior of the whole proposal deployed with the predictor working. Note that in a real scenario we would not use both methods (time windows and prediction), but it was useful and appropriate to test the performance. As we can conclude from Fig. <ns0:ref type='figure' target='#fig_3'>10</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_3'>11</ns0:ref>, Mule again worked better with predictions (by using 2 topics) than WSO2. On the other hand, WSO2 again detected the first complex event earlier than Mule when using time windows.</ns0:p></ns0:div> <ns0:div><ns0:head>Subscription Fuzzing</ns0:head><ns0:p>In this scenario, we used both methods again (time windows and predictions), but this attack is slower than the discwave one, which meant that the delay between packets was longer than in the discwave attack. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This experiment shows the behavior of our proposal when the system receives an attack with a lower packet sending ratio than DoS or discwave. Figure <ns0:ref type='figure' target='#fig_6'>12</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_7'>13</ns0:ref> show that there are 2 interesting facts that we can extract from this experiment.</ns0:p><ns0:p>The first is that WSO2 detected the second complex event very late when using time windows. As it uses a 5-minute window, the second time window was closed after the attack finished. But the important thing is that, in this case, WSO2 and Mule presented a similar performance with predictions. This is due to the long delay between packets in this experiment. WSO2 again registered the first detection sooner. It seems that Mule was processing a heavy workload when we compared two different events, but WSO2 provided a better brute performance when the system compared features/properties in the same event.</ns0:p></ns0:div> <ns0:div><ns0:head>Stress Test</ns0:head><ns0:p>Additionally, we carried out 7 more experiments in which the network packets had no delay. Although this is not a realistic case, it is very useful because it allows us to study the difference in performance between the two architecture implementations in greater depth. 
The figures in this subsection compare the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science last simple event detected with the first one, as well as the last complex event detected with the first one, measuring the time differences between these events.</ns0:p></ns0:div> <ns0:div><ns0:head>TCP-SYN without Delay</ns0:head><ns0:p>For each attack mentioned above, we implemented a stress scenario.</ns0:p><ns0:p>In this case, we executed the TCP-SYN scan 100 times, which took about one minute.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 14. TCP-SYN without delay comparison</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>14</ns0:ref> shows the difference between the last and the first simple events detected, as well as that for the last and first complex events detected. Our goal was to discover the processing speed difference between the platforms.</ns0:p><ns0:p>As we can see, WSO2 was faster at processing simple events and complex events than Mule when there was a single broker topic, so this experiment confirms the results obtained in the previous section. It seems that, regardless of the packet delay, WSO2 is faster at processing simple events and complex events when there are no relationships between them.</ns0:p></ns0:div> <ns0:div><ns0:head>UDP Scan without Delay</ns0:head><ns0:p>In this case, the UDP scan was launched 100 times, which took about 37 seconds. </ns0:p></ns0:div> <ns0:div><ns0:head>XMAS Scan without Delay</ns0:head><ns0:p>The XMAS scan was executed 100 times again, which took about 60 seconds.</ns0:p><ns0:p>As we can see in Fig. <ns0:ref type='figure' target='#fig_19'>16</ns0:ref>, the results are consistent with those we have observed above. In this experiment, WSO2 was faster again.</ns0:p></ns0:div> <ns0:div><ns0:head>21/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed The first stage of Mirai was executed 100 times, which took about 48 seconds.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>Figure 17. Mirai first stage without delay comparison</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>17</ns0:ref> shows that WSO2 was faster again at processing simple events.</ns0:p></ns0:div> <ns0:div><ns0:head>DoS Big Message without Delay</ns0:head><ns0:p>The DoS scenario does not use time windows, instead it compares each packet with its prediction. We executed the DoS experiment without delay once, which took about 20 seconds.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 18. DoS without delay comparison</ns0:head><ns0:p>The results are very interesting, as illustrated in Fig. <ns0:ref type='figure' target='#fig_14'>18</ns0:ref>. They show that Mule was faster than WSO2 when there was an operation between 2 broker topics. The performance difference between implementations was even bigger than in the experiments with one type of simple event. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>MQTT Disconnect Wave without Delay</ns0:head><ns0:p>We executed the discwave attack for about 27 seconds, and used the FeatureAnomaly prediction pattern to detect it.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 19. Discwave without delay comparison</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_15'>19</ns0:ref> shows that Mule was faster again when we compared 2 different events. 
This experiment had the highest workload, and therefore the difference between WSO2 and Mule was even bigger than before.</ns0:p></ns0:div> <ns0:div><ns0:head>MQTT Subscription Fuzzing without Delay</ns0:head><ns0:p>This experiment consisted in running the subfuzzing attack for approximately 47 seconds.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 20. Subscription fuzzing without delay comparison</ns0:head><ns0:p>As we can see in Fig. <ns0:ref type='figure' target='#fig_6'>20</ns0:ref>, Mule was much faster again, so we can conclude that WSO2 is only faster than Mule when there are no comparison operations between different events.</ns0:p><ns0:p>In short, each CEP engine has different advantages. The Esper CEP engine integrated with the Mule ESB is better when there are comparisons between different events, so Esper/Mule performs better on patterns where different events are compared. As an example, we can see this behavior in the anomalous packet pattern. However, when there are no comparisons between different events, WSO2 is faster than Esper/Mule. We can conclude that WSO2 provides a better raw performance, in other words, WSO2</ns0:p><ns0:p>is able to process network packets faster than Esper/Mule but its performance is worse when there are comparisons between events.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>With the obtained results, we can discuss and answer the four research questions posed in Introduction Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Answers to the Research Questions</ns0:p><ns0:p>&#8226; RQ1: Can a real-time data stream processing architecture be implemented with the WSO2 ESB together with the Siddhi CEP engine and be integrated with ML techniques?</ns0:p><ns0:p>-We can definitely affirm that it is possible to implement a streaming data processing architecture using WSO2 ESB together with the Siddhi CEP engine and integrate them with ML techniques. In fact, we have implemented an architecture equivalent to the one presented in <ns0:ref type='bibr' target='#b30'>Rold&#225;n et al. (2020)</ns0:ref>, but using the said WSO2 technologies. We have also tested its functionality in a realistic environment consisting of security attacks in the field of the IoT.</ns0:p><ns0:p>&#8226; RQ2: Can a streaming data processing architecture based on the integration of ML techniques with the WSO2 CEP engine and ESB achieve or improve upon the performance of the previously proposed architecture <ns0:ref type='bibr' target='#b30'>(Rold&#225;n et al., 2020)</ns0:ref>?</ns0:p><ns0:p>-We can undoubtedly say that WSO2 CEP and ESB can achieve a performance similar to that achieved by integrating Esper CEP and Mule ESB in an equivalent streaming data processing architecture for detecting security attacks in the IoT. 
We have carried out a series of tests with a number of typical attacks on communication protocols in the IoT environment, and we have seen that both architectures achieve an appropriate and similar performance, although we did detect that each of them can achieve a better performance with certain types of patterns, which allows us to answer our next research question.</ns0:p><ns0:p>&#8226; RQ3: What kind of event patterns are processed faster with WSO2/Siddhi and which ones with Mule/Esper, and which of the two architectures is more suitable for supporting high-stress situations?</ns0:p><ns0:p>-On the one hand, we have observed that the WSO2-based architecture is faster at processing simple events when there are no pattern comparisons between different event types. This is because WSO2 has a higher performance when processing simple events. On the other hand, the Mule-based architecture has shown to be faster when comparing different types of events. The behavior of the architectures under stress will depend on the type of pattern conditions. If we are able to avoid patterns with comparisons between events of different types, WSO2 will be faster in a high-stress situation, since its ESB has a higher performance when processing simple events. Otherwise, Mule will be faster.</ns0:p><ns0:p>&#8226; RQ4: Which of these architecture implementations is the best to be deployed in an IoT security attack detection environment?</ns0:p><ns0:p>-Both implementations are effective, but in this context we advocate the choice of WSO2 because it allows us to integrate the different types of events in a general unified event. This dramatically increases the performance of WSO2. Both ESBs can be deployed in an IoT environment, but WSO2 is faster when using this general event (as we can see from the stress experiments). Despite this, Mule can be deployed successfully too, but its performance is worse than that of WSO2.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>This paper has presented and compared two implementations of an intelligent SOA 2.0-based architecture integrated with CEP technology and ML techniques that are designed to detect security attacks against IoT systems. Each of the implementations incorporates a CEP engine and an ESB from prestigious vendors:</ns0:p><ns0:p>Esper CEP and Mule ESB on the one hand, and WSO2 ESB and Siddhi CEP on the other.</ns0:p><ns0:p>The validation process, through which the behavior of both architectures was evaluated under the same conditions in a realistic scenario of security attacks on IoT protocols, allowed us to draw the following relevant conclusions:</ns0:p><ns0:p>&#8226; Both implementations of the architecture allow us to detect well-known attacks in the field of IoT protocols, with the corresponding event patterns of these attacks.</ns0:p></ns0:div> <ns0:div><ns0:head>24/27</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Thanks to the use of ML techniques, the architecture can detect novel attacks that have not previously been defined through specific event patterns.</ns0:p><ns0:p>&#8226; Our architecture is able to work as a pure rule-based IDS with patterns defined by an expert, as well as allowing the addition of patterns for detecting non-modeled attacks in order to act as an anomaly detection architecture.</ns0:p><ns0:p>&#8226; Both architecture implementations present a suitable degree of efficiency for the field of security attacks in the IoT, but each one has its own advantages and drawbacks.</ns0:p><ns0:p>&#8226; The Mule-based architecture is faster when the architecture makes use of 2 message broker topics to compare the values of their features.</ns0:p><ns0:p>&#8226; The WSO2-based architecture is faster when there is a single topic and the system has a heavy workload.</ns0:p><ns0:p>&#8226; To mitigate the performance degradation, suffered by the system under heavy workloads, the operations between the topics can be modified by joining the prediction and network packet data in a general topic, thus mitigating this problem when comparing 2 topics in the WSO2-based architecture.</ns0:p><ns0:p>&#8226; In the Mule-based architecture it is more difficult to overcome this problem because our experiments have shown that the performance of Mule does not improve when there is a single type of topic.</ns0:p><ns0:p>Although our work achieved the proposed objectives, there are certain limitations in specific contexts.</ns0:p><ns0:p>One is that although, the architecture makes it possible to define a threshold automatically, it is still necessary to perform a feature selection process. Another is that despite the fact that the architecture is capable of defining a threshold for one or more features, it is not able to fully generate the pattern.</ns0:p><ns0:p>As future work, we plan to test our architecture in a different network to validate our proposal with other protocols and conditions. We would like to point out that the performance of our proposal is subject to a correct ML process (data extraction, data preprocessing, algorithm selection, etc.). It would also be </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>adopt a different approach in which they do not conduct performance experiments directly, but experts enumerate the features of different ESBs. These features are grouped into 3 dimensions: message processing, hotspot detection and fairness execution. Additionally, Freire et al.'s work defines 2 types of features: subjective and objective. The authors assign values for each feature, 6/27 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
Generic architecture to detect attacks on IoT devices.</ns0:figDesc><ns0:graphic coords='9,156.70,63.77,383.63,454.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 depicts our architecture for IoT security that is implemented with the WSO2 ESB. Unlike the implementation of the Mule-based architecture, which was integrated with the external Esper CEP engine, the implementation of the WSO2-based architecture does not require integration with an extenal CEP engine since WSO2 provides the Siddhi CEP engine by default.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Screenshot of the implemented Mule-based architecture.</ns0:figDesc><ns0:graphic coords='11,201.70,63.78,293.65,456.01' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Screenshot of the implemented WSO2-based architecture.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.57,156.15' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .&#8226;</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. MQTT network and MQTT simulator</ns0:figDesc><ns0:graphic coords='13,224.45,393.61,248.16,236.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>i n s e r t i n t o F e a t u r e A n o m a l y s e l e c t a2 . i d a s i d , c u r r e n t t i m e s t a m p ( ) a s t i m e s t a m p , a1 .d e s t I p a s d e s t I p from p a t t e r n [ ( ( e v e r y a1 = N e t w o r k P a c k e t ( ( a1 . p r o t o c o l = 'MQTT' o r a1 . p r o t o c o l = 'TCP ' ) ) ) &#8722;&gt; a2 = N e t w o r k P r e d i c t i o n ( ( a2 . i d = a1 . i d and ( a2 . p a c k e t L e n g t h P r e d i c t &lt; ( a1 . p a c k e t L e n g t h &#8722; a2 . p a c k e t L e n g t h P r e d i c t S q u a r e d E r r o r ) o r a2 . p a c k e t L e n g t h P r e d i c t &gt; ( a1 . p a c k e t L e n g t h + a2 . p a c k e t L e n g t h P r e d i c t S q u a r e d E r r o r ) ) ) ) ) ] Listing 2. FeatureAnomaly pattern implemented in SiddhiQL. @info ( name =' F e a t u r e A n o m a l y ' ) from ( ( e v e r y a1 = N e t w o r k P a c k e t [ ( a1 . p r o t o c o l == 'MQTT' o r a1 . p r o t o c o l == 'TCP ' ) ] ) &#8722;&gt; a2 = N e t w o r k P r e d i c t i o n [ ( a2 . i d == a1 . i d and ( a2 . p a c k e t L e n g t h P r e d i c t &lt; ( a1 . p a c k e t L e n g t h &#8722; a2 . p a c k e t L e n g t h P r e d i c t S q u a r e d E r r o r ) o r a2 . p a c k e t L e n g t h P r e d i c t &gt; ( a1 . p a c k e t L e n g t h + a2 . p a c k e t L e n g t h P r e d i c t S q u a r e d E r r o r ) ) ) ] ) 13/27 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021) Manuscript to be reviewed Computer Science s e l e c t a2 . i d a s i d , t i m e : t i m e s t a m p I n M i l l i s e c o n d s ( ) a s t i m e s t a m p , a1 . d e s t I p a s d e s t I p i n s e r t i n t o F e a t u r e A n o m a l y ; Listing 3. ProtocolAnomaly pattern implemented in Esper EPL. @Name( ' P r o t o c o l A n o m a l y ' ) @Tag ( name =' domainName ' , v a l u e =' I o T S e c u r i t y A t t a c k s ' ) i n s e r t i n t o P r o t o c o l A n o m a l y s e l e c t a1 . 
i d a s i d , c u r r e n t t i m e s t a m p ( ) a s t i m e s t a m p , a1 . d e s t I p a s d e s t I p from p a t t e r n [ ( e v e r y a1 = N e t w o r k P a c k e t ( ( a1 . p r o t o c o l ! = 'TCP ' and a1 . p r o t o c o l ! = 'UDP' and a1 . p r o t o c o l ! = 'MQTT' and a1 . p r o t o c o l ! = 'ARP' and a1 . p r o t o c o l ! = 'DHCP' and a1 . p r o t o c o l ! = 'MDNS' and a1 . p r o t o c o l ! = ' NTP' and a1 . p r o t o c o l ! = 'ICMP ' and a1 . p r o t o c o l ! = ' ICMPv6 ' and a1 . p r o t o c o l ! = 'DNS' and a1 . p r o t o c o l ! = ' IGMPv3 ' ) ) ) ] Listing 4. ProtocolAnomaly pattern implemented in SiddhiQL. @info ( name =' P r o t o c o l A n o m a l y ' ) from ( e v e r y a1 = N e t w o r k P a c k e t [ ( a1 . p r o t o c o l ! = 'TCP ' and a1 . p r o t o c o l ! = 'UDP' and a1 . p r o t o c o l ! = 'MQTT' and a1 . p r o t o c o l ! = 'ARP' and a1 . p r o t o c o l ! = 'DHCP' and a1 . p r o t o c o l ! = 'MDNS' and a1 . p r o t o c o l ! = 'NTP' and a1 . p r o t o c o l ! = 'ICMP ' and a1 . p r o t o c o l ! = ' ICMPv6 ' and a1 . p r o t o c o l ! = 'DNS' and a1 . p r o t o c o l ! = ' IGMPv3 ' ) ] ) s e l e c t a1 . i d a s i d , t i m e : t i m e s t a m p I n M i l l i s e c o n d s ( ) a s t i m e s t a m p , a1 . d e s t I p a s d e s t I p i n s e r t i n t o P r o t o c o l A n o m a l y ;</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. TCP SYN attack comparison</ns0:figDesc><ns0:graphic coords='17,163.08,415.14,370.87,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. XMAS scan attack comparison</ns0:figDesc><ns0:graphic coords='18,170.86,466.52,355.32,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Telnet-Mirai attack comparison</ns0:figDesc><ns0:graphic coords='19,167.08,125.76,362.88,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. DoS big message attack comparison</ns0:figDesc><ns0:graphic coords='19,172.15,511.90,352.73,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. Discwave attack comparison (using time windows)</ns0:figDesc><ns0:graphic coords='20,163.41,188.51,370.22,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 12 .Figure 13 .</ns0:head><ns0:label>1213</ns0:label><ns0:figDesc>Figure 12. Subfuzzing attack comparison (using time windows)</ns0:figDesc><ns0:graphic coords='21,157.79,101.00,381.46,194.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. UDP scan without delay comparison</ns0:figDesc><ns0:graphic coords='22,209.40,450.85,278.25,137.14' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. XMAS Scan without delay comparison</ns0:figDesc><ns0:graphic coords='23,208.86,63.78,279.31,137.14' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head /><ns0:label /><ns0:figDesc>of interest to implement the architecture with additional ESBs and CEP engines to extend the comparison with the products of other vendors. Another interesting line of future work would be to automate the process of feature selection, as proposed in Wajahat et al. (2020), thus providing useful information for the selection of the machine learning model with different underlying structures in network traffic. These modifications should solve the current limitations of the architecture mentioned above.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Feature importanceBy using pre-processed features, we can select the model. In this case, our data features fit a linear distribution very well. Therefore, we chose a linear regression to predict these key features. This model can change depending on the whole IoT network.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature name</ns0:cell><ns0:cell>Feature importance</ns0:cell></ns0:row><ns0:row><ns0:cell>Destination port (1883)</ns0:cell><ns0:cell>0.259</ns0:cell></ns0:row><ns0:row><ns0:cell>Calculated window size</ns0:cell><ns0:cell>0.240</ns0:cell></ns0:row><ns0:row><ns0:cell>Protocol (TCP)</ns0:cell><ns0:cell>0.122</ns0:cell></ns0:row><ns0:row><ns0:cell>Protocol (MQTT)</ns0:cell><ns0:cell>0.100</ns0:cell></ns0:row><ns0:row><ns0:cell>IP source (192.168.1.11)</ns0:cell><ns0:cell>0.092</ns0:cell></ns0:row><ns0:row><ns0:cell>Information (Publish message)</ns0:cell><ns0:cell>0.032</ns0:cell></ns0:row><ns0:row><ns0:cell>Source port (59662)</ns0:cell><ns0:cell>0.030</ns0:cell></ns0:row><ns0:row><ns0:cell>IP source (192.168.1.7)</ns0:cell><ns0:cell>0.029</ns0:cell></ns0:row><ns0:row><ns0:cell>Source port (62463)</ns0:cell><ns0:cell>0.027</ns0:cell></ns0:row><ns0:row><ns0:cell>Source port (52588)</ns0:cell><ns0:cell>0.016</ns0:cell></ns0:row><ns0:row><ns0:cell>Packet length</ns0:cell><ns0:cell>0.005</ns0:cell></ns0:row></ns0:table><ns0:note>14/27 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:note place='foot' n='27'>/27 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61593:1:1:NEW 11 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"List of responses to the reviewers’ comments October 11, 2021 • Paper reference: CS-2021:05:61593:0:2:REVIEW • Paper title: Detecting Security Attacks in Cyber-Physical Systems: A Comparison of Mule and WSO2 Intelligent IoT Architectures • Journal: PeerJ Computer Science We would like to thank the reviewers for their fruitful comments, which have been taken into account in this new version of the paper. Reviewer 1 Q1. There is no link of code in the paper. Add github or any other relevant link so that proposed work may be tested. Answer. We have added a repository where the simulator is included, along with the necessary instructions to deploy WSO2 and Esper Mule. This repository is mentioned and referenced in the “Results” section (line 591). It can be accessed through this url:https://github.com/josE4roldan/Detecting-sec urity-attacks-in-cyber-physical-systems-a-comparison-of-mule-and -WSO2-intelligent-IoT Q2. Highlight all assumptions and limitations of your work. Answer. The assumptions have been included in the introduction (lines 110116), as follows: • CEP works successfully in IoT environments. There are different works in wich CEP architectures are sucessfully deployed in IoT environments (Roldán et al., 2020; Corral-Plaza et al., 2020). • CEP engines and ESBs from different vendors can be integrated with our architecture to detect cybersecurity threats in real time. This is because this architecture has already been deployed with Mule (Roldán et al., 2020) and there are works describing how to deploy WSO2 in an IoT environment (Fremantle, 2015). 1 Moreover, the limitations are listed in the conclusions (lines 781-784), as follows: Although our work achieved the proposed objectives, there are certain limitations in specific contexts. One is that although the architecture makes it possible to define a threshold automatically, it is still necessary to perform a feature selection process. Another is that despite the fact that the architecture is capable of defining a threshold for one or more features, it is not able to fully generate the pattern. Note that, in the case of the limitations, emphasis is placed on the possibility of alleviating these limitations in future work. Q3. Mention all figures properly in text. Answer. Thanks to the reviewer’s comment we have detected the flaw in the formatting. Figures are now cited with the abbreviation “Fig.”, as recommended by PeerJ. Q4. Mention time complexity of proposed pipeline. Answer. The subsection entitled “Estimated computational complexity” (line 595) has been added to the “Results” section. In this subsection, we have mentioned the computational complexity for both the training and event generation stages. It should be noted that these metrics may change if another model or another preprocessing process is chosen. Moreover, we cannot calculate the computational complexity of the CEP engines because we do not know the internal operations they perform. We explain this in the manuscript. Reviewer 2 Q5. In the introduction in lines 36,37, I am afraid I have to disagree with the authors about the lack of discussion in the IoT securities space. In my opinion, IoT securities have been discussed and published extensively. The authors should provide more evidence to convince the reader about this statement and how this research contributes novelty to the IoT securities domain. Answer. Thanks to the reviewer’s comment, we have clarified this expression lines(40-42). 
Indeed, there is quite a lot of literature on cybersecurity in the IoT domain, but it is clear that further research is needed and especially on insecure implementations made on this paradigm. We have explained this statement with real cases. Q6. I would like to suggest the authors should also provide a literature review in order to support the chosen research method. Answer. This is an interesting comment. In response, we have decided to restructure the paper, and now the “Related work” section is immediately after the “Background” section (line 273). This has allowed us to better justify the research method, taking as a reference other works in which this type of comparisons have been applied. 2 Q7. In line 343, the author should justify the use of the MQTT network simulator. How is this simulator a better choice compared to other network simulators? Answer. We have added a more extensive justification for using the MQTT simulator (lines 423-445). In particular, we detail the benefits of the architectures that are usually used in MQTT networks, as well as the possibility of creating reproducible experiments with packets extracted from an MQTT network. Q8. The authors should provide more initial relevant literature to support this research. For example, in lines 155,156 about the TCP and UDP flood attacks, the authors should provide more evidence to support the statement. Answer. Thanks to the reviewer’s comment we have improved the bibliography, and updated some references throughout the text including, among others, the one mentioned in the review (section “Background” and subsection “Security in the Internet of Things”). The added references are as follows: (Gutnikov et al., 2021), (Kaspersky, 2021) (line 51), (Corral-Plaza et al., 2020) (line 112), (Fremantle, 2015) (line 115), (Montgomery et al., 2021)[updated reference] (line 102), (Roldán-Gómez et al.(2021)) [Github repository] (line 591). Q9. The methods described do not provide sufficient information to be reproducible by another investigator. Unless the authors provide the complete programmable codes, then other researchers could replicate these findings. Answer. We have added a repository where the simulator is included, along with the necessary instructions to deploy WSO2 and Esper Mule. This repository is mentioned and referenced in the “Results” section (line 594). It can be accessed through this url:https://github.com/josE4roldan/Detecting-sec urity-attacks-in-cyber-physical-systems-a-comparison-of-mule-and -WSO2-intelligent-IoT Q10. The research question is sufficiently well defined, relevant & meaningful. The research fills an identified knowledge gap. The submission should clearly define the research question, which must be relevant and meaningful. The knowledge gap being investigated should be identified, and statements about how the study contributes to filling that gap. Answer. According to the reviewer’s comment: “the research question is sufficiently well defined, relevant & meaningful. The research fills an identified knowledge gap”. We would like to point out that the manuscript defines 4 research questions in the introduction section (lines 76-94) and then these are discussed in the ’Answers to the Research Questions’ subsection (line 717). 
The knowledge gap being investigated and aim are also identified in the introduction section: (1) to demonstrate that our intelligent architecture, which integrates CEP and ML in order to detect IoT security attacks in real time, can be implemented with different integration platforms such as Mule and WSO2, different CEP engines such as Esper and Siddhi and different ML algorithms such as linear regression; and (2) to provide a comprehensive analysis of the performance and benefits of the architecture depending on the different vendor technologies used for its implementation; in particular, a comparison of the architecture implementation with Mule and Esper versus WSO2 and Siddhi is included. In this way, we provide a comparative analysis that can be very useful for the developer when choosing between one technology and another for the implementation of the architecture, depending on the requirements of the specific application domain and case study. The ’Answers to the Research Questions’ subsection and the conclusions section, which has been rewritten, clarify how our study fills such gaps. In addition, this subsection is added to the ’Discussion’ section, at the request of the publisher, in order to maintain a standard structure. Q11. I would like to comment on lines 93-102; in conclusion, I am not convinced whether this aim has already been achieved. Should this aim be successfully achieved, how would it substantially benefit humanity? Answer. In critical systems, quick threat detection can be crucial to avoid major impacts. This paper demonstrates that our architecture can be deployed on two different CEP engines; each CEP engine performs better in certain situations. This paper not only provides an architecture to automatically detect attacks in IoT environments, but also allows us to choose the best platform for each particular case which could require immediate detection. Reviewer 3 Q12. The abstract does not align with the paper's actual findings and conclusion as well. Answer. The most relevant findings and conclusions have been included in the abstract (lines 30-33). Q13. Figures 2, 3 and a few more are not clearly readable. Answer. Figures have been enlarged to make them more legible. Q14. Too many figures, while elaboration is not extensive, authors may elaborate properly. Answer. The feature selection diagram has been removed because it was redundant, and the simulator and network figures have been combined (see new Figure 4, line 461). Unfortunately, due to the nature of the figures (they are very wide and narrowing them means removing information from the axes), we cannot group the diagrams in a better way. Q15. Conclusion requires revision, it can be concise with the findings only, currently, it is too lengthy. Answer. The conclusions have been schematized to make them shorter and clearer (lines 754-780). Q16. Proofread is highly recommended. Answer. The manuscript has been completely revised by a native English speaker. Q17. May elaborate more on their proposed approach. Answer. We have added a new subsection entitled “Proposed approach” within the “Comparing architecture performance and stress” section. In this subsection, we explain in a schematic way all the steps to be followed to carry out this work. Moreover (lines 400-422), we have restructured the sections: the “Related work” section is now immediately after the “Background” section (line 271), so that the methodology is introduced and justified in a better way. Q18. Provided references are better enough.
However, a few of them are missing information, and can be strengthen further. Answer. The references have been completed, DOI and URL have been added to the references, and new references have been added. The added references are as follows: (Gutnikov et al., 2021), (Kaspersky, 2021) (line 51), (CorralPlaza et al., 2020) (line 112), (Fremantle, 2015) (line 115), (Montgomery et al., 2021)[updated reference] (line 102), (Roldán-Gómez et al.(2021)) [Github repository] (line 591). 5 "
Here is a paper. Please give your review comments after reading it.
291
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A large range of applications have been identified based upon the communication of underground sensors deeply buried in the soil. The classical electromagnetic wave (EM) approach, which works well for terrestrial communication in air medium, when applied for this underground communication, suffers from significant challenges attributing to signal absorption by rocks, soil, or water contents, highly varying channel condition caused by soil characteristics, and requirement of big antennas. As a strong alternative of EM, various magnetic induction (MI) techniques have been introduced. These techniques basically depend upon the magnetic induction between two coupled coils associated with transceiver sensor nodes. This paper elaborates on three basic MI communication mechanisms i.e. direct MI transmission, MI waveguide transmission, and 3D coil MI communication with detailed discussion of their working mechanism, advantages and limitations. The comparative analysis of these MI techniques with each other as well as with EM wave method will facilitate the users in choosing the best method to offer enhanced transmission range (upto 250m), reduced path loss(&lt;100dB), channel reliability, working bandwidth(1-2 KHz), &amp; omni-directional coverage to realize the promising MI based wireless underground sensor network (WUSN) applications.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Based upon the nature of underlying medium, wireless sensor networks may be categorized as air-based terrestrial wireless sensor networks (WSNs), soil-based underground WSNs and water-based underwater WSNs <ns0:ref type='bibr' target='#b1'>(Akyildiz et al., 2002)</ns0:ref>. The wireless underground sensor networks (WUSNs) <ns0:ref type='bibr' target='#b31'>(Sardar et al., 2019)</ns0:ref> are comprised of sensor devices, which operate under the earth surface and interact with each other wirelessly. These sensing nodes may be either deployed within closed underground structures like underground roads/subways, mines <ns0:ref type='bibr' target='#b10'>(Forooshani et al., 2013)</ns0:ref> or tunnels <ns0:ref type='bibr' target='#b9'>(Dudley et al., 2007)</ns0:ref> or these may be buried completely inside the ground. In the first scenario, inspite of sensor networks being located underground, signals are communicated through the air in void space existing below earth surface which helps in improving the security in underground mines ensuring comfortable communications for the passengers and drivers in road/subway tunnels as well as in securing these structures from attacks by PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:1:0:NEW 21 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science mean of consistent monitoring <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. 
Similarly in second scenario, sensing nodes buried inside soil interact with each other through soil as propagation medium and this kind of communication of WUSNs offers a huge range of naive applications including smart irrigation <ns0:ref type='bibr' target='#b8'>(Dong et al., 2013)</ns0:ref>, precision agriculture <ns0:ref type='bibr' target='#b55'>(Yu et al., 2017)</ns0:ref>, border patrolling, soil monitoring, predicting landslides <ns0:ref type='bibr' target='#b4'>(Aleotti and Chowdhury, 1999)</ns0:ref> or earth quakes or volcano eruptions <ns0:ref type='bibr' target='#b52'>(Werner-Allen et al., 2006)</ns0:ref> and many more <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006)</ns0:ref>.</ns0:p><ns0:p>The researchers observed that using the terrestrial motes in underground communication could not yield satisfactory and reliable communication results due to harsh environment of underground media <ns0:ref type='bibr' target='#b37'>(Stuntebeck et al., 2006)</ns0:ref>. They further worked a lot on the analysis of characteristics of communication channel of WUSNs. The underground signal communication using EM wave propagation has been analyzed using channel characterization models in detail in <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Li et al., 2007)</ns0:ref>, <ns0:ref type='bibr' target='#b51'>(Vuran and Silva, 2010)</ns0:ref>, <ns0:ref type='bibr' target='#b23'>(Peplinski et al., 1995)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>(Silva and Vuran, 2009)</ns0:ref>, <ns0:ref type='bibr' target='#b50'>(Vuran and Akyildiz, 2010)</ns0:ref>. Here it was evaluated that how path loss and bit error rate are affected by environmental factors like water particles in soil (humidity) as well as network related factors including operating frequency and burial depth of sensing nodes. The researchers in <ns0:ref type='bibr' target='#b36'>(Silva et al., 2015)</ns0:ref> investigated through experiments the communication link characteristics of different types of WUSN channels such as underground to underground (UG-UG) channel, underground to above-ground (UG-AG) channel and above-ground to underground (AG-UG) channel.</ns0:p><ns0:p>However, gradually the researchers found that EM wave communication mechanism suffers from huge problems. Firstly, the high signal attenuation or path loss is caused by absorption by soil, rock elements and underground water <ns0:ref type='bibr' target='#b10'>(Forooshani et al., 2013)</ns0:ref>. Secondly, due to the soil properties such as soil type <ns0:ref type='bibr' target='#b28'>(Salam and Vuran, 2016)</ns0:ref>, volumetric water content etc changing very randomly with location and over time, performance of sensor networks remains unpredictable <ns0:ref type='bibr' target='#b49'>(Trang et al., 2018)</ns0:ref>. Third problem is of using large sized antennas, which are deployed so that practical communication range may be achieved with low operating frequencies <ns0:ref type='bibr' target='#b27'>(Salam and Raza, 2020)</ns0:ref>. The impact of antenna size and orientation as well as soil moisture on underground communication was also realized experimentally by developing outdoor WUSN tested in <ns0:ref type='bibr' target='#b35'>(Silva and Vuran, 2010)</ns0:ref>. 
All these issues made EM wave approach unsuitable for deploying WUSNs despite the promising application domains <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p><ns0:p>Last decade has witnessed a new approach called magnetic induction (MI) as an effective alternative of EM communication for harsh environment like underground <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref> or underwater <ns0:ref type='bibr' target='#b3'>(Akyildiz et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b21'>Muzzammil et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b6'>Debnath, 2021)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Domingo, 2012)</ns0:ref>. The MI channel capacity from a pair of transceiver coils has been elaborated in <ns0:ref type='bibr' target='#b15'>(Kisseleff et al., 2014b)</ns0:ref>.The algorithms explaining the efficient mechanism of deploying the magnetic coils for WUSNs have been worked on in <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. The authors in <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref> have detailed out the analysis of path loss and bandwidth using MI communication approach in soil as underground medium.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b20'>(Masihpour et al., 2013)</ns0:ref>, multi-hop relay techniques are proposed to extend communication range in near-field MI communication systems. Further, gradual analysis of communication channel model has led to evolution of MI waveguide mechanism for minimizing the path loss observed in conventional EM wave approach or original MI mechanism <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>. The system performance in term of path loss caused by EM wave mechanism, ordinary MI mechanism and MI waveguide mechanism have been analyzed, quantified and compared with each other <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref>. Further 3D MI Coils were found to be very beneficial to be used for omnidirectional coverage <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. Also, the researchers in their study <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref> provided the multimode model for characterizing the wireless channels for WSNs used in underground structures like mines for both EM and MI techniques.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011)</ns0:ref>, detailed architecture and operating framework of monitoring underground pipelines for detecting real time leakage effectively has been discussed using MI-based WSN, called as MISE-PIPE.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a</ns0:ref>) have further suggested two algorithms i.e. MST algorithm and triangle centroid (TC) for effective deployment of MI waveguides to connect the underground sensors in WUSN environment. Further cross layered protocol architecture for MI-WUSNs has been explained in <ns0:ref type='bibr' target='#b18'>(Lin et al., 2015)</ns0:ref>. The recent advancements made in related areas of MI-WUSNs, which might have impact on implementation of MI-WUSNs have been discussed in <ns0:ref type='bibr' target='#b16'>(Kisseleff et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The novelty of this paper is comprehensive elaboration of all MI techniques at one place and their comparative analysis with each other as well with conventional EM wave approach based on thorough literature review. 
The various MI techniques exhibit performance enhancements with respect to different parameters. The intended users of this review are the researchers working on achieving quality of service (QoS) parameters for WUSNs using the magnetic induction approach instead of the EM wave methodology and thereby realizing the potential MI-WUSN applications. The researchers working in the domain of underwater wireless sensor networks may also use this analysis. This comparative analysis will facilitate selection of the appropriate technique with consideration of the required parameter as per the application and thereby help in finding an optimized solution for application-specific WUSN implementation. The paper also discusses the challenges faced by MI based WUSN solutions as well as the future scope of work based upon further exploration of MI techniques.</ns0:p><ns0:p>The rest of the paper is as follows. The section 'Survey methodology' explains the methodology adopted for the literature review. The section 'Application domains of MI based WUSNs' elaborates various MI-WUSN based application domains. The next section discusses the edge of MI over conventional EM wave communication. The section 'Various magnetic induction techniques used in WUSNs' details the architecture of all MI communication techniques, i.e. direct MI communication, MI waveguide communication and 3D MI coil communication. The following section then presents the comparative analysis of these MI techniques. Future scope of work and conclusion are given in the last two sections.</ns0:p></ns0:div> <ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head><ns0:p>The purpose of this paper is to provide a detailed elaboration of all MI based techniques followed by their comparison with each other as well as with the classical EM wave approach for WUSNs. We started with a specific set of search terms used against the meta-search engine 'Google Scholar' to search across multiple databases. These search words were 'WUSN', 'wireless underground sensor networks', 'electromagnetic wave', 'magnetic induction' and 'MI waveguide'. The next step of this systematic literature review was to gather all retrieved documents from the year 2006 to the year 2020, where the process of screening included downloading the papers published in various journals and reading their titles and abstracts.</ns0:p><ns0:p>From these, we identified all MI techniques used in WUSNs. For each of these MI techniques, further papers were searched and retrieved, based on which the advantages and limitations of these MI techniques have been listed out and further compared. The literature review included the study of more than 50 papers.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATION DOMAINS OF MI BASED WUSNS</ns0:head><ns0:p>Unlike EM wave based WUSNs used for the cases of low burial depth utilizing UG-AG and AG-UG channels, MI-based WUSNs are especially helpful for applications working on pure UG-UG channels, where underground sensor devices are deeply deployed in the soil or no above ground devices are there. Some such application areas have been identified by the researchers as given here: a) Underground leakage monitoring applications MI based WUSNs may be applied for monitoring and detecting leakage in underground infrastructures like water, gas or oil pipelines. This helps in assuring that there are no leakages in underground fuel tanks and also in determining the actual amount of oil currently available in a fuel tank so that overflowing may be avoided. MI-WUSNs also play an important role in monitoring leakage in underground septic sewage tanks. They are used with the help of sensors deployed along the path of pipelines so as to localize and repair the leakage of gas or water from gas pipelines or water pipelines respectively. (See Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>) b) Disaster prediction applications MI-WUSNs are also used in assessing and predicting disastrous situations like floods, volcanic eruptions, earthquakes, tsunamis, oil spills or landslides. These disastrous conditions arise owing to alterations in materials like soil and water, and these changes are monitored using underground sensor devices. 
MI-WUSNs prove far better as compared to current methods of landslide prediction, which are costlier as well as more time consuming to get deployed <ns0:ref type='bibr' target='#b33'>(Sheth et al., 2005)</ns0:ref>.</ns0:p><ns0:p>Similarly MI-WUSNs measure and monitor the imbalances of glacier movements and volcanic eruption movements as well. All these things help in fine predictions of forthcoming natural disasters.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Agricultural monitoring applications</ns0:head><ns0:p>MI based WUSNs can be well utilized for monitoring applications related to agriculture, mines, tunnels, pollution and many more. The soil sensors buried inside the ground as part of WUSN may be used for <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p><ns0:p>observing the soil properties like soil makeup <ns0:ref type='bibr' target='#b24'>(Raut and Ghare, 2020)</ns0:ref>, water content in soil (humidity), density of soil etc facilitating smart and efficient irrigation system <ns0:ref type='bibr' target='#b30'>(Sambo et al., 2020)</ns0:ref>. MI-WUSNs are also beneficial for continual monitoring of methane or carbon monoxide gases inside mines which can explode and cause a big fire, if not monitored properly. These sensors are also utilized for habitat (marine life, fish farms) monitoring, exploration (natural resource) monitoring or observance of underwater pollution also.</ns0:p></ns0:div> <ns0:div><ns0:head>d) Underground structural monitoring applications</ns0:head><ns0:p>MI based WUSNs are also used for monitoring the internal structures of dams, buildings to know the factors which influence the durability of those structures <ns0:ref type='bibr' target='#b22'>(Park et al., 2005)</ns0:ref>. It is ensured by tracking stress and strain present in the material; used for their construction like water, sand, concrete etc.</ns0:p></ns0:div> <ns0:div><ns0:head>e) Sports field monitoring applications</ns0:head><ns0:p>One important application domain making use of MI-WUSNs is sports field monitoring where sensor nodes buried inside are used for monitoring the condition of soil of different types of playgrounds or sport fields like golf course, soccer fields, baseball fields or grass tennis courts. The poor turf conditions do cause uncomfortable playing experience for the players, making it necessary to monitor and maintain the health of grass.</ns0:p></ns0:div> <ns0:div><ns0:head>f) Security related applications</ns0:head><ns0:p>MI-WUSNs are better suited for security related applications because these do have higher degree of concealment in comparison with terrestrial sensor devices. It is so because their presence is hidden and the chances of determining their presence and deactivating them are very less. By deploying pressure sensors under MI-WUSN along the border area, the concerned authorities can be alerted as soon as some illegal intruder tries to cross that region. The applications like surveillance of submarines or mines <ns0:ref type='bibr' target='#b26'>(Rolader et al., 2004)</ns0:ref> are also very beneficial using MI-WUSNs. (See Fig. 
<ns0:ref type='figure' target='#fig_3'>2</ns0:ref>)</ns0:p></ns0:div> <ns0:div><ns0:head>MAGNETIC INDUCTION AS BETTER ALTERNATIVE OF EM WAVE COMMUNICATION</ns0:head><ns0:p>The conventional signal communication techniques based upon EM wave propagation are not very encouraging <ns0:ref type='bibr' target='#b12'>(Huang et al., 2020)</ns0:ref> for most of the underground communication applications due to the following reasons:</ns0:p></ns0:div> <ns0:div><ns0:head>a) High signal attenuation</ns0:head><ns0:p>The path loss for underground WSNs is highly dependent on a big number of parameters owing to the large variety of its underlying media, such as the presence of soil, rock or some fluid under the earth surface, various types of soil makeup like sand, clay or silt, volumetric water content or humidity in soil, and soil density. Due to all these parameters, signal attenuation is very high <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b) Rapidly changing channel conditions</ns0:head><ns0:p>All the soil characteristics mentioned above vary very rapidly and unpredictably with location (like sandy soil in a desert area) or time (like more water content in soil during rainfall), due to which the communication channel becomes very unpredictable and unreliable. Consequently, the bit error rate (BER) also changes randomly. Due to all this, both satisfactory connectivity and energy efficiency become infeasible to attain for WUSNs <ns0:ref type='bibr' target='#b49'>(Trang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Large antenna size</ns0:head><ns0:p>For achieving a communication range usable for practical applications, it becomes necessary to operate the transceivers at lower frequencies in the MHz range, for which it further becomes necessary to keep the size of the antenna large <ns0:ref type='bibr' target='#b27'>(Salam and Raza, 2020)</ns0:ref>. Consequently, this size of antenna becomes too large to be buried in soil <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p><ns0:p>To address the prominent problems of EM based WUSNs, including large-sized antennas or electrical dipoles <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>, a dynamically changing communication channel <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref> and high path loss, magnetic induction technology has been identified as the far better alternative by researchers.</ns0:p><ns0:p>Unlike communicating through waves, the MI based transmission mechanism makes use of the near field of a coil associated with the transceiver sensor node <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. 
Due to usage of small coil of wire for transmitting and receiving signal, no lower limit of coil size is required in MI communication <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>VARIOUS MAGNETIC INDUCTION TECHNIQUES USED IN WUSNS</ns0:head><ns0:p>Following subsections cover in detail various MI communication methodologies -direct or ordinary MI communication, MI waveguide communication and MI three-directional communication.</ns0:p></ns0:div> <ns0:div><ns0:head>Direct or Ordinary MI Communication Technique</ns0:head><ns0:p>The underlying architecture, working, advantages and limitations of direct or ordinary MI communication technique are detailed out as follows: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Architecture of Direct or Ordinary MI Communication approach</ns0:head><ns0:p>In ordinary or direct MI communication architecture <ns0:ref type='bibr' target='#b14'>(Kisseleff et al., 2014a)</ns0:ref>, the communications signals are transmitted or received with the usage of a coil of wire using the fundamental principle of mutual magnetic induction. Resembling with the analogy of transformers, transmitted coil transmits the signal in form of sinusoidal current, which further induces another similar sinusoidal current inside the receiver node and thus communication is accomplished <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>. The interaction amongst transmitter and receiver coupled coils is due to mutual induction (See Fig. <ns0:ref type='figure'>3</ns0:ref>).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. Basic structure of direct MI communication <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>For better understanding of MI transceivers nodes, functionality of primary and secondary coil in a transformer may be referred, which is also depicted in Fig. <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. Here M indicates the mutual inductance of transmitter and receiver coils, U s is the transmitter battery voltage L t and L r denote the self-inductance of transmitted and receiver coils. R t and R r are coil resistances and Z l denotes the load impedance of receiver node <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015;</ns0:ref><ns0:ref type='bibr'>Sun et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b40'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>Attributing to usage of near field communication, the MI coils operating at lower frequency bands can achieve more stable and reliable transmission channels in harsh WUSN medium like soil or oil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of direct or ordinary MI communication approach</ns0:head><ns0:p>The direct or ordinary MI transmission approach offers a promising solution to prominent problems of EM communication viz rapidly and unpredictably changing channel conditions and need of big sized antennas. 
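Before detailing these advantages, the transformer-style equivalent circuit of Fig. 4 can be made concrete with a short numerical sketch. The Python snippet below applies standard phasor analysis to two magnetically coupled loops; every component value (inductances, resistances, mutual inductance, load impedance and operating frequency) is an illustrative assumption and is not taken from the cited studies.

```python
import numpy as np

# Illustrative (assumed) values for the equivalent circuit of Fig. 4 -- not measured data.
f   = 10e6            # operating frequency in Hz (assumed)
w   = 2 * np.pi * f   # angular frequency
U_s = 1.0             # transmitter source voltage in volts (assumed)
L_t = L_r = 20e-6     # self-inductances of transmitter/receiver coils in henry (assumed)
R_t = R_r = 1.0       # coil resistances in ohm (assumed)
Z_l = 50.0            # receiver load impedance in ohm (assumed)
M   = 0.5e-9          # mutual inductance in henry (assumed; decays rapidly with distance)

# Phasor equations of two magnetically coupled loops:
#   U_s = (R_t + jwL_t) I_t - jwM I_r
#   0   = (R_r + jwL_r + Z_l) I_r - jwM I_t
Z_t = R_t + 1j * w * L_t
Z_r = R_r + 1j * w * L_r + Z_l
I_t = U_s / (Z_t + (w * M) ** 2 / Z_r)   # transmitter current incl. reflected impedance
I_r = 1j * w * M * I_t / Z_r             # current induced in the receiver loop

P_tx = 0.5 * np.real(U_s * np.conj(I_t))   # power drawn from the source
P_rx = 0.5 * np.abs(I_r) ** 2 * Z_l        # power delivered to the load
print(f"channel gain = {10 * np.log10(P_rx / P_tx):.1f} dB")
```

Lowering the coil resistance or strengthening the effective coupling, for example by resonating the coils with capacitors, raises the computed gain; this is exactly the lever exploited later by the MI waveguide. With this picture in mind, the specific advantages of the direct MI approach are as follows.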
Due to magnetic permeability of soil or rocks or water being same as that of air (4&#960; x 10 &#8722;7 H/m), MI channels are not impacted by the dynamic changes of soil with time or location <ns0:ref type='bibr' target='#b15'>(Kisseleff et al., 2014b;</ns0:ref><ns0:ref type='bibr' target='#b45'>Sun et al., 2011)</ns0:ref> and therefore this parameter has no impact on path loss for MI solutions.</ns0:p><ns0:p>Due to this, significant rise of path loss (upto 40 dB) has been observed in <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009</ns0:ref>) EM wave solution compared to no impact in MI solution, when water content in soil gets increased by 25%. Also instead of using huge sized antennas, small magnetic coils (of radius &lt; 0.1 meter) are used for MI communication, which makes WUSNs implementable in practical manner <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. For transmission distance of less than 1m, path loss of MI method has been observed as smaller than EM wave method <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of direct or ordinary MI communication approach</ns0:head><ns0:p>Inspite of benefits of direct MI communication like stable communication channel and small size of coils, some constraints put by direct MI communication put the hindrance in making it suitable alternative for underground sensor communication applications. a) Small communication range : Although factors affecting the signal attenuation due to varying soil properties don't apply in ordinary MI communication case, but, the magnetic field created by transmitter coil gets weakened by the time it reaches the receiver coil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. Due to this attenuation rate of near magnetic field, communication range attained is still too small (10m) for practical use for applications <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. b) High path loss for larger transmissions : As the path loss in ordinary MI communication case is inversely proportional to cubic value of transmission distance as compared to simple value of transmission distance(1/r 3 vs. 1/r <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011)</ns0:ref>. Due to this reason, MI communication is discouraged for terrestrial WSNs. For application related to underground sensor nodes, although the path loss for MI approach caused due to soil absorption is comparatively very low than EM communication, but total path loss may still be higher (greater than 100 dB) for larger transmission distances <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. Although for For operating frequency exceeding 900 MHz, path loss has been observed decreasing as compared to EM wave communication <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>. c) Performance affected due to intersection angles of two coils : The system performance of direct MI mechanism is maximum if the transceiver nodes are deployed face to face along same axis or in same line. But practically the intersection angle of two coils is non-zero and significantly affects the communication performance <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. 
d) Insufficient bandwidth : Also the bandwidth attained using ordinary MI communication approach is very very small (1-2 KHz), which is insufficient for practical applications <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>it is in MHz for EM wave solution. For enhancing the channel gain for direct MI communication, either the coil size may be increased or number of turns in coils may be increased. However it results in increased size of transceivers. One more parameter influencing the channel gain is unit length resistance of the loop.</ns0:p><ns0:p>Therefore lesser resistance wires and circuits may be used for ensuring lesser path loss with no increase in size. Further, it is possible to decrease the wire or circuit resistance by using better conductivity wires, better capacitors, better connectors and a customized printed circuit boards (PCB) <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI Waveguide Communication Technique</ns0:head><ns0:p>To overcome the constraints like limited communication range and high path loss observed by researchers in using direct MI mechanism, the development of an advanced technique called waveguide-based MI communication has been explored <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b39'>Sun and Akyildiz, 2010a)</ns0:ref>, which has proved efficient in significantly minimizing the transmission path loss, increasing the communication range and attaining a practical bandwidth of inter-sensor communication in various applications related to underground environment (see Fig. <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>) <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016)</ns0:ref>. Following subsections discuss in detail the architecture, benefits and limitations of MI waveguide technique. <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture and working pattern of MI waveguide technique</ns0:head><ns0:p>The MI waveguide architecture is comprised of series of multiple resonant relay coils <ns0:ref type='bibr' target='#b47'>(Tam et al., 2020)</ns0:ref> deployed between the underground transceiver sensor nodes, which are wirelessly connected with each other in MI based WUSNs <ns0:ref type='bibr'>(Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b17'>Li et al., 2007)</ns0:ref>. 
Although the MI waveguide structure was initially discussed in <ns0:ref type='bibr' target='#b47'>(Tam et al., 2020)</ns0:ref>, where the relay coils used to be very near to each other, resulting in strong coupling, for MI wireless communication the coupling between relay nodes is quite weak due to their not being very close to each other <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>.</ns0:p><ns0:p>A relay technique is also used for EM communication, but unlike the relay points using the EM wave technique, the MI relay point is simply a coil having no source of energy or processing device <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011</ns0:ref><ns0:ref type='bibr'>, 2012)</ns0:ref>. The MI waveguide and the regular waveguide are based on different principles and are usable for different types of applications <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>The basic principle working behind inter-sensor communication in the MI waveguide mechanism is the serial magnetic induction or coupling between relay nodes located next to each other <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. Although some relay nodes do exist between the transmitter and receiver devices, this MI waveguide communication pattern still comes under the category of wireless communication. Owing to this unique physical architecture of the MI waveguide, a high degree of freedom is available in deploying the nodes and utilizing them in the numerous harsh conditions of the underground medium <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. As the MI transceiver nodes and relay coils are magnetically coupled by virtue of their placement in a straight line, the relay coils get the induced current sequentially until it arrives at the receiver node. During this entire process, the signal is strengthened by the time it reaches the receiver <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In MI waveguide functionality, the sinusoidal current in the transmitter coil produces a magnetic field which varies over time and can cause another sinusoidal current in the first relay coil, and then this relay repeats the process for the next relay coil and so on (see Fig. <ns0:ref type='figure'>6</ns0:ref>) <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>. This allows the magnetic induction to be passively transmitted through all the intermediate coils until the MI receiver is reached, creating the MI waveguide <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>.</ns0:p><ns0:p>It has been established in <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref> that it is required to have six or more relay coils to attain an effective increment of signal strength with a transmission range of 2 m. Further, the relay density or number of relays can be lowered using coils with low unit resistance and high conductivity wires and circuits <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. Fig. 6 depicts the MI waveguide mechanism <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>, where n coils are present in total, out of which n-2 are relay coils placed equidistantly in a straight line between the transceiver nodes; r is the distance between the adjacent coils and d is the total communication range between transmitter and receiver node, so that r = d/(n-1); a is the radius of all coils, and C is the capacitor with which each relay and transceiver coil is loaded. The relay coils can be made resonant coils for effective transmission of magnetic signals by appropriate design of the capacitor value. 
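From the straight-line geometry just described (total range d, relay spacing r, so r = d/(n-1)), the number of passive relay coils needed for a target range follows directly. The helper below is only a back-of-the-envelope sketch, not an implementation from the cited works; the default 5 m spacing is one of the relay distances quoted in this paper and should be treated as an assumption for any other deployment.

```python
import math

def relay_coils_needed(total_range_m: float, spacing_m: float = 5.0) -> int:
    """Passive relay coils needed for an MI waveguide covering total_range_m.

    Uses the straight-line geometry described above, d = (n - 1) * r, where n counts
    the two transceiver coils plus the n - 2 relay coils between them. The default
    5 m spacing is one of the relay distances quoted in the text; any other value is
    a scenario-specific assumption.
    """
    n_coils = math.ceil(total_range_m / spacing_m) + 1   # total coils, transceivers included
    return max(n_coils - 2, 0)                           # relay coils only

# Illustrative check against the ranges quoted in the text: covering roughly 250 m
# at 5 m spacing requires about 49 passive relay coils between the two transceivers.
print(relay_coils_needed(250))   # -> 49
```

The spacing fixed in this way also determines the coupling between neighbouring coils, as discussed next.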
Between any two adjacent relay coils, mutual inductance does exist, whose value is based on their inter-coil distance <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of MI waveguide approach</ns0:head><ns0:p>By using MI waveguide approach for underground communication, following benefits have been observed: a) Stable MI Channel : As most of the underground transmission media i.e. soil is non-magnetic having almost equal permeability values, therefore rate of attenuation of magnetic fields created by coils remains almost unaffected, keeping the MI channel conditions constant and stable <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2013, 2012)</ns0:ref>.</ns0:p><ns0:p>b) Need of a smaller number of Sensors : In case of underground communication using MI waveguide, distance of 5m is kept between two relay nodes, which is even more than the maximum transmission range attained during EM wave communication <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref> Manuscript to be reviewed Computer Science fledged underground transceiver nodes <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b40'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>. c) Bandwidth : The bandwidth achieved for both ordinary MI communication as well as MI waveguide communication is in same range (1-2 KHz), which has been found sufficient for non-traditional media applications which need low data rate monitoring <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b14'>Kisseleff et al., 2014a)</ns0:ref>. Also, if operating frequency is 10MHz, 3-dB bandwidth of MI waveguide has been found to be in same range with direct MI communication i.e. (1-2 KHz) <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>. d) Path Loss Reduction : Due to placement of relay coils between transceiver sensor nodes, MI waveguide mechanism offers huge reduction of path loss, which is the most prominent advantage of this technique <ns0:ref type='bibr' target='#b9'>(Dudley et al., 2007)</ns0:ref>. More specifically, this is attributed to appropriate design of waveguide parameters <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>. The analysis in <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref> has shown that MI waveguide offers path loss smaller than 100dB for distance even more than 250 m, whereas for transmission range of even slightly more than 5 meters, same path loss of 100 dB is observed for EM wave system as well direct MI communication. By reducing the relay distance and resistance value of coil wire, path loss can be further lessened for MI waveguide <ns0:ref type='bibr' target='#b19'>(Liu et al., 2021)</ns0:ref>. e) Extension of Transmission Range : Using MI waveguide technique, the transmission range is significantly extended as compared to EM wave communication or ordinary MI communication <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2013)</ns0:ref>. 
It has also been established experimentally that if Mica2 sensor are used for underground communication in soil using EM wave mechanism, communication range achieved is less than 4 m, which increases to 10 m with similar device size and power for direct MI communication and further gets extended to more than 100m for MI waveguide communication <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b39'>Sun and Akyildiz, 2010a)</ns0:ref>. The distance between relay nodes is even more than the maximum communication range of EM wave transmission <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016)</ns0:ref>. Due to this extension in transmission range using MI waveguide mechanism, a fully connected sensor network may be attained without even deploying big number of sensors <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>.</ns0:p><ns0:p>It has also been observed by the researchers that with the increase in transmission distance, the transmission power gets decreased (upto 50% of power required for EM or direct MI) making it favorable for the energy-constrained non-traditional media <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016)</ns0:ref>. f) Better robustness and easier deployability and maintenance : Unlike a real waveguide, the MI waveguide is not a continual structure and therefore is comparatively more flexible and easy to be deployed at every 6 to 12 m <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref> and maintained <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. The relay coils in MI waveguide don't need extra power due to passive relaying of magnetic induction <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>. Hence unlike the sensor devices, these relay coils are easily deployable and once buried in soil, don't need much more regular maintenance <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. If due to some harsh conditions, some of the relay coils get damaged, even then the remaining relay coils ensure robustness of sensor network <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>g) Cost :</ns0:head><ns0:p>As the relay coils used in MI waveguide consume no energy and unit cost of these relay coils is very less <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>, therefore overall cost of underground sensor network gets reduced to large extent as compared to using expensive relay sensor devices in EM wave communication <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>. h) Prolonged system lifetime : MI waveguide technique also leads to prolonged system lifetime because the underground sensor devices equipped with MI transceiver nodes can be recharged by above ground charging devices using inductive charging mechanism <ns0:ref type='bibr'>(Sun and Akyildiz, 2013</ns0:ref><ns0:ref type='bibr' target='#b7'>, 2012</ns0:ref><ns0:ref type='bibr' target='#b40'>, 2010b)</ns0:ref>. 
In RF-challenged environments, it becomes very cumbersome to replace the device batteries; therefore this option of magnetic induction charging proves very beneficial <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of MI waveguide approach</ns0:head><ns0:p>Inspite of numerous advantages offered by MI waveguide technique for WUSNs, some constraints have been observed by the researchers, which highlight the essence of more optimization work on MI waveguide approach. Manuscript to be reviewed Computer Science a) Limited channel capacity and data rate : It has been found that due to lower ratio (order of 2.5) of mutual induction to self-induction (also termed as relative magnetic coupling strength) between adjacent relay coils working at the resonant frequency to attain low path loss in MI waveguide approach, the channel bandwidth becomes very limited (1-2 KHz) <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>. This decrease in channel bandwidth becomes more adverse, if communication distance increases to a particular threshold value, which resultantly leads to lower channel capacity as well as unsatisfactory data rate, inspite of large communication range <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>. b) Usable for limited application domains : Attributing to data rate and bandwidth limited channels, MI waveguide technique may be adopted for those applications only, where required data rate is low <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. For WUSN applications like rescuing the trapped ones in underground mines or border patrolling, big amount of data is required to be timely transmitted on MI channels, which requires better data rate and bandwidth. Hence much efforts are needed for enhancing the MI channel capacity for MI-WUSNs <ns0:ref type='bibr' target='#b39'>(Sun and Akyildiz, 2010a)</ns0:ref>. c) Reliability issue : As the multiple resonant MI relay coils constitute the foundation of communication success of MI waveguide approach, hence overall performance of such sensor networks are based not only on the transceiver sensor nodes, but also these relay coils. Therefore, the issue of reliability of such underground sensor networks is needed to be analysed in tough underground media <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. 
d) Complex deployment mechanism : Due to very rough and hostile underground communication medium, all transceiver sensor nodes are isolated until and unless connected by MI waveguide mechanism.</ns0:p><ns0:p>Therefore, deployment of large number of relay coils in MI-WUSNs costs a big amount of labor and therefore needs very thoughtful and complex strategies <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref> targeting at the objective of building a connected robust wireless sensor network with minimal possible relay coils <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>.</ns0:p><ns0:p>It was established by the researchers that for underground pipelines made up of metallic material, no or very few relay coils are required due to metal pipe itself working as magnetic core of MI waveguide.</ns0:p><ns0:p>For non-metallic pipes such as PVC, single relay coil deployed around 5 m from each other is enough.</ns0:p><ns0:p>For winding these relay coils, the underground pipeline proves to be perfect core leading to small coil deployment cost if they are winded on pipeline during deployment time itself <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011)</ns0:ref>. e) Lack of omnidirectional propagation : Most of the channel characterizations have been done</ns0:p><ns0:p>with assumption of placement of transceivers or relay coils in straight line, which is practically not always true. For transceiver nodes based on MI communication, the strength of received signal at receiver end is affected by the angle between axes of two mutually coupled coils. To maintain high-quality transmission in such cases, multidimensional MI coils are developed and deployed <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI 3-D coil communication technique</ns0:head><ns0:p>The basic architecture, functionality and advantages for MI 3-D coil communication technique are detailed out in following subsections:</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of 3D Coil MI Communication approach</ns0:head><ns0:p>In most of the practical underground communication applications, coils are not buried in straight line due to two probable causes, first being inability to deploy relay coils in exact planned positions due to rocks or pipes being already present inside the ground and secondly the positions of already buried coils may get changed during operation of network due to aboveground pressure or movement of soil. It is also well established that received signal strength at MI transceiver end is affected by the angle between the axes of two adjacently placed coils. Therefore, option of using multidimensional MI coils in such a complex scenario has been worked upon for high quality transmission between sensing nodes <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>More precisely, 3-Directional (3D) coils have been designed and used which offers omni-directional signal coverage as well as minimal number of coils leading to reduced system complexity and cost (Refer Fig. <ns0:ref type='figure' target='#fig_12'>7</ns0:ref>) <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of 3D coil MI communication approach</ns0:head><ns0:p>In 3D or TD MI coil system, three individually fabricated unidirectional (UD) coils are vertically mounted on a cubical structure having each side with length of 10 cm such that each UD coil is perpendicularly deployed with respect to others. 
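A toy calculation illustrates why such an orthogonal arrangement removes the orientation problem. Assuming the idealized model in which the usable coupling of a single coil pair scales with the cosine of the angle between the coil axis and the field direction, the projections of any direction onto three orthogonal axes can never all be small at the same time, so at least one coil always retains a workable link. The sketch below only demonstrates this geometric argument and is not code from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n: int) -> np.ndarray:
    """Uniformly distributed directions on the sphere (random node orientations)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Idealized assumption: the usable coupling of one coil scales with |cos(angle)|
# between its axis and the incoming field direction. A 3D coil takes the best of
# its three orthogonal axes.
directions = random_unit_vectors(100_000)
single_coil   = np.abs(directions[:, 0])           # one fixed coil along the x axis
best_of_three = np.abs(directions).max(axis=1)     # 3D coil: strongest of three axes

print(f"single coil, worst-case coupling factor: {single_coil.min():.3f}")    # close to 0
print(f"3D coil, worst-case coupling factor    : {best_of_three.min():.3f}")  # >= 1/sqrt(3)
```

In this idealized model the worst-case projection never drops below 1/sqrt(3) of the perfectly aligned case, which is the geometric basis of the omnidirectional coverage attributed to 3D coils.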
These three coils are meant for forming a powerful beam along the three different axes of the Cartesian coordinate system <ns0:ref type='bibr' target='#b13'>(Ishtiaq and Hwang, 2020)</ns0:ref>. Because the magnetic flux created by one coil becomes zero on the other two orthogonal coils, these three coils do not interfere with each other, owing to the field distribution structure of the coils. Similar to direct MI coils, these 3D MI coils are also made up of 26-AWG wire, and each one of these coils is supported by a series capacitor for resonance. At the receiving node, the three signals from the three coils are added <ns0:ref type='bibr' target='#b19'>(Liu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>According to the channel model, once the MI coil parameters and transmission distance are fixed, it is the intersection angle between the transmitter and receiver coils that determines the signal strength. With three orthogonal coils, at least one coil can achieve adequate signal strength regardless of how the angle of intersection changes. Even if the MI coils are rotated or the intersection angle between them changes, the system is expected to maintain a high degree of communication quality in this case <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>. It has also been observed that if the transmission gain is maximized using optimal power allocation and the adoption of a spatial-temporal code, good system performance can be achieved by combining the received signals at the three orthogonally placed coils <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In underwater sensor networks also, it has been found through modeling and analysis that a transmission range of around 20 m was achieved using small coils of 5 cm radius even with a high value of water conductivity. Therefore, using TD coils helps in establishing more robust MI links, which remain unaffected by dynamical rotation of the sensor nodes <ns0:ref type='bibr' target='#b11'>(Guo et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>COMPARATIVE ANALYSIS OF THREE MI COMMUNICATION METHODS</ns0:head><ns0:p>After a detailed study of all the physical layer techniques, it is clear that for applications having sensor nodes deeply buried inside soil, magnetic induction is a better technology than EM wave technology.</ns0:p><ns0:p>All three MI transmission techniques have their relative advantages and limitations.</ns0:p><ns0:p>The comparative analysis of the EM wave technique as well as all MI techniques has been summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. As compared to EM wave communication, direct MI communication is seen as the better option if factors like the dynamic channel conditions of the underlying medium (i.e. soil), the need for a large antenna, or the effect of volumetric water content (VWC) percentage on path loss and connectivity are taken into consideration.</ns0:p><ns0:p>The EM wave approach offers better bandwidth as compared to the low bandwidth of 1-2 KHz achieved using the MI approach. The comparison of the path loss parameter is quite complex, as its behavior is different in different scenarios. For the very near region (transmission distance d &lt; 1 m), the direct MI method exhibits smaller path loss with respect to the EM technique, but beyond this, the path loss of the MI channel becomes even 20 dB more than that of the EM technique. 
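This crossover can be illustrated with the idealized attenuation exponents quoted earlier, i.e. a received field falling as 1/r^3 for the MI near field versus 1/r for the EM far field, with soil absorption ignored. In the short sketch below the common reference loss at 1 m is an arbitrary assumption, so the numbers only reproduce the qualitative trend rather than the channel models of the cited works.

```python
import numpy as np

# Idealized field-attenuation exponents quoted in the text (soil absorption ignored):
#   MI near field ~ 1/r^3  ->  60 dB per decade of distance
#   EM far field  ~ 1/r    ->  20 dB per decade
d = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # transmission distance in metres
loss_at_1m = 40.0                          # assumed common reference loss at 1 m, in dB

loss_mi = loss_at_1m + 60.0 * np.log10(d)
loss_em = loss_at_1m + 20.0 * np.log10(d)

for di, lm, le in zip(d, loss_mi, loss_em):
    tag = "MI lower" if lm < le else ("equal" if lm == le else "EM lower")
    print(f"d = {di:4.2f} m   MI: {lm:5.1f} dB   EM: {le:5.1f} dB   ({tag})")
```

Real channels add frequency- and moisture-dependent absorption on top of these geometric exponents, which is why the detailed comparison also has to distinguish dry and wet soil.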
Also, for dry soil, EM path loss is lower, but with the increase of VWC in the soil (which is generally the case), the path loss of the EM wave keeps on increasing, making MI the better alternative for such cases <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>. Also, as path loss is inversely proportional to the operating frequency in the EM method and directly proportional for the MI method, for operating frequencies greater than 900 MHz the path loss decreases for the direct MI approach <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>. The MI waveguide approach is better than both the EM wave and the direct MI approach. The communication range offered by both the EM wave and direct MI is not enough for practical applications; here, the MI waveguide approach proves better, offering communication distances almost 25 times those of direct MI or EM communication <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>. The MI waveguide transceivers require less than half of the energy consumed by the EM wave or direct MI method, making the MI waveguide suitable for energy constrained applications, in addition to a lower overall cost due to the relay nodes not requiring any power <ns0:ref type='bibr' target='#b38'>(Sun and Akyildiz, 2009)</ns0:ref>. For both the ordinary MI and the MI waveguide system, the bandwidth achieved is small, in the range of 1-2 KHz, which is far less than for the EM wave mechanism, but it suffices for applications of low data rate monitoring <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Reddy et al., 2020)</ns0:ref>.</ns0:p><ns0:p>As all the characterizations of path loss, bit error rate or transmission distance of the direct MI or MI waveguide techniques have been done with the assumption of sensor nodes deployed in a straight line, which is not the case in reality, the MI 3-D coil mechanism is the best option to offer omnidirectional coverage while keeping the other benefits the same <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Ishtiaq and Hwang, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESEARCH CHALLENGES AND SCOPE FOR FUTURE WORK</ns0:head><ns0:p>There are a number of challenges before MI-WUSNs which need further attention, exploration and research work. As clarified in the previous section, the bandwidth achieved in all MI techniques is very small, due to which WUSN applications requiring high data rate monitoring cannot be implemented. The combined usage of active and passive relaying in the MI waveguide offering low path loss is one more challenging area due to significant design constraints related to determining the appropriate location and operation pattern of each relay node. Similarly, although using orthogonal or 3D coils boosts the signal quality and other system parameters, designing such coils is also quite challenging and needs further work. One more area open for future work is the interaction of MI-WUSNs with other types of WSNs, such as WUSNs interfacing with underwater WSNs in the case of exploration of deep oceans, WUSNs interacting with the power grid for monitoring structural health, or WUSNs communicating with self-driving cars in the case of navigation and charging. The upgradation of presently available solutions taking care of robust adjustment is also a big challenge, because a slight deviation in either of the system parameters may render the whole theoretical solution invalid due to imperfect channel state information (CSI). One example of such scenarios is varying soil wetness during rainfall, which may result in an additional critical modification of the channel state. One of the most promising but less explored areas of future work for MI-WUSNs is to design a cross-layer architecture ensuring multi-objective optimization which could optimize system performance in terms of throughput, charging capability and accuracy of localization. Such multidimensional optimization, leading to the design of self-charging and power-efficient networks with the constraint of high system performance based upon the application or operation mode, is still an open area of future work for researchers.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Using wireless sensor networks in non-conventional media such as soil has paved the way to a large number of novel applications ranging from soil monitoring and underground infrastructural monitoring to border security related applications. The transmission constraints such as dynamic channel conditions, high path loss and big antenna size put by the EM wave communication mechanism for WUSNs have been addressed using MI communication. This technique is based on the basic principle of mutual induction between coils connected with the transceiver nodes of WUSNs. This paper has detailed the gradual progression from ordinary MI communication to the MI waveguide technique to the MI waveguide with 3-D coils. These MI techniques have proved fruitful in offering advantages like constant channel conditions due to the similar permeability of the propagation media (air, water, rocks), reduced path loss due to low-cost and passive relay coils deployed between the transceivers, enhanced communication range attributable to the relay coils, feasible bandwidth, negligible propagation delay, and small-sized coils. The comparative analysis of these MI techniques made in the paper has established that the MI waveguide using 3D coils is the best technique for practically realizing WUSN applications. The future scope and open challenges discussed in the paper further open various research avenues for researchers in time to come.</ns0:p></ns0:div><ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Usage of MI-WUSNs for detection of leakage of water or oil <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Usage of MI-WUSNs for border security applications <ns0:ref type='bibr' target='#b50'>(Vuran and Akyildiz, 2010)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.58,203.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Analogy of direct MI communication with a transformer <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.62,496.93' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Basic structure of the MI waveguide technique <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='9,141.73,251.60,413.58,68.84' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The MI waveguide mechanism with n-2 relay coils placed equidistantly between the transceiver nodes <ns0:ref type='bibr' target='#b40'>(Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>. As shown in Fig 6, instead of using n number of full fledged transceiver sensor nodes, (n-2) relay nodes are used with two transceiver nodes on transmission and receiving sides, which clearly points to the requirement of less number of full</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Basic structure of MI waveguide with 3D coils <ns0:ref type='bibr' target='#b48'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='5,141.73,63.78,413.58,206.19' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,141.73,63.78,413.58,124.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparative analysis of EM wave, Direct MI, MI waveguide and 3-D Coil MI waveguide communication methods.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Dated : 21st April 2021 Department of Computer Science & Engineering Guru Jambheshwar University of Science & Technology Hisar (Haryana) - India Respected Editor & Reviewers Greetings. Thanks to all of you for giving valuable feedbacks for improvement of the given manuscript. We have attended to all comments put by honorable editor as well as all reviewers. Accordingly, the paper has gone through thorough revision and editing wherever required and suggested. Referee comments are reproduced here, with our responses in italics. After making appropriate changes, revised manuscript has been generated with tracked changes using latexdiff. Comments and Answers of Honorable Editor Your manuscript has not been recommended for publication in PeerJ Computer Science in its current form; however, we do encourage you to address the concerns and criticisms of the reviewers detailed at the bottom of this letter and resubmit your article once you have updated it accordingly. Please revise your manuscript based on reviewers’ feedback and resubmit; elaborate on your points and clarify with references, examples, data, etc. If you do not agree with the reviewers’ views, then include your arguments in the updated manuscript. Also, note that if a reviewer suggested references, you should only add ones that will make your article better and more complete. 1. We highly recommend that you review the grammar one more time before resubmitting. Answer: After reviewing the paper, grammatical corrections have been done in entire paper. Also following spelling related mistakes have been identified and corrected: Line No 46 69 84 116 194 (before 309) (before 424) Earlier na¨ıve Tested Realtime Focused Methodologies Transformer waveuide, communcation Corrected Naïve Testbed real time Focused Methodologies Transformer waveguide, communication 2. In addition, please provide quantitative performance achievements of this work in the 'Abstract'. Answer: Quantitative values of performance parameters such as path loss (<100dB), bandwidth(1-2 KHz) etc have been added in “Abstract” section, as suggested. Comments and Answers of Honorable Reviewer #1 Basic reporting There are few format and spelling problems, and also unprofessional when claim 'accurate predictions' without giving data. Examples are as follows: 1. 'Table 1. Comparative analysis of EM wave, Direct MI, MI waveuide and 3-D Coil MI waveguide communcation methods' misspelling of 'Waveguide' and 'communication'. Answer: The spelling mistakes (such as 'Waveguide', 'communication') have been corrected as follows : Line No (of earlier document) 46 69 84 116 194 (before 309) (before 424) Earlier na¨ıve Tested Realtime Focused Methodologies Transformer waveuide, communcation Corrected Naïve Testbed real time Focused Methodologies Transformer waveguide, communication The phrase “accurate prediction” has been changed. Format related corrections have been done in page number 4 and 5. Blank lines have been added before all headings. 2. Line 29, WUSN, needs to be expended as it’s first time appears. Answer: As per the recommendation of honorable reviewer, the term WUSN is now expanded. Same has been checked for other abbreviations also. 3. Line 137, “…accurate predictions of …”, define what is the (accurate) range, resolution and sensitivity of the prediction? Answer: Required Change has been done in subsection “Disaster prediction applications” on page no 3. 4. Line 306, “the required number of underground transceiver nodes may be reduced to a significant level”, at what level? 
Answer: Said point (on page no 9) has been elaborated with required clarity. 5. Line 311, “MI waveguide mechanism offers huge reduction of path loss”, what is the loss level? Answer: Point d (Path Loss Reduction, on page 9) has been elaborated and clarified as required. Experimental design As the statements from the Introduction and from the Survey Methodology are inconsistent, the authors should decide 'What is the objective of this paper?' In line 90 of the introduction, the authors described that “This literature review is needed and intended to do the comparative analysis of all the techniques so as to enable the users to find the optimal solution of MI based WUSNs for respective applications”, while in line 107 of the Survey Methodology, “The purpose of this paper is to review the usage of magnetic induction approach in comparison with conventional EM wave approach for WUSNs and subsequently review all available MI techniques used for the same. ” The authors should decide whether to compare the techniques among MI approach or compare the MI with the conventional EM wave approach? Answer: Consistent statements (detailed comparison of all MI techniques with each other as well as with EM wave technique) have been incorporated in both “Introduction” (Page No 1) as well as “Survey Methodology” (Page No 3) sections. Validity of the findings 1. This paper is a literature review of magnetic induction mechanism for WUSNs, however the authors should point out what is the novelty of this paper? Answer: The novelty of this paper is comprehensive elaboration of all MI techniques at one place and their comparative analysis with each other as well with conventional EM wave approach. The various MI techniques exhibit performance enhancements wrt different parameters. This comparative analysis will facilitate selection of appropriate technique with consideration of required parameter as per application and thereby help in finding optimized solution of application specific WUSN implementation. (Same has been mentioned also in Introduction Section on page no 2) 2. The authors summarized the comparision of EM wave, Direct MI, MI waveguide and 3-D Coil MI waveguide communication methods in Table 1. However, clear numbers or range for parameters should be provided instead of 'Higher/ High/ Lower/ Low' as in table 1. Answer: Required quantitative values have been put in the table as suggested. (page no 13) Comments for the Author This paper is a literature review of magnetic induction mechanism for WUSNs, the authors compared the three techniques in magnetic induction approach, and with the classical electromagnetic wave. However, the authors can further improve the manuscript in several aspects: 1. Your introduction needs to support the objective. At the beginning of the introduction, the authors detail discussed the classical electromagnetic wave approaches (3 paragraphs), and their disadvantages. The authors should decide what is the main objective of this paper? (Comparative analysis of magnetic induction? as in title) or (comparison of MI with classical electromagnetic wave as described in Methodology?) The introduction should work for introducing the main objective. Answer: The topic of paper has been refined as “Comparative analysis of Magnetic induction based communication techniques for wireless underground sensor networks”. The objective of the paper is to compare the MI techniques with each other as well as with conventional EM wave method for WUSNs. 
Same has been now mentioned in “Introduction” as well “Methodology” sections. In Introduction section, some more content has been added to give insight about work done regarding MI based WUSNs. 2. Conclusions are lack of data support. e.g. line 228-243, 301-343 Answer: Supporting data along with required references has been added in all points of all MI techniques, as suggested by honorable reviewer. 3. What are the main challenges in magnetic induction based communication? Answer: The main challenges for MI based WUSNs, mentioned in section “RESEARCH CHALLENGES AND SCOPE FOR FUTURE WORK” (on page no 14) are fabrication of 3-D MI coils, development of cross layered protocols taking care of MI channels, determining appropriate location and operation pattern of each relay node and need of considering the interaction of MI-WUSNs with other types of WSNs. One more challenge of low bandwidth has also been added now. (Section has been rephrased for better clarity of challenges) 4. Mixed of capital and lower case in sentences. e.g. in title 'Comparative Analysis of Magnetic induction based Communication for wireless underground sensor networks', line 19 '......Electromagnetic wave (EM)......', line 23 '......Electromagnetic wave (EM)......' line 188,...... Answer: As per the recommendation of honorable reviewer, Mixed of capital and lower case in sentences is now removed and corrected. 5. line 166. indentation of g)...... Answer: As per the recommendation of honorable reviewer, this mistake is corrected. Comments and Answers of Honorable Reviewers #2 Basic reporting 1. In line 474 and 476, two references are same. Answer: Corrected. Second redundant reference has been removed. 2. Figure 7 should show the 3-D structure more clearly. Answer: Figure-7 has been redrawn with more clarity as suggested. 3. Literature research is not sufficient. Answer: Some more recent research papers of related areas have been studied and referred in introduction and other sections for strengthening the literature review. Experimental design As a literature review paper, most references are too early, more latest should be listed. For example, from line 397 to 417, the reference of advanced 3D-MI technique are published in 2016. Answer: Some more reference papers of year 2021 & 2020 have been added. Validity of the findings Lack of quantitative analysis for each technology. For example, more quantitative data should be presented in Table 1. More figures of analysis or comparison of the results should be presented. Answer: In comparison table, quantitative data has been put as suggested (page no 13). Comments for the Author Authors should strengthen literature review in related fields. There should be more academic content, such as proof of the novelty of the method and quantitative analysis, rather than vague written statements. Answer: Some more recent research papers of related areas have been studied and referred in introduction and other sections for strengthening the literature review. Also, the quantitative values have been added in Section “Comparative analysis” as well as in table. Once again thanks and we hope that the editor and reviewers agree with the action and reasoning presented in these replies and that the revised version of our contribution will meet your expectations. Partap Singh (on behalf of all authors) "
Here is a paper. Please give your review comments after reading it.
292
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A large range of applications have been identified based upon the communication of underground sensors deeply buried in the soil. The classical electromagnetic wave (EM) approach, which works well for terrestrial communication in air medium, when applied for this underground communication, suffers from significant challenges attributing to signal absorption by rocks, soil, or water contents, highly varying channel condition caused by soil characteristics, and requirement of big antennas. As a strong alternative of EM, various magnetic induction (MI) techniques have been introduced. These techniques basically depend upon the magnetic induction between two coupled coils associated with transceiver sensor nodes. This paper elaborates on three basic MI communication mechanisms i.e. direct MI transmission, MI waveguide transmission, and 3D coil MI communication with detailed discussion of their working mechanism, advantages and limitations. The comparative analysis of these MI techniques with each other as well as with EM wave method will facilitate the users in choosing the best method to offer enhanced transmission range (upto 250m), reduced path loss(&lt;100dB), channel reliability, working bandwidth(1-2 KHz), &amp; omni-directional coverage to realize the promising MI based wireless underground sensor network (WUSN) applications.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Based upon the nature of underlying medium, wireless sensor networks may be categorized as air-based terrestrial wireless sensor networks (WSNs), soil-based underground WSNs and water-based underwater WSNs <ns0:ref type='bibr' target='#b1'>(Akyildiz et al., 2002)</ns0:ref>. The wireless underground sensor networks (WUSNs) <ns0:ref type='bibr' target='#b33'>(Sardar et al., 2019)</ns0:ref> are comprised of sensor devices, which operate under the earth surface and interact with each other wirelessly. These sensing nodes may be either deployed within closed underground structures like underground roads/subways, mines <ns0:ref type='bibr' target='#b10'>(Forooshani et al., 2013)</ns0:ref> or tunnels <ns0:ref type='bibr' target='#b9'>(Dudley et al., 2007)</ns0:ref> or these may be buried completely inside the ground. In the first scenario, inspite of sensor networks being located underground, signals are communicated through the air in void space existing below earth surface which helps in improving the security in underground mines ensuring comfortable communications for the passengers and drivers in road/subway tunnels as well as in securing these structures from attacks by PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science mean of consistent monitoring <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. 
Similarly in second scenario, sensing nodes buried inside soil interact with each other through soil as propagation medium and this kind of communication of WUSNs offers a huge range of naive applications including smart irrigation <ns0:ref type='bibr' target='#b8'>(Dong et al., 2013)</ns0:ref>, precision agriculture <ns0:ref type='bibr' target='#b57'>(Yu et al., 2017)</ns0:ref>, border patrolling, soil monitoring, predicting landslides <ns0:ref type='bibr' target='#b4'>(Aleotti and Chowdhury, 1999)</ns0:ref> or earth quakes or volcano eruptions <ns0:ref type='bibr' target='#b55'>(Werner-Allen et al., 2006)</ns0:ref> and many more <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006)</ns0:ref>.</ns0:p><ns0:p>The researchers observed that using the terrestrial motes in underground communication could not yield satisfactory and reliable communication results due to harsh environment of underground media <ns0:ref type='bibr' target='#b40'>(Stuntebeck et al., 2006)</ns0:ref>. They further worked a lot on the analysis of characteristics of communication channel of WUSNs. The underground signal communication using EM wave propagation has been analysed using channel characterization models in detail in <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006)</ns0:ref>, <ns0:ref type='bibr' target='#b18'>(Li et al., 2007)</ns0:ref>, <ns0:ref type='bibr' target='#b54'>(Vuran and Silva, 2010)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Peplinski et al., 1995)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Silva and Vuran, 2009)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Vuran and Akyildiz, 2010)</ns0:ref>. Here it was evaluated that how path loss and bit error rate are affected by environmental factors like water particles in soil (humidity) as well as network related factors including operating frequency and burial depth of sensing nodes. The researchers in <ns0:ref type='bibr' target='#b39'>(Silva et al., 2015)</ns0:ref> investigated through experiments the communication link characteristics of different types of WUSN channels such as underground to underground (UG-UG) channel, underground to above-ground (UG-AG) channel and above-ground to underground (AG-UG) channel.</ns0:p><ns0:p>However, gradually the researchers found that EM wave communication mechanism suffers from huge problems. Firstly, the high signal attenuation or path loss is caused by absorption by soil, rock elements and underground water <ns0:ref type='bibr' target='#b10'>(Forooshani et al., 2013)</ns0:ref>. Secondly, due to the soil properties such as soil type <ns0:ref type='bibr' target='#b30'>(Salam and Vuran, 2016)</ns0:ref>, volumetric water content etc changing very randomly with location and over time, performance of sensor networks remains unpredictable <ns0:ref type='bibr' target='#b52'>(Trang et al., 2018)</ns0:ref>. Third problem is of using large sized antennas, which are deployed so that practical communication range may be achieved with low operating frequencies <ns0:ref type='bibr' target='#b29'>(Salam and Raza, 2020)</ns0:ref>. The impact of antenna size and orientation as well as soil moisture on underground communication was also realized experimentally by developing outdoor WUSN tested in <ns0:ref type='bibr' target='#b38'>(Silva and Vuran, 2010)</ns0:ref>. 
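To give a feel for the antenna-size constraint just described, the short Python sketch below computes the free-space wavelength and the corresponding quarter-wave antenna length for a few operating frequencies. The frequency values are illustrative choices, not figures taken from the cited experiments, and soil permittivity (which shortens the wavelength somewhat) is ignored.

# Illustrative only: quarter-wave antenna length versus operating frequency.
C = 3.0e8  # speed of light in free space (m/s)

def quarter_wave_length_m(freq_hz):
    """Return the quarter-wavelength (lambda/4) in metres for a given carrier frequency."""
    return (C / freq_hz) / 4.0

# Example frequencies only; lower frequencies penetrate soil better but need bigger antennas.
for f in (10e6, 100e6, 900e6, 2.4e9):
    print(f"{f / 1e6:8.1f} MHz -> quarter-wave antenna ~ {quarter_wave_length_m(f):6.2f} m")

At the low megahertz frequencies favoured for soil penetration, a quarter-wave element is several metres long, which is why burying conventional EM antennas is impractical, whereas the centimetre-scale MI coils discussed later avoid this constraint altogether.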
All these issues made EM wave approach unsuitable for deploying WUSNs despite the promising application domains <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p><ns0:p>Last decade has witnessed a new approach called magnetic induction (MI) as an effective alternative of EM communication for harsh environment like underground <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref> or underwater <ns0:ref type='bibr' target='#b3'>(Akyildiz et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b22'>Muzzammil et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b6'>Debnath, 2021)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Domingo, 2012)</ns0:ref>. The MI channel capacity from a pair of transceiver coils has been elaborated in <ns0:ref type='bibr' target='#b16'>(Kisseleff et al., 2014b)</ns0:ref>.The algorithms explaining the efficient mechanism of deploying the magnetic coils for WUSNs have been worked on in <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. The authors in <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref> have detailed out the analysis of path loss and bandwidth using MI communication approach in soil as underground medium.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b21'>(Masihpour et al., 2013)</ns0:ref>, multi-hop relay techniques are proposed to extend communication range in near-field MI communication systems. Further, gradual analysis of communication channel model has led to evolution of MI waveguide mechanism for minimizing the path loss observed in conventional EM wave approach or original MI mechanism <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref>. The system performance in term of path loss caused by EM wave mechanism, ordinary MI mechanism and MI waveguide mechanism have been analyzed, quantified and compared with each other <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref>. The reduction in path loss and increase of transmission distance (by more than 20 times) has been observed in MI waveguide communication as compared to ordinary communication by researchers <ns0:ref type='bibr' target='#b20'>(Liu et al., 2021)</ns0:ref>.Further 3D MI Coils were found to be very beneficial to be used for omnidirectional coverage <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>As compared to single-dimension coils, the improvement in directionality of MI communication using multi-dimensional coils, metamaterial enhanced antennas and spherical coil-array enclosed loop antennas and polyhedral geometry has been analysed for MI based underwater networks by researchers <ns0:ref type='bibr' target='#b22'>(Muzzammil et al., 2020)</ns0:ref>. Also, the researchers in their study <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref> provided the multimode model for characterizing the wireless channels for WSNs used in underground structures like mines for both EM and MI techniques. In <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011)</ns0:ref>, detailed architecture and operating framework of monitoring underground pipelines for detecting real time leakage effectively has been discussed using MI-based WSN, called as MISE-PIPE. The researchers in <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a</ns0:ref>) have further suggested two algorithms i.e. 
MST algorithm and triangle centroid (TC) for effective deployment of MI waveguides to connect the underground sensors in WUSN environment. Further cross layered protocol architecture for MI-WUSNs has been explained in <ns0:ref type='bibr' target='#b19'>(Lin et al., 2015)</ns0:ref>. A channel model considering the fading effect Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>of presence of large obstacles on coverage performance has also been proposed and analysed for MI based underwater wireless sensor network <ns0:ref type='bibr' target='#b6'>(Debnath, 2021)</ns0:ref>. Further work has been done on relay or waveguide based MI-WUSNs to attain the required Quality of Service(QoS) metrics in form of Min-Max problem using relay selection algorithms, relay placement approaches and optimization of operational parameters <ns0:ref type='bibr' target='#b13'>(Ishtiaq and Hwang, 2020)</ns0:ref>. The recent advancements made in related areas of MI-WUSNs, which might have impact on implementation of MI-WUSNs have been discussed in <ns0:ref type='bibr' target='#b17'>(Kisseleff et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The novelty of this paper is comprehensive elaboration of all MI techniques at one place and their comparative analysis with each other as well with conventional EM wave approach based on thorough literature review. The various MI techniques exhibit performance enhancements wrt different parameters.</ns0:p><ns0:p>This comparative analysis will facilitate selection of appropriate technique with consideration of required parameter as per application and thereby help in finding optimized solution of application specific WUSN implementation. The paper also discusses the challenges faced by MI based WUSN solutions as well as future scope of work based upon further exploration of MI techniques.</ns0:p><ns0:p>The intended users of this review are the researchers working on achieving quality of service(QoS) parameters for WUSNs using magnetic induction approach instead of EM wave methodology and thereby realizing the potential MI-WUSN applications. The researchers working in the domain of underwater wireless sensor networks may also use this analysis.</ns0:p><ns0:p>The rest of the paper is as follows. The section 'Survey methodology' explains the methodology adopted for literature review. </ns0:p></ns0:div> <ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head><ns0:p>The purpose of this paper is to do the detailed elaboration all MI based techniques followed by their comparison with each other as well as with classical EM wave approach for WUSNs. We started with a specific set of search terms used against a meta-search engine 'Google Scholar' to search across multiple databases. These search words were 'WUSN', 'wireless underground sensor networks', 'electromagnetic wave', 'magnetic induction' and 'MI waveguide'. The next step of this systematic literature review was to gather all retrieved documents from the year 2006 to the year 2020, where the process of screening included downloading the papers published in various journals , and reading their titles and abstracts.</ns0:p><ns0:p>From these, we identified all MI techniques used in WUSNs. For each of these MI techniques, further papers were searched and retrieved based on which, advantages and limitations of these MI techniques have been listed out and further compared. 
The literature review included study of more than 50 papers.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATION DOMAINS OF MI BASED WUSNS</ns0:head><ns0:p>Unlike EM wave based WUSNs used for the cases of low burial depth utilizing UG-AG and AG-UG channels, MI-based WUSNs are especially helpful for the applications working on pure UG-UG channels, where underground sensor devices are deeply deployed in the soil or no above ground devices are there. Some of such application areas have been identified by the researchers as given here : a) Underground leakage monitoring applications MI based WUSNs may be applied for monitoring and detecting the leakage in underground infrastructures like water, gas or oil pipelines. It helps in assuring that there are no leakages in underground fuel tanks and also in determining the actual amount of oil currently available in the fuel tank so that overflowing may be avoided. MI-WUSNs also play important role in monitoring the leakage in underground septic sewage tanks. These are used with the help of sensors deployed along the path of pipelines so as to localize and repair the leakage of gas or water from gas pipelines or water pipelines respectively <ns0:ref type='bibr' target='#b28'>(Sadeghioon et al., 2018)</ns0:ref>. (See Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b) Disaster prediction applications</ns0:head><ns0:p>MI-WUSNs are also used in assessing and predicting the disastrous situations like floods, eruption of volcanoes and earthquake, Tsunami, oil spilling or land sliding situations. These disastrous conditions do arise attributing to alterations in materials like soil, water etc and these changes are monitored using underground sensor devices. MI-WUSNs prove far better as compared to current methods of landslide prediction, which are costlier as well as more time consuming to get deployed <ns0:ref type='bibr' target='#b35'>(Sheth et al., 2005)</ns0:ref>.</ns0:p><ns0:p>Similarly MI-WUSNs measure and monitor the imbalances of glacier movements and volcanic eruption movements as well. All these things help in fine predictions of forthcoming natural disasters.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Agricultural monitoring applications</ns0:head><ns0:p>MI based WUSNs can be well utilized for monitoring applications related to agriculture, mines, tunnels, pollution and many more. The soil sensors buried inside the ground as part of WUSN may be used for observing the soil properties like soil makeup <ns0:ref type='bibr' target='#b25'>(Raut and Ghare, 2020)</ns0:ref>, water content in soil (humidity), density of soil etc facilitating smart and efficient irrigation system <ns0:ref type='bibr' target='#b32'>(Sambo et al., 2020)</ns0:ref>. MI-WUSNs are also beneficial for continual monitoring of methane or carbon monoxide gases inside mines which can explode and cause a big fire, if not monitored properly. These sensors are also utilized for habitat (marine life, fish farms) monitoring, exploration (natural resource) monitoring or observance of underwater pollution also.</ns0:p></ns0:div> <ns0:div><ns0:head>d) Underground structural monitoring applications</ns0:head><ns0:p>MI based WUSNs are also used for monitoring the internal structures of dams, buildings to know the factors which influence the durability of those structures <ns0:ref type='bibr' target='#b23'>(Park et al., 2005)</ns0:ref>. 
It is ensured by tracking stress and strain present in the material; used for their construction like water, sand, concrete etc.</ns0:p></ns0:div> <ns0:div><ns0:head>e) Sports field monitoring applications</ns0:head><ns0:p>One important application domain making use of MI-WUSNs is sports field monitoring where sensor nodes buried inside are used for monitoring the condition of soil of different types of playgrounds or sport fields like golf course, soccer fields, baseball fields or grass tennis courts. The poor turf conditions do cause uncomfortable playing experience for the players, making it necessary to monitor and maintain the health of grass <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006;</ns0:ref><ns0:ref type='bibr' target='#b18'>Li et al., 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>f) Security related applications</ns0:head><ns0:p>MI-WUSNs are better suited for security related applications because these do have higher degree of concealment in comparison with terrestrial sensor devices. It is so because their presence is hidden and the chances of determining their presence and deactivating them are very less. By deploying pressure sensors</ns0:p></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_1'>2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>under MI-WUSN along the border area, the concerned authorities can be alerted as soon as some illegal intruder tries to cross that region. The applications like surveillance of submarines or mines <ns0:ref type='bibr' target='#b27'>(Rolader et al., 2004)</ns0:ref> are also very beneficial using MI-WUSNs. (See Fig. <ns0:ref type='figure'>2</ns0:ref>) Figure <ns0:ref type='figure'>2</ns0:ref>. Usage of MI-WUSNs for border security applications <ns0:ref type='bibr' target='#b53'>(Vuran and Akyildiz, 2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MAGNETIC INDUCTION AS BETTER ALTERNATIVE OF EM WAVE COMMU-NICATION</ns0:head><ns0:p>The conventional signal communication techniques based upon EM wave propagation is not very much encouraging <ns0:ref type='bibr' target='#b12'>(Huang et al., 2020)</ns0:ref> for most of the underground communication applications due to following reasons :</ns0:p></ns0:div> <ns0:div><ns0:head>a) High signal attenuation</ns0:head><ns0:p>The path loss for underground WSNs is highly dependent on a big number of parameters attributing to large variety of its underlying media such as presence of soil or rock or some fluid under the earth surface, various types of soil makeup like sand, clay or silt, volumetric water content or humidity in soil, soil density. Due to all these parameters, signal attenuation is very high <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b) Rapidly changing channel conditions</ns0:head><ns0:p>All the soil characteristics mentioned above vary very rapidly and unpredictably with location (like sandy soil in desert area) or time (like more water content in soil during rainfall) due to which communication channel becomes very unpredictable and unreliable. Consequently, the bit error rate (BER) is also changed randomly. 
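To make the impact of such swings concrete, the minimal link-budget check below (a sketch with purely illustrative numbers, not values drawn from the cited channel models) shows how a few tens of dB of additional soil path loss can push the received power below a receiver's sensitivity and break the link.

# Minimal link-budget sketch; transmit power, sensitivity and loss values are assumptions.
TX_POWER_DBM = 10.0         # assumed transmit power
RX_SENSITIVITY_DBM = -95.0  # assumed receiver sensitivity

def link_margin_db(path_loss_db):
    """Margin between received power and receiver sensitivity; negative means link failure."""
    rx_power_dbm = TX_POWER_DBM - path_loss_db
    return rx_power_dbm - RX_SENSITIVITY_DBM

for label, loss_db in (("dry soil (assumed)", 80.0), ("wet soil (assumed)", 120.0)):
    margin = link_margin_db(loss_db)
    status = "link OK" if margin > 0 else "link broken"
    print(f"{label:18s}: path loss {loss_db:5.1f} dB, margin {margin:+6.1f} dB -> {status}")

A swing of roughly 40 dB, of the order reported later in this paper for a 25% rise in volumetric water content, is enough to turn the same buried deployment from a working link into a dead one, which is why dimensioning EM-based WUSNs is so difficult.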
Due to all these, both satisfactory connectivity and energy efficiency become infeasible to be attained for WUSNs <ns0:ref type='bibr' target='#b52'>(Trang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>. Even for WUSNs using Ultra Wideband (UWB) frequency of 3.1 to 5 GHz, burial depth and soil moisture were recommended to be lower than 30 cm and 20 percent respectively to attain acceptable signal strength <ns0:ref type='bibr' target='#b58'>(Zemmour et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Large antenna size</ns0:head><ns0:p>For achieving communication range usable for practical applications, it becomes necessary to operate the transceivers at lower frequencies in MHz, for which it further becomes to keep the size of antenna high <ns0:ref type='bibr' target='#b29'>(Salam and Raza, 2020)</ns0:ref>. Consequently this size of antenna becomes too large to be buried in soil <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. Even in the case of Surface Penetrating Radar used to search the underground objects, transmitting and receiving antennas were required to be moved at constant speed in linear direction to get cross-sectional image of object <ns0:ref type='bibr' target='#b5'>(Daniels, 1996)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>5/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To address the prominent problems of EM based WUSNs including large-sized antenna or electrical dipoles <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>, dynamically changing communication channel <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref> and high path loss, magnetic induction technology has been identified as the far better alternative by researchers.</ns0:p><ns0:p>Unlike communicating through waves, MI based transmission mechanism makes use of near field of coil associated with transceiver sensor node <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. Due to usage of small coil of wire for transmitting and receiving signal, no lower limit of coil size is required in MI communication <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>VARIOUS MAGNETIC INDUCTION TECHNIQUES USED IN WUSNS</ns0:head><ns0:p>Following subsections cover in detail various MI communication methodologies -direct or ordinary MI communication, MI waveguide communication and MI three-directional communication.</ns0:p></ns0:div> <ns0:div><ns0:head>Direct or Ordinary MI Communication Technique</ns0:head><ns0:p>The underlying architecture, working, advantages and limitations of direct or ordinary MI communication technique are detailed out as follows:</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of Direct or Ordinary MI Communication approach</ns0:head><ns0:p>In ordinary or direct MI communication architecture <ns0:ref type='bibr' target='#b14'>(Kisseleff et al., 2014a)</ns0:ref>, the communications signals are transmitted or received with the usage of a coil of wire using the fundamental principle of mutual magnetic induction. 
Resembling the analogy of a transformer, the transmitter coil transmits the signal in the form of a sinusoidal current, which in turn induces a similar sinusoidal current inside the receiver node, and thus communication is accomplished <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref>. The interaction between the transmitter and receiver coupled coils is due to mutual induction (See Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>) <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>For a better understanding of MI transceiver nodes, the functionality of the primary and secondary coils in a transformer may be referred to, as depicted in Fig. <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. Here M indicates the mutual inductance of the transmitter and receiver coils, U_s is the transmitter battery voltage, L_t and L_r denote the self-inductances of the transmitter and receiver coils, R_t and R_r are the coil resistances, and Z_l denotes the load impedance of the receiver node <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015;</ns0:ref><ns0:ref type='bibr'>Sun et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b43'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>Owing to the use of near-field communication, MI coils operating at lower frequency bands can achieve more stable and reliable transmission channels in harsh WUSN media like soil or oil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of direct or ordinary MI communication approach</ns0:head><ns0:p>The direct or ordinary MI transmission approach offers a promising solution to the prominent problems of EM communication, namely rapidly and unpredictably changing channel conditions and the need for big antennas. Since the magnetic permeability of soil, rocks or water is the same as that of air (4&#960; x 10 &#8722;7 H/m), MI channels are not impacted by the dynamic changes of soil over time or location <ns0:ref type='bibr' target='#b16'>(Kisseleff et al., 2014b;</ns0:ref><ns0:ref type='bibr' target='#b48'>Sun et al., 2011)</ns0:ref>, and therefore this parameter has no impact on the path loss of MI solutions.</ns0:p><ns0:p>Accordingly, a significant rise of path loss (up to 40 dB) has been observed for the EM wave solution in <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009</ns0:ref>), compared to no impact on the MI solution, when the water content in soil increases by 25%. Also, instead of using huge antennas, small magnetic coils (of radius &lt; 0.1 meter) are used for MI communication, which makes WUSNs implementable in a practical manner <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. For transmission distances of less than 1 m, the path loss of the MI method has been observed to be smaller than that of the EM wave method <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of direct or ordinary MI communication approach</ns0:head><ns0:p>In spite of the benefits of direct MI communication, such as a stable communication channel and small coils, some constraints hinder it from being a suitable alternative for underground sensor communication applications. a) Limited transmission range : The near magnetic field generated by the transmitter coil gets weakened by the time it reaches the receiver coil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>.
Due to this attenuation of the near magnetic field, the communication range attained is still too small (10 m) for practical applications <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. b) High path loss for larger transmission distances : The path loss in the ordinary MI communication case is inversely proportional to the cube of the transmission distance, rather than to the distance itself (1/r&#179; vs. 1/r) <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011)</ns0:ref>. For this reason, MI communication is discouraged for terrestrial WSNs. For applications related to underground sensor nodes, although the path loss of the MI approach caused by soil absorption is much lower than that of EM communication, the total path loss may still be high (greater than 100 dB) for larger transmission distances <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. For operating frequencies exceeding 900 MHz, however, the path loss has been observed to decrease compared to EM wave communication <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>. c) Performance affected by the intersection angle of the two coils : The system performance of the direct MI mechanism is maximal if the transceiver nodes are deployed face to face along the same axis. In practice, however, the intersection angle of the two coils is non-zero and significantly affects the communication performance <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. d) Insufficient bandwidth : The bandwidth attained using the ordinary MI communication approach is very small (1-2 KHz), which is insufficient for practical applications <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016)</ns0:ref>, whereas it is in the MHz range for the EM wave solution. To enhance the channel gain for direct MI communication, either the coil size or the number of turns in the coils may be increased; however, this results in an increased size of the transceivers. One more parameter influencing the channel gain is the unit-length resistance of the loop.</ns0:p><ns0:p>Therefore, lower-resistance wires and circuits may be used to ensure lower path loss with no increase in size. Further, it is possible to decrease the wire or circuit resistance by using better-conductivity wires, better capacitors, better connectors and a customized printed circuit board (PCB) <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI Waveguide Communication Technique</ns0:head><ns0:p>To overcome the constraints of limited communication range and high path loss observed by researchers in using the direct MI mechanism, an advanced technique called waveguide-based MI communication has been explored <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b42'>Sun and Akyildiz, 2010a)</ns0:ref>, which has proved efficient in significantly minimizing the transmission path loss, increasing the communication range and attaining a practical bandwidth of inter-sensor communication in various applications related to the underground environment (see Fig. <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>) <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016)</ns0:ref>. The following subsections discuss in detail the architecture, benefits and limitations of the MI waveguide technique.
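Before turning to the waveguide variant, the toy calculation below pulls together the circuit quantities introduced for the basic two-coil link (M, L_t, L_r, R_t, R_r, Z_l) and the cube-of-distance (1/r^3) fall-off discussed above. The standard far-spaced coaxial-loop approximation for mutual inductance is used, and all coil dimensions, turn counts and drive values are invented for illustration; they are not parameters from the cited studies.

# Toy model of the basic two-coil MI link; geometry and drive values are assumptions.
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of air/soil/water (H/m), as quoted in the text

def mutual_inductance(a_t, a_r, n_t, n_r, d):
    """Mutual inductance of two coaxial loops whose separation d is much larger than their radii."""
    return MU_0 * math.pi * n_t * n_r * (a_t ** 2) * (a_r ** 2) / (2 * d ** 3)

def induced_emf(m, peak_current_a, freq_hz):
    """Peak EMF induced in the receiver coil by a sinusoidal transmit current."""
    return 2 * math.pi * freq_hz * m * peak_current_a

# Assumed link: 5 cm coils, 20 turns each, 10 MHz carrier, 100 mA drive current.
for d in (1.0, 2.0, 5.0, 10.0):
    m = mutual_inductance(0.05, 0.05, 20, 20, d)
    print(f"d = {d:5.1f} m : M = {m:.3e} H, induced EMF = {induced_emf(m, 0.1, 10e6):.3e} V")

Because M falls with the cube of the separation, the induced voltage collapses rapidly with distance, which is the quantitative face of the limited-range and high-path-loss limitations listed above.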
<ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture and working pattern of MI waveguide technique</ns0:head><ns0:p>The MI waveguide architecture is comprised of series of multiple resonant relay coils <ns0:ref type='bibr' target='#b50'>(Tam et al., 2020)</ns0:ref> deployed between the underground transceiver sensor nodes, which are wirelessly connected with each other in MI based WUSNs <ns0:ref type='bibr'>(Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b18'>Li et al., 2007)</ns0:ref>. Although the MI waveguide structure was initially discussed in <ns0:ref type='bibr' target='#b50'>(Tam et al., 2020)</ns0:ref> where the relay coils used to be very near to each other Manuscript to be reviewed Computer Science resulting in strong coupling, but for MI wireless communication, the coupling between relay nodes is quite weak due to their not being very close to each other <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>.</ns0:p><ns0:p>Wave technique is also used for EM communication, but unlike the relay points using EM wave technique, the MI relay point is simply a coil having no source of energy or processing device <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011</ns0:ref><ns0:ref type='bibr'>(Sun et al., , 2012))</ns0:ref>.The MI waveguide and the regular waveguide are based on different principles and usable for different types of applications <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>The basic principle working behind inter sensor communication in MI waveguide mechanism is the serial magnetic induction or coupling between relay nodes located next to each other <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. Although some relay nodes do exist in between transmitter and receiver devices, but even then this MI waveguide communication pattern comes under category of wireless communication. Attributing to this unique physical architecture of MI wave guide, high degree of freedom is there in deploying the nodes and utilizing them in numerous harsh conditions of underground medium <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. As the MI transceiver nodes and relay coils are magnetically coupled by virtue of their placement in straight line, the relay coils will get the induced current sequentially until it arrives at receiver node. During this entire process, signal gets strengthen when reaches the receiver <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In MI waveguide functionality, the sinusoidal current in the transmitter coil produces a magnetic field which varies over time, and can cause another sinusoidal current in the first relay coil and then this relay repeats the process for next relay coil and so on(see Fig. <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>) <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref>. This allows the magnetic induction passively to be transmitted through all the intermediate coils until the MI receiver is reached, creating the MI waveguide <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>.</ns0:p><ns0:p>It has been established in <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref> that it is required to have six or more relay coils to attain effective increment of signal strength with transmission range of 2m. 
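A small helper makes the geometry of this relay chain explicit: for a total transmitter-receiver separation and a chosen inter-coil spacing, the chain consists of equally spaced coils of which only the two end ones are full transceivers, the rest being passive relays. The spacing value below is an example taken loosely from the deployment distances quoted elsewhere in this paper, not a prescription.

# Sketch of the relay-chain geometry of the MI waveguide; spacing is an illustrative value.
import math

def waveguide_coil_counts(total_distance_m, relay_spacing_m):
    """Number of equally spaced coils (and passive relays) needed to span a given distance."""
    hops = math.ceil(total_distance_m / relay_spacing_m)
    total_coils = hops + 1             # a coil sits at both ends of every hop
    passive_relays = total_coils - 2   # only the two end coils are full transceiver nodes
    return total_coils, passive_relays

for d in (10, 50, 100, 250):  # target transmitter-receiver separations in metres
    coils, relays = waveguide_coil_counts(d, relay_spacing_m=5.0)
    print(f"{d:4d} m span with 5 m spacing -> {coils:3d} coils, of which {relays:3d} passive relays")

Only the two end coils carry electronics and a power source; everything in between is a passive loop with a tuning capacitor, which is why the waveguide extends range without multiplying the number of full sensor nodes.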
Further relay density or number of relays can be lowered using the coils with low unit resistance and high conductivity wires and circuits <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> depicts MI waveguide mechanism, where n number of total coils are there, out of which n-2 are relay coils which are placed equidistantly in straight line between transceiver nodes; r is the distance between the adjacent coils and d is the total communication range between transmitter and receiver node, which is same as d=(n-1)/r, a is the radius of all coils, C is the capacitor with which each relay and transceiver coil is loaded. The relay coils can be made resonant coils for effective transmission of magnetic signals by appropriate design of capacitor value. Between any two adjacent relay coils, mutual inductance does exist, whose value is based on their inter-coil distance <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of MI waveguide approach</ns0:head><ns0:p>By using MI waveguide approach for underground communication, following benefits have been observed: a) Stable MI Channel : As most of the underground transmission media i.e. soil is non-magnetic having almost equal permeability values, therefore rate of attenuation of magnetic fields created by coils remains almost unaffected, keeping the MI channel conditions constant and stable <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2013, 2012)</ns0:ref>. b) Need of a smaller number of Sensors : In case of underground communication using MI waveguide, distance of 5m is kept between two relay nodes, which is even more than the maximum transmission range attained during EM wave communication <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>. As shown in Fig 6 <ns0:ref type='figure'>, instead</ns0:ref> of using n number of full fledged transceiver sensor nodes, (n-2) relay nodes are used with two transceiver nodes on transmission and receiving sides, which clearly points to the requirement of less number of full fledged underground transceiver nodes <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b43'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>. c) Bandwidth : The bandwidth achieved for both ordinary MI communication as well as MI waveguide communication is in same range (1-2 KHz), which has been found sufficient for non-traditional media applications which need low data rate monitoring <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b14'>Kisseleff et al., 2014a)</ns0:ref>. Also, if operating frequency is 10MHz, 3-dB bandwidth of MI waveguide has been found to be in same range with direct MI communication i.e. (1-2 KHz) <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref>. d) Path Loss Reduction : Due to placement of relay coils between transceiver sensor nodes, MI waveguide mechanism offers huge reduction of path loss, which is the most prominent advantage of this technique <ns0:ref type='bibr' target='#b9'>(Dudley et al., 2007)</ns0:ref>. More specifically, this is attributed to appropriate design of waveguide parameters <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>. 
The analysis in <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref> has shown that MI waveguide offers path loss smaller than 100dB for distance even more than 250 m, whereas for transmission range of even slightly more than 5 meters, same path loss of 100 dB is observed for EM wave system as well direct MI communication. By reducing the relay distance and resistance value of coil wire, path loss can be further lessened for MI waveguide <ns0:ref type='bibr' target='#b20'>(Liu et al., 2021)</ns0:ref>. e) Extension of Transmission Range : Using MI waveguide technique, the transmission range is significantly extended as compared to EM wave communication or ordinary MI communication <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2013)</ns0:ref>. It has also been established experimentally that if Mica2 sensor are used for underground communication in soil using EM wave mechanism, communication range achieved is less than 4 m, which increases to 10 m with similar device size and power for direct MI communication and further gets extended to more than 100m for MI waveguide communication <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b42'>Sun and Akyildiz, 2010a)</ns0:ref>. The distance between relay nodes is even more than the maximum communication range of EM wave transmission <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016)</ns0:ref>. Due to this extension in transmission range using MI waveguide mechanism, a fully connected sensor network may be attained without even deploying big number of sensors <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>.</ns0:p><ns0:p>It has also been observed by the researchers that with the increase in transmission distance, the transmission power gets decreased (upto 50% of power required for EM or direct MI) making it favorable Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for the energy-constrained non-traditional media <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016)</ns0:ref>. f) Better robustness and easier deployability and maintenance : Unlike a real waveguide, the MI waveguide is not a continual structure and therefore is comparatively more flexible and easy to be deployed at every 6 to 12 m <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref> and maintained <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. The relay coils in MI waveguide don't need extra power due to passive relaying of magnetic induction <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>. Hence unlike the sensor devices, these relay coils are easily deployable and once buried in soil, don't need much more regular maintenance <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. 
If due to some harsh conditions, some of the relay coils get damaged, even then the remaining relay coils ensure robustness of sensor network <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>g) Cost :</ns0:head><ns0:p>As the relay coils used in MI waveguide consume no energy and unit cost of these relay coils is very less <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>, therefore overall cost of underground sensor network gets reduced to large extent as compared to using expensive relay sensor devices in EM wave communication <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>. h) Prolonged system lifetime : MI waveguide technique also leads to prolonged system lifetime because the underground sensor devices equipped with MI transceiver nodes can be recharged by above ground charging devices using inductive charging mechanism <ns0:ref type='bibr'>(Sun and Akyildiz, 2013</ns0:ref><ns0:ref type='bibr' target='#b7'>, 2012</ns0:ref><ns0:ref type='bibr' target='#b43'>, 2010b)</ns0:ref>. In RF-challenged environments, it becomes very cumbersome to replace the device batteries; therefore this option of magnetic induction charging proves very beneficial <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of MI waveguide approach</ns0:head><ns0:p>Inspite of numerous advantages offered by MI waveguide technique for WUSNs, some constraints have been observed by the researchers, which highlight the essence of more optimization work on MI waveguide approach. a) Limited channel capacity and data rate : It has been found that due to lower ratio (order of 2.5) of mutual induction to self-induction (also termed as relative magnetic coupling strength) between adjacent relay coils working at the resonant frequency to attain low path loss in MI waveguide approach, the channel bandwidth becomes very limited (1-2 KHz) <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>. This decrease in channel bandwidth becomes more adverse, if communication distance increases to a particular threshold value, which resultantly leads to lower channel capacity as well as unsatisfactory data rate, inspite of large communication range <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>. b) Usable for limited application domains : Attributing to data rate and bandwidth limited channels, MI waveguide technique may be adopted for those applications only, where required data rate is low <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. For WUSN applications like rescuing the trapped ones in underground mines or border patrolling, big amount of data is required to be timely transmitted on MI channels, which requires better data rate and bandwidth. Hence much efforts are needed for enhancing the MI channel capacity for MI-WUSNs <ns0:ref type='bibr' target='#b42'>(Sun and Akyildiz, 2010a)</ns0:ref>. c) Reliability issue : As the multiple resonant MI relay coils constitute the foundation of communication success of MI waveguide approach, hence overall performance of such sensor networks are based not only on the transceiver sensor nodes, but also these relay coils. 
Therefore, the issue of reliability of such underground sensor networks is needed to be analysed in tough underground media <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. Therefore, deployment of large number of relay coils in MI-WUSNs costs a big amount of labor and therefore needs very thoughtful and complex strategies <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref> targeting at the objective of building a connected robust wireless sensor network with minimal possible relay coils <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>.</ns0:p><ns0:p>It was established by the researchers that for underground pipelines made up of metallic material, no or very few relay coils are required due to metal pipe itself working as magnetic core of MI waveguide.</ns0:p><ns0:p>For non-metallic pipes such as PVC, single relay coil deployed around 5 m from each other is enough.</ns0:p></ns0:div> <ns0:div><ns0:head>11/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For winding these relay coils, the underground pipeline proves to be perfect core leading to small coil deployment cost if they are winded on pipeline during deployment time itself <ns0:ref type='bibr' target='#b48'>(Sun et al., 2011)</ns0:ref>. e) Lack of omnidirectional propagation : Most of the channel characterizations have been done</ns0:p><ns0:p>with assumption of placement of transceivers or relay coils in straight line, which is practically not always true. For transceiver nodes based on MI communication, the strength of received signal at receiver end is affected by the angle between axes of two mutually coupled coils. To maintain high-quality transmission in such cases, multidimensional MI coils are developed and deployed <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI 3-D coil communication technique</ns0:head><ns0:p>The basic architecture, functionality and advantages for MI 3-D coil communication technique are detailed out in following subsections:</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of 3D Coil MI Communication approach</ns0:head><ns0:p>In most of the practical underground communication applications, coils are not buried in straight line due to two probable causes, first being inability to deploy relay coils in exact planned positions due to rocks or pipes being already present inside the ground and secondly the positions of already buried coils may get changed during operation of network due to aboveground pressure or movement of soil. It is also well established that received signal strength at MI transceiver end is affected by the angle between the axes of two adjacently placed coils. Therefore, option of using multidimensional MI coils in such a complex scenario has been worked upon for high quality transmission between sensing nodes <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>More precisely, 3-Directional (3D) coils have been designed and used which offers omni-directional signal coverage as well as minimal number of coils leading to reduced system complexity and cost (Refer Fig. <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>) <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. 
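The intuition behind this omni-directional claim can be checked numerically. If the coupling seen by a single coil is scaled by the cosine of the angle between its axis and the direction to the peer node, a common first-order simplification assumed here rather than taken from the cited channel models, then with three mutually orthogonal coils at least one of them always retains a sizeable fraction of the maximum coupling, whatever the node's orientation.

# Orientation sketch for a tri-axial (3D) MI coil using a simple |cos(angle)| coupling factor.
# The cosine model and the random sampling are illustrative assumptions only.
import math, random

def best_axis_coupling(direction):
    """Largest |cos| between a unit direction vector and the three orthogonal coil axes."""
    norm = math.sqrt(sum(c * c for c in direction))
    return max(abs(c) / norm for c in direction)

random.seed(1)
worst = 1.0
for _ in range(100000):
    direction = [random.gauss(0.0, 1.0) for _ in range(3)]  # random direction to the peer node
    worst = min(worst, best_axis_coupling(direction))

print(f"worst best-axis coupling seen over random orientations: {worst:.3f}")
print(f"theoretical floor 1/sqrt(3): {1 / math.sqrt(3):.3f}")

Under this simple model, no matter how a buried node ends up tilted, one of the three orthogonal coils keeps at least about 58% of the ideal axial coupling, which is why combining the three received signals yields a usable link regardless of the intersection angle.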
</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of 3D coil MI communication approach</ns0:head><ns0:p>In 3D or TD MI coil system, three individually fabricated unidirectional (UD) coils are vertically mounted on a cubical structure having each side with length of 10 cm such that each UD coil is perpendicularly deployed with respect to others. These three coils are meant for forming a powerful beam along three different axes of Cartesian coordinate <ns0:ref type='bibr' target='#b13'>(Ishtiaq and Hwang, 2020)</ns0:ref>. Due to magnetic flux created by one surrounding coil becoming zero on another two orthogonal coils, these three coils do no interfere with each other, attributing to field distribution structure of coils. Similar to direct MI coils, these 3D MI coils are also made up of 26-AWG wire and each one of these coils is supported by a serial capacitor for detecting resonance. At the side of receiving node, all the three signals from three coils are added <ns0:ref type='bibr' target='#b20'>(Liu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Depending on channel model, after fixing MI coil parameters and transmission distance, it is the intersection angle between transmission and receiver nodes, which determines the signal strength. At least one coil can achieve adequate signal strength with three orthogonal coils regardless of how the angle of intersection is changed. Even if MI coils are rotated or intersection angel between those gets changed, high degree of communication is supposed to be kept by the system in this case <ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>. It has also been observed that if transmission gain is maximized using optimal power allocation and adoption of spatial-temporal code, good system performance can be achieved by combining the received signals at three orthogonally placed coils <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In underwater sensor networks also, it has been found through modelling and analysis that the transmission range of MI system achieved was of 20 m range using small sized coils of 5cm radius with high value of water conductivity. Therefore, using TD coils helps in establishing more robust MI links, which remain unaffected by dynamical rotation of sensor nodes <ns0:ref type='bibr' target='#b11'>(Guo et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>COMPARATIVE ANALYSIS OF THREE MI COMMUNICATION METHODS</ns0:head><ns0:p>After detailed study of all physical layer techniques, it is clear that for applications having sensor nodes deeply buried inside soil, magnetic induction is better technology as compared to EM wave technology.</ns0:p><ns0:p>All three MI transmission techniques have their relative advantages and limitations.</ns0:p><ns0:p>The comparative analysis of EM wave technique as well as all MI techniques has been summarized in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. As compared to EM wave communication, direct MI communication is seen as better option, if factors like dynamic channel conditions of underlying media i.e. soil, need of large antenna size or effect of volumetric water content(VWC) percentage on path loss and connectivity are taken into consideration.</ns0:p><ns0:p>EM wave approach offers better bandwidth as compared to low bandwidth of 1-2 KHz achieved using MI approach. 
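To put this bandwidth comparison into perspective, the back-of-the-envelope sketch below (Python) applies the standard Shannon capacity formula C = B*log2(1 + SNR). The formula is textbook material rather than a result of the surveyed papers, and the 10 dB SNR used here is an illustrative assumption.

# Rough capacity estimate (Python): why a 1-2 KHz MI channel only supports
# low data rate monitoring.  Shannon capacity C = B * log2(1 + SNR); the SNR
# value below is an assumed, illustrative figure.
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1.0 + snr_linear)

if __name__ == '__main__':
    cases = [('MI / MI waveguide, 1 KHz', 1e3),
             ('MI / MI waveguide, 2 KHz', 2e3),
             ('EM wave, 1 MHz (for contrast)', 1e6)]
    for label, bw in cases:
        c = shannon_capacity_bps(bw, snr_db=10.0)   # assumed 10 dB SNR
        print(f'{label:32s} -> about {c / 1e3:8.1f} kbit/s')

Even under this generous SNR assumption, the MI channel yields only a few kbit/s, which matches the observation that MI techniques are suited to low data rate monitoring applications, whereas a MHz-wide EM channel would support roughly three orders of magnitude more.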
The comparison of the path loss parameter is quite complex, as its behavior differs across scenarios. For the very near region (transmission distance d &lt; 1m), the direct MI method exhibits a smaller path loss than the EM technique, but beyond this the path loss of the MI channel becomes even 20 dB more than that of the EM technique. Also, for dry soil the EM path loss is lower, but with the increase of VWC in the soil (which is generally the case) the path loss of the EM wave keeps increasing, making MI the better alternative for such cases <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>. Furthermore, as the path loss is inversely proportional to the operating frequency in the EM method and directly proportional for the MI method, for operating frequencies greater than 900 MHz the path loss decreases for the direct MI approach <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>. The MI waveguide approach is better than both the EM wave and the direct MI approach. The communication range offered by both EM wave and direct MI is not enough for practical applications. Here, the MI waveguide approach proves better, offering communication distances almost 25 times those of direct MI or EM communication <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b)</ns0:ref>. The MI waveguide transceivers require less than half of the energy consumed by the EM wave or direct MI methods, making the MI waveguide suitable for energy constrained applications, in addition to a lower overall cost since the relay nodes do not require any power <ns0:ref type='bibr' target='#b41'>(Sun and Akyildiz, 2009)</ns0:ref>. For both the ordinary MI and the MI waveguide system, the bandwidth achieved is small, in the range of 1-2 KHz, which is far less than that of the EM wave mechanism, but it suffices for applications of low data rate monitoring <ns0:ref type='bibr' target='#b34'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b26'>Reddy et al., 2020)</ns0:ref>.</ns0:p><ns0:p>As all characterizations of path loss, bit error rate or transmission distance for the direct MI or MI waveguide techniques have been done under the assumption of sensor nodes deployed in a straight line, which is not always the case in reality, the MI 3-D coil mechanism becomes the best option to offer omnidirectional coverage while keeping the other benefits the same <ns0:ref type='bibr' target='#b43'>(Sun and Akyildiz, 2010b;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Ishtiaq and Hwang, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESEARCH CHALLENGES AND SCOPE FOR FUTURE WORK</ns0:head><ns0:p>There are a number of challenges before MI-WUSNs which need further attention, exploration and research work. As clarified in the previous section, the bandwidth achieved with all MI techniques is very small, due to which WUSN applications requiring high data rate monitoring cannot be implemented. The combined usage of active and passive relaying in the MI waveguide, offering low path loss, is one more challenging area due to significant design constraints related to determining the appropriate location and operation pattern of each relay node <ns0:ref type='bibr' target='#b50'>(Tam et al., 2020)</ns0:ref>. Similarly, although using orthogonal or 3D coils boosts the signal quality and other system parameters, designing such coils is also quite challenging and needs further work.
One more area open for future work is interaction of MI-WUSNs with other types of WSNs, such as WUSNs interfacing with underwater WSNs in case of exploration of deep oceans or WUSNs interacting with power grid for monitoring structural health or WUSNs communicating with self-driving cars in case of navigation and charging.The upgradation of presently available solutions taking care of robust adjustment is also a big challenge because slight deviation in either of the system parameters may lead to whole theoretical solution as invalid due to imperfect channel state information (CSI). One example of such scenarios is varying soil wetness during rainfalls which may result in additional critical modification of channel state. One most promising but less explored area of future work for MI-WUSNs is to design cross-layer architecture ensuring multi-objective optimization which could optimize system performance in term of throughput, charging capability and accuracy of localization. Such type of multidimensional Manuscript to be reviewed</ns0:p><ns0:p>Computer Science researchers <ns0:ref type='bibr' target='#b17'>(Kisseleff et al., 2018)</ns0:ref>.Estimating multi-hop route using the static and mobile relay nodes and establishing deterministic channel state models are also areas requiring further research work <ns0:ref type='bibr' target='#b13'>(Ishtiaq and Hwang, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Using wireless sensor networks in non-conventional media such as soil has paved the way to a large number of novel applications ranging from soil monitoring and underground infrastructural monitoring to border security related applications. The transmission constraints such as dynamic channel condition, high path loss and big antenna size put by EM wave communication mechanism for WUSNs have been </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>The section 'Application domains of MI based WUSNs' of the paper elaborates various MI-WUSN based application domains. The next section discusses the edge of MI over conventional EM wave communication. The section 'Various magnetic induction techniques used in WUSNs' highlights the detailed architecture of all MI communication techniques i.e. Direct MI communication, MI waveguide communication and 3D MI coil communication. The next section then highlights the comparative analysis of these MI techniques. Future scope of work and conclusion are given in last two sections.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Usage of MI-WUSNs for detection of leakage of water or oil(Sun et al., 2012).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Basic structure of direct MI communication<ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='7,141.73,353.45,413.57,132.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Analogy of direct MI communication with a transformer(Sun and Akyildiz, 2010b).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.59,496.76' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>a) Small communication range : Although factors affecting the signal attenuation due to varying soil properties don't apply in ordinary MI communication case, but, the magnetic field created by transmitter 7/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Basic structure of MI waveguide technique<ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='9,141.73,497.73,413.57,128.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Analogy of MI waveguide technique with transformer(Sun and Akyildiz, 2010b).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>d) Complex deployment mechanism : Due to very rough and hostile underground communication medium, all transceiver sensor nodes are isolated until and unless connected by MI waveguide mechanism.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Basic structure of MI waveguide with 3D coils.<ns0:ref type='bibr' target='#b51'>(Tan et al., 2015)</ns0:ref> </ns0:figDesc><ns0:graphic coords='13,141.73,355.85,413.58,118.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>addressed using MI communication. This technique is based on the basic principle of mutual induction between coils connected with transceiver nodes of WUSNs. This paper has detailed out the gradual progression from ordinary MI communication to MI waveguide technique to MI waveguide with 3-D coils. These MI techniques have proved fruitful to offer advantages like constant channel condition due to similar permeability of propagation medium (air, water, rocks), reduced path loss due to low-cost and passive relay coils deployed between the transceivers, enhanced communication range attributing to relay coils, feasible bandwidth, negligible propagation delay, and small-sized coils. The comparative analysis of these MI techniques made in the paper has established that MI waveguide using 3D coils is the best technique for practically realizing WUSN applications. 
The future scope and open challenges discussed in the paper further opens various research avenues for researchers in time to come.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='5,141.73,63.78,413.57,206.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,141.73,112.95,413.56,203.61' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparative analysis of EM wave, Direct MI, MI waveguide and 3-D Coil MI waveguide communication methods optimization leading to design of self-charging and power-efficient networks with the constraint of high</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>13/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021) 516 system performance based upon application or operation mode is still an open area for future work for 517 14/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:2:0:NEW 10 Jun 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dated : 10th June 2021 Department of Computer Science & Engineering Guru Jambheshwar University of Science & Technology Hisar (Haryana) - India Respected Editor & Reviewers Greetings. The feedbacks and comments given by you were valuable for the betterment of manuscript and therefore all comments given by respected editor and other reviewers have been incorporated in the paper. Review Comments have been reproduced here with our responses mentioned in italics. After making the required modifications, revised manuscript has been generated with tracked changes using latexdiff tool. References related additions are not covered by latexdiff. Comments and Answers of Honorable Editor After the first round of revision, reviewers felt that the work is required a major revision based on the following reasons. Hence, we do encourage you to address the concerns and criticisms of the reviewers detailed at the bottom of this letter and resubmit your article once you have updated it accordingly. 1) As this is a review paper, the given reference list doesn‟t capture all the relevant references from all published works from different research group in different parts of the world. It is clear that majority of the references are from the research group from I.F. Akyildiz and Z.Sun etc. Please carefully review this to diversify the references. Answer: Some more recent research papers of last two years by research group not consisting of above said authors have been introdcued during first review in reference list and cited in the paper with appropriate interpretation. Some of them are as follows : Ishtiaq, M. and Hwang, S.-H. (2020). A review of relay based underground magnetic induction communication using min-max approach. pages 281–282. Reddy, T. P., Kumar, C. S., Suman, K., Avinash, U., and Kuresan, H. (2020). Wireless underground sensor network using magnetic induction. In 2020 International Conference on Communication and Signal Processing (ICCSP), pages 1394–1398. IEEE. Muzzammil, M., Ahmed, N., Qiao, G., Ullah, I., and Wan, L. (2020). Fundamentals and advancements of magnetic-field communication for underwater wireless sensor networks. IEEE Transactions on Antennas and Propagation, 68(11):7555–7570. Liu, B. H., Fu, T. H., and Wang, Y. B. (2021). Research on the model and characteristics of underground magnetic induction communication channel. Progress In Electromagnetics Research M, 101:89–100. Masihpour Debnath, S. (2021). Network coverage using mi waves for underwater wireless sensor network in shadowing environment. IET Microwaves, Antennas & Propagation, pp 1-7. 2) In many places of the paper, reference was not given or missing. Authors should address this comment carefully. For example: (a) under “SURVEY METHODOLOGY: APPLICATION DOMAINS OF MI BASED WUSNS: a) Underground leakage monitoring applications”, (b) under “SURVEY METHODOLOGY: APPLICATION DOMAINS OF MI BASED WUSNS: e) Sports field monitoring applications”(c) Answer: Relevant references/citations have been added in mentioned sub-sections i.e. (b) & (e) of section “Application domains of MI based WUSNs”. Checking the same thing in entire paper, three citations have also been added in section “Research challenges and scope for future work”. 3) Under “MAGNETIC INDUCTION AS BETTER ALTERNATIVE OF EM WAVE COMMUNICATION”, some RF/Microwave/Sensors/Antennas references should be given here. 
If authors only used the references from other paper which is a review paper, this review paper will not add any value to the published domain as this information is available from other published works. Insightful review, discussions and citations are required here to enhance the quality of this paper. Answer: As suggested by the respected reviewer, reference of WUSNs using Ultra Wideband frequency and effect of burial depth and soil moisture on Surface penetrating radar using EM communication have been introduced in the section “Magnetic Induction as better alternative of EM wave communication”. As the prime focus of the paper is comparative analysis of MI techniques with each other and also with EM wave technique. Due to which, the comparison of EM wave communication with prior techniques has not been detailed. Also, references related to limitations put by EM wave communication such as Antenna size etc have been cited in various points of this section. 4) This comment is related to the previous comment. In section “COMPARATIVE ANALYSIS OF THREE MI COMMUNICATION METHODS”, Table 1 shows the comparison between EM wave communication, Direct MI, MI waveguide and 3D coil MI waveguide. I cannot see any reference or detailed discussions, i.e. antenna size, transmission range, bandwidth, etc on the EM wave communication. Not sure how the given references were used to cover the EM wave communication as they are for MI waveguides. Authors should consider some of the following references if related to EM wave communication topics. „A theoretical model of underground dipole antennas for communications in internet of underground things‟, IEEE Trans. Antennas Propag., 2019, 67, (6), pp. 3996–4009 „Link budget maximization for a mobile-band subsurface wireless sensor in challenging water utility environment‟, IEEE Trans. Ind. Electron., 2018, 65, (1), pp. 616– 625 „Improved communications in underground mines using reconfigurable antennas‟, IEEE Trans. Antennas Propag., 2018, 66, (12), pp. 7505–7510 Opportunities and Challenges in Health Sensing for Extreme Industrial Environment: Perspectives From Underground Mines,' in IEEE Access, vol. 7, pp. 139181-139195, 2019, “Design of mobile band subsurface antenna for drainage infrastructure monitoring, IET MAP, 2019” 'Soil effects on the underground-to-aboveground communication link in ultrawideband wireless underground sensor networks', IEEE Antennas Wireless Propag. Lett., vol. 16, pp. 218-221, 2017. Answer: As suggested, Relevant References related to EM wave communication have been added for parameters Antenna Size, Bandwidth, Transmission Range and Effect of VWC in said table. Detailed discussion about these parameters related to EM communication has already been done in section “Magnetic Induction as better alternative of EM wave communication”. References newly added in reference list and cited in the table for parameter values for EM communication are as follows: Salam, A. and Vuran, M. C. (2018). Em-based wireless underground sensor networks. In Underground sensing, pages 247–285. Elsevier. Zemmour, H., Baudoin, G., and Diet, A. (2016). Soil effects on the underground-toaboveground communication link in ultrawideband wireless underground sensor networks. IEEE Antennas and Wireless Propagation Letters, 16:218–221. 5) The clarity of all the figures of this paper should be enhanced. Authors should review this carefully. 
Answer: For ensuring enhanced quality and clarity of figures, Figure-3(Direct MI), Figure-5 (MI waveguide) & Figure-7(3D Coil MI waveguide) have been redrawn and Figure-2 has been edited for better clarity of text in the picture. Now all pictures are of better pixel resolution. [# PeerJ Staff Note: Please ensure that all reviewer and editor comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter. Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] [# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #] Comments and Answers of REVIEWER-2 (Anonymous) Basic reporting 1) The literature was not well modified. Only a few new literatures were added without detailed interpretation. Answer: Interpretation have been added for the references recently added in the paper in Introduction section. 2) Some pictures are not clear enough. It is better to use vector pictures. Answer: For ensuring enhanced quality and clarity of figures, Figure-3, Figure-5 & Figure7 have been redrawn and Figure-2 have been modified. Now all pictures are of better pixel resolution analysis. Experimental design no comment Validity of the findings no comment Basic reporting 1) The literature was not well modified. Only a few new literatures were added without detailed interpretation. Answer: Interpretation has been added in various sections especially INTRODUCTION section for newly added literatures. 2) Some pictures are not clear enough. It is better to use vector pictures. Answer: For ensuring enhanced quality and clarity of figures, Figure-3, Figure-5 & Figure7 have been redrawn and Figure-2 have been modified. Now all pictures are of better pixel resolution analysis. Comments for the Author 1) Please add the latest references, and they must be interpreted in detail in the text, not simply cited. Answer: Latest references have been added with their interpretation in paper. 2) Please increase the clarity of the picture. Answer: Some more recent research papers of related areas have been studied and referred in introduction and other sections for strengthening the literature review. Also, the quantitative values have been added in Section “Comparative analysis” as well as in table Answer: For ensuring enhanced quality and clarity of figures, Figure-3, Figure-5 & Figure7 have been redrawn and Figure-2 have been modified. Now all pictures are of better pixel resolution analysis. 3) The references on which key technologies are described are still too old. Answer: References of last two-three years have been added related to key technologies such as EM wave communication & MI wave communication. Some of them are as follows : Ishtiaq, M. and Hwang, S.-H. (2020). A review of relay based underground magnetic induction communication using min-max approach. pages 281–282. Reddy, T. P., Kumar, C. S., Suman, K., Avinash, U., and Kuresan, H. (2020). Wireless underground sensor network using magnetic induction. 
In 2020 International Conference on Communication and Signal Processing (ICCSP), pages 1394–1398. IEEE. Muzzammil, M., Ahmed, N., Qiao, G., Ullah, I., and Wan, L. (2020). Fundamentals and advancements of magnetic-field communication for underwater wireless sensor networks. IEEE Transactions on Antennas and Propagation, 68(11):7555–7570. Liu, B. H., Fu, T. H., and Wang, Y. B. (2021). Research on the model and characteristics of underground magnetic induction communication channel. Progress In Electromagnetics Research M, 101:89–100. Masihpour Debnath, S. (2021). Network coverage using mi waves for underwater wireless sensor network in shadowing environment. IET Microwaves, Antennas & Propagation. Once again thanks and we hope that the editor and reviewers agree with the action and reasoning presented in these replies and that the revised version of our contribution will meet your expectations. Partap Singh (on behalf of all authors) "
Here is a paper. Please give your review comments after reading it.
294
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A large range of applications have been identified based upon the communication of underground sensors deeply buried in the soil. The classical electromagnetic wave (EM) approach, which works well for terrestrial communication in air medium, when applied for this underground communication, suffers from significant challenges attributing to signal absorption by rocks, soil, or water contents, highly varying channel condition caused by soil characteristics, and requirement of big antennas. As a strong alternative of EM, various magnetic induction (MI) techniques have been introduced. These techniques basically depend upon the magnetic induction between two coupled coils associated with transceiver sensor nodes. This paper elaborates on three basic MI communication mechanisms i.e. direct MI transmission, MI waveguide transmission, and 3D coil MI communication with detailed discussion of their working mechanism, advantages and limitations. The comparative analysis of these MI techniques with each other as well as with EM wave method will facilitate the users in choosing the best method to offer enhanced transmission range (upto 250m), reduced path loss(&lt;100dB), channel reliability, working bandwidth(1-2 KHz), &amp; omni-directional coverage to realize the promising MI based wireless underground sensor network (WUSN) applications.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Based upon the nature of underlying medium, wireless sensor networks (WSNs) may be categorized as airbased terrestrial WSNs, soil-based underground WSNs and water-based underwater WSNs <ns0:ref type='bibr' target='#b1'>(Akyildiz et al., 2002)</ns0:ref>. The wireless underground sensor networks (WUSNs) <ns0:ref type='bibr' target='#b36'>(Sardar et al., 2019)</ns0:ref> comprise of wireless sensor devices, which operate in a subterranean environment and interact with one another wirelessly <ns0:ref type='bibr' target='#b6'>(Banaseka et al., 2021)</ns0:ref>. These sensing nodes may be either deployed within closed underground structures like underground roads/subways, mines <ns0:ref type='bibr' target='#b12'>(Forooshani et al., 2013)</ns0:ref> or tunnels <ns0:ref type='bibr' target='#b11'>(Dudley et al., 2007)</ns0:ref> or may be buried completely under the ground. In the first scenario, in spite of sensor networks being located underground, signals are communicated through air in void space existing below earth surface which helps in improving the security in underground mines ensuring comfortable communications for the passengers and drivers in road/subway hollow tunnels as well as in securing these structures from attacks PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:4:0:NEW 7 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science by means of consistent monitoring <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. 
Similarly, in the second scenario, sensing nodes buried under the ground surface interact with one another through soil as the propagation medium, and this kind of WUSN communication offers a huge range of novel applications including smart irrigation <ns0:ref type='bibr' target='#b10'>(Dong et al., 2013)</ns0:ref>, precision agriculture <ns0:ref type='bibr' target='#b62'>(Yu et al., 2017)</ns0:ref>, border patrolling, soil monitoring, prediction of landslides <ns0:ref type='bibr' target='#b5'>(Aleotti and Chowdhury, 1999)</ns0:ref>, earthquakes or volcano eruptions <ns0:ref type='bibr' target='#b60'>(Werner-Allen et al., 2006)</ns0:ref>, and many more applications related to the Internet of underground things (IoUT) <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006;</ns0:ref><ns0:ref type='bibr' target='#b6'>Banaseka et al., 2021)</ns0:ref>.</ns0:p><ns0:p>The researchers observed that using terrestrial motes for underground communication could not yield satisfactory and reliable results due to the harsh environment of underground media <ns0:ref type='bibr' target='#b43'>(Stuntebeck et al., 2006)</ns0:ref>. They subsequently devoted considerable effort to analysing the characteristics of the WUSN communication channel. Underground signal communication using EM wave propagation has been analysed using channel characterization models in <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>(Li et al., 2007)</ns0:ref>, <ns0:ref type='bibr' target='#b59'>(Vuran and Silva, 2010)</ns0:ref>, <ns0:ref type='bibr' target='#b26'>(Peplinski et al., 1995)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Silva and Vuran, 2009)</ns0:ref>, <ns0:ref type='bibr' target='#b58'>(Vuran and Akyildiz, 2010)</ns0:ref>.</ns0:p><ns0:p>These studies evaluated how path loss and bit error rate are affected by environmental factors like water particles in the soil (humidity) as well as network related factors including operating frequency and burial depth of the sensing nodes. The researchers <ns0:ref type='bibr' target='#b41'>(Silva et al., 2015)</ns0:ref> investigated experimentally the communication link characteristics of different types of WUSN channels, namely the underground to underground (UG-UG) channel, the underground to above-ground (UG-AG) channel and the above-ground to underground (AG-UG) channel.</ns0:p><ns0:p>However, the researchers gradually found that the EM wave communication mechanism suffers from serious problems. Firstly, high signal attenuation or path loss is caused by absorption by soil, rock elements and underground water <ns0:ref type='bibr' target='#b12'>(Forooshani et al., 2013)</ns0:ref>. Secondly, as soil properties such as soil type <ns0:ref type='bibr' target='#b33'>(Salam and Vuran, 2016)</ns0:ref> and volumetric water content change very randomly with location and over time, the performance of the sensor network remains unpredictable <ns0:ref type='bibr' target='#b56'>(Trang et al., 2018)</ns0:ref>. The third problem is the use of large-sized antennas, which are deployed so that a practical communication range may be achieved with low operating frequencies <ns0:ref type='bibr' target='#b32'>(Salam and Raza, 2020)</ns0:ref>. The impact of antenna size and orientation as well as soil moisture on underground communication was also assessed experimentally by developing an outdoor WUSN in <ns0:ref type='bibr' target='#b40'>(Silva and Vuran, 2010)</ns0:ref>.
All these issues made EM wave approach unsuitable for deploying WUSNs despite the promising application domains <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p><ns0:p>Last decade has witnessed a new approach called magnetic induction (MI) as an effective alternative of EM communication for harsh environment like underground <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref> or underwater <ns0:ref type='bibr' target='#b3'>(Akyildiz et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b24'>Muzzammil et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Debnath, 2021)</ns0:ref> <ns0:ref type='bibr' target='#b9'>(Domingo, 2012)</ns0:ref>. The MI channel capacity from a pair of transceiver coils has also been elaborated <ns0:ref type='bibr' target='#b17'>(Kisseleff et al., 2014b)</ns0:ref>.The algorithms explaining the efficient mechanism of deploying the magnetic coils for WUSNs have been worked on <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. The authors <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref> have detailed out the analysis of path loss and bandwidth using MI communication approach in soil as underground medium. The multi-hop relay techniques are proposed to extend communication range in near-field MI communication systems <ns0:ref type='bibr' target='#b23'>(Masihpour et al., 2013)</ns0:ref>. Further, gradual analysis of communication channel model has led to evolution of MI waveguide mechanism for minimizing the path loss observed in conventional EM wave approach or original MI mechanism <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref>. The system performance in term of path loss caused by EM wave mechanism, ordinary MI mechanism and MI waveguide mechanism have been analysed, quantified and compared with one another <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2010b, 2009)</ns0:ref>. The reduction in path loss and increase of transmission distance (by more than 20 times) has been observed in MI waveguide communication as compared to ordinary communication by the researchers <ns0:ref type='bibr' target='#b21'>(Liu et al., 2021)</ns0:ref>.Further 3D MI Coils were found to be very beneficial to be used for omnidirectional coverage <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. As compared to single-dimension coils, the improvement in directionality of MI communication using multidimensional coils, metamaterial enhanced antennas and spherical coil-array enclosed loop antennas and polyhedral geometry has been analysed for MI based underwater networks by researchers <ns0:ref type='bibr' target='#b24'>(Muzzammil et al., 2020)</ns0:ref>. The researchers also provided the multimode model for characterizing the wireless channels for WSNs used in underground structures like mines for both EM and MI techniques in their study <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. The scientists also detailed out the architecture and operating framework of monitoring underground pipelines for detecting real time leakage effectively using MI-based WSN, called as MISE-PIPE <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011)</ns0:ref>, . 
The researchers <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for MI-WUSNs has been explained in <ns0:ref type='bibr' target='#b20'>(Lin et al., 2015)</ns0:ref>. A channel model considering the fading effect of presence of large obstacles on coverage performance has also been proposed and analysed for MI based underwater wireless sensor network <ns0:ref type='bibr' target='#b8'>(Debnath, 2021)</ns0:ref>. Further work has been done on relay or waveguide based MI-WUSNs to attain the required Quality of Service(QoS) metrics in form of Min-Max problem using relay selection algorithms, relay placement approaches and optimization of operational parameters <ns0:ref type='bibr' target='#b15'>(Ishtiaq and Hwang, 2020)</ns0:ref>. In addition to EM and MI communication, the researchers have also worked on Acoustic based propagation as communication methodology, but Acoustic approach has proved to be more appropriate for detection based applications as compared to communication based applications <ns0:ref type='bibr' target='#b6'>(Banaseka et al., 2021)</ns0:ref>. The recent advancements made in related areas of MI-WUSNs, which might have impact on implementation of MI-WUSNs have been discussed <ns0:ref type='bibr' target='#b18'>(Kisseleff et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The aim of this paper is comprehensive elaboration of all MI techniques at one place and their comparative analysis with one another as well as with conventional EM wave approach based on thorough literature review. The various MI techniques exhibit performance enhancements with respect to different parameters. This comparative analysis will facilitate selection of appropriate technique with consideration of required parameters as per application and thereby help in finding optimized solution of application specific WUSN implementation. The paper also discusses the challenges faced by MI based WUSN solutions as well as future scope of work based upon further exploration of MI techniques.</ns0:p><ns0:p>The intended users of this review are the researchers working on achieving QoS parameters for WUSNs using MI approach instead of EM wave methodology and thereby realizing the potential MI-WUSN applications. The researchers working in the domain of underwater wireless sensor networks may also be benefited using this analysis.</ns0:p><ns0:p>The rest of the paper is as follows. The section 'Methodology' explains the methodology adopted for </ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>The purpose of this paper is the detailed elaboration of all MI based techniques followed by their comparison with one another as well as with classical EM wave approach for WUSNs. We started with a specific set of search terms used against a meta-search engine 'Google Scholar' to search across multiple databases. These search words were 'WUSN', 'wireless underground sensor networks', 'electromagnetic wave', 'magnetic induction' and 'MI waveguide'. The next step of this systematic literature review was to gather all retrieved documents from the year 2006 to the year 2020, where the process of screening included downloading the papers published in various journals and reading their abstracts. From these, we identified all MI techniques used in WUSNs. 
For each of these MI techniques, further papers were searched and retrieved based on which, advantages and limitations of these MI techniques have been listed out and compared further. The literature review includes study of more than 100 research papers.</ns0:p></ns0:div> <ns0:div><ns0:head>APPLICATION DOMAINS OF MI BASED WUSNS</ns0:head><ns0:p>Unlike EM wave based WUSNs used for the cases of low burial depth utilizing UG-AG and AG-UG channels, MI-based WUSNs are especially helpful for the applications working on pure UG-UG channels, where underground sensor devices are deeply deployed in the soil or no above ground devices are there. Some of such application areas identified by the researchers are given here : a) Underground leakage monitoring applications MI based WUSNs may be applied for monitoring and detecting the leakage in underground infrastructures like water, gas or oil pipelines. It helps in assuring that there are no leakages in underground fuel tanks and also in determining the actual amount of oil currently available in the fuel tank so that overflowing may be avoided. MI-WUSNs also play an important role in monitoring the leakage in underground septic sewage tanks. These are used with the help of sensors deployed along the path of pipelines so as to localize and repair the leakage of gas or water from gas pipelines or water pipelines respectively <ns0:ref type='bibr' target='#b31'>(Sadeghioon et al., 2018)</ns0:ref>. (See Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b) Disaster prediction applications</ns0:head><ns0:p>MI-WUSNs are also used in assessing and predicting the disastrous situations like floods, eruption of volcanoes and earthquake, Tsunami, oil spilling or land sliding situations. These disastrous conditions do arise attributing to alterations in materials like soil, water etc. and these changes are monitored using underground sensor devices. MI-WUSNs prove far better as compared to current methods of landslide prediction, which are costlier as well as more time consuming to get deployed <ns0:ref type='bibr' target='#b38'>(Sheth et al., 2005)</ns0:ref>.</ns0:p><ns0:p>Similarly, MI-WUSNs measure and monitor the imbalances of glacier movements and volcanic eruption movements as well. All these things help in fine predictions of forthcoming natural disasters.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Agricultural monitoring applications</ns0:head><ns0:p>MI based WUSNs can be well utilized for monitoring applications related to agriculture, mines, tunnels, pollution and many more. The soil sensors buried inside the ground as part of WUSN may be used for observing the soil properties like soil makeup <ns0:ref type='bibr' target='#b27'>(Raut and Ghare, 2020)</ns0:ref>, water content in soil (humidity), density of soil etc. facilitating smart and efficient irrigation system <ns0:ref type='bibr' target='#b35'>(Sambo et al., 2020)</ns0:ref>. MI-WUSNs are also beneficial for continual monitoring of methane or carbon monoxide gases inside mines which can explode and cause a big fire, if not monitored properly. 
These sensors are also utilized for habitat (marine life, fish farms) monitoring, exploration (natural resource) monitoring or observance of underwater pollution also.</ns0:p></ns0:div> <ns0:div><ns0:head>d) Underground structural monitoring applications</ns0:head><ns0:p>MI based WUSNs are also used for monitoring the internal structures of dams, buildings to know the factors which influence the durability of these structures <ns0:ref type='bibr' target='#b25'>(Park et al., 2005)</ns0:ref>. It is ensured by tracking stress and strain present in the material; used for their construction like water, sand, concrete etc.</ns0:p></ns0:div> <ns0:div><ns0:head>e) Sports field monitoring applications</ns0:head><ns0:p>One important application domain making use of MI-WUSNs is sports field monitoring where sensor nodes buried inside are used for monitoring the condition of soil of different types of playgrounds or sport fields like golf courses, soccer fields, baseball fields or grass tennis courts. The poor turf conditions do cause uncomfortable playing experience for the players, making it necessary to monitor and maintain the health of grass <ns0:ref type='bibr' target='#b0'>(Akyildiz and Stuntebeck, 2006;</ns0:ref><ns0:ref type='bibr' target='#b19'>Li et al., 2007)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>f) Security related applications</ns0:head><ns0:p>MI-WUSNs are better suited for security related applications because these do have higher degree of concealment in comparison with terrestrial sensor devices. It is so because their presence is hidden and the chances of determining their presence and deactivating them are very less. By deploying pressure sensors</ns0:p></ns0:div> <ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_0'>2021:02:58375:4:0:NEW 7 Oct 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>under MI-WUSN along the border area, the concerned authorities can be alerted as soon as some illegal intruder tries to cross that region. The applications like surveillance of submarines or mines <ns0:ref type='bibr' target='#b30'>(Rolader et al., 2004)</ns0:ref> are also very beneficial using MI-WUSNs. (See Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>)</ns0:p></ns0:div> <ns0:div><ns0:head>g) MI-Assisted Wireless Powered Underground Sensor Networks</ns0:head><ns0:p>WUSNs allow for remote monitoring and management of a variety of subsurface environments, however those have a substantial reliability issue. To solve this issue and alleviate current networking issues, <ns0:ref type='bibr' target='#b22'>(Liu, 2021)</ns0:ref> presents the magnetic induction (MI)-assisted wireless powered underground sensor network (MI-WPUSN), a new idea that combines the benefits of MI communication techniques with those of wireless power transfer mechanisms. MI-WPUSN is one-of-a-kind platform with seven envisioned devices and four various communication modes that has considerable reliability potential but is limited by its complex and difficult data collection. 
</ns0:p></ns0:div> <ns0:div><ns0:head>MI AS BETTER ALTERNATIVE OF EM WAVE COMMUNICATION</ns0:head><ns0:p>The conventional signal communication techniques based upon EM wave propagation are not very encouraging <ns0:ref type='bibr' target='#b14'>(Huang et al., 2020)</ns0:ref> for most underground communication applications due to the following reasons:</ns0:p></ns0:div> <ns0:div><ns0:head>a) High signal attenuation</ns0:head><ns0:p>The path loss for underground WSNs is highly dependent on a large number of parameters attributable to the wide variety of underlying media, such as the presence of soil, rock or some fluid under the earth surface, various types of soil makeup like sand, clay or silt, volumetric water content or humidity in the soil, and soil density. Moreover, the path loss of the EM wave mechanism grows far more steeply with distance, whereas the path loss for MI based communication is a logarithmic function of the distance <ns0:ref type='bibr' target='#b6'>(Banaseka et al., 2021)</ns0:ref>. Due to all these parameters, the signal attenuation is very high <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>b) Rapidly changing channel conditions</ns0:head><ns0:p>All the soil characteristics mentioned above vary rapidly and unpredictably with location (like sandy soil in a desert area) or time (like higher water content in the soil during rainfall), due to which the communication channel becomes very unpredictable and unreliable. Consequently, the bit error rate (BER) also changes randomly. Because of this, both satisfactory connectivity and energy efficiency become infeasible to attain for WUSNs <ns0:ref type='bibr' target='#b56'>(Trang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>. Even for WUSNs using the Ultra Wideband (UWB) frequency range of 3.1 to 5 GHz, burial depth and soil moisture were recommended to be lower than 30 cm and 20 percent respectively to attain acceptable signal strength <ns0:ref type='bibr' target='#b63'>(Zemmour et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Large antenna size</ns0:head><ns0:p>For achieving a communication range usable for practical applications, it becomes necessary to operate the transceivers at lower frequencies in the MHz range, for which it further becomes necessary to keep the antenna size large <ns0:ref type='bibr' target='#b32'>(Salam and Raza, 2020)</ns0:ref>. Consequently this antenna becomes too large to be buried in soil <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. Even in the case of the Surface Penetrating Radar used to search for underground objects, the transmitting and receiving antennas had to be moved at constant speed in a linear direction to obtain a cross-sectional image of the object <ns0:ref type='bibr' target='#b7'>(Daniels, 1996)</ns0:ref>.</ns0:p><ns0:p>To address the prominent problems of EM based WUSNs, including large-sized antennas or electrical dipoles <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>, a dynamically changing communication channel <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref> and high path loss, MI technology has been identified as a far better alternative by researchers.
Unlike communication through propagating waves, the MI based transmission mechanism makes use of the near field of a coil associated with the transceiver sensor node <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. Since only a small coil of wire is used for transmitting and receiving the signal, no lower limit on coil size is imposed in MI communication <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>VARIOUS MI TECHNIQUES USED IN WUSNS</ns0:head><ns0:p>The following subsections cover in detail the various MI communication methodologies: direct or ordinary MI communication, MI waveguide communication and MI three-directional communication.</ns0:p></ns0:div> <ns0:div><ns0:head>Direct or Ordinary MI Communication Technique</ns0:head><ns0:p>The underlying architecture, working, advantages and limitations of the direct or ordinary MI communication technique are detailed as follows:</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of Direct or Ordinary MI Communication approach</ns0:head><ns0:p>In the ordinary or direct MI communication architecture <ns0:ref type='bibr' target='#b16'>(Kisseleff et al., 2014a)</ns0:ref>, the communication signals are transmitted and received with a coil of wire using the fundamental principle of mutual magnetic induction. Following the transformer analogy, the transmitter coil carries the signal in the form of a sinusoidal current, which induces a similar sinusoidal current in the receiver node, and thus communication is accomplished <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref>. The interaction between the coupled transmitter and receiver coils is due to mutual induction (see Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>) <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>For a better understanding of MI transceiver nodes, the functionality of the primary and secondary coils in a transformer may be referred to, as depicted in Fig. <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. Here M indicates the mutual inductance of the transmitter and receiver coils, U s is the transmitter battery voltage, and L t and L r denote the self-inductances of the transmitter and receiver coils. R t and R r are the coil resistances and Z l denotes the load impedance of the receiver node <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015;</ns0:ref><ns0:ref type='bibr'>Sun et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>Owing to the use of near field communication, MI coils operating at lower frequency bands can achieve more stable and reliable transmission channels in harsh WUSN media like soil or oil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of direct or ordinary MI communication approach</ns0:head><ns0:p>The direct or ordinary MI transmission approach offers a promising solution to the prominent problems of EM communication, viz. rapidly and unpredictably changing channel conditions and the need for big antennas.</ns0:p><ns0:p>Since the magnetic permeability of soil, rocks or water is the same as that of air (4&#960; x 10&#8722;7 H/m), MI channels are not impacted by the dynamic changes of the soil over time or location <ns0:ref type='bibr' target='#b17'>(Kisseleff et al., 2014b;</ns0:ref><ns0:ref type='bibr' target='#b52'>Sun et al., 2011)</ns0:ref>. Therefore, this parameter has no effect on the path loss for MI solutions.
It has also been clearly observed in <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref> that if the water content in the soil increases by 25%, there is no impact on the path loss of the MI solution, whereas a significant path loss (up to 40 dB) occurs for the EM wave solution. Also, instead of using huge antennas, small magnetic coils (of radius &lt; 0.1 meter) are used for MI communication, which makes WUSNs implementable in a practical manner <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. For transmission distances of less than 1 m, the path loss of the MI method has been observed to be smaller than that of the EM wave method <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of direct or ordinary MI communication approach</ns0:head><ns0:p>In spite of the benefits of direct MI communication, like a stable communication channel and small coil size, some constraints hinder it from being a suitable alternative for underground sensor communication applications. a) Small communication range : Although the signal attenuation caused by varying soil properties does not apply in the ordinary MI communication case, the magnetic field created by the transmitter coil is weakened by the time it reaches the receiver coil <ns0:ref type='bibr'>(Sun et al., 2013)</ns0:ref>. Due to this attenuation rate of the near magnetic field, the attainable communication range is still too small (of the order of 10 m) for practical applications <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. b) High path loss for larger transmission distances : The path loss in the ordinary MI communication case scales with the cube of the transmission distance rather than with the distance itself (a 1/r^3 vs. 1/r field fall-off) <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011)</ns0:ref>. For this reason, MI communication is discouraged for terrestrial WSNs. For applications related to underground sensor nodes, although the path loss of the MI approach caused by soil absorption is very low compared to EM communication, the total path loss may still be high (greater than 100 dB) for larger transmission distances <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. However, for operating frequencies exceeding 900 MHz, the path loss has been observed to decrease compared to EM wave communication <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>. c) Performance affected by the intersection angle of the two coils : The system performance of the direct MI mechanism is maximal if the transceiver nodes are deployed face to face along the same axis. In practice, however, the intersection angle of the two coils is non-zero and it significantly affects the communication performance <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. d) Insufficient bandwidth : The bandwidth attained using the ordinary MI communication approach is very small (1-2 KHz), which is insufficient for many practical applications <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016)</ns0:ref>, whereas it is in the MHz range for the EM wave solution.
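To make the direct-MI limitations more concrete, the minimal numerical sketch below (Python) links the transformer-analogy parameters of Fig. 4 (M, U s, L t, L r, R t, R r, Z l) to the 1/r^3 fall-off behind limitations a) and b). It is illustrative only: it assumes coaxial coils whose spacing is much larger than their radii (so the textbook approximation M = mu0*pi*N_t*N_r*a_t^2*a_r^2/(2*r^3) applies), neglects the impedance reflected from the weakly coupled receiver, and uses made-up component values rather than figures from the surveyed papers.

# Minimal direct-MI link sketch (Python).  Assumptions: coaxial coils with
# spacing r >> coil radii, weak coupling (reflected impedance ignored), and
# purely illustrative component values.
import math

MU0 = 4 * math.pi * 1e-7   # permeability of soil/air/water (H/m), as noted above

def mutual_inductance(nt, nr, at, ar, r):
    # Approximate M between two coaxial multi-turn loops, valid for r >> at, ar.
    return MU0 * math.pi * nt * nr * at**2 * ar**2 / (2 * r**3)

def received_voltage(us, f, nt, nr, at, ar, r, rt, lt):
    # Open-circuit voltage induced at the receiver coil: |V_r| = w * M * |I_t|.
    w = 2 * math.pi * f
    it = us / math.hypot(rt, w * lt)   # transmitter loop current magnitude
    return w * mutual_inductance(nt, nr, at, ar, r) * it

if __name__ == '__main__':
    # Illustrative coil: radius 5 cm, 20 turns, 5 ohm, 50 uH, driven at 10 MHz.
    for r in (1.0, 2.0, 5.0, 10.0):
        v = received_voltage(us=5.0, f=10e6, nt=20, nr=20, at=0.05, ar=0.05,
                             r=r, rt=5.0, lt=50e-6)
        print(f'range {r:5.1f} m: induced receiver voltage ~ {v:.3e} V')

Doubling the distance divides the induced voltage by eight, which is the 1/r^3 near-field fall-off behind the roughly 10 m usable range of direct MI noted in limitation a).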
For enhancing the channel gain for direct MI communication, either the coil size may be increased or number of turns in coils may be increased. However, it results in increased size of transceivers. One more parameter influencing the channel gain is unit length resistance of the loop. Therefore lesser resistance wires and circuits may be used for ensuring lesser path loss with no increase in size. Further, it is possible to decrease the wire or circuit resistance by using better conductivity wires, better capacitors, better connectors and customized printed circuit boards (PCB) <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI Waveguide Communication Technique</ns0:head><ns0:p>To overcome the constraints like limited communication range and high path loss observed by researchers in using direct MI mechanism, the development of an advanced technique called waveguide-based MI communication has been explored <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b45'>Sun and Akyildiz, 2010a)</ns0:ref>. This technique has proved efficient in significantly minimizing the transmission path loss, increasing the communication range and attaining a practical bandwidth of inter-sensor communication in various applications related to underground environment (see Fig. <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>) <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016)</ns0:ref>. Following subsections discuss in detail the architecture, benefits and limitations of MI waveguide technique.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture and working pattern of MI waveguide technique</ns0:head><ns0:p>The MI waveguide architecture comprises of series of multiple resonant relay coils <ns0:ref type='bibr' target='#b54'>(Tam et al., 2020)</ns0:ref> which are deployed between the underground transceiver sensor nodes and are wirelessly connected with one another in MI based WUSNs <ns0:ref type='bibr'>(Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b19'>Li et al., 2007)</ns0:ref>. Although the MI waveguide structure was initially discussed <ns0:ref type='bibr' target='#b54'>(Tam et al., 2020)</ns0:ref> where the relay coils used to be very near to one another resulting in strong coupling, but for MI wireless communication, the coupling between relay nodes is quite weak due to their not being very close to one another <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>. Wave technique is also used for EM communication, but unlike the relay points using EM wave technique, the MI relay point is simply a coil having no source of energy or processing device <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011</ns0:ref><ns0:ref type='bibr'>(Sun et al., , 2012))</ns0:ref>.The MI waveguide and the regular waveguide are based on different principles and usable for different types of applications <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>The basic working principle behind inter sensor communication in MI waveguide mechanism is the serial magnetic induction or coupling between relay nodes located next to one another <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. Although some relay nodes do exist in between transmitter and receiver devices, but even then this MI waveguide communication pattern comes under category of wireless communication. 
Owing to this unique physical architecture of the MI waveguide, there is a high degree of freedom in deploying the nodes and utilizing them under the numerous harsh conditions of the underground medium <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. As the MI transceiver nodes and relay coils are magnetically coupled by virtue of their placement in a straight line, the relay coils receive the induced current sequentially until it arrives at the receiver node; throughout this process, the signal is reinforced until it reaches the receiver <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In the MI waveguide, the sinusoidal current in the transmitter coil produces a time-varying magnetic field, which induces another sinusoidal current in the first relay coil; this relay then repeats the process for the next relay coil, and so on (see Fig. <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>) <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref>. This allows the magnetic induction to be transmitted passively through all the intermediate coils until the MI receiver is reached, creating the MI waveguide <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>.</ns0:p><ns0:p>It has been established that six or more relay coils are required to attain an effective increase in signal strength over a transmission range of 2 m <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. The relay density, i.e. the number of relays, can be lowered further by using coils with low unit resistance and wires and circuits of high conductivity <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. It has also been verified through physical experiments that the communication range is extended by the MI waveguide system compared with direct MI communication. Other factors, such as the scale, the resistance of the magnetic coils and the number of turns, also influence the MI transmission <ns0:ref type='bibr' target='#b57'>(Vidhya and Danvarsha, 2021)</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> depicts the MI waveguide mechanism with n coils in total, of which n-2 are relay coils placed equidistantly in a straight line between the transceiver nodes; r is the distance between adjacent coils and d is the total communication range between the transmitter and receiver node, so that d = (n-1) r; a is the radius of all coils, and C is the capacitor with which each relay and transceiver coil is loaded. The relay coils can be made resonant, for effective transmission of the magnetic signals, by an appropriate choice of the capacitor value. Between any two adjacent relay coils there is a mutual inductance whose value depends on their inter-coil distance <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of MI waveguide approach</ns0:head><ns0:p>By using the MI waveguide approach for underground communication, the following benefits have been observed: a) Stable MI Channel : As most of the underground transmission medium, i.e. soil, is non-magnetic with almost equal permeability values, the attenuation rate of the magnetic fields created by the coils remains almost unaffected, keeping the MI channel conditions constant and stable <ns0:ref type='bibr'>(Sun and</ns0:ref><ns0:ref type='bibr'>Akyildiz, 2013, 2012)</ns0:ref>.
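Before continuing with the list of advantages, the geometric bookkeeping of the waveguide paragraph above (n coils spaced r apart spanning d = (n-1) r, each loaded with a capacitor C chosen for resonance) can be sketched in a few lines of Python; the coil self-inductance is treated as a known, measured input, and the standard LC-resonance relation C = 1/((2*pi*f0)^2 * L) is used rather than any design formula specific to the cited papers.

import math

def relay_layout(total_range_m, spacing_m):
    """Number of coils needed so that d = (n - 1) * r covers the requested range."""
    n_coils = math.ceil(total_range_m / spacing_m) + 1
    return n_coils, n_coils - 2          # total coils, passive relays (all but Tx and Rx)

def resonance_capacitor(f0_hz, coil_inductance_h):
    """Loading capacitance that makes an L-C coil resonate at f0."""
    return 1.0 / ((2.0 * math.pi * f0_hz) ** 2 * coil_inductance_h)

# Example: a 100 m link with a relay every 5 m, 10 MHz operation and an assumed
# (hypothetical) coil self-inductance of 50 microhenry.
n_coils, n_relays = relay_layout(100.0, 5.0)
C = resonance_capacitor(10e6, 50e-6)
print(f"{n_coils} coils in total ({n_relays} passive relays), C = {C * 1e12:.1f} pF per coil")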
<ns0:ref type='bibr' target='#b52'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sun and Akyildiz, 2010b)</ns0:ref>.</ns0:p><ns0:p>. c) Bandwidth : The bandwidth achieved for both ordinary MI communication as well as MI waveguide communication is in the same range (1-2 KHz), which has been found sufficient for non-traditional media applications requiring low data rate monitoring <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kisseleff et al., 2014a)</ns0:ref>. Also, if operating frequency is 10MHz, then 3-dB bandwidth of MI waveguide is found to be in same range as of direct MI communication i.e. (1-2 KHz) <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref>. d) Path Loss Reduction : Due to placement of relay coils between transceiver sensor nodes, MI waveguide mechanism offers huge reduction of path loss, which is the most prominent advantage of this technique <ns0:ref type='bibr' target='#b11'>(Dudley et al., 2007)</ns0:ref>. More specifically, this is attributed to appropriate design of waveguide parameters <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>. The analysis in <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref> Manuscript to be reviewed Computer Science transmission range of even slightly more than 5 meters, same path loss of 100 dB is observed for EM wave system as well as for direct MI communication. By reducing the relay distance and resistance value of coil wire, path loss can be further lessened for MI waveguide <ns0:ref type='bibr' target='#b21'>(Liu et al., 2021)</ns0:ref>. e) Extension of Transmission Range : Using MI waveguide technique, the transmission range is significantly extended as compared to EM wave communication or ordinary MI communication <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2013)</ns0:ref>. It has also been established experimentally that if Mica2 sensor are used for underground communication in soil using EM wave mechanism, communication range achieved is less than 4 m, which increases to 10 m with similar device size and power for direct MI communication and further gets extended to more than 100m for MI waveguide communication <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b45'>Sun and Akyildiz, 2010a)</ns0:ref>. The distance between relay nodes is even more than the maximum communication range of EM wave transmission <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016)</ns0:ref>. Due to this extension in transmission range using MI waveguide mechanism, a fully connected sensor network may be attained even without deploying large number of sensors <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>.</ns0:p><ns0:p>It has also been observed by the researchers that with the increase in transmission distance, the transmission power gets decreased (upto 50% of power required for EM or direct MI) making it favourable for the energy-constrained non-traditional media <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016)</ns0:ref>. 
f) Better robustness and easier deployment and maintenance : Unlike a real waveguide, the MI waveguide is not a continuous structure and is therefore comparatively flexible and easy to deploy and maintain, with coils placed every 6 to 12 m <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref> <ns0:ref type='bibr' target='#b2'>(Akyildiz et al., 2009)</ns0:ref>. The relay coils in the MI waveguide do not need extra power, because the magnetic induction is relayed passively <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>. Hence, unlike the sensor devices, these relay coils are easily deployable and, once buried in soil, do not need much regular maintenance <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>. If some of the relay coils get damaged under harsh conditions, the remaining relay coils still ensure the robustness of the sensor network <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>g) Cost :</ns0:head><ns0:p>As the relay coils used in the MI waveguide consume no energy and their unit cost is very low <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr'>Sun and Akyildiz, 2012;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009)</ns0:ref>, the overall cost of the underground sensor network is reduced to a large extent compared with using expensive relay sensor devices in EM wave communication <ns0:ref type='bibr'>(Sun et al., 2012)</ns0:ref>. h) Prolonged system lifetime : The MI waveguide technique also leads to a prolonged lifetime of the system, because the underground sensor devices equipped with MI transceiver nodes can be recharged by above-ground charging devices using an inductive charging mechanism <ns0:ref type='bibr'>(Sun and Akyildiz, 2013</ns0:ref><ns0:ref type='bibr' target='#b9'>, 2012</ns0:ref><ns0:ref type='bibr' target='#b46'>, 2010b)</ns0:ref>.</ns0:p><ns0:p>In RF-challenged environments it is very cumbersome to replace device batteries; the option of magnetic induction charging therefore proves very beneficial <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of MI waveguide approach</ns0:head><ns0:p>In spite of the numerous advantages offered by the MI waveguide technique for WUSNs, researchers have observed several constraints, which highlight the need for further optimization work on the MI waveguide approach. a) Limited channel capacity and data rate : Because of the low ratio (of the order of 2.5) of mutual induction to self-induction (also termed the relative magnetic coupling strength) between adjacent relay coils operating at the resonant frequency to attain low path loss in the MI waveguide approach, the channel bandwidth becomes very limited (1-2 kHz) <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>. This decrease in channel bandwidth becomes more adverse if the communication distance increases beyond a particular threshold value, which finally leads to a low channel capacity as well as an unsatisfactory data rate, in spite of the large communication range <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>. b) Usable for limited application domains : Owing to the limited data rate and bandwidth of the channels, the MI waveguide technique may be adopted only for applications where the required data rate is low <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>.
For WUSN applications like rescuing the trapped ones in underground mines or border patrolling, large amount of data is required to be timely transmitted on MI channels, which requires higher data rate and bandwidth. Hence much efforts are needed for enhancing the MI channel Manuscript to be reviewed Computer Science capacity for MI-WUSNs <ns0:ref type='bibr' target='#b45'>(Sun and Akyildiz, 2010a)</ns0:ref>. c) Reliability issue : As the multiple resonant MI relay coils constitute the foundation of communication success of MI waveguide approach, hence overall performance of such sensor networks are based not only on the transceiver sensor nodes, but also these relay coils. Therefore, the issue of reliability of such underground sensor networks is needed to be analysed in tough underground media <ns0:ref type='bibr'>(Sun and Akyildiz, 2012)</ns0:ref>. d) Complex deployment mechanism : Due to very rough and hostile underground communication medium, all transceiver sensor nodes are isolated until and unless connected by MI waveguide mechanism.</ns0:p><ns0:p>Therefore, deployment of large number of relay coils in MI-WUSNs costs a big amount of labor <ns0:ref type='bibr' target='#b6'>(Banaseka et al., 2021)</ns0:ref> and therefore needs very thoughtful and complex strategies <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref> targeting at the objective of building a connected robust wireless sensor network with minimal possible relay coils <ns0:ref type='bibr'>(Sun and Akyildiz, 2013)</ns0:ref>.</ns0:p><ns0:p>It was established by the researchers that for underground pipelines made up of metallic material, no or very few relay coils are required due to metal pipe itself working as magnetic core of MI waveguide.</ns0:p><ns0:p>For non-metallic pipes such as PVC, single relay coil deployed around 5 m distant from one another is enough. For winding these relay coils, the underground pipeline proves to be perfect core leading to small coil deployment cost if they are winded on pipeline during deployment time itself <ns0:ref type='bibr' target='#b52'>(Sun et al., 2011)</ns0:ref>. e) Lack of omnidirectional propagation : Most of the channel characterizations have been done</ns0:p><ns0:p>with assumption of placement of transceivers or relay coils in straight line, which is practically not always true. For transceiver nodes based on MI communication, the strength of received signal at receiver end is affected by the angle between axes of two mutually coupled coils. To maintain high-quality transmission in such cases, multidimensional MI coils are developed and deployed <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MI 3-D coil communication technique</ns0:head><ns0:p>The basic architecture, functionality and advantages for MI 3-D coil communication technique are detailed out in following subsections:</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of 3D coil MI communication approach</ns0:head><ns0:p>In most of the practical underground communication applications, coils are not buried in straight line due to two probable causes, first being inability to deploy relay coils in exact planned positions due to rocks or pipes being already present inside the ground and secondly the positions of already buried coils may get changed during operation of network due to aboveground pressure or movement of soil. 
It is also well established that the received signal strength at the MI transceiver is affected by the angle between the axes of two adjacently placed coils. Therefore, the option of using multidimensional MI coils in such a complex scenario has been worked on for high-quality transmission between sensing nodes <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>More precisely, 3-directional (3D) coils have been designed and used, which offer omni-directional signal coverage with a minimal number of coils, leading to reduced system complexity and cost (refer to Fig. <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>) <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Advantages of 3D coil MI communication approach</ns0:head><ns0:p>In the 3D MI coil system, three individually fabricated unidirectional (UD) coils are vertically mounted on a cubical structure with sides of length 10 cm, such that each coil is deployed perpendicularly to the others. These three coils are meant to form a powerful beam along the three different axes of the Cartesian coordinate system <ns0:ref type='bibr' target='#b15'>(Ishtiaq and Hwang, 2020)</ns0:ref>. As the magnetic flux created by one coil is zero with respect to the other two orthogonal coils, the three coils do not interfere with each other, owing to the field distribution structure of the coils. Similar to direct MI coils, these 3D MI coils are also made of 26-AWG wire, and each coil is loaded with a series capacitor to achieve resonance. At the receiving node, the three signals from the three coils are added <ns0:ref type='bibr' target='#b21'>(Liu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>According to the channel model, once the MI coil parameters and the transmission distance are fixed, it is the intersection angle between the transmitting and receiving nodes that determines the signal strength. With three orthogonal coils, at least one coil can achieve adequate signal strength regardless of how the intersection angle changes. Even if the MI coils are rotated or the intersection angle between them changes, the system is expected to maintain a high quality of communication <ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>. It has also been observed that, if the transmission gain is maximized using optimal power allocation and a spatial-temporal code, good system performance can be achieved by combining the signals received at the three orthogonally placed coils <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In underwater sensor networks, modelling and analysis have likewise shown that the achievable transmission range of an MI system is of the order of 20 m using small coils of 5 cm radius at high water conductivity. Therefore, using 3D coils helps to establish more robust MI links, which remain unaffected by the dynamical rotation of the sensor nodes <ns0:ref type='bibr' target='#b13'>(Guo et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>COMPARATIVE ANALYSIS OF VARIOUS MI COMMUNICATION METHODS</ns0:head><ns0:p>After a detailed study of all the physical layer techniques, it is clear that for applications with sensor nodes deeply buried in soil, MI is a better technology than EM wave technology.
All three MI transmission techniques have their relative advantages and limitations.</ns0:p><ns0:p>The comparative analysis of the EM wave technique and of all the MI techniques is summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Compared with EM wave communication, direct MI communication is the better option when factors such as the dynamic channel conditions of the underlying medium (soil), the need for a large antenna, or the effect of the volumetric water content (VWC) percentage on path loss and connectivity are taken into consideration. The EM wave approach offers better bandwidth than the low 1-2 kHz bandwidth achieved with the MI approach. The comparison of the path loss parameter is quite complex, as its behaviour differs between scenarios. In the very near region (transmission distance d &lt; 1 m), the direct MI method exhibits a smaller path loss than the EM technique, but beyond this the path loss of the MI channel can become even 20 dB larger than that of the EM technique. For dry soil the EM path loss is smaller, but as the VWC of the soil increases (which is generally the case), the path loss of the EM wave keeps increasing, making MI the better alternative in such cases <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>. Moreover, as the path loss is inversely proportional to the operating frequency for the EM method and directly proportional for the MI method, the path loss decreases for the direct MI approach at operating frequencies greater than 900 MHz <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>. The MI waveguide approach is better than both the EM wave and the direct MI approach.</ns0:p><ns0:p>The communication range offered by both EM wave and direct MI communication is not sufficient for practical applications.</ns0:p><ns0:p>Here the MI waveguide approach proves better, offering communication distances almost 25 times those of direct MI or EM communication <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b)</ns0:ref>. The MI waveguide transceivers require less than half of the energy consumed by the EM wave or direct MI methods, making the MI waveguide suitable for energy-constrained applications, in addition to a lower overall cost because the relay nodes do not require any power <ns0:ref type='bibr' target='#b44'>(Sun and Akyildiz, 2009)</ns0:ref>. For both the ordinary MI and the MI waveguide system, the achieved bandwidth is small, in the 1-2 kHz range, which is far less than for the EM wave mechanism but suffices for low-data-rate monitoring applications <ns0:ref type='bibr' target='#b37'>(Sharma et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Reddy et al., 2020)</ns0:ref>. The MI systems (the MI waveguide as well as the 3D MI coil) exhibit constant channel conditions and offer a relatively longer transmission range than the EM wave-based system. All the characterizations of path loss, bit error rate or transmission distance for the direct MI and MI waveguide techniques have been carried out under the assumption of sensor nodes deployed in a straight line, which is not the case in reality. This makes the MI 3-directional coil mechanism the best option, offering omnidirectional coverage while keeping the other benefits <ns0:ref type='bibr' target='#b46'>(Sun and Akyildiz, 2010b;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akyildiz et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b15'>Ishtiaq and Hwang, 2020)</ns0:ref>.</ns0:p></ns0:div>
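Referring back to the 3D-coil geometry described in the previous section, the following small Python sketch illustrates why a tri-axial arrangement keeps coverage under arbitrary rotation. It assumes, as a simplification of the real channel model, that the usable coupling of each unidirectional coil scales with the magnitude of the cosine between its axis and the incoming field direction; under that assumption at least one of three orthogonal coils always retains a projection of at least 1/sqrt(3), whereas a single coil can drop to zero.

import numpy as np

rng = np.random.default_rng(0)

def axis_couplings(field_dir):
    """|cos| of the angle between the field direction and each of three orthogonal coil axes."""
    field_dir = field_dir / np.linalg.norm(field_dir)
    return np.abs(np.eye(3) @ field_dir)

worst_best, worst_single = 1.0, 1.0
for _ in range(20000):                      # random orientations of the incoming field
    c = axis_couplings(rng.normal(size=3))
    worst_best = min(worst_best, c.max())   # best of the three orthogonal coils
    worst_single = min(worst_single, c[0])  # a single fixed coil
print(f"worst best-of-three coupling: {worst_best:.3f}  (lower bound 1/sqrt(3) = {1/np.sqrt(3):.3f})")
print(f"worst single-coil coupling:   {worst_single:.3f}")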
<ns0:div><ns0:head>RESEARCH CHALLENGES AND SCOPE FOR FUTURE WORK</ns0:head><ns0:p>A number of challenges need further attention, exploration and research before MI-WUSNs can be fully utilized. As clarified in the previous section, the bandwidth achieved by all MI techniques is very small, so WUSN applications requiring high-data-rate monitoring cannot yet be implemented.</ns0:p><ns0:p>The combined usage of active and passive relaying in the MI waveguide, offering low path loss, is another challenging area because of significant design constraints related to determining the appropriate location and operation pattern of each relay node <ns0:ref type='bibr' target='#b54'>(Tam et al., 2020)</ns0:ref>. Although using orthogonal or 3D coils boosts the signal quality and other system parameters, designing such coils is also quite challenging and needs further work. Another area open for future work is the interaction of MI-WUSNs with other types of WSNs, such as WUSNs interfacing with underwater WSNs for the exploration of deep oceans, WUSNs interacting with the power grid for monitoring structural health, or WUSNs communicating with self-driving cars for navigation and charging. Upgrading presently available solutions while ensuring robust adjustment is also a big challenge, because a slight deviation in any of the system parameters may render the whole theoretical solution invalid owing to imperfect channel state information (CSI).</ns0:p><ns0:p>One example of such a scenario is varying soil wetness during rainfall, which may result in an additional critical modification of the channel state. One of the most promising but less explored areas of future work for MI-WUSNs is the design of a cross-layer architecture ensuring multi-objective optimization <ns0:ref type='bibr' target='#b42'>(Singh et al., 2021</ns0:ref>), which could optimize system performance in terms of throughput, charging capability and accuracy of localization. This type of multidimensional optimization, leading to the design of self-charging and power-efficient networks under the constraint of high system performance depending on the application or operation mode, is still an open area of future work for researchers <ns0:ref type='bibr' target='#b18'>(Kisseleff et al., 2018)</ns0:ref>. Estimating multi-hop routes using static and mobile relay nodes and establishing deterministic channel state models are also areas requiring further research <ns0:ref type='bibr' target='#b15'>(Ishtiaq and Hwang, 2020)</ns0:ref>. MI communication is an effective and reliable communication mode for underground sensors to interact with each other.</ns0:p><ns0:p>Meanwhile, the Wireless Power Transfer (WPT) mechanism is used to remotely charge the sensor batteries in WUSNs, ensuring better network reliability. The integration of MI and WPT has a large potential for improving the reliability and practicability of WUSNs, especially in situations where no human support is possible <ns0:ref type='bibr' target='#b22'>(Liu, 2021)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>have further suggested two algorithms i.e. MST algorithm and triangle centroid (TC) for effective deployment of MI waveguides to connect the underground sensors in WUSN environment. Further cross layered protocol architecture 2/18 PeerJ Comput. Sci.
reviewing PDF | (CS-2021:02:58375:4:0:NEW 7 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>literature review. The section 'Application domains of MI based WUSNs' of the paper elaborates various MI-WUSN based application domains. The next section discusses the edge of MI over conventional EM wave communication. The section 'Various MI techniques used in WUSNs' highlights the detailed architecture of all MI communication techniques i.e. Direct MI communication, MI waveguide communication and 3D MI coil communication. The next section then highlights the comparative analysis of these MI techniques. Future scope of work and conclusion is given in the last section.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Usage of MI-WUSNs for detection of leakage of water or oil(Sun et al., 2012).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Usage of MI-WUSNs for border security applications(Vuran and Akyildiz, 2010).</ns0:figDesc><ns0:graphic coords='6,141.73,219.87,413.56,203.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Basic structure of direct MI communication<ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='7,141.73,466.10,413.57,132.32' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:4:0:NEW 7 Oct 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Analogy of direct MI communication with a transformer (Sun and Akyildiz, 2010b).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.59,496.76' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Basic structure of MI waveguide technique<ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Analogy of MI waveguide technique with transformer (Sun and Akyildiz, 2010b).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>has shown that MI waveguide offers path loss smaller than 100dB for distance even more than 250 m, whereas for 10/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58375:4:0:NEW 7 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:02:58375:4:0:NEW 7 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Basic structure of MI waveguide with 3D coils.<ns0:ref type='bibr' target='#b55'>(Tan et al., 2015)</ns0:ref> </ns0:figDesc><ns0:graphic coords='13,141.73,573.85,413.58,118.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Using wireless sensor networks in non-conventional media such as soil has paved the way to a large number of novel applications ranging from soil monitoring and underground infrastructural monitoring to border security related applications. 
The transmission constraints, such as dynamic channel conditions, high path loss and large antenna size, imposed by the EM wave communication mechanism for WUSNs have been addressed using MI communication. This technique is based on the basic principle of mutual induction between coils connected to the transceiver nodes of WUSNs. This work has detailed the gradual progression from ordinary MI communication to the MI waveguide technique and to the MI waveguide with 3-D coils. These MI techniques offer advantages such as constant channel conditions owing to the similar permeability of the propagation media (air, water, rocks), reduced path loss owing to the low-cost, passive relay coils deployed between the transceivers, enhanced communication range attributable to the relay coils, feasible bandwidth, negligible propagation delay, and small coils. The comparative analysis of these MI techniques made in this work has established that the MI waveguide using 3D coils is the best technique for practically realizing WUSN applications. The future scope and open challenges discussed in the work further open various research avenues for researchers in the time to come.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='5,141.73,63.78,413.57,206.21' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparative analysis of EM wave, direct MI, MI waveguide and 3-D coil MI waveguide communication methods</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Dated : 7th Oct 2021 Department of Computer Science & Engineering Guru Jambheshwar University of Science & Technology Hisar (Haryana) - India Respected Editor & Reviewers Greetings. Please accept our heartfelt gratitude for your review comments received through E-Mail dated 24th Aug 2021 for improvisation of the manuscript. Each & every comment has been worked upon very carefully and appropriate changes have been done in the manuscript. It is to be noted that review comments are being reproduced in following sections with our responses mentioned in italics. After making the required modifications, revised manuscript has been generated with tracked changes using latexdiff tool. References related additions are not covered by latexdiff. Editor comments The reviewer has expressed concerns about not providing up-to-date references and related content in this review paper. To address this, authors should carefully review all the new literature (last three years, i.e. 2019, 2020, 2021) and provide the details of the development in the proposed research. Answer: As per the comments of the Honorable Editor, up-to-date references are now introduced and also the contents of the review paper are updated. Following thirteen references of last three years (2021, 2020 & 2019) have been referred in the review paper: Banaseka, F. K., Katsriku, F., Abdulai, J. D., Adu-Manu, K. S., and Engmann, F. N. A. (2021). Signal propagation models in soil medium for the study of wireless underground sensor networks: A review of current trends. Wireless Communications and Mobile Computing, 2021. Debnath, S. (2021). Network coverage using mi waves for underwater wireless sensor network in shadowing environment. IET Microwaves, Antennas & Propagation, pages 1–7. Liu, B. H., Fu, T. H., and Wang, Y. B. (2021). Research on the model and characteristics of underground magnetic induction communication channel. Progress In Electromagnetics Research M, 101:89–100. Liu, G. (2021). Data collection in mi-assisted wireless powered underground sensor networks: Directions, recent advances, and challenges. IEEE Communications Magazine, 59(4):132–138. Singh, P., Singh, R. P., and Singh, Y. (2021). An optimal energy-throughput efficient cross-layer solution using naked mole rat algorithm for wireless underground sensor networks. Materials Today:Proceedings. Vidhya, J. and Danvarsha, B. (2021). Design and implementation of underground soil statistics transmission gadget utilizing wusn. In 2021 6th International Conference on Communication and Electronics Systems (ICCES), pages 1–6. IEEE. Huang, H., Shi, J., Wang, F., Zhang, D., and Zhang, D. (2020). Theoretical and experimental studies on the signal propagation in soil for wireless underground sensor networks. Sensors, 20(9):2580. Muzzammil, M., Ahmed, N., Qiao, G., Ullah, I., and Wan, L. (2020). Fundamentals and advancements of magnetic-field communication for underwater wireless sensor networks. IEEE Transactions on Antennas and Propagation, 68(11):7555–7570. Raut, P. and Ghare, P. (2020). Analysis of wireless channel parameters for the different types of soil in the wusns. In 2020 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), pages 1–4. IEEE. Reddy, T. P., Kumar, C. S., Suman, K., Avinash, U., and Kuresan, H. (2020). Wireless underground sensor network using magnetic induction. In 2020 International Conference on Communication and Signal Processing (ICCSP), pages 1394–1398. IEEE. Salam, A. and Raza, U. (2020). 
On burial depth of underground antenna in soil horizons for decision agriculture. In International Conference on Internet of Things, pages 17–31. Springer. Tam, N. T., Dung, D. A., Hung, T. H., Binh, H. T. T., and Yu, S. (2020). Exploiting relay nodes for maximizing wireless underground sensor network lifetime. Applied Intelligence, 50(12):4568–4585. Sardar, M. S., Xuefen, W., Yi, Y., Kausar, F., and Akbar, M. W. (2019). Wireless underground sensor networks. International Journal of Performability Engineering, (11). Reviewer 2 1) Authors did not follow the previous comments and the references are still very old. This is not a qualified review paper. Answer: Sir, we have tried our level best to do the required changes suggested by you in the previous revision. Thirteen references of last three years (2021, 2020, 2019) have been referred in the paper. Following contents with respect to reference of four latest papers have been added in recent revision as per your suggestion. Page 3/18 (INTRODUCTION Section) In addition to EM and MI communication, the researchers have also worked on Acoustic based propagation as communication methodology, but Acoustic approach has proved to be more appropriate for detection based applications as compared to communication based applications (Banaseka et al., 2021). Page 5/18 (MI AS BETTER ALTERNATIVE OF EM WAVE COMMUNICATION Section) Moreover, path loss of EM wave mechanuism is a function whereas path loss for MI based communication is a logarithmic function of the distance(Banaseka et al., 2021). Banaseka, F. K., Katsriku, F., Abdulai, J. D., Adu-Manu, K. S., and Engmann, F. N. A. (2021). Signal propagation models in soil medium for the study of wireless underground sensor networks: A review of current trends. Wireless Communications and Mobile Computing, 2021. Page 5/18 (APPLICATION DOMAINS OF MI BASED WUSNS Section) g) MI-Assisted Wireless Powered Underground Sensor Networks WUSNs allow for remote monitoring and management of a variety of subsurface environments, however those have a substantial reliability issue. To solve this issue and alleviate current networking issues, (Liu, 2021) presents the magnetic induction (MI)-assisted wireless powered underground sensor network (MI-WPUSN), a new idea that combines the benefits of MI communication techniques with those of wireless power transfer mechanisms. MI-WPUSN is one-of-a-kind platform with seven envisioned devices and four various communication modes that has considerable reliability potential but is limited by its complex and difficult data collection. Liu, G. (2021). Data collection in mi-assisted wireless powered underground sensor networks: Directions, recent advances, and challenges. IEEE Communications Magazine, 59(4):132–138. Page 9/18 (MI Waveguide Communication Technique Section) It has been verified through physical experiments also that communication range got extended using MI waveguide system as compared to direct MI communication. Other factors like scale, resistance value of magnetic coils and number of turns also influence the MI transmission (Vidhya and Danvarsha, 2021). Vidhya, J. and Danvarsha, B. (2021). Design and implementation of underground soil statistics transmission gadget utilizing wusn. In 2021 6th International Conference on Communication and Electronics Systems (ICCES), pages 1–6. IEEE. 
Page 15/18 (RESEARCH CHALLENGES AND SCOPE FOR FUTURE WORK Section) One of the most promising but less explored areas of future work for MI-WUSNs is to design cross-layer architecture ensuring multi-objective optimization (Singh et al., 2021) which could optimize system performance in term of throughput, charging capability and accuracy of localization. Singh, P., Singh, R. P., and Singh, Y. (2021). An optimal energy-throughput efficient cross-layer solution using naked mole rat algorithm for wireless underground sensor networks. Materials Today:Proceedings. 2) The overviews of Figure 5 and Figure 7 are better put together. Answer: Thanks for the valuable comments of reviewer. According to the authors opinion, Figure 5 and 7 represent the Basic structure of MI waveguide without and with 3D coils respectively. So, it is better to detail them separately. 3. In page 14, authors only added two technologies in the last 2 rows of the table without describing them. Above all, authors should focus on reviewing newer technologies. Answer: As per the suggestions of Honorable reviewer, the description of the last 2 rows are now introduced in the manuscript (see pp. 13). The MI systems (the MI waveguide as well as 3D MI coil) exhibit constant channel condition and offer relatively longer transmission range than that of the EM wave-based system. All the characterizations of path loss or bit error rate or transmission distance of direct MI or MI waveguide techniques have been done with assumption of sensors nodes deployed in straight line, which is not the case in reality Once again thanks and we hope that the editor and reviewers agree with the action and reasoning presented in these replies and that the revised version of our contribution will meet your expectations. Partap Singh (on behalf of all authors) "
Here is a paper. Please give your review comments after reading it.
295
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Data dimensionality informs us about data complexity and sets limit on the structure of successful signal processing pipelines. In this work we revisit and improve the manifold adaptive Farahmand-Szepesva &#341;i-Audibert (FSA) dimension estimator, making it one of the best nearest neighbor-based dimension estimators available. We compute the probability density function of local FSA estimates, if the local manifold density is uniform. Based on the probability density function, we propose to use the median of local estimates as a basic global measure of intrinsic dimensionality, and we demonstrate the advantages of this asymptotically unbiased estimator over the previously proposed statistics: the mode and the mean. Additionally, from the probability density function, we derive the maximum likelihood formula for global intrinsic dimensionality, if i.i.d. holds. We tackle edge and finite-sample effects with an exponential correction formula, calibrated on hypercube datasets. We compare the performance of the corrected median-FSA estimator with kNN estimators: maximum likelihood (Levina-Bickel), the 2NN and two implementations of DANCo (R and matlab). We show that corrected median-FSA estimator beats the maximum likelihood estimator and it is on equal footing with DANCo for standard synthetic benchmarks according to mean percentage error and error rate metrics. With the median-FSA algorithm, we reveal diverse changes in the neural dynamics while resting state and during epileptic seizures. We identify brain areas with lower-dimensional dynamics that are possible causal sources and candidates for being seizure onset zones.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>corrected median-FSA estimator with kNN estimators: maximum likelihood <ns0:ref type='bibr'>(Levina-Bickel)</ns0:ref>, the 2NN and two implementations of DANCo (R and Matlab). We show that corrected median-FSA estimator beats the maximum likelihood estimator and it is on equal footing with DANCo for standard synthetic benchmarks according to mean percentage error and error rate metrics. With the median-FSA algorithm, we reveal diverse changes in the neural dynamics while resting state and during epileptic seizures. We identify brain areas with lower-dimensional dynamics that are possible causal sources and candidates for being seizure onset zones. </ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>Dimensionality sets profound limits on the stage where data takes place, therefore it is often crucial to know the intrinsic dimension of data to carry out meaningful analysis. Intrinsic dimension provides direct information about data complexity, as such, it was recognised as a useful measure to describe the PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science dynamics of dynamical systems <ns0:ref type='bibr' target='#b26'>(Grassberger and Procaccia, 1983)</ns0:ref>, to detect anomalies in time series <ns0:ref type='bibr' target='#b30'>(Houle et al., 2018)</ns0:ref>, to diagnose patients with various conditions <ns0:ref type='bibr' target='#b19'>(Dlask and Kukal, 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Polychronaki et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b42'>Sharma et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b0'>Acharya et al., 2013)</ns0:ref> and to use it simply as plugin parameter for signal processing algorithms.</ns0:p><ns0:p>Most of the multivariate datasets lie on a lower dimensional manifold embedded in a potentially very high-dimensional embedding space. This is because the observed variables are far from independent, and this interdependence introduces redundancies resulting in a lower intrinsic dimension (ID) of data compared with the number of observed variables. To capture this -possibly non-linear -interdependence, nonlinear dimension-estimation techniques can be applied to reveal connections between the variables in the dataset <ns0:ref type='bibr' target='#b44'>(Sugiyama and Borgwardt, 2013;</ns0:ref><ns0:ref type='bibr' target='#b40'>Romano et al., 2016)</ns0:ref>, particularly between time series <ns0:ref type='bibr' target='#b9'>(Benk&#337; et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Krakovsk&#225;, 2019)</ns0:ref>. In these latter case, the estimated intrinsic dimension provides actionable information about the causal structures within the investigated system based on it's dynamics. Dimension estimation of system's dynamics from time series is supported by theorems of nonlinear dynamical systems. Given a univariate time series generated by a deterministic chaotic dynamical system one can reconstruct the multivariate state of the system for example by time delay embedding if some mild conditions are met <ns0:ref type='bibr'>(Packard et al., 1980;</ns0:ref><ns0:ref type='bibr' target='#b46'>Takens, 1981)</ns0:ref>. This procedure is carried out by adding the time shifted versions of the time series to itself as new coordinates:</ns0:p><ns0:formula xml:id='formula_0'>X(t) = [x(t), x(t &#8722; &#964;), x(t &#8722; 2&#964;), . . . , x(t &#8722; (E &#8722; 1)&#964;)x(t &#8722; (E &#8722; 1)&#964;)]</ns0:formula><ns0:p>(1)</ns0:p><ns0:p>where x(t) is the time series, X(t) is the reconstructed state. E and &#964; are two parameters, the embedding dimension and embedding delay respectively.</ns0:p><ns0:p>State space reconstruction by time delay embedding or some other technique based on wavelet transformation <ns0:ref type='bibr' target='#b38'>(Parlitz and Mayer-Kress, 1995;</ns0:ref><ns0:ref type='bibr' target='#b50'>You and Huang, 2011;</ns0:ref><ns0:ref type='bibr' target='#b31'>Hu et al., 2019)</ns0:ref> or recurrent neural networks <ns0:ref type='bibr' target='#b17'>(Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>de Brouwer et al., 2019)</ns0:ref> are usually a first step in any nonlinear time series analysis pipeline to characterize the system's dynamics <ns0:ref type='bibr' target='#b10'>(Bradley and Kantz, 2015)</ns0:ref>. 
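Eq. 1 translates directly into a few lines of numpy. The sketch below is a generic re-implementation of time-delay embedding (not code taken from any of the cited works): it returns a matrix whose rows are the reconstructed states X(t).

import numpy as np

def delay_embed(x, E, tau):
    """Time-delay embedding of a 1-D series x with embedding dimension E and delay tau (Eq. 1).

    Row i corresponds to time t = (E - 1) * tau + i and holds [x(t), x(t - tau), ..., x(t - (E - 1) * tau)].
    """
    x = np.asarray(x)
    n_states = len(x) - (E - 1) * tau
    if n_states <= 0:
        raise ValueError("time series too short for this E and tau")
    return np.column_stack([x[(E - 1 - j) * tau : (E - 1 - j) * tau + n_states] for j in range(E)])

# toy usage: embed a sine wave with E = 3 and tau = 2
states = delay_embed(np.sin(np.linspace(0, 20, 200)), E=3, tau=2)
print(states.shape)   # (196, 3)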
In the E dimensional embedding space, the intrinsic dimensionality of the augmented dataset can be a relevant real-time descriptor of the dynamics <ns0:ref type='bibr' target='#b43'>(Skinner et al., 1994)</ns0:ref>.</ns0:p><ns0:p>To estimate the ID of data various approaches have been proposed, for a full review of techniques see the work of <ns0:ref type='bibr' target='#b13'>Campadelli et al. (2015)</ns0:ref>. Here we discuss the k-Nearest Neighbor (kNN) ID estimators, with some recent advancements in the focus.</ns0:p><ns0:p>A usually basic assumption of kNN ID estimators is that the fraction of points f in a spherical neighborhood is approximately determined by the intrinsic dimensionality (D) and radius (R) times alocally almost constant -mostly density-dependent factor (&#951;(x, R), Eq. 2).</ns0:p><ns0:formula xml:id='formula_1'>f &#8776; &#951;(x, R) * R D (2)</ns0:formula><ns0:p>where f is the fraction of samples in a neighborhood.</ns0:p><ns0:p>Assuming a Poisson sampling process on the manifold, <ns0:ref type='bibr' target='#b36'>Levina and Bickel (2005)</ns0:ref> derived a Maximum Likelihood estimator, which became a popular method and got several updates <ns0:ref type='bibr' target='#b24'>(Ghahramani and Mckay, 2005;</ns0:ref><ns0:ref type='bibr' target='#b27'>Gupta and Huang, 2010)</ns0:ref>. These estimators are prone to underestimation of dimensionality because of finite sample effects and overestimations because of the curvature.</ns0:p><ns0:p>To address the challenges posed by curvature and finite sample, new estimators were proposed <ns0:ref type='bibr' target='#b41'>(Rozza et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b8'>Bassis et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b15'>Ceruti et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b21'>Facco et al., 2017)</ns0:ref>. To tackle the effect of curvature, a minimal neighborhood size can be taken on normalized neighborhood distances as in the case of MIND ML <ns0:ref type='bibr' target='#b41'>(Rozza et al., 2012)</ns0:ref>. To tackle the underestimation due to finite sample effects, empirical corrections were applied. A naive empirical correction approach was applied by <ns0:ref type='bibr' target='#b12'>Camastra and Vinciarelli (2002)</ns0:ref>: a perceptron was trained on the estimates computed for randomly sampled hypercubes to learn a correction function. Motivated by the correction in the previous work, the IDEA method was created <ns0:ref type='bibr' target='#b41'>(Rozza et al., 2012)</ns0:ref>; and a more principled approach was carried out, where the full distribution of estimates was compared to the distributions computed on test data sets using the Kullback-Leibler divergence (MIND KL <ns0:ref type='bibr' target='#b41'>(Rozza et al., 2012)</ns0:ref>, DANCo <ns0:ref type='bibr' target='#b15'>(Ceruti et al., 2014)</ns0:ref>). 
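As a brief aside before those refinements, the scaling assumption of Eq. 2 can be probed directly: if the fraction of points inside a ball grows as R^D, the neighbor count versus radius has slope D on a log-log plot. The sketch below estimates that slope for a uniformly sampled unit square; it only illustrates the assumption behind kNN estimators and is not one of the published methods discussed here.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
points = rng.uniform(size=(5000, 2))              # a 2-D manifold

tree = cKDTree(points)
radii = np.array([0.02, 0.03, 0.05, 0.08])        # small radii to stay roughly local
counts = np.array([np.mean([len(nb) - 1 for nb in tree.query_ball_point(points, r)])
                   for r in radii])               # -1 removes the point itself

# slope of log(count) against log(R) approximates the intrinsic dimension D
slope = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(f"estimated dimension from the scaling law: {slope:.2f}")   # close to 2, biased slightly by edges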
In the case of DANCo, not just the nearest neighbor distances, but the angles are measured and taken into account in the estimation process resulting in more accurate estimates.</ns0:p><ns0:p>In the recent years, further estimators have been proposed, such as the estimator that uses minimal neighborhood information leveraging the empirical distribution of the ratio of the nearest neighbors to fit Manuscript to be reviewed</ns0:p><ns0:p>Computer Science intrinsic dimension <ns0:ref type='bibr' target='#b21'>(Facco et al., 2017)</ns0:ref>, or other approaches based on simplex skewness <ns0:ref type='bibr' target='#b33'>(Johnsson et al., 2015)</ns0:ref> and normalized distances <ns0:ref type='bibr' target='#b16'>(Chelly et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b2'>Amsaleg et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b4'>Amsaleg et al., , 2018</ns0:ref><ns0:ref type='bibr' target='#b6'>Amsaleg et al., , 2019))</ns0:ref>. is computed leveraging the formula for the distances of the kth and 2kth neighbor. This computation is repeated for the whole sample and a global estimate is generated as the mean of the local estimates. C We show the local estimates (blue dots), the empirical mean (orange) and median (red) in the function of a neighborhood size for the 2D points above. The mean has an upcurving tail at small neigborhood sizes but the median seems to be robust global estimate even for the smallest neighborhood. The mean approximately lies on a hyperbola aD k&#8722;1 + D &#8776; &#948; k (x i ) i , where a &#8776; 0.685 is a constant (grey dashed line). D We measure the intrinsic dimension of the dynamics for a logistic map driven by two other independent logistic maps (n = 1000). We show the local FSA estimates (blue), the mean (orange) and the median (red) in the function of neighborhood size after time delay embedding (E = 4, &#964; = 1). The dynamics is approximately 3 dimensional and the median robustly reflects this, however the mean overestimates the intrinsic dimension at small neigborhood sizes.</ns0:p><ns0:p>In the followings we revisit the manifold adaptive Farahmand-Szepesv&#225;ri-Audibert (FSA) dimension Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>estimator proposed by <ns0:ref type='bibr' target='#b22'>Farahmand et al. (2007)</ns0:ref> to measure intrinsic dimensionality of datasets (Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). This estimator is extremely simple, it uses two neighborhoods around a data point to estimate the local intrinsic dimensionality.</ns0:p><ns0:p>We derive the FSA estimator from Eq. 2. Let M be a D dimensional manifold and let's have a sample {x i } where i &#8712; {1, 2, . . . , n} with n size sampled from M . We take two neigborhoods around a sample point, thereby we fix f = k/n and if R i k is the distance at which the k-th neighbor is found around x i , then we can take the logarithm of both sides:</ns0:p><ns0:formula xml:id='formula_2'>ln k n &#8776; ln &#951; + D ln R i k ln 2k n &#8776; ln &#951; &#8242; + D ln R i 2k (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>If &#951; is slowly varying and &#8710;R is small, we can take &#951; = &#951; &#8242; as a constant. Thus, by subtracting the two equations from each other we get rid of the local density dependence:</ns0:p><ns0:formula xml:id='formula_4'>ln (2) &#8776; D ln R i 2k R i k (4)</ns0:formula><ns0:p>We rearrange Eq. 
4 to compute the local estimates, which is practically fitting a line through the log-distances of the kth and 2kth nearest neighbors at a given sample's location (Fig. <ns0:ref type='figure' target='#fig_1'>1 A B</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_5'>&#948; k (x i ) = ln(2) / ln( R i 2k / R i k ) (5) where &#948; k (x i ) is the local FSA dimension estimate.</ns0:formula><ns0:p>To compute a global ID estimate, the authors proposed the mean of the local estimates at the sample points, or a vote for the winning global ID value (the mode) if the estimator is used in integer mode. They proved that the above global ID estimates are consistent for k &gt; 1, if &#951; is differentiable and the manifold is regular. They calculated an upper bound for the probability of error of the global estimate; however, this bound contains unknown constants <ns0:ref type='bibr' target='#b22'>(Farahmand et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In practice one computes the local estimates for various neighborhood sizes and computes the global estimate, typically by averaging. We show this procedure by two examples: on uniformly sampled points from the 2D plane and on a coupled logistic map system (Fig. <ns0:ref type='figure' target='#fig_1'>1 C, D</ns0:ref>). For the uniform random sample the basic assumptions of the FSA method hold, and the average of the local values estimates the global dimension D = 2 well at bigger neighborhood sizes (k &gt; 8). However, for small neighborhood sizes the estimate curls upwards and goes to infinity at k = 1 (Fig. <ns0:ref type='figure' target='#fig_1'>1 C</ns0:ref>). One can instead use a robust statistic, the median, as the global estimate and obtain better results.</ns0:p><ns0:p>As a second example, let us consider the intrinsic dimension estimation procedure for a coupled logistic map system to grasp the complexity of the system's dynamics. We couple three chaotic logistic maps, such that two independent variables drive a third one through nonlinear coupling:</ns0:p><ns0:formula xml:id='formula_6'>x(t + 1) = r x x(1 &#8722; x) y(t + 1) = r y y(1 &#8722; y) z(t + 1) = r z z(1 &#8722; z &#8722; &#946; zx x &#8722; &#946; zy y) (6)</ns0:formula><ns0:p>where x, y and z &#8712; [0, 1] are the state variables, and r i = 3.99 and &#946; i = 0.3 are parameters. We generate n = 10 3 sample points with periodic boundary on the [0, 1] interval and investigate the dynamics of the variable z. We apply time delay embedding with embedding dimension E = 4 and embedding delay &#964; = 1, and compute the local FSA estimates around each sample in the embedding space with periodic boundary conditions (Fig. <ns0:ref type='figure' target='#fig_1'>1 D</ns0:ref>). At small neighborhoods the mean of the local estimates is higher than the actual intrinsic dimensionality (D &#8776; 3) of the data; the median, however, stays approximately constant with respect to k.</ns0:p><ns0:p>We showed in the previous two examples that the median of the local FSA estimates was a more robust estimator of the intrinsic dimension than the mean, but the generality of this finding is yet to be explored by more rigorous means. Additionally, in these cases the data were abundant, and the edge effect was not significant.</ns0:p><ns0:p>In this paper we propose an improved FSA estimator, based on the assumption that the density is locally uniform.
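The complete second example, Eq. 6 for the data, time delay embedding for the state space and Eq. 5 for the local estimates, fits into a short script. The sketch below is an independent re-implementation (the authors' own code is in the repository referenced in the Methods section); it uses scipy's k-d tree for the neighbor distances and ordinary, non-periodic boundaries for simplicity, so the numbers will not match Fig. 1 D exactly, but the inflation of the mean relative to the median at small k can be inspected directly.

import numpy as np
from scipy.spatial import cKDTree

def coupled_logistic(n, r=3.99, beta=0.3, seed=0):
    """Two independent logistic maps x, y driving a third map z (Eq. 6), z wrapped back into [0, 1]."""
    rng = np.random.default_rng(seed)
    x, y, z = rng.uniform(size=3)
    zs = np.empty(n)
    for t in range(n):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        z = (r * z * (1.0 - z - beta * x - beta * y)) % 1.0
        zs[t] = z
    return zs

def delay_embed(x, E, tau):
    m = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - j) * tau:(E - 1 - j) * tau + m] for j in range(E)])

def fsa_local(points, k):
    """Local FSA estimates ln(2) / ln(R_2k / R_k) around every sample point (Eq. 5)."""
    dists, _ = cKDTree(points).query(points, k=2 * k + 1)   # column 0 is the point itself
    return np.log(2.0) / np.log(dists[:, 2 * k] / dists[:, k])

X = delay_embed(coupled_logistic(1000), E=4, tau=1)
for k in (1, 5, 20):
    d = fsa_local(X, k)
    print(f"k = {k:2d}   mean = {d.mean():.2f}   median = {np.median(d):.2f}")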
The main contributions of this paper are as follows:</ns0:p><ns0:p>(1) We calculate the probability density function of local FSA estimates, and derive formula for the sampling distribution of the median.</ns0:p><ns0:p>(2) We prove that the median is an asymptotically unbiased estimate of the intrinsic dimension, and introduce this variant as the median FSA (mFSA) algorithm. To confirm the validity of the theory, we make comparison with empirical measurements carried out on uniformly sampled random hypercube datasets with varied sample size and intrinsic dimension value. We find that finite sample size and edge effects cause systematic underestimation at high intrinsic dimensions.</ns0:p><ns0:p>(3) We present the new corrected median FSA (cmFSA) method to alleviate the underestimation due to finite sample and edge effects. We achieve this by applying a heuristic exponential correctionformula applied on the mFSA estimate and we test the new algorithm on benchmark datasets.</ns0:p><ns0:p>(4) Finally, we apply the mFSA estimator to locate putative epileptic focus on Local Field Potential measurements of a human subject.</ns0:p><ns0:p>The paper is organised as follows. In the Methods section, we present the steps of FSA, mFSA and cmFSA algorithms, then we describe the simulation of the hypercube datasets and we show the specific calibration procedure used in the cmFSA method. After these, we turn to benchmark datasets. We refer to data generation scripts and display the evaluation procedure. This section ends with a description of Local Field Potential measurements and the analysis workflow. In the Results section, we lay out the theoretical results about the FSA estimator first, then we validate them against simple simulations as second. Third, we compare our algorithms on benchmark datasets against standard methods. Fourth, we apply the mFSA algorithm on Local Field Potential measurements. These parts are followed by the Discussion and Conclusion sections.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>The FSA and mFSA algorithm</ns0:head><ns0:p>There is a dataset with a sample size n, and sample points x i &#8712; R m . Then, 1. Compute distances: Calculate the distance of the kth and 2kth nearest neighbors (R k , R 2k ) for each data point (x i ). Here the neighborhood size is some positive integer k &#8712; Z + .</ns0:p><ns0:p>2. Compute local estimates: Get local estimates &#948; k (x i ) from the distances for each data point according to Eq. 5.</ns0:p><ns0:p>3. Calculate global estimate: Aggregate the local estimates into one global value. This last step is the only difference between the FSA and the mFSA method:</ns0:p><ns0:p>(a) FSA estimator:</ns0:p><ns0:formula xml:id='formula_7'>d (k) FSA = &#8721; &#948; k (x i ) n (7) (b) mFSA estimator: d (k) mFSA = M[{&#948; k (x 1 ), &#948; k (x 2 ), . . . , &#948; k (x n )}]<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where M stands for the sample median.</ns0:p></ns0:div> <ns0:div><ns0:head>5/23</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The cmFSA algorithm</ns0:p><ns0:p>There is a dataset with a sample size n, and sample points x i &#8712; R m . Then, 1. Compute mFSA estimate Apply the mFSA algorithm to get biased global estimate d (k) mFSA .</ns0:p><ns0:p>2. 
Model Calibration Fit a correction-model with the the given sample size n on uniform random hypercube calibration datasets consisting of various intrinsic dimension values, many instances each (at least N = 15 realizations). We used the following model:</ns0:p><ns0:formula xml:id='formula_8'>D &#8776; d exp L &#8721; l=1 &#945; l d l (9)</ns0:formula><ns0:p>where D is the true dimension of the underlying manifold, &#945; l -s are sample size and k dependent coefficients, L is the order of the polynomial and d = d</ns0:p><ns0:formula xml:id='formula_9'>(k)</ns0:formula><ns0:p>mFSA is a shorthand for the biased local estimate. This model is derived from heuristic reasoning, and simplifies to a linear model in the parameters, if the logarithm of the two sides is taken.</ns0:p><ns0:p>First we calculate biased estimates on each test data. Second, we carry out the model fit by linear regression on the log-log values with the ordinary least squares method or with orthogonal distance regression.</ns0:p><ns0:p>3. Calculate cmFSA estimate Plug in the biased estimate into fitted the correction model to compute</ns0:p><ns0:formula xml:id='formula_10'>d (k) cmFSA .</ns0:formula><ns0:p>A python implementation of the algorithms can be found at https://github.com/phrenico/cmfsapy along with the supporting codes for this article.</ns0:p></ns0:div> <ns0:div><ns0:head>Simulations on D-hypercube datansets</ns0:head><ns0:p>The simulations were implemented in python3 (Van Rossum and Drake, 2009) using the numpy <ns0:ref type='bibr'>(Oliphant, 2006)</ns0:ref>, scipy <ns0:ref type='bibr' target='#b48'>(Virtanen et al., 2020)</ns0:ref> and matplotlib <ns0:ref type='bibr' target='#b32'>(Hunter, 2007)</ns0:ref> packages, unless otherwise stated.</ns0:p><ns0:p>We generated test-datasets by uniform random sampling from the unit D-cube to demonstrate, that theoretical derivations fit to data. We measured distances with a circular boundary condition to avoid edge effects, hence the data is as close to the theoretical assumptions as possible.</ns0:p><ns0:p>To illustrate the probability density function of the FSA estimator, we computed the local FSA intrinsic dimension values (Fig. <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>). We generated d-hypercubes (n = 10000, one realization) with dimensions of 2, 3, 5, 8, 10 and 12, then computed histograms of local FSA estimates for three neighborhood sizes: k = 1, 11, 50 respectively (Fig. <ns0:ref type='figure' target='#fig_7'>2</ns0:ref> A-F). We selected these specific neighborhoods because of didactic purposes:</ns0:p><ns0:p>the k = 1 neighborhood is the smallest one, the k = 50 is a bigger neighborhood, which is still much smaller than the sample size, so the estimates are not affected by the finite sample effect. The k = 11 neighborhood represents a transition between the two 'extremes', the specific value is an arbitrary choice giving pleasing visuals suggesting the gradual change in the shape of the curve as a function of the k parameter. We drew the theoretically computed pdf to illustrate the fit.</ns0:p><ns0:p>To show that the theoretically computed sampling distribution of the mFSA fits to the hypercube datasets, we varied the sample size (n = 11, 101, 1001) with N = 5000 realizations from each. We computed the global mFSA for each realization and plotted the results for d = 2 (Fig. <ns0:ref type='figure' target='#fig_9'>3 A</ns0:ref>) and d = 5</ns0:p><ns0:p>(Fig. 
<ns0:ref type='figure' target='#fig_9'>3 B</ns0:ref>).</ns0:p><ns0:p>We investigated the dimensionality and sample-size effects on mFSA estimates (Fig. <ns0:ref type='figure' target='#fig_10'>4 A-F</ns0:ref>). We simulated the hypercube data in the 2-30 dimension-range, and applied various sample sizes: n = 10, 100, 1000, 2500, 10000; one hundred realizations each (N = 100). We computed the mFSA values with minimal neighborhood size (k = 1), and observed finite-sample-effects, and asymptotic convergence.</ns0:p><ns0:p>We repeated the analysis with hard boundary conditions.</ns0:p><ns0:p>We fitted a correction formula on the logarithm of dimension values and estimates with the least squares method (Eq. 10), using all 100 realizations for each sample sizes separately. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_11'>&#945; = &#8721;(ln E i )d (i) &#8721; d (i) 2</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where E i = D i /d (i) is the relative error, D i is the intrinsic dimension of the data, and d (i) are the corresponding mFSA estimates. We carried out the model fit on the 2 &#8722; 30 intrinsic dimension range.</ns0:p><ns0:p>We also calibrated the cmFSA algorithm in a wider range of intrinsic dimension values (2 &#8722; 80) and applied required more coefficients in the polynomial fit procedure (SFig. 1 A). Also, we used orthogonal distance regression to fit the mean over realizations of ln E i with the same D i value (SFig. 1 B). We utilized the mean and standard deviation of the regression error to compute the ideal error rate of cmFSA estimator, if the error-distributions are normal (SFig. 1 C-F).</ns0:p></ns0:div> <ns0:div><ns0:head>Simulations on customly sampled manifolds</ns0:head><ns0:p>We carried out simulations on datasets sampled from manifolds according to uniform, multivariate</ns0:p><ns0:p>Gaussian, Cauchy distribution and on uniformly sampled D-spheres in the function of sample size as in <ns0:ref type='bibr' target='#b21'>Facco et al. (2017)</ns0:ref> Figure <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>.</ns0:p><ns0:p>The uniform sampling was carried out on D-hypercube data with periodic boundary conditions. The Gaussian datasets were sampled from a zero mean and unit variance and no covariance multivariate normal distribution. The Cauchy datasets were generated so that the probability density of the norms were a Cauchy distribution . We achieved this by the following procedure:</ns0:p><ns0:p>1. Generate n points according to D dimensional Gaussian distribution (&#950; i ) and normalize the euclidean distance of the points from the origin.</ns0:p><ns0:formula xml:id='formula_12'>z i = &#950; i |&#950; i | where &#950; i &#8764; N (0, I)</ns0:formula><ns0:p>and I is the D-dimensional identity matrix. 
Thus, the points z i are uniformly distributed on the hyper-surface of a D &#8722; 1 dimensional hyper-sphere of unit radius.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Generate n positive real numbers</ns0:head><ns0:formula xml:id='formula_13'>u i from a Cauchy distribution f (u) = 1 &#960;(1+u 2 )</ns0:formula><ns0:p>and multiply z i by this to get a dataset:</ns0:p><ns0:formula xml:id='formula_14'>x i = u i &#215; z i</ns0:formula><ns0:p>Thus the norms of the resulting points are distributed according to a Cauchy distribution.</ns0:p><ns0:p>Finally, we produced the D-sphere data with the first step of the previous procedure.</ns0:p><ns0:p>We generated N = 200 instances of each dataset with the intrinsic dimension values D = 2, 5, 10, we estimated the global mFSA and cmFSA dimensions and plotted the mean and standard deviation in the function of sample size.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison on synthetic benchmark datasets</ns0:head><ns0:p>We simulated N = 100 instances of 15 manifolds (Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, M i , n = 2500) with various intrinsic dimensions.</ns0:p><ns0:p>We generated the datasets according to the first 15 manifolds proposed by <ns0:ref type='bibr' target='#b13'>Campadelli et al. (2015)</ns0:ref>. More specifically Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> contains the description manifold types, the first 15 manifolds of Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> are used in this work as synthetic benchmark, and the Table <ns0:ref type='table'>4</ns0:ref> shows the benchmark results in <ns0:ref type='bibr' target='#b13'>Campadelli et al. (2015)</ns0:ref>, http://www.mL.uni-saarland.de/code/IntDim/IntDim.htm).</ns0:p><ns0:p>We applied the wide (D = 2 &#8722; 80) calibration procedure (l 1 = &#8722;1, l 2 = 1, l 3 = 2, l 4 = 3) as in the previous subsection (n = 2500, k = 5) to compute cmFSA for the datasets. We used cmFSA in two modes, in integer and in fractal mode. In the former the global estimates are rounded to the nearest integer value, while in the latter case the estimates can take on real values.</ns0:p><ns0:p>We measured the performance of the mFSA and corrected-mFSA estimators on the benchmark datasets, and compared them with the performance of ML <ns0:ref type='bibr' target='#b36'>(Levina and Bickel, 2005)</ns0:ref> DANCo <ns0:ref type='bibr' target='#b15'>(Ceruti et al., 2014)</ns0:ref> and the 2NN <ns0:ref type='bibr' target='#b21'>(Facco et al., 2017)</ns0:ref> estimators. We used the Matlab (MATLAB, 2020; Lombardi, 2020)(see on github) and an R package <ns0:ref type='bibr' target='#b33'>(Johnsson et al., 2015)</ns0:ref> implementation of DANCo. In the case of DANCo, we also investigated the results for integer and for fractal mode just as for the cmFSA algorithm.</ns0:p><ns0:p>To quantify the performance we adopted the Mean Percentage Error (MPE, Eq.11) metric <ns0:ref type='bibr' target='#b13'>(Campadelli et al., 2015)</ns0:ref>: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>MPE = 100 MN M &#8721; j=1 N &#8721; i=1 |D j &#8722; d i j | D j (<ns0:label>11</ns0:label></ns0:formula><ns0:p>Computer Science Where there is N realizations of M types of manifolds, D j are the true dimension values, d i j are the dimension estimates.</ns0:p><ns0:p>Also, we used the error rate -the fraction of cases, when the estimator did not find (missed) the true dimensionality -as an alternative metric. 
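(Both metrics are straightforward to reproduce; the following minimal numpy sketch, with made-up toy numbers rather than the benchmark results, computes the MPE of Eq. 11 and the error rate that is formalised below in Eq. 12.)

import numpy as np

def mean_percentage_error(true_d, estimates):
    """MPE (Eq. 11): estimates has shape (M, N), true_d has shape (M,)."""
    true_d = np.asarray(true_d, dtype=float)[:, None]
    return 100.0 * np.mean(np.abs(true_d - estimates) / true_d)

def error_rate(true_d, estimates):
    """Fraction of realizations where the rounded (integer-mode) estimate misses the true dimension.
    With equal N per manifold this equals the mean of the manifold-specific rates H_j."""
    true_d = np.asarray(true_d)[:, None]
    return np.mean(np.round(estimates) != true_d)

# toy usage (illustrative numbers only, not the benchmark results)
true_d = [10, 3, 4]
estimates = np.array([[9.4, 9.8], [3.0, 3.1], [4.0, 4.2]])
print(mean_percentage_error(true_d, estimates))
print(error_rate(true_d, estimates))
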
We used this metric to compare the performace of DANCo and cmFSA in integer mode, we simply counted the cases, when the estimator missed the true dimension value:</ns0:p><ns0:formula xml:id='formula_16'>H j = 1 N N &#8721; i=1 I(D j = d i j ) (12)</ns0:formula><ns0:p>where H j is the error rate for a manifold computed from N realizations and I = 1 if D j = d i j is the indicator function for the error. We computed the mean error rate H by averaging the manifold specific values.</ns0:p></ns0:div> <ns0:div><ns0:head>Dimension estimation of interictal and epileptic dynamics</ns0:head><ns0:p>We used data of intracranial field potentials from two subdural grids positioned -parietofrontally (6*8 channels, Gr A-F and 1-8) and frontobasally (2*8 channels, Fb A-B and 1-8) -on the brain surface and from three strips located on the right temporal cortex (8 channels, JT 1-8), close to the hippocampal formation and two interhemispheric strips, located within the fissura longitudinalis, close to the left and right gyrus cinguli (8 channels BIH 1-8 and 8 channels JIH 1-8) as part of presurgical protocol for a subject with drug resistant epilepsy (Fig. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>After CSD computation, we bandpass-filtered the CSD signals <ns0:ref type='bibr' target='#b25'>(Gramfort et al., 2013)</ns0:ref> (1-30 Hz, fourth order Butterworth filter) to improve signal to noise ratio.</ns0:p><ns0:p>We embedded CSD signals and subsampled the embedded time series. We used an iterative manual procedure to optimize embedding parameters (SFig. 2). Since the fastest oscillation is (30 Hz) in the signals, a fixed value with one fourth period (2048/120 &#8776; 17 samples) were used as embedding delay. We inspected the average space-time separation plots of CSD signals to determine a proper subsampling, with the embedding dimension of D=2 (SFig. 2 A). We found, that the first local maximum of the space-time separation was at around 5 ms: 9 &#8722; 10, 10 &#8722; 11, 11 &#8722; 12 samples for the 1%, 25%, 50% percentile contour-curves respectively. Therefore, we divided the embedded time series into 10 subsets to ensure the required subsampling. Then, we embedded the CSD signal up to D = 12 and measured the intrinsic dimensionality for each embeddings (SFig. 2 B and C). We found that intrinsic dimension estimates started to show saturation at D &gt;= 3, therefore we chose D = 7 as a sufficiently high embedding dimension (averaged over k = 10 &#8722; 20 neighborhood sizes).</ns0:p><ns0:p>We measured the intrinsic dimensionality of the embedded CSD signals using the mFSA method during interictal and epileptic episodes (Fig. <ns0:ref type='figure' target='#fig_16'>9</ns0:ref>). We selected the neighborhood size between k = 10 and k = 20 and averaged the resulting estimates over the neighborhoods and subsampling realizations.</ns0:p><ns0:p>We investigated the dimension values (Fig. <ns0:ref type='figure' target='#fig_16'>9</ns0:ref> C and D) and differences (Fig. <ns0:ref type='figure' target='#fig_16'>9 E</ns0:ref>) between interictal and epileptic periods.</ns0:p><ns0:p>We also compared the mFSA estimates with the original -mean based -FSA estimates in the function of neighborhood size on a recording in the k = 1 &#8722; 12 neighborhood range and plotted the estimates against each other to visualize differences (Fig. 
<ns0:ref type='figure' target='#fig_16'>9 B</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Manifold adaptive dimension estimator revisited</ns0:head><ns0:p>The probability density of Farahmand-Szepesv &#225;ri-Audibert estimator</ns0:p><ns0:p>We compute the probability density function of Farahmand-Szepesv&#225;ri-Audibert (FSA) intrinsic dimension estimator based on normalized distances.</ns0:p><ns0:p>The normalized distance density of the kNN can be computed in the context of a K-neighborhood,</ns0:p><ns0:p>where the normalized distance of K &#8722; 1 points follows a specific form:</ns0:p><ns0:formula xml:id='formula_17'>p(r|k, K &#8722; 1, D) = D B(k, K &#8722; k) r Dk&#8722;1 (1 &#8722; r D ) K&#8722;k&#8722;1 (13)</ns0:formula><ns0:p>where r &#8712; [0, 1] is the normalized distance of the kth neighbor and B is the Euler-beta function. In practice, the normalization is carried out by dividing with the distance of Kth neighbor (</ns0:p><ns0:formula xml:id='formula_18'>r k = R k /R K , k &lt; K).</ns0:formula><ns0:p>Here p(r|k, K &#8722; 1, D)&#8710;r describes the probability that the k-th neighbor can be found on a thin shell at the normalized distance r among the K &#8722; 1 neighbors if the intrinsic dimension is D (see SI A. 1 for a derivation). A maximum likelihood estimator based on Eq. 13 leads to the formula of the classical Levina-Bickel estimator <ns0:ref type='bibr' target='#b36'>(Levina and Bickel, 2005)</ns0:ref>. For a derivation of this probability density and the maximum likelihood solution see SI A. 1 and SI A. 2 respectively.</ns0:p><ns0:p>We realize that the inverse of normalized distance appears in the formula of FSA estimator, so we can express it as a function of r:</ns0:p><ns0:formula xml:id='formula_19'>&#948; k = log 2 log (R 2k /R k ) = &#8722; log 2 log (R k /R 2k ) = &#8722; log 2 log r k (<ns0:label>14</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:p>Where</ns0:p><ns0:formula xml:id='formula_21'>r k = R k /R 2k .</ns0:formula><ns0:p>Combining Eq. 13 and Eq. 14, one can obtain the pdf of the FSA estimator:</ns0:p><ns0:formula xml:id='formula_22'>q (&#948; k ) &#8801; p (r|k, 2k &#8722; 1, D) dr d&#948; k = D log (2) B(k, k) 2 &#8722; Dk &#948; k 1 &#8722; 2 &#8722; D &#948; k k&#8722;1 &#948; 2 k (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>)</ns0:formula><ns0:p>Theorem 1. The median of q(&#948; k ) is at D. . The pdf-s are less skewed and the variance gets smaller as the neighborhood size gets bigger. Also, the higher the dimension of the manifold, the higher the variance of the local estimates.</ns0:p><ns0:p>Proof. We apply the monotonic substitution a = 2 &#8722;D/&#948; k on Eq. 15:</ns0:p><ns0:formula xml:id='formula_24'>293 p(a) = q(&#948; k ) d&#948; k da = (16) = D log (2) B(k, k) a k (1 &#8722; a) k&#8722;1 log 2 a D 2 log 2 2 D log 2 a log 2 a (17) = 1 B(k, k) a k&#8722;1 (1 &#8722; a) k&#8722;1 (18)</ns0:formula><ns0:p>The pdf in Eq.18 belongs to a beta distribution. 
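(As a quick numerical sanity check of this step, not part of the proof, one can verify on a uniformly sampled hypercube with periodic boundaries that a = 2^(-D/&#948;_k) is approximately Beta(k, k)-distributed and that the median of the local estimates is close to D; the short script below is our own illustration.)

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import beta, kstest

D, n, k = 5, 10000, 11
rng = np.random.default_rng(1)
x = rng.random((n, D))
tree = cKDTree(x, boxsize=1.0)               # periodic boundary, as in the simulations
dist, _ = tree.query(x, k=2 * k + 1)         # column 0 is the query point itself
delta = np.log(2.0) / np.log(dist[:, 2 * k] / dist[:, k])
a = 2.0 ** (-D / delta)                      # should be close to Beta(k, k) if Eq. 18 holds
print(np.median(delta))                      # close to D = 5
print(kstest(a, beta(k, k).cdf))             # rough agreement; neighbourhoods are not fully independent
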
The cumulative distribution function of this density is the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The median of this distribution is at a = 1 2 , thus at &#948; k = D since:</ns0:p><ns0:formula xml:id='formula_25'>a = 2 &#8722; D &#948; k = 1 2 (20) D = &#948; k (21)</ns0:formula><ns0:p>and a is a monotonic function of &#948; , therefore the median in &#948; k can be computed by the inverse mapping.</ns0:p><ns0:p>This means that the median of the local FSA estimator is equal to the intrinsic dimension independent of neighborhood size, even for the minimal neighborhood, if the locally uniform point density assumption holds. The sample median is a robust statistic, therefore we propose to use the sample median of local estimates as a global dimension estimate. We will call this modified method the 'median Farahmand-Szepesv&#225;ri-Audibert' (mFSA) estimator.</ns0:p><ns0:p>Let's see the form for the smallest possible neighborhood size: k = 1 (Fig. <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>). The pdf for the estimator takes a simpler from (Eq. 22).</ns0:p><ns0:formula xml:id='formula_26'>q(&#948; |k = 1, D) = D log(2) 2 &#8722; D &#948; 1 &#948; 2 1 (22)</ns0:formula><ns0:p>Also, we can calculate the cumulative distribution function analytically (Eq. 23).</ns0:p><ns0:formula xml:id='formula_27'>Q(&#948; |k = 1, D) = &#948; 1 0 q(t|k = 1, D) dt = 2 &#8722;D/&#948; 1 (23)</ns0:formula><ns0:p>The expectation of &#948; k diverges for k = 1-but not for k &gt; 1 -although the median exists.</ns0:p><ns0:formula xml:id='formula_28'>Q(&#948; 1 = D) = D 0 q(t|k = 1, D) dt = 0.5<ns0:label>(24)</ns0:label></ns0:formula><ns0:p>From Eq. 23 the median is at D (Eq. 24).</ns0:p></ns0:div> <ns0:div><ns0:head>Sampling distribution of the median</ns0:head><ns0:p>We can compute the pdf of the sample median if an odd sample size is given (n = 2l + 1) and if sample points are drawn independently according to Eq. 15 (see SI. Section C for a derivation). Roughly half of the points have to be smaller, half of the points have to be bigger and one point has to be exactly at d (Eq.</ns0:p></ns0:div> <ns0:div><ns0:head>25)</ns0:head><ns0:p>.</ns0:p><ns0:formula xml:id='formula_29'>p(d|k, D, n) = 1 B(l + 1, l + 1) P a = 2 &#8722;D/d 1 &#8722; P a = 2 &#8722;D/d l q(d) (25)</ns0:formula><ns0:p>Where p(a) and P(a) are the pdf and cdf of a (Eq. 18, 19) and q is the pdf of the FSA estimator (Fig.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>A B).</ns0:head><ns0:p>We determine the standard error by the numerical integration of Eq 25 and found that the error shrinks approximately with the square-root of n and k (Fig. <ns0:ref type='figure' target='#fig_9'>3 C D</ns0:ref>). Also, the value of the standard error is proportional to the dimension of the manifold. From these observations, we express the error as:</ns0:p><ns0:formula xml:id='formula_30'>&#963; d &#8776; &#954; * D &#8730; nk (<ns0:label>26</ns0:label></ns0:formula><ns0:formula xml:id='formula_31'>)</ns0:formula><ns0:p>where &#954; is a constant. These empirical results can be backed up by theory: the same expression arises for the standard error by using the Laplace and Stirling approximations, also by these methods, the exact value of &#954; = The solid lines represent the theoretical pdf-s of the median and the shaded histograms are the results of simulations (N = 5000 realizations of hypercube datasets with periodic boundary conditions). The derived formula fits well to the histograms. 
The variance shrinks with bigger sample size, and the pdf becomes less skewed, more Gaussian-like. C The standard error of median in the function of sample size computed by numerical integration and Laplace-Stirling approximation (grey dashed). The standard error linearly decreases on a log-log plot in the function of sample size. The slope is approximately &#8722;0.5, independent of the dimension of the manifold and the error's value is proportional to D. Thus, the relative error (err/D) is independent of intrinsic dimension and it is shown by the overlapping markers on the black dashed straight line. D The standard error in the function of neighborhood size computed by numerical integration and Laplace-Stirling approximation. The slope of the lines are also approximately &#8722;0.5, the apprximation (grey dashed line) becomes accurate for k &gt; 10 neighborhood size.</ns0:p></ns0:div> <ns0:div><ns0:head>Maximum Likelihood solution for the manifold-adaptive estimator</ns0:head><ns0:p>If the samples are independent and identically distributed, we can formulate the likelihood function as the product of sample-likelihoods (Eq. 27). We seek for the maximum of the log likelihood function, but the derivative is transcendent for k &gt; 1. Therefore, we can compute the place of the maximum numerically (Eq. 29).</ns0:p></ns0:div> <ns0:div><ns0:head>12/23</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_32'>Computer Science L = n &#8719; i=1 D log (2) B(k, k) 2 &#8722;Dk/&#948; (i) (1 &#8722; 2 &#8722;D/&#948; k (i) ) k&#8722;1 &#948; k (i) 2 (27) log L = n log log (2) B(k, k) + n log D &#8722; Dk log(2) &#8721; 1 &#948; k (i) + (k &#8722; 1) &#8721; log 1 &#8722; 2 &#8722;D/&#948; k (i) (28) &#8722;2 &#8721; log(&#948; k (i) ) &#8706; log L &#8706; D = n D &#8722; log(2)k &#8721; 1 &#948; k (i) + log(2)(k &#8722; 1) &#8721; 1 &#948; k (i) (2 D/&#948; k (i) &#8722; 1) ! = 0 (<ns0:label>29</ns0:label></ns0:formula><ns0:formula xml:id='formula_33'>)</ns0:formula><ns0:p>For k = 1, the ML formula is equal to the Levina-Bickel (k = 1) and MIND 1ML formulas.</ns0:p></ns0:div> <ns0:div><ns0:head>Results on randomly sampled hypercube datasets</ns0:head><ns0:p>Theoretical probability density function of the local FSA estimator fits to empirical observations (Eq. 15, Fig. <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>). We simulated hypercube datasets with fixed sample size (n = 10000) and of various intrinsic dimensions <ns0:ref type='bibr'>(D = 2, 3, 5, 8, 10, 12)</ns0:ref>. We measured the local FSA estimator at each sample point with 3 different k parameter values (k = 1, 11, 50). We visually confirmed that the theoretical pdf fits perfectly to the empirical histograms.</ns0:p><ns0:p>The empirical sampling distribution of mFSA fits to the theoretical curves for small intrinsic dimension values (Fig. <ns0:ref type='figure' target='#fig_9'>3</ns0:ref>). To demonstrate the fit, we drew the density of mFSA on two hypercube datasets D = 2 and D = 5 with the smallest possible neighborhood (k = 1), for different sample sizes <ns0:ref type='bibr'>(n = 11, 101, 1001)</ns0:ref>.</ns0:p><ns0:p>At big sample sizes the pdf is approximately a Gaussian <ns0:ref type='bibr' target='#b35'>(Laplace, 1986)</ns0:ref>, but for small samples the pdf is non-Gaussian and skewed.</ns0:p><ns0:p>The mFSA estimator underestimates intrinsic dimensionality in high dimensions. 
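(This underestimation can be reproduced with a short script in the same setting as Fig. 4, uniformly sampled hypercubes with periodic boundaries and k = 1; the helper below is our own sketch, not the cmfsapy reference implementation.)

import numpy as np
from scipy.spatial import cKDTree

def mfsa(x, k=1, boxsize=None):
    """Global median-FSA estimate: local estimates of Eq. 5 aggregated by the median (Eq. 8)."""
    dist, _ = cKDTree(x, boxsize=boxsize).query(x, k=2 * k + 1)
    return np.median(np.log(2.0) / np.log(dist[:, 2 * k] / dist[:, k]))

rng = np.random.default_rng(2)
n = 2500
for D in (2, 5, 10, 20, 30):
    x = rng.random((n, D))
    # with periodic boundaries (boxsize=1.0) only the finite-sample bias remains,
    # and the under-estimation grows with D at fixed n
    print(D, round(mfsa(x, k=1, boxsize=1.0), 2))
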
This phenomena is partially a finite sample effect (Fig. <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>), but edge effects make this underestimation even more severe. This phenomenon was pronounced at low sample sizes and high dimensions, but we experienced convergence to the real dimension value as we increased sample size.</ns0:p><ns0:p>We graphically showed that mFSA estimator asymptotically converged to the real dimension values for hypercube-datasets, when we applied periodic boundary conditions (Fig. <ns0:ref type='figure' target='#fig_11'>5</ns0:ref>). We found, that the convergence is much slower for hard boundary conditions, where edge effects make systematic estimation errors higher.</ns0:p><ns0:p>From the shape of the curves in Fig. <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>, we heuristically derived a correction formula for finite sample size and edge effects (Eq. 9). The heuristics is as follows. We tried to find a formula, which intuitively describes the true intrinsic dimension in the function (C) of the estimated values. One can see on Fig. <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>, that at small values the error converges to zero and also the curve lies approximately on the diagonal, so it's derivative goes to one.</ns0:p><ns0:formula xml:id='formula_34'>lim d&#8594;0 C(d) = D lim d&#8594;0 C &#8242; (d) = 1 (<ns0:label>30</ns0:label></ns0:formula><ns0:formula xml:id='formula_35'>)</ns0:formula><ns0:p>where C is the correction function and d is the biased estimate. Eq. 9 satisfies these conditions and gives good fit to empirical data (Fig. <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>).</ns0:p><ns0:p>From an other point of view, Eq. 9 means that one could estimate the logarithm of relative error with an L-order polynomial:</ns0:p><ns0:formula xml:id='formula_36'>log(E rel ) = log D d = L &#8721; l=1 &#945; l d l (31)</ns0:formula><ns0:p>The order of the polynomial was different for the two types of boundary conditions. When we applied hard boundary, the order was L = 1, however in the periodic case higher order polynomials fit the data. Thus, in the case of hard-boundary, we could make the empirical correction formula:</ns0:p><ns0:formula xml:id='formula_37'>D &#8776; C(d) = de &#945; n d (32)</ns0:formula><ns0:p>where &#945; n is a sample size dependent coefficient that we could fit with the least squares method. This simple model described well the data in the 2 &#8722; 30 intrinsic dimension range (Fig. <ns0:ref type='figure' target='#fig_12'>6 A-F</ns0:ref>). As the intrinsic dimension of the manifold grows, the estimates start to deviate from the ideal diagonal line due to finite sample effect. This systematic under-estimation of intrinsic dimension is more severe in the case of low sample size and high intrinsic dimension.</ns0:p></ns0:div> <ns0:div><ns0:head>Results on customly sampled manifolds</ns0:head><ns0:p>We investigated the case when the assumption of uniform sampling or flatness is violated through gaussian, Cauchy and hypersphere datasets (Fig. <ns0:ref type='figure' target='#fig_13'>7</ns0:ref>) with various intrinsic dimensions and sample sizes. We added hypercube datasets with periodic boundary conditions as a control with the same parameter setting respectively (k = 5).</ns0:p><ns0:p>On the hypercube datasets with periodic boundary conditions the mFSA algorithm produced a massive underestimation of intrinsic dimension for low sample sizes for D = 10, but cmFSA corrects this bias caused by finite sample size (Fig. 
<ns0:ref type='figure' target='#fig_13'>7 A</ns0:ref>). For the small-dimensional cases, when D = 2 and D = 5, both cmFSA and mFSA estimated the true intrinsic dimension values well. On the Gaussian datasets with non-periodic boundary conditions mFSA produced an even more severe underestimation for D = 10 and even for D = 5, whereas cmFSA overestimated the intrinsic dimensions (Fig. <ns0:ref type='figure' target='#fig_13'>7 B</ns0:ref>). On the heavy-tailed Cauchy datasets mFSA showed a non-monotonic behaviour as a function of sample size: for fewer points it had low values, with a maximum at intermediate sample sizes, followed by a slow convergence to the true dimension value for large samples (Fig. <ns0:ref type='figure' target='#fig_13'>7 B</ns0:ref>). This shape of the curve resulted in underestimation for small samples, followed by an overestimation regime that decays towards the true dimension values as N goes to infinity (D = 5, 10). For D = 2 the initial underestimation regime was absent. cmFSA produced severe overestimation for these Cauchy datasets. The hypersphere dataset is an example where the point density is approximately uniform, but the manifold is curved. On this dataset mFSA produced underestimation, while cmFSA gave a moderate overestimation (Fig. <ns0:ref type='figure' target='#fig_13'>7 D</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Results on synthetic benchmarks</ns0:head><ns0:p>We tested the mFSA estimator and its corrected version on synthetic benchmark datasets <ns0:ref type='bibr' target='#b29'>(Hein and Audibert, 2005;</ns0:ref><ns0:ref type='bibr' target='#b13'>Campadelli et al., 2015)</ns0:ref>. We simulated N = 100 instances of 15 manifolds (Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, M i , n = 2500) with various intrinsic dimensions.</ns0:p><ns0:p>We estimated the intrinsic dimensionality of each sample and computed the mean, the error rate and the Mean Percentage Error (MPE) for the estimators. We compared mFSA, cmFSA, the R and the Matlab implementations of DANCo, the Levina-Bickel estimator and the 2NN estimator (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). cmFSA and DANCo were evaluated in two modes: a fractal-dimension mode and an integer-dimension mode.</ns0:p><ns0:p>The mFSA estimator underestimated intrinsic dimensionality, especially when the data had high dimensionality. The Levina-Bickel estimator overestimated low intrinsic dimensions and underestimated the high ones. The 2NN estimator produced underestimation on most test manifolds, although it reached the best average result on the M 6 and M 13 manifolds.</ns0:p><ns0:p>In contrast, the cmFSA estimator found the true intrinsic dimensionality of the datasets, reaching the best overall error rate (0.277) and the 2 nd best MPE (Fig. <ns0:ref type='figure' target='#fig_14'>8</ns0:ref>, Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). In some cases, it slightly over-estimated the dimension of the test datasets. Interestingly, DANCo showed implementation-dependent performance: the Matlab implementation showed the 2 nd best error rate (0.323) and the best MPE value (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). The R version overestimated the dimensionality of the datasets in most cases. </ns0:p></ns0:div> <ns0:div><ns0:head>Analysing epileptic seizures</ns0:head><ns0:p>To show how mFSA works on real-world noisy data, we applied it to human neural recordings of epileptic seizures.</ns0:p><ns0:p>We acquired field potential measurements from a patient with drug-resistant epilepsy using 2 electrode grids and 3 electrode strips. 
We analyzed the neural recordings during interictal periods and during epileptic activity to map possible seizure onset zones (see Methods).</ns0:p><ns0:p>We found several characteristic differences in the dimension patterns between normal and control conditions. In interictal periods (Fig. <ns0:ref type='figure' target='#fig_16'>9</ns0:ref> C), we found the lowest average dimension value at the FbB2 position on the froto-basal grid. Also, we observed gradually increasing intrinsic dimensions on the cortical grid (Gr) between the F1 and D6 channels. In contrast, we observed the lowest dimension values at the right interhemispherial strip (JIH 1-2) and on the temporo-basal electrode strip (JT 3-5) close to the hippocampus, and the gradient on the cortical grid altered during seizures (Fig. <ns0:ref type='figure' target='#fig_16'>9 D</ns0:ref>). Comparing the dimensions between seizure and control periods, majority of the channels showed lower dimensions during seizures. This decrease was most pronounced close to the hippocampal region (strip JT) and the parietal region mapped by the main electrode grid (GrA-C). Curiously, the intrinsic dimensionality became higher at some frontal (GrE1-F2) and fronto-basal (FbA1-B3) recording sites during seizure (Fig. <ns0:ref type='figure' target='#fig_16'>9</ns0:ref> A and E).</ns0:p><ns0:p>Comparison of the original FSA and the mFSA dimension estimators on the seizure data series ). The figure presents that mFSA and cmFSA makes errors if the sampling process is not uniform. A Results on hypercubes with periodic boundary conditions dataset shows, that mFSA systematically underestimates the intrinsic dimension especially for higher dimension values, this bias is corrigated by cmFSA. B The mFSA algorithm underestimates and cmFSA overestimates the intrinsic dimension for the gaussian datasets. C For the Cauchy datasets, mFSA estimator shows an average underestimation at small sample sizes and an over-estimation region followed by convergence to true dimension value. cmFSA severely overestimates the intrinsic dimension values. D On the slightly curved hypersphere datasets mFSA also underestimates the intrinsic dimension and cmFSA gives and overestimation.</ns0:p><ns0:p>showed characteristic difference similar to the one observed in the simulated data: mFSA resulted in lower dimension estimates than FSA and the difference between the two methods decreases as the k neighbourhood increases (Fig. <ns0:ref type='figure' target='#fig_16'>9</ns0:ref> B, compare it with Fig. <ns0:ref type='figure' target='#fig_1'>1 C D</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In this work we revisited and improved the manifold adaptive FSA dimension estimator. We computed the probability density function of local estimates for uniform local density. From the pdf, we derived the maximum likelihood formula for intrinsic dimensionality. However these results were derived for the simplest uniform euclidean manifold with single global intrinsic dimension, they form a base for application in more complex cases. 
For example, the pdf of the local statistic makes it possible to apply the FSA estimator within mixture-based approaches, which would provide better ID estimates when the ID varies across the data set <ns0:ref type='bibr' target='#b28'>(Haro et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b1'>Allegra et al., 2020)</ns0:ref>.</ns0:p><ns0:p>We proposed to use the median of the local estimates as a global measure of intrinsic dimensionality, and demonstrated that this measure is asymptotically unbiased. This property holds even for the minimal k = 1 neighborhood size, where the previously proposed mean is infinite. The use of the minimal neighborhood may be relevant, because it ameliorates the effect of curvature and density inequalities <ns0:ref type='bibr' target='#b21'>(Facco et al., 2017)</ns0:ref>. We tackled edge and finite sample effects with a correction formula calibrated on hypercube datasets.</ns0:p><ns0:p>We showed that the coefficients are sample-size dependent. <ns0:ref type='bibr' target='#b12'>Camastra and Vinciarelli (Camastra and Vinciarelli, 2002)</ns0:ref> took a similar empirical approach, in which they corrected correlation dimension estimates with a perceptron calibrated on d-dimensional datasets. Our approach is different, because we tried to grasp the connection between underestimation and intrinsic dimensionality more directly, by showing that the dimension-dependence of the relative error is exponential (Eq. 31). The calibration procedure of DANCo may generalize better, because it compares the full distribution of local estimates rather than just a centrality measure <ns0:ref type='bibr' target='#b15'>(Ceruti et al., 2014)</ns0:ref>. Also, we are aware that our simple correction formula overlooks the effects of curvature, uneven density and noise. One can try to address the effect of curvature and nonuniform density by choosing the minimal neighborhood size (k = 1), for which the estimation error is minimal <ns0:ref type='bibr' target='#b21'>(Facco et al., 2017)</ns0:ref>. We investigated cases where the flatness and uniformity assumptions are violated on curved and unevenly sampled manifolds as in <ns0:ref type='bibr' target='#b21'>Facco et al. (2017)</ns0:ref> and found that the estimation errors can be large both for mFSA and cmFSA. We investigated non-uniform sampling with Gaussian and Cauchy datasets (k = 5). For the Gaussian dataset cmFSA moderately overestimated the values. For the Cauchy dataset the overestimation of cmFSA is very severe: for fewer than 500 points, the estimation error and also the standard deviation seem to be unbounded. On the curved hypersphere data cmFSA also produced a moderate overestimation. These datasets are quite challenging, and the 2NN method of <ns0:ref type='bibr' target='#b21'>Facco et al. (2017)</ns0:ref>, which uses minimal neighborhood information, gives more accurate results on them. The simplicity of the correction in cmFSA, specifically that the calibration is based on uniformly sampled hypercube datasets, makes it vulnerable to non-uniform density and curvature. Additionally, the effect of noise on the estimates is yet to be investigated. There are several strategies to alleviate noise effects, such as undersampling the data while keeping the neighborhood fixed <ns0:ref type='bibr' target='#b21'>(Facco et al., 2017)</ns0:ref>, or using a larger neighborhood size while keeping the sample size fixed. 
Both of these procedures make the effect of curvature more severe, which makes the dimension estimation of noisy curved data a challenging task.</ns0:p><ns0:p>We benchmarked the new mFSA and corrected-mFSA method against Levina-Bickel estimator, 2NN</ns0:p><ns0:p>and DANCo on synthetic benchmark datasets and found that cmFSA showed comparable performance to DANCo. For many datasets, R-DANCo overestimated the intrinsic dimensionality, which is most probably due to rough default calibration <ns0:ref type='bibr' target='#b33'>(Johnsson et al., 2015)</ns0:ref>; the Matlab implementation showed the best overall results in agreement with Campadelli et al <ns0:ref type='bibr' target='#b13'>(Campadelli et al., 2015)</ns0:ref>. This superiority was however dataset-specific: cmFSA performed genuinely the best in 2, DANCo in 1 out of the 15 benchmark datasets, with 7 ties (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). Also, cmFSA showed better overall error rate than DANCo. Combining the performance measured by different metrics, we recognise that cmFSA found the true intrinsic dimension of the data in more cases, but when mistaken, it makes relatively bigger errors compared with DANCo.</ns0:p><ns0:p>More specifically in the cases of M 1 , M 6 , M 12 cmFSA almost never hits the true intrinsic dimension value,</ns0:p><ns0:p>where M 1 is a 10-dimensional sphere, M 6 is a 6-dimensional manifold embedded in 36 dimensions and M 12 is a 20-dimensional multivariate Gaussian. In the first case the manifold is curved, in the second it is embedded in high dimensional ambient space and in the third one it is non-uniformly sampled. DANCo was robust against the curvature and the non-uniform sampling, but also exhibited vulnerability to high ambient space data M 6 . For this dataset the 2NN method performed the best.</ns0:p><ns0:p>The mFSA algorithm revealed diverse changes in the neural dynamics during epileptic seizures.</ns0:p><ns0:p>In normal condition, the gradient of dimension values on the cortical grid reflects the hierarchical organization of neocortical information processing <ns0:ref type='bibr' target='#b45'>(Tajima et al., 2015)</ns0:ref>. During seizures, this pattern becomes disrupted pointing to the breakdown of normal activation routes. Some channels showed lower dimensional dynamics during seizures; that behaviour is far from the exception: the decrease in dimensionality is due to widespread synchronization events between neural populations <ns0:ref type='bibr'>(Mormann et al., 2000)</ns0:ref>, a phenomenon reported by various authors <ns0:ref type='bibr' target='#b39'>(Polychronaki et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b11'>Bullmore et al., 1994;</ns0:ref><ns0:ref type='bibr'>P&#228;ivinen et al., 2005)</ns0:ref>. Stronger red color marks areas, where the dynamics during seizure was smaller-dimensional than its interictal counterpart. However, stronger blue indicates electrodes, where the during-seizure dynamics was higher dimensional than the interictal dynamics. <ns0:ref type='bibr' target='#b9'>Benk&#337; et al. (2018)</ns0:ref> showed, that dimensional relations between time series from dynamical systems can be exploited to infer causal relations between brain areas. In the special case of unidirectional coupling between two systems, the dimension of the cause should be lower than the dimension of the consequence. 
Thus, the lower-dimensional areas are possible causal sources <ns0:ref type='bibr' target='#b44'>(Sugiyama and Borgwardt, 2013;</ns0:ref><ns0:ref type='bibr' target='#b34'>Krakovsk&#225;, 2019;</ns0:ref><ns0:ref type='bibr' target='#b9'>Benk&#337; et al., 2018)</ns0:ref> and candidates for being the seizure onset zone. Interestingly, Esteller et al found, that the Higuchi fractal dimension values were higher at seizure onset and decreased to lower values as the seizures evolved over time <ns0:ref type='bibr' target='#b20'>(Esteller et al., 1999)</ns0:ref>. We found, that most areas showed decreased dimensionality, but few areas also showed increased dimension values as seizure takes place.</ns0:p><ns0:p>This may suggests that new -so far unused -neural circuits are activated at seizure onset; whether this circuitry contributes to or counteracts epileptic seizure is unclear. </ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The estimation procedure of manifold-adaptive Farahmand-Szepesv&#225;ri-Audibert intrinsic dimension estimator. A The data is a set of uniformly sampled points from the [0, 1] &#215; [0, 1] interval (n = 10 3 ). A neighborhood around the most central sample point is colored by blue. B A magnified view shows the neighborhood around the central sample point. The local FSA estimate (d k (x i ))is computed leveraging the formula for the distances of the kth and 2kth neighbor. This computation is repeated for the whole sample and a global estimate is generated as the mean of the local estimates. C We show the local estimates (blue dots), the empirical mean (orange) and median (red) in the function of a neighborhood size for the 2D points above. The mean has an upcurving tail at small neigborhood sizes but the median seems to be robust global estimate even for the smallest neighborhood. The mean approximately lies on a hyperbola aD k&#8722;1 + D &#8776; &#948; k (x i ) i , where a &#8776; 0.685 is a constant (grey dashed line). D We measure the intrinsic dimension of the dynamics for a logistic map driven by two other independent logistic maps (n = 1000). We show the local FSA estimates (blue), the mean (orange) and the median (red) in the function of neighborhood size after time delay embedding (E = 4, &#964; = 1). The dynamics is approximately 3 dimensional and the median robustly reflects this, however the mean overestimates the intrinsic dimension at small neigborhood sizes.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021) Manuscript to be reviewed Computer Science softened by periodic boundary, but data can be scarce and the manifold may have finite size causing systematic errors in the estimates of intrinsic dimension.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>9 A). 
The participants signed a written consent form and the study was approved by the relevant institutional ethical committee (Medical Research Council, Scientific and Research-Ethics Committee TUKEB, Ref number: 20680-4/2012/EKU (368/PI/2012)). This equipment recorded extracellular field potentials at 88 neural channels at a sampling rate of 2048 Hz. Moreover, we read in -using the neo package (Garcia et al., 2014)-selected 10 second long chunks of the recordings from interictal periods (N = 16) and seizures (N = 18) to further analysis. We standardised the data series and computed the Current Source Density (CSD) as the second spatial derivative of the recorded potential. We rescaled the 10 second long signal chunks by subtracting the mean and dividing by the standard deviation. Then, we computed the CSD of the signals by applying the graph Laplacian operator on the time-series. The Laplacian contains information about the topology of the electrode grids, to encode this topology, we used von Neumann neighborhood in the adjacency matrix. 8/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Probability density functions of the local Farahmand-Szepesv&#225;ri-Audibert estimator (&#948; ) for various dimensions (D) and neighborhood sizes (k). A-F The sublots show that the theoretical pdfs (continuous lines) fit to the histograms (n = 10000) of local estimates calculated on uniformly sampled hypercubes (D = 2, 3, 5, 8, 10, 12). The three colors denote the three presented neigborhood sizes: k = 1 (blue), k = 11 (orange) and k = 50 (green). The pdf-s are less skewed and the variance gets smaller as the neighborhood size gets bigger. Also, the higher the dimension of the manifold, the higher the variance of the local estimates.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>294</ns0:head><ns0:label /><ns0:figDesc>regularized incomplete Beta function (I a ) with k as both parameters symmetrically. 295 P(a) = I a (k, k) Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3. The sampling distribution and standard error of the median for the FSA estimator on uniformly sampled hypercubes. The figure shows the pdf of median-FSA estimator of points uniformly sampled from two example systems: a square (A) and from a 5D hypercube (B) for three sample sizes n = 11 (blue), n = 101 (orange) and n = 1001 (green) respectively for the smallest neighborhood (k = 1). The solid lines represent the theoretical pdf-s of the median and the shaded histograms are the results of simulations (N = 5000 realizations of hypercube datasets with periodic boundary conditions). The derived formula fits well to the histograms. The variance shrinks with bigger sample size, and the pdf becomes less skewed, more Gaussian-like. C The standard error of median in the function of sample size computed by numerical integration and Laplace-Stirling approximation (grey dashed). The standard error linearly decreases on a log-log plot in the function of sample size. The slope is approximately &#8722;0.5, independent of the dimension of the manifold and the error's value is proportional to D. Thus, the relative error (err/D) is independent of intrinsic dimension and it is shown by the overlapping markers on the black dashed straight line. 
D The standard error as a function of neighborhood size, computed by numerical integration and the Laplace-Stirling approximation. The slope of the lines is also approximately &#8722;0.5; the approximation (grey dashed line) becomes accurate for neighborhood sizes k &gt; 10.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Intrinsic dimension dependence of the median-FSA estimator for uniformly sampled unit hypercubes with various sample sizes and periodic boundary conditions (k = 1). Subplots A-F show the mean of the median-FSA estimator values (thick line) from N = 100 realizations (shading) of uniformly sampled unit hypercubes. The perfect estimation values lie on the diagonal (dashed black line). As the intrinsic dimension of the manifold grows, the estimates start to deviate from the ideal diagonal line due to the finite sample effect. This systematic under-estimation of intrinsic dimension is more severe in the case of low sample size and high intrinsic dimension.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Sample size dependence of the median-FSA estimator for uniformly sampled unit hypercubes with varied intrinsic dimension values and periodic boundary (k = 1). Subplots A-F show the mean of the median-FSA estimator values (thick line) from N = 100 realizations (shading). The estimator asymptotically converges to the true dimension value, but the convergence is faster for lower intrinsic dimensions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Bias-correction of the median-FSA estimator for uniformly sampled unit hypercubes with various sample sizes and non-periodic boundary (k = 1). Subplots A-F show the mean of the median-FSA estimator values (grey line) from N = 100 realizations (shading) of uniformly sampled unit hypercubes. The boundary condition is hard, so the edge effect makes the under-estimation even more severe than in the case of periodic boundary conditions. The colored lines show the corrected estimates according to d c = d exp(&#945;d). In the D = 1 &#8722; 30 intrinsic dimension range a single coefficient was enough to obtain a small mean squared error after the model fit.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. mFSA and cmFSA dimension estimates on customly sampled data as a function of sample size (k = 5, D = 2, 5, 10). The figure shows that mFSA and cmFSA make errors if the sampling process is not uniform. A Results on the hypercube datasets with periodic boundary conditions show that mFSA systematically underestimates the intrinsic dimension, especially for higher dimension values; this bias is corrected by cmFSA. B The mFSA algorithm underestimates and cmFSA overestimates the intrinsic dimension for the Gaussian datasets. C For the Cauchy datasets, the mFSA estimator shows an average underestimation at small sample sizes and an over-estimation region followed by convergence to the true dimension value; cmFSA severely overestimates the intrinsic dimension values. D On the slightly curved hypersphere datasets mFSA also underestimates the intrinsic dimension and cmFSA gives an overestimation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. 
Performance-comparison between cmFSA and DANCo on synthetic benchmark datasets. cmFSA and DANCo have comparable performance with small differences according to Mean Percentage Error and Error rate metrics. A Dataset-wise Mean Percentage Error (MPE) on benchmark data. cm-FSA (blue) shows smaller MPE in 4 cases (M 9 , M 10a&#8722;c ) and bigger MPE in 4 cases (M 1 , M 6 , M 10d , M 12 ) compared with DANCo (Matlab). B Dataset-wise error rate for cmFSA and DANCo. cmFSA shows smaller error rates in 5 cases (M 9 , M 10a&#8722;d ) and bigger error rates in 2 cases (M 1 , M 12 ) compared with DANCo.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. mFSA and FSA dimension estimates on intracranial brain-LFP measurements during interictal activity and epileptic seizures. A The experimental setup with the implanted electrodes are shown. A 64 channel intracranial cortical grid (red grid on graph A, Gr A1-F8 on graph C), a smaller frontobasal grid (magenta dots, Fb A1-B8) and a right temporal electrode strip, close to the hippocampus (cyan dots, JT1-8). Dimension estimates were calculated for two additional electrode strips close to the gyrus cinguli (JIH and BIH) which are hidden on this figure. The change in the mFSA estimates between seizure and control is color coded and mapped onto the recording electrodes. B Comparison of mFSA and FSA estimates on an epileptic seizure. FSA results in higher estimates, but the difference decreases with the increasing neighbourhood parameter k. C Average of mFSA dimension values from interictal LFP activity (N=16, k=5-10). The areas with lower-dimensional dynamics are marked by hot colors. D Average of mFSA dimension values from seizure LFP activity (N=18, k=5-10), colors same as on graph C. E Difference of average dimension values. Stronger red color marks areas, where the dynamics during seizure was smaller-dimensional than its interictal counterpart. However, stronger blue indicates electrodes, where the during-seizure dynamics was higher dimensional than the interictal dynamics.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>Lombardi, G.(Retrieved July 16, 2020). Intrinsic dimensionality estimation techniques. MATLAB Central File Exchange.MATLAB (2020).MATLAB version 9.8.0.1396136 (R2020a). The Mathworks, Inc., Natick, Massachusetts.Mormann, F., Lehnertz, K.,David, P., and E. Elger, C. (2000). Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Physica D: Nonlinear Phenomena, 144(3):358-369. Oliphant, T. E.(2006). A guide to NumPy, volume 1. Trelgol Publishing USA. Packard, N. H., Crutchfield, J. P., Farmer, J. D., and Shaw, R. S.(1980). Geometry from a Time Series.Physical ReviewLetters, 45(9):712-716. P&#228;ivinen, N., Lammi, S., Pitk&#228;nen, A., Nissinen, J., Penttonen, M., and Gr&#246;nfors, T. (2005). Epileptic seizure detection: A nonlinear viewpoint. 
Computer Methods and Programs in Biomedicine, 79(2):151-</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Synthetic benchmark datasets.The synthetic benchmark datasets used for comparison are the first 15 manifolds form<ns0:ref type='bibr' target='#b13'>Campadelli et al. (2015)</ns0:ref>. The datasets represent various types of manifolds with or without curvature, also with uniform or non-uniform sampling of n = 2500 points.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>dataset description</ns0:cell><ns0:cell cols='2'>d embed-dim</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>M 1</ns0:cell><ns0:cell>10d sphere</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>M 2</ns0:cell><ns0:cell>3d affine space</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>M 3</ns0:cell><ns0:cell>4 figure</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>M 4</ns0:cell><ns0:cell>4d manifold in 8d</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>M 5</ns0:cell><ns0:cell>2d helix in 3d</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>M 6</ns0:cell><ns0:cell>6dim manifold in 36d</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>36</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>M 7</ns0:cell><ns0:cell>swiss roll</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>M 9</ns0:cell><ns0:cell>20d affine space</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>M 10a</ns0:cell><ns0:cell>10d hypercube</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>10 M 10b</ns0:cell><ns0:cell>17d hypercube</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>17</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>11 M 10c</ns0:cell><ns0:cell>24d hypercube</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12 M 10d</ns0:cell><ns0:cell>70d hypercube</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>70</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>13 M 11</ns0:cell><ns0:cell>Moebius band 10x twisted</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>14 M 12</ns0:cell><ns0:cell>Multivariate Gaussian</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>15 M 13</ns0:cell><ns0:cell>1d curve in 13d</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>13</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Dimension estimates on synthetic benchmark datasets.The table shows true dimension values (d), median-Farahmand-Szepesv&#225;ri-Audibert (mFSA), corrected median Farahmand-Szepesv&#225;ri-Audibert (cmFSA), DANCo, Maximum Likelihood(Levina) and 2NN mean estimates from N = 100 realizations. cmFSA and DANCo was applied in integer and in fractal modes. 
The mean percentage error (MPE) values can be seen in the bottom line, the Matlab version of DANCo estimator (DANCo M) produced the smallest error followed by the cmFSA estimator.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>dataset</ns0:cell><ns0:cell cols='8'>d mFSA cmFSA frac cmFSA DANCo R DANCo M frac DANCo M Levina</ns0:cell><ns0:cell>2NN</ns0:cell></ns0:row><ns0:row><ns0:cell>M 1</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>9.09</ns0:cell><ns0:cell>11.19</ns0:cell><ns0:cell>11.08</ns0:cell><ns0:cell>11.34</ns0:cell><ns0:cell>10.42</ns0:cell><ns0:cell>10.30</ns0:cell><ns0:cell>10.15</ns0:cell><ns0:cell>9.40</ns0:cell></ns0:row><ns0:row><ns0:cell>M 2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2.87</ns0:cell><ns0:cell>3.02</ns0:cell><ns0:cell>3.00</ns0:cell><ns0:cell>3.00</ns0:cell><ns0:cell>2.90</ns0:cell><ns0:cell>3.00</ns0:cell><ns0:cell>3.20</ns0:cell><ns0:cell>2.93</ns0:cell></ns0:row><ns0:row><ns0:cell>M 3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3.83</ns0:cell><ns0:cell>4.14</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>5.00</ns0:cell><ns0:cell>3.84</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>4.29</ns0:cell><ns0:cell>3.87</ns0:cell></ns0:row><ns0:row><ns0:cell>M 4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3.95</ns0:cell><ns0:cell>4.29</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>5.00</ns0:cell><ns0:cell>3.92</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>4.38</ns0:cell><ns0:cell>3.91</ns0:cell></ns0:row><ns0:row><ns0:cell>M 5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.97</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>1.98</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.19</ns0:cell><ns0:cell>1.99</ns0:cell></ns0:row><ns0:row><ns0:cell>M 6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6.38</ns0:cell><ns0:cell>7.38</ns0:cell><ns0:cell>7.16</ns0:cell><ns0:cell>9.00</ns0:cell><ns0:cell>6.72</ns0:cell><ns0:cell>7.00</ns0:cell><ns0:cell>7.04</ns0:cell><ns0:cell>5.93</ns0:cell></ns0:row><ns0:row><ns0:cell>M 7</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.95</ns0:cell><ns0:cell>1.98</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>1.96</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.18</ns0:cell><ns0:cell>1.98</ns0:cell></ns0:row><ns0:row><ns0:cell>M 9</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>14.58</ns0:cell><ns0:cell>20.07</ns0:cell><ns0:cell>20.10</ns0:cell><ns0:cell>19.16</ns0:cell><ns0:cell>19.24</ns0:cell><ns0:cell>19.09</ns0:cell><ns0:cell cols='2'>16.38 15.55</ns0:cell></ns0:row><ns0:row><ns0:cell>M 10a</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>8.21</ns0:cell><ns0:cell>9.90</ns0:cell><ns0:cell>10.00</ns0:cell><ns0:cell>10.00</ns0:cell><ns0:cell>9.56</ns0:cell><ns0:cell>9.78</ns0:cell><ns0:cell>9.20</ns0:cell><ns0:cell>8.63</ns0:cell></ns0:row><ns0:row><ns0:cell>M 10b</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>12.76</ns0:cell><ns0:cell>16.95</ns0:cell><ns0:cell>16.96</ns0:cell><ns0:cell>16.04</ns0:cell><ns0:cell>16.39</ns0:cell><ns0:cell>16.24</ns0:cell><ns0:cell cols='2'>14.33 13.58</ns0:cell></ns0:row><ns0:row><ns0:cell>M 10c</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>16.80</ns0:cell><ns0:cell>24.10</ns0:cell><ns0:cell>24.06</ns0:cell><ns0:cell>23.61</ns0:cell><ns0:cell>23.39</ns0:cell><ns0:cell>23.26</ns0:cell><ns0:cell cols='2'>18.89 18.04</ns0:cell></ns0:row><ns0:row><ns0:cell>M 10d</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>35.64</ns0:cell><ns0:cell>69.84</ns0:cell><ns0:cell>69.84</ns0:cell><ns0:cell>69.73</ns0:cell><ns0:cell>71.00</ns0:cell><ns0:cell>70.91</ns0:cell><ns0:cell cols='2'>40.35 
40.05</ns0:cell></ns0:row><ns0:row><ns0:cell>M 11</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.97</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>1.97</ns0:cell><ns0:cell>2.00</ns0:cell><ns0:cell>2.19</ns0:cell><ns0:cell>1.98</ns0:cell></ns0:row><ns0:row><ns0:cell>M 12</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>15.64</ns0:cell><ns0:cell>21.96</ns0:cell><ns0:cell>21.98</ns0:cell><ns0:cell>21.72</ns0:cell><ns0:cell>20.88</ns0:cell><ns0:cell>20.00</ns0:cell><ns0:cell cols='2'>17.72 17.24</ns0:cell></ns0:row><ns0:row><ns0:cell>M 13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.11</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>MPE</ns0:cell><ns0:cell /><ns0:cell>13.58</ns0:cell><ns0:cell>4.73</ns0:cell><ns0:cell>2.89</ns0:cell><ns0:cell>9.64</ns0:cell><ns0:cell>3.39</ns0:cell><ns0:cell>2.35</ns0:cell><ns0:cell cols='2'>13.23 10.91</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='20'>/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:51754:1:0:NEW 4 Oct 2021)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
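As a companion to the mFSA column of Table 2, here is a minimal, illustrative Python sketch of the median Farahmand-Szepesvári-Audibert idea discussed in the paper: compute the local FSA estimate ln 2 / ln(R_2k / R_k) at every sample point and take the median of these local values. This is not the authors' cmfsapy implementation and contains no finite-sample correction; the use of scikit-learn, the neighborhood size k = 5 and the uniform hypercube test data are assumptions made only for this example.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def mfsa_dimension(X, k=5):
    """Median of the local FSA estimates ln(2) / ln(R_2k / R_k)."""
    nn = NearestNeighbors(n_neighbors=2 * k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                 # column 0 is the point itself (distance 0)
    r_k, r_2k = dist[:, k], dist[:, 2 * k]     # distances to the k-th and 2k-th neighbors
    local = np.log(2.0) / np.log(r_2k / r_k)   # local FSA dimension estimates
    return float(np.median(local)), local

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2500, 10))           # uniform 10d hypercube, similar to M_10a
    d_hat, _ = mfsa_dimension(X, k=5)
    print(f"mFSA estimate: {d_hat:.2f} (true dimension: 10)")

On a sample like this the uncorrected median typically underestimates the true dimension (compare the mFSA column of Table 2), which is exactly the finite-sample bias that the cmFSA correction described in the paper is meant to compensate.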
"Dear Mr. Negrello, I am writing about the resubmission of the “Manifold-adaptive dimension estimation revisited” manuscript. Thank you for the concise summary of the essential review-points and for the opportunity to improve and resubmit the manuscript. We tried to extend and modify the text along the guidance of reviewers. Please see our point by point answer below. Looking forward to your reply, Sincerely, Zsigmond Benko Editor comments (Mario Negrello) Please improve the problem statement, research question and rationale of the approach.   We added a new figure 1 to the revised manuscript to show the intuition behind the FSA intrinsic dimension estimator and also to demonstrate the limitation of the mean as global estimate. Also, we included some motivational description about the theory of nonlinear time series analysis in the Introduction and as a demonstration we present dimension estimation on the coupled systems the logistic maps . As per reviewer 2, please also consider in detail the consequences of hypercube corrections in the case of data with distributions other than the uniformly sampled hypercube.  We added an analysis on non-uniformly sampled or curved custom datasets (gaussian, hypersphere and Cauchy) as the Figure 7 of the revised manuscript. This analysis shows some limitations of our approach: the correction may lead to overestimation of the intrinsic dimension if the manifold is curved or extremely non-uniformly sampled. As reviewer 2 suggests, please consider making the code available, as it is best practice for methodological papers as yours.   We made the code available on github at the https://github.com/phrenico/cmfsapy/tree/main/examples/article_results/ url. We created an installable python package and uploaded to the Python Package Index (PyPI). Please also adjust the structure of the paper to conform to PeerJ guidelines, according to reviewer 1 as well. We restructured the paper according to the guidelines:  In the end of Introduction we phrase the aims of the article in a more structured   way We moved the Methods after the introduction sections Added the Conclusion section Dear Reviewer, I really appreciate your work on reading and commenting on the “Manifold-adaptive dimension estimation revisited” manuscript. I found your suggestions especially helpful on the structure of the paper, as well as the comments and question shed light on several inconsistencies in the notation and on not clear wordings. We tried to restructure the manuscript to conform with journal-specific guidelines, to clarify the notation, to rephrase the ambiguous sentences and to improve the quality of figure captions. I send you the point-by-point answer to your comments below. Best regards, Zsigmond Benko Reviewer 1 (Anonymous) Basic reporting The language is generally clear, but it can be improved in certain sections. The literature is well referenced. The structure is not conforming to PeerJ Computer Science standard (namely, the Methods section should be placed right after the Introduction, not after the Discussion). The figures are high quality, but adding more details in the corresponding captions would facilitate their comprehension. Experimental design The article tackles a critical problem in the computer science field - dimensionality estimation. The research question, however, should be better stated in the introduction. Specifically, authors should clearly state the rationale and the aim of their study. 
The methods should be moved after the introduction and before the results so as to facilitate the reader following the flow of the article. Validity of the findings The proposed approach solves certain limitations of the algorithm that is based on. The authors performed an extensive comparison of their approach with other state-of-the art algorithms on various synthetic datasets. Perhaps, an additional comparison of the proposed approach with such algorithms on neural data might be ideal. Conclusions are missing. Comments for the Author The paper sets to revisit a dimensionality estimation algorithm, the manifold adaptive Farahmand-Szepesva’ri-Audibert (or FSA). The authors first computed the local probability density function following the original FSA pipeline and then e the median of such pdf to obtain the global estimate of the dimensionality. They further corrected for finite sample effect implementing a correction formula and finally compared the performances of their algorithm with those of the original FSA as well as other state-ofthe-art techniques. The proposed approach outperforms the original FSA and perform similarly to other methods when applied to synthetic datasets. When applied to neural signals recorded during epileptic seizures, the authors hypothesize that low-dimensional brain regions might be potential sources for the seizure onset. The overall structure of the paper is ok to follow. However, certain parts of the manuscript can be improved and some details need to be added. Following are some specific comments. 1. Line 96. I suggest you provide more justification for your study. What is the rationale of your approach? How do you expect your result to differ from the original FSA algorithm? Also, could you be clearer when you say 'we correct the underestimation effect by an exponential formula'? · · · I extended the Introduction by 2 motivational examples and inserted an additional figure about the dimension estimation procedure. This figure (the new Figure 1) also shows the empirical observation that the median of local estimates is more robust estimator of the intrinsic dimension than the mean. I added the paragraph to make the case for the study at line 118: “We showed in the previous two examples that the median of local FSA estimates was a more robust estimator of the intrinsic dimension than the mean but the generality of this finding is yet to be explored by more rigorous means. Additionally, in these cases the data were abundant, and the edge effect was softened by periodic boundary, but data can be scarce and the manifold may have finite size causing systematic errors in the estimates of intrinsic dimension.” · Also, I changed the last paragraph to make the sentence with the “exponential formula” clearer at line 132: “We present the new corrected median FSA (cmFSA) method to alleviate the underestimation due to finite sample and edge effects. We achieve this by applying a heuristic exponential correctionformula applied on the mFSA estimate and we test the new algorithm on benchmark datasets.” 2. Line 98, end of introduction. Could you please add some description of your following section in a clearer way? · · I included an enumeration containing the main contributions of the paper, starting form line 123. I added a paragraph about the organization of the paper at the end of the Introduction section at line 137: “The paper is organised as follows. 
In the Methods section, we present the steps of FSA, mFSA and cmFSA algorithms, then we describe the simulation of the hypercube datasets and we show the specific calibration procedure used in the cmFSA method. After these, we turn to benchmark datasets. We refer to data generation scripts and display the evaluation procedure. This section ends with a description of Local Field Potential measurements and the analysis workflow. In the Results section, we lay out the theoretical results about the FSA estimator first, then we validate them against simple simulations as second. Third, we compare our algorithms on benchmark datasets against standard methods. Fourth, we apply the mFSA algorithm on Local Field Potential measurements. These parts are followed by the Discussion and Conclusion sections.” 3. Authors should also compare their approach to the original FSA (or to some other methods) on the neural dataset and not the synthetic ones only.  I inserted a new subplot into the corresponding figure (Figure 9 B in the revised manuscript), which shows a comparison of original FSA and mFSA estimates plotted against each other for different neighborhood sizes. 4. The method section should be moved right after the introduction. This would allow to describe the proposed approach before showing the corresponding results. Moreover, some results are already described in this section (e.g., lines 252, 256, 274). Those sentences should be removed and included in the results section only. Also, please put the figures closer to the corresponding location in the main text where they are referred to. · · · · I moved the Methods section after the Introduction. Also, I added two subsections into the Methods about the original and proposed variants of the FSA algorithm with the steps of the algorithms. (“FSA and mFSA algorithm”, “cmFSA algorithm” subsections) I removed lines 252, 256, 274 from the Methods section. I altered the position of figures in the LaTeX code, which hopefully helps to bring them closer to the coresponding locations in the text 5. Authors should add a conclusion section. · I added the Conclusion section. 6. Line 167, cmFSA acronym should be defined before use. · In the revised text, I define the mFSA and cmFSA acronyms at the end of Introduction in the main contributions part. 7. Line 244, authors should justify why they chose to test those three specific values of k. · I added few sentences about this specific choice at line 179: “We selected these specific neighborhoods because of didactic purposes: the neighborhood is the smallest one, the is a bigger neighborhood, which is still much smaller than the sample size, so the estimates are not affected by the finite sample effect. The neighborhood represents a transition between the two 'extremes', the specific value is an arbitrary choice giving pleasing visuals suggesting the gradual change in the shape of the curve as a function of the $parameter.” 8. There are some inconsistencies related to the use of the notation for the true and the predicted dimensionality. According to line 255, D indicates the true dimensionality and d the predicted dimensionality. However, in line 270 you use d to indicate the true dimensionality and d ̂ for the predicted one. Choose one notation and stick with it. · I corrected the inconsistency by o using D for the true dimension and o small d for the global estimated value o also I introduced δ for the local dimension values 9. The captions of each figure should be more detailed. 
Specifically, they should briefly describe the take-home message of the figure (one or two sentences are enough). · I added take-home message to each figure-caption 10. Figure 1: it is not clear to the reviewer how the histogram was obtained. Didn’t you test only one realization in this case (line 242)? Yes it was one realization of random-uniform hypercube dataset with n=10 000 sample points. The figure shows the probability density of the locals estimates, which is computed for each sample point. The histogram was obtained from the local estimates (n=10 000). I modified the title of the figure (Figure 2 in the revised manuscript) to contain the “local” word, to make it more clear that the probability density of the local dimension values are depicted in the figure. Also I changed the notation, because the local values were also denoted by “d”, I assigned the δ symbol to local dimension values and modified the x-axis of the subplots accordingly. 11. Table I: what are cmFSA_fr and M_DANCO_fr? I used cmFSA and DANCo in two modes: in integer mode and in fractal mode. In the former, the global dimension values were rounded to the nearest integer value, but in the latter case they were left alone as real numbers. I applied the following changes to the revised text: · I added this detail to the Methods section at line 223: “We used cmFSA in two modes, in integer and in fractal mode. In the former the global estimates are rounded to the nearest integer value, while in the latter case the estimates can take on real values.” and a bit later at line 229: “In the case of DANCo, we also investigated the results for integer and for fractal mode just as for the cmFSA algorithm.” · I added the “cmFSA and DANCo was applied in integer and in fractal modes.” sentence to the caption of table I. Dear Mr. Allegra, Thank you for your work in scrutinizing and commenting on the manuscript titled “Manifoldadaptive dimension estimation revisited”. Your comments were invaluable: they led several to additional results and improved the overall quality of the article. I tried to improve the research motivation part by adding an new figure to the manuscript, changed the notation to improve the clarity of the derivations and included additional results on the standard error of the median supported by derivations packed into the SI, also I added several subplots to the figures to make them more expressive. Please, see our point-by-point reply to your questions and comments below. Best regards, Zsigmond Benko Reviewer 2 (Michele Allegra) Basic reporting The manuscript is sound and well written. References are generally exhaustive. The methodology is clearly explained. Experimental design The propose method seems to be competitive with state-of-art methods, and I believe it offers some advantages with respect to some of issued of ID estimators, in particular boundary effects, and variations of the density of points in the data. I think the manuscript is a fair contribution to the field of ID estimation, and it can be of interest to researchers in this area, and more in general to researchers needing accurate ID estimation as part of their data analysis pipelines. Validity of the findings Major comments: 1) In my opinion, the main problem with the boundary-effect correction is that it is optimized for uniformly-sampled hypercubes, and may lead to overestimation of the ID in cases when the data are not uniforly sampled. 
This is clearly visible from Table I: while the estimation is nearly perfect for uniformly sampled data on linear subspaces [M2, M9, M10a-c], or generally uniformly sampled data on locally flat spaces [M5, M7, M13], it yields an overestimation in the case of non-uniformities, such as the Gaussian case [M12], the non-linear manifold case [M6], or the sphere [M1]. The overestimation may be even more severe for non-uniform samplings with heavy-tailed distributions, such as the Cauchy distribution used in Facco et al. 2015. The authors should extensively comment on this point. · To investigate the non-uniform sampling case, I did an analysis on uniform, Gaussian, Cauchy and hypersphere datasets as in Figure 2 of Facco 2017 (inserted as the new Figure 7). · I added the 2NN estimator of Facco to the simulated benchmarks. · I extended the Discussion section along the guidelines given by the reviewer, at line 424: "One can try to address the effect of curvature and non-uniform density with the choice of a minimal neighborhood size, so that the estimation error is minimal (Facco 2017). We investigated cases when the flatness and uniformity assumptions are violated on curved and unevenly sampled manifolds as in Facco 2017 and found that the estimation errors can be large both for mFSA and cmFSA. We investigated the non-uniform sampling with Gaussian and Cauchy datasets. For the Gaussian dataset cmFSA moderately overestimated the values. For the Cauchy dataset the overestimation of cmFSA is very severe: for less than 500 points, the estimation error and also the standard deviation seem to be unbounded. On the curved hypersphere data cmFSA also produced moderate overestimation. These datasets are quite challenging, and the 2NN method of Facco 2017, which uses minimal neighborhood information, presents more accurate results on these. The simplicity of the correction in cmFSA, more specifically that the calibration is based on uniformly sampled hypercube datasets, makes it vulnerable to non-uniform density and curvature." and later around line 450: "More specifically, in the cases of $M_1$, $M_6$ and $M_{12}$, cmFSA almost never hits the true intrinsic dimension value, where $M_1$ is a 10-dimensional sphere, $M_6$ is a 6-dimensional manifold embedded in 36 dimensions and $M_{12}$ is a 20-dimensional multivariate Gaussian. In the first case the manifold is curved, in the second it is embedded in a high dimensional ambient space and in the third one it is non-uniformly sampled. DANCo was robust against the curvature and the non-uniform sampling, but also exhibited vulnerability to the high ambient space data $M_6$. For this dataset the 2NN method performed the best." Facco, E., d'Errico, M., Rodriguez, A. et al. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. Sci Rep 7, 12140 (2017). https://doi.org/10.1038/s41598-017-11873-y 2) Since this is a methodological work, I would recommend that the authors make publicly available the code implementing cmFSA. · We made the code available at the https://github.com/phrenico/cmfsapy/ URL and we also uploaded an installable Python package to the Python Package Index. 3) It is not clear how the different sample sizes were included in the calibration of the correction term. It seems that the calibration term used to infer the ID of the datasets M1-M13 was inferred from the n = 2500 hypercubes. Is one going to use the same term with datasets of different n? It seems that one should rather use a term calibrated on that specific n. The authors should comment on this point.
Furthermore, why was k = 5 used for calibration, instead of k = 1 used in subsequent analyses? Yes, the calibration procedure has to be carried out for each specific n. In the subsequent analysis. We corrected the manuscript to make the description more clear on this part, I inserted that the coefficients are sample size dependent into the results. Also I included additional descriptions of the estimation algorithm into the methods section, where I explicitly state, that the correction model is for a specific sample sizes approximately at line 158 of the revised manuscript: “Fit a correction-model with the the given sample size n on uniform random hypercube calibration datasets consisting of various intrinsic dimension values, many instances each (at least N=15 realizations).” I used k=5 neigborhood size on the benchmarks, because bigger neighborhoods result in smaller variance in the estimates. It comes with the price of higher bias, but this bias is corrigated by the correction formula. Additionally in real-world datasets the effect of noise can be ameliorated by bigger neighborhood choice, presumably the same effect can be seen on the Figure 9 B of the revised dataset, where the dimension estimates of the LFP chanels are depicted in the function of neighborhood size. Here the mFSA estimates are higher for smaller neighborhood sizes, a possible explanation is the noise on the data. Minor points: • The Authors may better stress the fact that their median-based procedure is independent of k, and thus allows selecting a minimal neighborhood size (k = 1). In this case, the used statistics is essentially equivalent to the one used by Facco et al., 2017 - even though the estimation procedure is slightly different. As in Facco et al., using a minimal size neighborhood can make the method very robust to density variations and curvature. I inserted a figure and 2 sentences points into the main text to emphasize the relevance of this findig: 1. in the Introduction section: I inserted the new Figure 1. C D, which shows that the sample median is a good estimator even in small neighborhood sizes 2. in the discussion section I inserted at line 413: 'This property holds even for the minimal neighborhood size, where the previously proposed mean is infinite. The use of minimal neighborhood may be relevant, because it ameliorates the effect of curvature and density inequalities' 3. in the conclusion section at line 478: 'We derived the probability density function of local dimension estimates for uniform data density and proved that the median is an unbiased estimator of the global intrinsic dimension, even at small neighborhoods.' • The simplicity of the proposed statistic makes it suitable to be embedded within mixture-based approaches to provide better ID estimates whe the ID is varying in the data set (Haro, G., Randall, G. & Sapiro, G. Translated poisson mixture model for stratifcation learning. Int. J. Comput.Vis. 80, 358(cid:21)374 (2008); Allegra M, Facco E, Denti F, Laio A, Mira A (2020) Data segmentation based on the local intrinsic dimension. Sci Rep 10(1):16449). In the Discussion at line 407 I added: “However these results were derived for the simplest uniform euclidean manifold with single global intrinsic dimension, they form a base for application in more complex cases. 
For example, the pdf of the local statistic makes it possible to apply the FSA estimator within mixture-based approaches; this would provide better ID estimates when the ID is varying in the data set (Haro2008, Allegra2020)." • The Authors may better clarify Eqs. (1-2). In Eqs (1-2), k is used to indicate both a variable quantity and a fixed quantity. In eq. (1), k is a variable quantity, like R [notice that Eq. (1) now uses both R and Rk, inconsistently]. In eq. (2), k is a fixed value, like rk and r2k. Also, the quantities in Eq. (1) should be better defined. I would recommend something like: 'A usually basic assumption of kNN ID estimators is that the fraction of points f in a spherical neighborhood centered at x is approximately determined by the intrinsic dimensionality (D) and radius (R) times a locally almost constant, mostly density-dependent factor (η(x, R)): f/n = η(x, R) R^D [...] If R_k is the distance at which the k-th neighbor is found, from Eq. (1) one can take the logarithm…' We modified equations 1 and 2 (Eq. 2 in the revised manuscript) according to the guidelines. We also modified the definitions and the description for the derivation of equation 3, starting from line 70: "A usually basic assumption of kNN ID estimators is that the fraction of points in a spherical neighborhood is approximately determined by the intrinsic dimensionality (D) and radius (R) times a -- locally almost constant -- mostly density-dependent factor (η(x, R), Eq. 2), where f is the fraction of samples in a neighborhood." and after line 97: "We derive the FSA estimator from Eq. 2. Let M be a D-dimensional manifold and let us have a sample X = {x_i, i = 1, ..., n} of size n sampled from M. We take two neighborhoods around a sample point x, thereby we fix k and 2k, and if R_k(x) is the distance at which the k-th neighbor is found around x, then we can take the logarithm of both sides. If η is slowly varying and R is small, we can take η as a constant. Thus, by subtracting the two equations from each other we get rid of the local density dependence: δ_k(x) = ln 2 / ln(R_2k(x) / R_k(x))." • In Eq. (5), the Authors may better clarify what p(r|k, K − 1, D) is: something like the probability that the normalized distance of the k-th neighbor among K neighbors is r if the intrinsic dimension is D. I inserted the sentence at line 283: "Here p(r|k, K − 1, D) describes the probability that the k-th neighbor can be found on a thin shell at the normalized distance r among K neighbors if the intrinsic dimension is D (see SI A.1 for a derivation)." • 'Thus, we can compute the pdf of the estimated values as plugging in K = 2k into Eq. 5 followed by change of variables (p. 4).' This sentence might be more clearly rephrased, e.g., 'Combining (5) and (6), one can obtain the pdf of the FSA estimator'. I changed the text according to the suggestion at line 291: "Combining Eq. 13 and Eq. 14 one can obtain the pdf of the FSA estimator:" • In theorem 1, the Authors may mention that the substitution a = 2^(−D/d_k) is monotonic, which justifies the invariance of the median. I changed the text to mention the monotonicity of the mapping at line 297: "and a is a monotonic function of the local estimate δ, therefore the median in δ can be computed by the inverse mapping." • This means that the median of the FSA estimator is equal to the intrinsic dimension independent of neighborhood size. Again, this fact should be stressed because it allows using small k, which cannot be done in standard FSA: indeed, for small k, as evident from fig. 1, the mean and mode produce severe underestimates.
I modified the sentence at line 299: “This means that the median of the local FSA estimator is equal to the intrinsic dimension independent of neighborhood size, even for the minimal neighborhood, if the locally uniform point density assumption holds.” • In Fig. 2, the Authors may add a third panel showing on a simple plot the standard error of the median as a function of log(n), for different values of D different curves for different values of D.    I added a third C and a fourth D panel to figure 3, showing the standard error of the estimates in the function of and also in the function of neighborhood size. The analysis led to the result that the standard error is proportional to approximately (eq.26 in revised manuscript and SI section D). I also derive this sample-size and neighborhood dependence by using the Laplace approximation for the median and Stirling approximation for the Eulerbeta function (SI section D). • I would put a derivation of Eq. (17) in the SI. (the rationale of the binomial is nearly obvious, but a full explanation may help the reader).  We included a general derivation for the pdf of the sample median in the SI section C. • Are periodic boundary conditions used in Figure 4, as the main text indicates? This should be clarifed also in the caption of Fig. 4, to stress the differece with Fig. 3, which is not using PBC. Yes, we used periodic boundary conditions for the old figure 4. In the revised manuscript Figure 4, Figure 5 and Figure 6 are involved, we modified the captions to contain the boundary conditions. • In eqs. (21)-(22) it would be better to bring in some notational clarity. What are d, D, ˆd? Note that D was always used as the true value of intrinsic dimension. I changed the notation, introduced for local id estimates, stands for global estimates, and big is used for the true dimension value in the revised text. • How is the error in Fig. 6 defined? It is stated that the error rate the fraction of cases, when the estimator did not and (missed) the true dimensionality. What does this mean exactly? That |Dj − dij| > 1 ? I inserted the definition of error rate into the revised manuscript, this metric is defined only for integer mode estimators (DANCo, cmFSA in integer mode) (line 232): “Also, we used the error rate -- the fraction of cases, when the estimator did not find (missed) the true dimensionality -- as an alternative metric. We used this metric to compare the performace of DANCo and cmFSA in integer mode, we simply counted the cases, when the estimator missed the true dimension value: where is the error rate for a manifold computed from realizations and if is the indicator function for the error. We computed the mean error rate by averaging the manifold specific values.” • In fig. 7, what are 1-8 on the x axis? Is it simply the electrode number? To what areas do the grid recordings Gr-A ... Gr-F correspond? The Authors should specify it in Methods, or at least provide a reference. We improved the old figure 7 (became Figure 9 in the revised text). 
In Fig 9 A we included the layout of electrode grid on the brain surface and the C D E subplots has axis labels now, also we modified the descriptions in the Methods at line 236: “We used data of intracranial field potentials from two subdural grids positioned -parietofrontally (6*8 channels, Gr A-F and 1-8) and frontobasally (2*8 channels, Fb A-B and 1-8) -- on the brain surface and from three strips located on the right temporal cortex (8 channels, JT 1-8), close to the hippocampal formation and two interhemispheric strips, located within the fissura longitudinalis, close to the left and right gyrus cinguli (8 channels BIH 1-8 and 8 channels JIH 1-8) as part of presurgical protocol for a subject with drug resistant epilepsy (Fig. 9 A).” • Why was k = 10 used in the analysis of electrode data? If results change a lot between k = 1 and k = 10, it may be because data were not optimally subsampled. In Figure 9 the new B subplot shows the neighborhood-dependence of the estimates. From this it can be seen, that neighborhood is required to get above the noise level to obtain stable estimates as an alternative to further subsampling. • In Fig. S2, I would stress that panel c shows that the error distribution after correction is approximately Gaussian. We added panels E and F to Sfig 2 to address this caveat, E shows that the shape of the error is indeed Gaussian-like. Panel F shows that it can not be rejected that the error-samples are generated from Gaussian distributions. We observed a diagonal gradient of intrinsic dimensions on the cortical grid (Gr)( (p. 7). It is difficult to interpret a diagonal gradient (as opposed to a vertical gradient representing cortical hierarchy). We rephrased this part of the results section (line 389): “We found several characteristic differences in the dimension patterns between normal and control conditions. In interictal periods (Fig. 9 C), we found the lowest average dimension value at the FbB2 position on the froto-basal grid. Also, we observed gradually increasing intrinsic dimensions on the cortical grid (Gr) between the F1 and D6 channels. In contrast, we observed the lowest dimension values at the right interhemispherial strip (JIH 1-2) and on the temporo-basal electrode strip (JT 3-5) close to the hippocampus, and the gradient on the cortical grid altered during seizures (Fig. 9 D). Comparing the dimensions between seizure and control periods, majority of the channels showed lower dimensions during seizures. This decrease was most pronounced close to the hippocampal region (strip JT) and the parietal region mapped by the main electrode grid (GrA-C). Curiously, the intrinsic dimensionality became higher at some frontal (GrE1-F2) and fronto-basal (FbA1-B3) recording sites during seizure (Fig. 9 A and E).” "
Here is a paper. Please give your review comments after reading it.
296
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Peer production online communities are groups of people that collaboratively engage in the building of common resources such as wikis and open source projects. In such communities, participation is highly unequal: few people concentrate the majority of the workload, while the rest provide irregular and sporadic contributions. The distribution of participation is typically characterized as a power law distribution. However, recent statistical studies on empirical data have challenged the power law dominance in other domains. This work critically examines the assumption that the distribution of participation in wikis follows such distribution. We use statistical tools to analyse over 6,000 wikis from Wikia/Fandom, the largest wiki repository. We study the empirical distribution of each wiki comparing it with different well-known skewed distributions. The results show that the power law performs poorly, surpassed by three others with a more moderated heavy-tail behavior. In particular, the truncated power law is superior to all competing distributions, or superior to some and as good as the rest, in 99.3\% of the cases. These findings have implications that can inform a better modeling of participation in peer production, and help to produce more accurate predictions of the tail behavior, which represents the activity and frequency of the core contributors. Thus, we propose to consider the truncated power law as the distribution to characterize participation distribution in wiki communities. Furthermore, the truncated power law parameters provide a meaningful interpretation to characterize the community in terms of the frequency of participation of occasional contributors and how unequal are the group of core contributors. Finally, we found a relationship between the parameters and the productivity of the community and its size.</ns0:p><ns0:p>These results open research venues for the characterization of communities in wikis and in online peer production.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 31</ns0:head><ns0:p>Since the emergence of online communities, one of the major topics of interest is to understand the 32 different levels in which members participate: that is, the distribution of participation, also named 33 distribution of work, or effort. 
Far from classical organizational structures, and more similar to volunteer-driven social movements, communities show an inherent participation inequality across their participants.</ns0:p></ns0:div> <ns0:div><ns0:p>Specifically in peer production communities, such as those in wikis and free/open source software, this issue has given rise to multiple research questions: the concentration of participation in an elite (Shaw and <ns0:ref type='bibr' target='#b29'>Hill, 2014;</ns0:ref><ns0:ref type='bibr' target='#b19'>Matei and Britt, 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Kittur et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b25'>Priedhorsky et al., 2007)</ns0:ref>, the degree of participation inequality <ns0:ref type='bibr' target='#b10'>(Fuster Morell, 2010;</ns0:ref><ns0:ref type='bibr' target='#b23'>Ortega et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b21'>Neis and Zielstra, 2014)</ns0:ref>, the characterization of who participates more <ns0:ref type='bibr' target='#b13'>(Hill and Shaw, 2013;</ns0:ref><ns0:ref type='bibr' target='#b26'>Reagle, 2012)</ns0:ref>, the process of changing user roles <ns0:ref type='bibr'>(Arazy et al., 2015;</ns0:ref> <ns0:ref type='bibr' target='#b24'>Preece and Shneiderman, 2009)</ns0:ref>, or the evolution of participation depending on multiple factors <ns0:ref type='bibr' target='#b33'>(Vasilescu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b28'>Serrano et al., 2018)</ns0:ref>. In the issue at hand, i.e. participation, the two quantified dimensions are the number of contributions, and the share of people in the community that has made such number of contributions. The relationship between them is negative, that is, the higher the number of contributions, the smaller the share of contributors that has made such number of contributions. According to this idea, a small amount of contributions would be common, while larger amounts would be rarer. This fits with the assumption of participation inequality in which most members of the community tend to participate very little (occasional contributors), while a few of them account for an enormous amount of contributions (core contributors). In fact, the statement is not ungrounded, since several statistical studies focused on Wikipedia claim that the number of edits per user follows a power law distribution <ns0:ref type='bibr' target='#b18'>(Kittur et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b32'>Stuckman and Purtilo, 2011)</ns0:ref>, and other studies find similar behavior in free/open source communities <ns0:ref type='bibr' target='#b12'>(Healy and Schussman, 2003;</ns0:ref><ns0:ref type='bibr' target='#b30'>Sowe et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b27'>Schweik and English, 2012;</ns0:ref><ns0:ref type='bibr' target='#b8'>Cosentino et al., 2017)</ns0:ref> or other peer production communities <ns0:ref type='bibr' target='#b36'>(Wu et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wilkinson, 2008)</ns0:ref>. 1 Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows an example of the power law. 2 If we consider that it represents a distribution of participation, the distribution models how frequent it is to find a person that contributes X times. It can be seen that the frequency quickly declines as X grows, because most users only contribute a few times.
However, it shows how we can find a small number of contributors with a very high number of contributions.</ns0:p><ns0:p>The power law implies an underlying regularity in the behavior of the phenomenon under study. In particular, the power relationship should hold independently of which particular scale we are looking at. This may not be the case in real data, where the tails may exhibit a more conservative behavior, and other distributions may suit better <ns0:ref type='bibr' target='#b20'>(Mitzenmacher, 2004)</ns0:ref>.</ns0:p><ns0:p>While the power law has been considered a suitable distribution in many fields including online communities <ns0:ref type='bibr' target='#b17'>(Johnson et al., 2014)</ns0:ref> and organizations <ns0:ref type='bibr' target='#b1'>(Andriani and McKelvey, 2009)</ns0:ref>, recent studies in statistics challenge its apparent pervasiveness <ns0:ref type='bibr' target='#b7'>(Clauset et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b4'>Broido and Clauset, 2019)</ns0:ref>. According to these studies, power law distributions are complicated to detect because fluctuations occur in the tail of the distribution, and because of the difficulty of identifying the range over which power law behavior holds.</ns0:p><ns0:p>1 Other studies just mention a highly skewed distribution or similar statements without further specification <ns0:ref type='bibr' target='#b14'>(Howison et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b9'>Crowston et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b3'>Barbrook-Johnson and Tenorio-Fornés, 2017)</ns0:ref>.</ns0:p><ns0:p>2 Original picture by Hay Kranen, PD, available at Wikimedia Commons. Our version is a slight variation of the original one.</ns0:p><ns0:p>In some cases this difference between a power law distribution and other heavy-tailed distributions may not be relevant, since the former may be enough to roughly represent the participation. However, using the power law as a statistical characterization of wiki participation can lead to unrealistic predictions regarding the likelihood of extremely active core contributors. A power law is a relationship in which a relative change in one quantity gives rise to a proportional relative change in the other quantity, independent of the initial size of those quantities. In the peer production field, the regularity of the power law would imply that the relationship that holds for the occasional contributors would be the same as that for the core members, which may be a strong assumption for a community when it comes to predicting the activity level and the frequency of core contributors. In other words, the tail of the distribution, which represents the activity of core contributors, may not have as extreme a behavior as the power law suggests, i.e., the number of extremely active contributors and their productivity may not be as high. If that is the case, more conservative distributions with a softer tail, such as the truncated power law sketched below, would provide a better fit.
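To make the contrast between these candidate shapes concrete, the commonly used textbook forms of the densities discussed here can be written as follows. These are standard parameterizations given purely as a reading aid; they are not claimed to be the exact forms fitted by the software used later in the study:

\[
p_{\mathrm{PL}}(x) \propto x^{-\alpha}, \qquad
p_{\mathrm{TPL}}(x) \propto x^{-\alpha} e^{-\lambda x}, \qquad
p_{\mathrm{LN}}(x) \propto \frac{1}{x}\exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \qquad
p_{\mathrm{SE}}(x) \propto x^{\beta - 1} e^{-\lambda x^{\beta}}, \qquad
p_{\mathrm{EXP}}(x) \propto e^{-\lambda x},
\]

all defined for x ≥ x_min. The exponential factor e^{-λx} in the truncated power law is what moderates the tail relative to the pure power law, which is precisely the 'more conservative' tail behavior referred to above.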
In fact, such distribution was found suitable in a comparative analysis of the ten largest Wikipedias <ns0:ref type='bibr' target='#b22'>(Ortega, 2009)</ns0:ref>.</ns0:p><ns0:p>According to these premises, it seems reasonable to question the characterization of the participation in peer production as a power law, and consider other heavy-tailed distributions. Thus, we will apply the statistical tools proposed by <ns0:ref type='bibr' target='#b4'>Broido and Clauset (2019)</ns0:ref> to study peer production distributions, and more precisely participation distributions from wiki communities. The statistical tools proposed in that work provide a test to determine whether a distribution provides a better fit than another with respect to the empirical data provided. Thus, we will use them to analyze whether one candidate distribution consistently provides a better fit than the others. The candidates will be five well-known distributions, namely, the power law, three heavy-tailed distributions with a tail more conservative than the power law (truncated power law, stretched exponential and log-normal) and a non-heavy tailed distribution (exponential), following the example by <ns0:ref type='bibr' target='#b4'>Broido and Clauset (2019)</ns0:ref>.</ns0:p><ns0:p>In our work, we focus on Fandom/Wikia, the largest wiki repository which provides a large and diverse sample of peer production communities. Fandom/Wikia accounts for over 300,000 wikis. However, because of constraints of the statistical methods used, which require a certain minimum of observations, we will use for our analysis the &#8764;6,000 wikis which have at least 100 registered contributors.</ns0:p><ns0:p>The rest of the article proceeds as follows. Section 'Methodology and Data Collection' details the process followed to perform the statistical analysis and for the data collection. Section 'Results of the statistical tests' shares the results of the statistical study of user contributions, and discusses its results through the explanation of series of graphs. The next section offers an analysis of the winning distribution, i.e. the truncated power law, and proposes an interpretation of its parameters and how they characterize the different wikis under study. The paper closes with some concluding remarks and future work.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY AND DATA COLLECTION Methodology</ns0:head><ns0:p>Following <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b4'>Broido and Clauset (2019)</ns0:ref>, our study is divided in two analyses.</ns0:p><ns0:p>First, in order to assess if the power law distribution is a plausible model for the given empirical data, we use the authors' goodness of fit test. Then, we perform an exhaustive analysis in order to identify which distribution better describes each wiki within the data set. These two methods are explained in this section.</ns0:p><ns0:p>Goodness of fit <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref> propose a statistical test in order to asses if a distribution plausibly follows a power law. First, the power law distribution is used to model the data, finding its slope, or &#945; parameter, and the minimum value from which the power law behavior is observed, or x min parameter.</ns0:p><ns0:p>Afterwards, in order to compare the empirical data to different distributions, we create a set of comparable synthetic data sets that follow the distribution (i.e. have the same parameters). 
This allows us to compare the real data with the synthetic data, and see how they deviate from each other. This method is considered more accurate than comparing the deviation with an ideal distribution, which real data may never fit. Thus, we artificially create 100 synthetic data sets per wiki, for each of the five distributions.</ns0:p><ns0:p>Then, the distance of the real data to its power law model is compared with the distance of the synthetic data sets to their power law models. Note that the synthetic data sets are also fit to power law models to compete in similar conditions. These distances are calculated using the Kolmogorov-Smirnov (KS) statistic. The goodness-of-fit test returns a p-value between 0 and 1 representing the share of synthetic data set fits that were outperformed by the real data fit. E.g. a p-value of 0.4 represents that the real data fits the power law better than 40% of the synthetically generated data. This p-value is then used to decide whether to rule out the hypothesis of the data following a power law. In our study, we rule out the power law model hypothesis if the p-value is smaller than 0.1, as <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b4'>Broido and Clauset (2019)</ns0:ref> do. According to <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref>, for the result to be accurate to within ε, we should generate about ε^(-2)/4 samples. Our study generates 100 synthetic data sets per test; therefore, the results are within an ε of 0.05.</ns0:p><ns0:p>When the number of observations is relatively small, this goodness of fit test cannot rule out a power law model in those cases in which the data follows other distributions such as the log-normal or exponential. For instance, for data following an exponential distribution with λ = 0.125, at least 100 observations are needed for the average p-value to drop below our threshold of 0.1, while for data following a log-normal distribution with µ = 0.3, the average p-value drops below 0.1 from around 300 observations <ns0:ref type='bibr' target='#b7'>(Clauset et al., 2009)</ns0:ref>. Thus, high p-values in these distributions with a small number of observations should not be interpreted as the data following a power law. Moreover, as studied in the following section, even if a distribution plausibly follows a power law, other distributions may fit the data better.</ns0:p><ns0:p>This work considers wikis with more than 100 observations (i.e. wikis with over 100 registered contributors) for the p-value study for two reasons. First, as already mentioned, the goodness-of-fit test would not be able to rule out the power law. Second, as the wikis with fewer than 100 contributors represent more than 98% of wikis (see Section 'Methodology and Data Collection'), the percentage of wikis passing the test due to the small number of observations may further obfuscate the result about the adequacy of the power law.</ns0:p><ns0:p>Summarizing, our study considers distributions with more than 100 observations (i.e. wikis with over 100 registered contributors), performs the goodness-of-fit tests proposed by <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref>, and considers those with a p-value greater than or equal to 0.1 (±0.0158) 3 to plausibly follow a power law. A schematic version of this bootstrap procedure is sketched below.
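The following Python sketch illustrates the kind of bootstrap just described. It uses the powerlaw Python package rather than the poweRlaw R package actually employed for this test, and it is only an approximation of the procedure: for simplicity it simulates only the tail above x_min (Clauset et al. also resample the body below x_min), and the function name, the number of simulations and the example usage are assumptions.

import numpy as np
import powerlaw

def power_law_gof_pvalue(edits, n_sims=100):
    """Approximate bootstrap p-value for a discrete power law fit."""
    edits = np.asarray(edits)
    fit = powerlaw.Fit(edits, discrete=True)      # estimates alpha and x_min
    d_real = fit.power_law.D                      # KS distance of the real data
    n_tail = int((edits >= fit.xmin).sum())
    worse = 0
    for _ in range(n_sims):
        # synthetic sample drawn from the fitted power law (same alpha, x_min, tail size)
        synth = fit.power_law.generate_random(n_tail)
        synth_fit = powerlaw.Fit(synth, discrete=True)
        if synth_fit.power_law.D >= d_real:       # synthetic fit is at least as bad as the real one
            worse += 1
    return worse / n_sims                         # >= 0.1 means the power law cannot be ruled out

# usage with a hypothetical vector of edits per registered contributor of one wiki:
# p = power_law_gof_pvalue(edits_of_one_wiki)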
See</ns0:p><ns0:p>Section 'Results of the statistical tests' for more details.</ns0:p><ns0:p>This study was performed using the poweRlaw R package <ns0:ref type='bibr' target='#b11'>(Gillespie, 2014)</ns0:ref>. Besides, the R script source code, required for applying these statistical tests to our data, is available as free/open source software to facilitate replication. 4</ns0:p></ns0:div> <ns0:div><ns0:head>Likelihood-ratio test</ns0:head><ns0:p>The previously described goodness of fit test provides a tool to decide whether to rule out a power law distribution as a good model for the data. However, even if a power law model is not rejected, there may be better alternative distributions. The likelihood-ratio test allows us to compare the likelihood of the empirical data fitting two competing distributions. That is, it establishes which distribution is more likely to fit the data, and whether the difference is significant.</ns0:p><ns0:p>Following the approach described by <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref>, our study compares the likelihood of 5 different skewed distributions. Our hypothesis is that the power law is too 'ambitious' for the observations of the tail. We also expect the distribution to be heavy tailed, i.e. with a decrease of the tail slower than in an exponential distribution. In addition to these two distributions that frame the expected tail of our data, our study adds three skewed distributions that would lie in between, presenting a slower decrease in the tail than the exponential but a stronger decrease than the power law: the truncated power law (also named power law with exponential cut-off), the log-normal and the stretched exponential. Both the truncated power law and the log-normal distributions have two terms, while the power law, exponential and stretched exponential have only one. The number of terms of the distributions is relevant, since it is a factor for fitness.</ns0:p><ns0:p>The study exhaustively compares, for each wiki, the fit of the data to those five skewed distributions (power law, truncated power law, log-normal, exponential and stretched exponential), and identifies when the likelihood differences are statistically significant. It uses the Vuong method <ns0:ref type='bibr' target='#b34'>(Vuong, 1989)</ns0:ref>, which considers the variance of the data, and returns a p-value that states if the likelihood differences may be due to the data fluctuations, or are significant in order to favor one distribution over the other. 5 As Clauset et al.</ns0:p><ns0:p>(2009), we consider significant the differences with a p-value smaller than 0.1, i.e. those that have less than 10% probabilities of being a result of the data fluctuations. Additionally, in order to avoid over-fitting to the tail of the distribution, we force the method to fit every contributor with at least 10 contributions. If we do not impose this condition, the method could exclude many contributors in order to find a better fit for the most active contributors, for instance a fit for the people with more than 500 contributions.</ns0:p><ns0:p>This study was performed using the Powerlaw Python package <ns0:ref type='bibr' target='#b0'>(Alstott et al., 2014)</ns0:ref>. Similar to the previous subsection, the Python script source code, required for the performed analysis, is available as free/open source software to facilitate replication. 
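As an illustration of how these pairwise comparisons can be run with the Powerlaw Python package mentioned above, the sketch below tests the five candidates against each other for a single wiki. The function and variable names are illustrative; the xmin=10 argument corresponds to forcing the fit to cover every contributor with at least 10 contributions, and 0.1 is the significance threshold used in the study.

from itertools import combinations
import powerlaw

CANDIDATES = ["power_law", "truncated_power_law",
              "lognormal", "stretched_exponential", "exponential"]

def compare_candidates(edits, alpha=0.1):
    """Pairwise Vuong likelihood-ratio tests between the five candidate distributions."""
    fit = powerlaw.Fit(edits, discrete=True, xmin=10)   # fit every user with >= 10 edits
    results = {}
    for a, b in combinations(CANDIDATES, 2):            # the 10 possible pairs
        r, p = fit.distribution_compare(a, b, normalized_ratio=True)
        if p >= alpha:
            results[(a, b)] = "inconclusive"             # difference may be due to fluctuations
        else:
            results[(a, b)] = a if r > 0 else b          # the sign of r points to the winner
    return results

# example usage with a hypothetical list of edit counts for one wiki:
# for pair, winner in compare_candidates(edits_of_one_wiki).items():
#     print(pair, "->", winner)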
6</ns0:p></ns0:div> <ns0:div><ns0:head>Data collection</ns0:head><ns0:p>This work investigates the distribution of participation in wikis from Wikia/Fandom studying the number of edits per user. Wikia/Fandom is a suitable research object to draw conclusions about participation in wikis in general. As argued by <ns0:ref type='bibr' target='#b29'>Shaw and Hill (2014)</ns0:ref>, Wikia is an ideal setting in which to study peer production. Wikia only hosts publicly accessible, openly-licensed, volunteer-produced, peer production projects. To date, it is the largest and most diverse repository of open knowledge peer production, with a rich ecosystem of a broad diversity of topics, languages, community and wiki sizes. Furthermore, Wikia never restricts viewership, nor participation (except that from spammers or vandals). Wikia hosts some of the largest and most successful wikis in multiple topics and languages, such as Marvel or Star Wars fandom wikis, LyricWiki on song lyrics, Proteins scientific wiki, or AmericanFootballDatabase.</ns0:p><ns0:p>To collect our data we used the publicly available Wikia census described by Jim&#233;nez-D&#237;az et al.</ns0:p><ns0:p>(2018) and retrieved on the 20th of February 2018. 7 However, as explained in the methodological section, we limit our analysis to wikis with at least 100 registered contributors which have done at least one edit, and excluding bot users.</ns0:p><ns0:p>Thus, starting from this census data, and complementing it with additional information as explained below, we have created a new data set to study the distribution of participation, i.e. which is the distribution of edits made by registered contributors, excluding bots. By only including registered contributors we exclude anonymous contributors, which can be identified by their IP address. However, it is problematic to unambiguously match the IP address to a single anonymous contributor and vice versa. Furthermore, it is also difficult to consider an anonymous contributor as a member of the wiki community.</ns0:p><ns0:p>This data set is complete, since it includes all the Wikia/Fandom wikis with at least 100 contributors which made at least one contribution, resulting in 6,676 wikis, as explained in detail below.</ns0:p><ns0:p>The mentioned Wikia census provides information of &#8764;300,000 wikis. However, the census does not provide information on the number of edits of each participant in each wiki. Thus, such information needs to be retrieved to complement the data set.</ns0:p><ns0:p>Therefore, in order to retrieve the required data, we need to query the API of each of the wikis hosted in Wikia. Spefically, we need to query the Special:ListUsers API endpoint that every MediaWiki wiki has. 8 Such Special:ListUsers page lists the information of every registered user in a given wiki, e.g. username, number of edits, groups she belongs to, or date of last edit made. A perl script was developed in order to use that endpoint and obtain the number of edits performed by each registered user. In particular, the script queries the endpoint making a request for all users. Afterwards, it filters out the bot users, removing the users belonging to the bot and bot-global groups. As with the previous scripts, this perl script source code is available as free/open source software to facilitate replication. 9</ns0:p><ns0:p>The data collection was performed on November 6, 2018 and it is publicly available. 
10 It contains information about 295,658 wikis, since 8,433 wiki endpoints were technically unavailable 11 .</ns0:p><ns0:p>This data, i.e. the census wikis with the edits information, was curated to avoid duplicates and to filter out wikis without human participation (i.e. bot only) and without statistical data provided by Wikia/Fandom. After removing them, the collection contains information about 282,039 wikis.</ns0:p><ns0:p>The reliability of the data collected is considered high. The edit numbers are as reliable as Wikia/Fandom publicly accessible statistics are (i.e. those from the Special:ListUsers endpoint). Furthermore, we have also made a consistent effort in bot identification in order to filter bots out, as they may alter the participation distribution.</ns0:p><ns0:p>For statistical reasons already explained in the methodological section, this work considers only wikis with at least 100 registered (non-bot) contributors. Thus, the number of considered wikis was further reduced to 6,676. Hence, this is not a sample, but the observed full population of Wikia/Fandom wikis with at least 100 registered users with contributions.</ns0:p><ns0:p>6 Likelihood-ratio test script: ANONYMIZED 7 Wikia census: https://www.kaggle.com/abeserra/wikia-census 8 Note all Wikia/Fandom wikis use the same wiki software, MediaWiki, maintained by the Wikimedia Foundation and used by its projects, including Wikipedia. 9 Script to retrieve user contributions: ANONYMIZED 10 ANONYMIZED 11 Wikis may be unavailable for a number of reasons, e.g. being removed from the platform, or having changed their name. Unavailable wikis represent 3.5% of the total wikis, constituting a small percentage of expected noise that should not compromise the results of the study.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS OF THE STATISTICAL TESTS</ns0:head><ns0:p>According to the goodness of fit test described in the methodological section, the power law is a plausible distribution (i.e. it cannot be ruled out) for 83% of the 6,676 wikis from Wikia/Fandom with at least 100 registered non-bot contributors. However, as explained in the same section, that does not mean that the power law is the best choice, since other distributions may fit the empirical data better.</ns0:p><ns0:p>Thus, we perform the likelihood-ratio test to compare the pairs of the five candidate distributions as explained above. The distributions are: power law, truncated power law, exponential, stretched exponential and log-normal. For each wiki, we perform likelihood-ratio tests comparing all the competing distributions against each other. That is, we perform 10 likelihood-ratio tests for each wiki, since there are 10 possible pairs.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> summarizes the results of these comparisons. The figure's pentagon apexes show each of the five considered distributions. An arrow from distribution A to distribution B represents the percentage of wikis in which distribution A was preferred over distribution B in the likelihood-ratio test, while the opposite arrow represents the percentage of wikis where distribution B was superior to distribution A.
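As an illustration of how these per-wiki pairwise comparisons can be run, the following minimal sketch uses the powerlaw Python package mentioned in the methodological section; the edit counts, the xmin threshold of 10 edits, and the loop over distribution pairs are illustrative assumptions rather than the exact script used in this study.

import itertools
import powerlaw

# Hypothetical edit counts for one wiki (one value per registered non-bot contributor).
edits_per_contributor = [1, 1, 2, 3, 5, 8, 13, 40, 120, 850]

# Fit the tail, forcing the fit to include every contributor with at least 10 edits,
# as described in the methodological section.
fit = powerlaw.Fit(edits_per_contributor, xmin=10, discrete=True)

candidates = ['power_law', 'truncated_power_law', 'lognormal',
              'exponential', 'stretched_exponential']

# The 10 pairwise likelihood-ratio (Vuong) tests for this wiki.
for a, b in itertools.combinations(candidates, 2):
    R, p = fit.distribution_compare(a, b, normalized_ratio=True)
    # R > 0 favors a, R < 0 favors b; the difference is significant when p < 0.1.
    print(a, b, round(R, 2), round(p, 3))

Repeating this loop over all wikis yields the kind of pairwise win percentages summarized in Figure 2.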
Note in some cases, the likelihood-ratio test may be inconclusive to determine which of the two distributions is better for a given wiki, and in those cases neither A nor B is superior. It is important to remark that the test being inconclusive means that both distributions fare similarly, which could mean that both are adequate or even that both are inadequate. For the sake of clarity, the figure omits the complementary percentage where the likelihood-ratio test was inconclusive, although it can be easily calculated. 12 The analysis of the figure results shows that the power law is not a strong contender, as it is rarely a more likely distribution than any of its competitors, with the exception of the exponential distribution, which is also overwhelmingly defeated by the rest of the candidates.</ns0:p><ns0:p>The defeat of the exponential distribution by all candidates means that a large tail of core contributors is clearly present in the wiki participation distributions, and thus that an exponential distribution, which is However, the power law being defeated by the rest of the heavy-tailed distributions means that the tail is not as heavy or large as a power law would predict. Hence, more moderated heavy-tailed distributions are required. This conclusion is similar to the one drawn in recent works that disprove the supposed prevalence of the power law in other domains <ns0:ref type='bibr' target='#b7'>(Clauset et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b4'>Broido and Clauset, 2019)</ns0:ref>.</ns0:p><ns0:p>Thus, a correct characterization of the distributions, in nearly all cases, lies in between the exponential and the power law distributions. Among the rest of the candidates, the truncated power law stands out, since as seen in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>, it is rarely beaten by its competitors: 2.16% against the stretched exponential, 2.08% against the log-normal, 0.18% against the exponential, and 0.04% against the power law distribution. Hence, the likelihood-ratio test clearly supports the truncated power law as the most appropriate distribution to characterize participation.</ns0:p><ns0:p>The appropriateness of the truncated power law is better appreciated when we aggregate the results of the likelihood-ratio tests for each wiki as shown in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. We count the cases where a candidate distribution won all the likelihood-ratio tests for each wiki, which means that that distribution is the right choice for that wiki. In addition, we also counted the times where a candidate distribution lost at least one test, which means that for that wiki the candidate distribution was not the best choice.</ns0:p><ns0:p>It is important to remark that only in 10 wikis (0.15%) no candidate distribution won any likelihoodratio test which means that they all were equally good (or, more precisely, bad) candidates. We have inspected these cases and they all exhibit uncommon participation distributions.</ns0:p><ns0:p>According to Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, the truncated power law is significantly better than all the candidates in 596 wikis out of the 6,676, i.e. approx. 9% of the wikis considered. While the rest of the distributions fare much worse: only the log-normal and stretched exponential distributions are the best candidates in 41 and 2 wikis, respectively. 
The power law and the exponential are not the best candidates for any wiki, which reinforces the idea of the suitability of a heavy-tailed distribution, but not as heavy as that of the power law.</ns0:p><ns0:p>According to the aggregated results in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, the truncated power law is not the best or among the best candidates in only 177 of the 6,676 wikis (2.65%); more precisely, it loses one test in 67 wikis (1%), two tests in 101 wikis (1.51%) and three tests in 9 wikis (0.1%). The rest of the distributions fare much worse, e.g. the log-normal can be ruled out as the best candidate in 17.36% of the wikis and the stretched exponential in 22.73%. This result reinforces the idea of the truncated power law being the distribution of choice when trying to characterize the participation distribution in wikis, because it seems difficult to find a better one for most of the cases.</ns0:p><ns0:p>We show an example of a participation distribution where the truncated power law won all the tests in Figure <ns0:ref type='figure'>3</ns0:ref>. The figure shows a log-log plot of the complementary distribution function where the X axis represents the logarithm of the number of edits in the wiki and the Y axis the inverse cumulative relative frequency, i.e. the percentage of contributors that made at least X edits in the wiki. The figure displays the observations (grey squares) and the fitted distributions, i.e. the truncated power law and all the candidate distributions. The observations on the left side of the graph represent the contributors with fewer edits, while those towards the right are the core contributors that made the most edits, i.e., the tail of the participation distribution.</ns0:p><ns0:p>In this figure, we can first observe the different tails of the considered distributions. While the exponential has the most conservative tail, the power law is the one with the heaviest tail, and the rest of the distributions have tails in between them. Regarding the data fitting, the exponential with its bounded tail is not able to model the community behavior at all. The rest of them fit the initial slope, but only the truncated power law is able to successfully grasp the tail behavior, because the others predict a heavier tail.</ns0:p></ns0:div> <ns0:div><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. Complementary cumulative distribution function of participation of a wiki and the fitted distributions. The X axis represents the logarithm of the number of edits and the Y axis the inverse cumulative relative frequency, i.e. the percentage of contributors that made at least X edits in the wiki.</ns0:p><ns0:p>Note the participation distribution in Figure <ns0:ref type='figure'>3</ns0:ref> is one of the 9% of examples in which the truncated power law wins all tests. Still, as mentioned, in most of the cases (97.35%), the truncated power law is not defeated by any other distribution. Such cases typically correspond to participation distributions with tails that can be conveniently fitted by the truncated power law, but also by the log-normal and/or the stretched exponential.
So, according to this statistical evidence, the truncated power law is in fact the most adequate distribution for wiki participation.</ns0:p><ns0:p>The statistical analysis carried out shows that the truncated power law is the best distribution to characterize the participation in wikis among those considered, as it is barely rejected and is the only proper fit in 9% of the cases. In the next section, we will interpret the parameters of this distribution in the context of participation and will relate them with the characteristic features of the wiki communities.</ns0:p></ns0:div> <ns0:div><ns0:head>ANALYSIS OF THE TRUNCATED POWER LAW FOR CHARACTERIZING PARTICIPATION DISTRIBUTIONS</ns0:head><ns0:p>In this section, we will explore the diversity of participation distributions that are modelled by the truncated power law, but before that, we need to better understand the effect and interpretation of the parameters that define the truncated power law.</ns0:p></ns0:div> <ns0:div><ns0:head>Interpretation of the truncated power law parameters</ns0:head><ns0:p>The truncated power law is defined as a power law multiplied by an exponential: x^(−α) e^(−λx). In the log-log plot, the parameter α is related to the slope of the power law function, while the parameter λ is related to the starting point and/or the steepness of the decay in the tail.</ns0:p><ns0:p>As a result, lower alphas can be associated with a higher frequency of participation of occasional contributors. As the number of contributions increases, their frequency decreases less conspicuously than in the case of higher alphas. In other words, in communities with lower alphas the frequency of contributors with more contributions decreases less significantly.</ns0:p><ns0:p>On the other hand, higher lambdas can be associated with more pronounced deviations from the power law in the tail, which means that the most active contributors are less frequent than the power law would predict. Thus, higher lambdas relate to less inequality among active contributors than predicted by the power law.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref>, we show the truncated power law of nine wikis with different α and λ parameters that illustrate how diverse the participation distributions in wikis may be. From left to right we show three plots, each of them with three participation distributions with roughly similar α values (the alpha values grow from the left to the right plot). In each plot, we show participation distributions with similar α but with different λ values. This figure illustrates the idea that the initial slope of the distributions depends on the α values, as it is steeper from the left to the right plots. Besides, in each plot we can appreciate that higher values of the λ parameter are associated with a more pronounced and earlier decay, or, conversely, smaller values allow the power law relationship to prevail longer.</ns0:p></ns0:div> <ns0:div><ns0:head>Relationships of the parameters with features from the wiki communities</ns0:head><ns0:p>In this section we explore whether the α and λ parameters are related to some features from wiki communities, namely, the number of edits and the number of participants.
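For this exploration, the fitted α and λ values first have to be collected for every wiki. The sketch below shows one way this could be done with the powerlaw package, assuming the edit counts are already grouped per wiki; the dictionary contents and the generic parameter1/parameter2 attribute names (which, for the truncated power law, correspond to α and λ) are illustrative assumptions, not the code used in the study.

import powerlaw

# Hypothetical input: edit counts of each registered non-bot contributor, per wiki.
edits_by_wiki = {
    'wiki_a': [1, 1, 2, 4, 7, 12, 35, 90, 410],
    'wiki_b': [1, 2, 2, 3, 6, 11, 25, 60, 150, 700],
}

parameters = {}
for wiki, edits in edits_by_wiki.items():
    fit = powerlaw.Fit(edits, xmin=10, discrete=True)
    tpl = fit.truncated_power_law
    # parameter1 is the slope-related α, parameter2 the decay-related λ.
    parameters[wiki] = (tpl.parameter1, tpl.parameter2)

Each resulting (α, λ) pair gives one point in the scatter plots described next.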
We will use scatter plots in which each dot represents a wiki in a 2-dimensional plot. The plot axes represent the values of the &#945; and &#955; parameters, and the dot is colored according to a color gradient related with the specific wiki feature.</ns0:p><ns0:p>More precisely, in Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> the color represents the number of edits, and in Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref>, it represents the number of contributors of the wiki. For the sake of clarity, the plot will only display the wikis where the truncated power law distribution won all the likelihood-ratio tests.</ns0:p><ns0:p>The scatter plots show a cloud of dots with no clear relationship among the parameters. The relationship could be inverse, since the cloud rarely includes wikis with large &#945; and &#955; values or wikis with small &#945; and &#955; values. However, the variability is very high to see a clear pattern.</ns0:p><ns0:p>When studying the relationship of the parameters with the size of the community in Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>, we can observe how the &#955; parameter seems to be inversely related to the number of edits of the wiki, as the largest wikis are distributed in the lower part of the figure and vice versa. In other words, larger wikis (those with millions of edits) have smaller lambdas, which means that the decay in the tail of their participation distributions is not as significant. It reveals that, given an alpha value, there are more core contributors than in wikis whose participation distributions have higher lambda values, and that results in more productive communities in terms of edits. On the contrary, wikis with higher lambdas have a less populated elite of core contributors which results in smaller wikis in terms of edits.</ns0:p><ns0:p>At Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref>, we can observe that the number of contributors of the wiki is related to the combination of Manuscript to be reviewed</ns0:p><ns0:p>Computer Science distribution of participation of wikis with smaller communities (yellow dots).</ns0:p><ns0:p>We cannot conclude if higher inequality is cause or consequence of larger communities and vice versa.</ns0:p><ns0:p>Such confirmation would require further research. However, it seems that there is a clear link between community size and participation distribution.</ns0:p><ns0:p>Furthermore, it is important to bear in mind that we are observing the participation distribution during the whole life of the wiki, that is, the aggregated effect of different communities that interacted in the wiki across time, since new contributors come and other leave, or contribute in different degrees, throughout their evolution. In fact, larger communities are usually older communities. In this sense, it would be interesting to observe how the yearly participation distribution in these wikis evolved, because the highlighted inequality could potentially be the result of the aggregation throughout the years of more egalitarian distributions of participation.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUDING REMARKS</ns0:head><ns0:p>In this work, we have critically studied the distribution of participation in wikis. We aimed to analyze Wikia/Fandom, which hosts &#8764;300,000 wikis. From those, we selected the 6,676 wikis with at least 100 registered contributors to perform our statistical analysis. 
This is considered an extensive and diverse population, appropriate for an analysis following the approach defined by <ns0:ref type='bibr' target='#b7'>Clauset et al. (2009)</ns0:ref>. According to our results, the power law is not an appropriate distribution for wiki participation, as it predicts that core contributors are more frequent and more active than the observed in these communities. This contradicts the bulk of the peer production literature, which refers to the power law as the reference distribution when discussing about contributor participation.</ns0:p><ns0:p>In our statistical analysis we have considered potential alternatives, and from these distributions, the truncated power law gives clearly the best fit with the empirical data. Consequently, it should be considered as the distribution of participation of choice when characterizing wiki communities. Of course, it may not be adequate for some specific communities, and yet it has been able to characterize effectively the vast majority of them, while the other candidates performed significantly worse. These findings have implications that can inform a better modeling of participation in peer production, and help to produce more accurate predictions of the tail behavior, that is, predictions about the frequency and the activity level of the core contributors.</ns0:p><ns0:p>In our analysis, we have also found that the parameters of the truncated power law distribution (that govern the slope and the decay of the power law relationship in a wiki project) are related with the number of members in the community and the number of edits in the project. However, the reasons behind these findings deserve deeper consideration and are a matter of future research.</ns0:p><ns0:p>The prevalence of the truncated power law as the distribution of choice for characterizing the participation distribution in wikis has several implications. For instance, it means that the truncated power law fits better, especially concerning the frequency and the activity level of the core contributors. The change of slope of the truncated power law may also serve to empirically determine a clear division between core and non-core contributors instead of using arbitrary divisions as in other studies <ns0:ref type='bibr' target='#b18'>(Kittur et al., 2007)</ns0:ref>.</ns0:p><ns0:p>Further research may provide insights on how and why the inner dynamics change, and how we can study better the different emergent roles within peer production communities.</ns0:p><ns0:p>In a truncated power law, the frequency and activity level of core contributors, i.e. the highly active members, is smaller than that predicted by a power law with the same slope. That means that, when looking at the distribution tail, we can observe a sharper decrease in the frequency of extremely active contributors as the edit activity increases.</ns0:p><ns0:p>The reasons behind this fact need to be determined. 
They could be related to community dynamics such as some kind of elitism that prevents more people from being as involved as those more active in the community, or that many active contributors experience burnout at some point and cease or decrease their activity level <ns0:ref type='bibr' target='#b15'>(Jiang et al., 2018)</ns0:ref>, or even to the fact that it is not possible to find people as productive as a power law distribution predicts for certain participation levels.</ns0:p><ns0:p>Still, the difference in the participation level between core and non-core contributors is remarkable and it seems to reinforce the idea that core contributors are somehow special, in the sense that there is a qualitative change in their work and motivations <ns0:ref type='bibr' target='#b6'>(Burke and Kraut, 2008)</ns0:ref> and thus higher barriers to join them, and/or that the elitization of the core leads to oligarchies <ns0:ref type='bibr' target='#b29'>(Shaw and Hill, 2014)</ns0:ref>.</ns0:p><ns0:p>The approach followed by this work has several limitations. We need to be cautious with the generalizability of our findings beyond Wikia/Fandom, i.e. to all wiki communities or to peer production communities in general. That is, could we argue that the distribution of participation in peer production is a truncated power law? We cannot prove that empirically, and yet we have a good base for cautious claims in that regard, similar to other generalizations performed in the field, e.g. by <ns0:ref type='bibr' target='#b29'>Shaw and Hill (2014)</ns0:ref>. That is, considering the significant size and diversity of the sample used, there is good evidence for potential generalizability. In order to support this generalization, these results would need to be validated in other projects, such as the Wikimedia Foundation projects, as well as in other peer production communities such as Free/Open Source Software projects. Thus, we encourage other researchers to replicate our approach with other peer production communities.</ns0:p><ns0:p>Furthermore, the statistical analysis methods employed require a certain number of observations to have conclusive results, which constrains their applicability for studying the participation distribution of wikis with small communities. Despite having nearly 300,000 wikis in Wikia, most of them have under 100 registered contributors and were discarded, using 'only' 6,676 wikis in the analysis. For wikis with smaller communities, statistical methods may find it difficult to provide conclusive results, as the differences are subtle and mostly related to the tail behavior.</ns0:p><ns0:p>We have analyzed the participation in the communities aggregated through time (years), that is, accumulating the participation of all the members from the beginning. However, the members of a wiki community change through time, as do the participation dynamics. The participation distribution could be different when analyzed in a smaller time window, such as a year.</ns0:p><ns0:p>We have already defined several potential lines for future work, but we would like to mention those that we consider more interesting. First, it would be relevant to use a different base population, in order to appropriately generalize for peer production communities and not just wikis. For instance, we could analyze in a similar manner communities from Github, Wikimedia Foundation projects, or Stack Exchange.
Second, it would be useful to perform a temporal analysis with a rolling time window, in order to understand how these distributions evolve over time. This is especially relevant if we consider the evolution of the truncated power law parameters and how they relate with participation dynamics and inequality. In fact, we can highlight the importance to deepen the study the characterization of wikis based on their truncated power law parameters. That is, it would be interesting to cluster similar wikis and explain the causes or consequences of the different typologies. Moreover, we could explore how they relate with factors such as maturity stage, community dynamics and sustainability.</ns0:p><ns0:p>Our work asserts the truncated power law is probably the most appropriate distribution to represent the distribution of participation in wikis from Wikia. Our results can be better understood if they are observed in the context of a previous study that questioned the prevalence of power law in several fields <ns0:ref type='bibr' target='#b7'>(Clauset et al., 2009)</ns0:ref> and the ground-breaking finding that the power law was indeed rare in real-life networks <ns0:ref type='bibr' target='#b4'>(Broido and Clauset, 2019)</ns0:ref>. Our finding will thus open new lines of research, revisiting old assumptions in the field, exploring further the causes behind the observed structural change in core contributor participation and the relationships with the sizes of the community and the project and other factors behind the behavior.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Power law distribution. For participation, the X axis represents the number of contributions made by a person and the Y axis the number of persons that made X contributions.</ns0:figDesc><ns0:graphic coords='4,183.09,63.78,330.87,172.05' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. Results of the likelihood-ratio test between the five considered distributions for registered contributors. The distributions considered are: power law (PL), truncated power law (TPL), log-normal (LN), exponential (EXP) and stretched exponential (SEXP). Each arrow from A to B has the percentage of cases in which A was superior than B. The figure shows in a darker color the arrow with the higher percentage for each pair of distributions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Complementary cumulative distribution functions in logarithmic scales of truncated power laws. Each sub-figure plots three wikis with similar &#945; parameter, adopting smaller values in the left plot, average values in the middle and higher values in the right. The X axis represents the logarithm of number of edits and the Y axis the inverse cumulative relative frequency the percentage of contributors that made at least X edits in the wiki.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Scatter plot of the TPL-distributed wikis where the color represents the number of edits.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 
Scatter plot of the TPL-distributed wikis where the color represents the number of contributors.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>do, i.e. if the probability of obtaining a worse fit by chance is smaller than 10%. The number of synthetic data sets used to calculate the p-value determines the accuracy of the result. Following Clauset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63802:1:1:NEW 25 Oct 2021)</ns0:cell><ns0:cell>3/14</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Aggregated results of the likelihood-ratio tests for each wiki counting the cases where a candidate distribution wins all tests and loses at least one test not able to represent heavy tails, is not a good candidate.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Power law</ns0:cell><ns0:cell>0 (0%)</ns0:cell><ns0:cell>2816 (42,18%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Truncated power law</ns0:cell><ns0:cell>596 (8.93%)</ns0:cell><ns0:cell>177 (2,65%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Log-normal</ns0:cell><ns0:cell>41 (0.61%)</ns0:cell><ns0:cell>1159 (17.36%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Stretched exponential</ns0:cell><ns0:cell>2 (0.03%)</ns0:cell><ns0:cell>1492 (22,35%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Exponential</ns0:cell><ns0:cell>0 (0%)</ns0:cell><ns0:cell>6578 (98.53%)</ns0:cell></ns0:row></ns0:table><ns0:note>12 In all cases, percentage of A &gt; B + percentage of A &lt; B+ percentage of inconclusive = 100% 6/14 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63802:1:1:NEW 25 Oct 2021) Manuscript to be reviewed Computer Science Distribution Wins all tests Loses at least one test</ns0:note></ns0:figure> <ns0:note place='foot' n='3'>The confidence interval is due to the test resolution that depends on the number of synthetic data sets considered.4 Goodness of fit tests script: ANONYMIZED 5 The method is adapted by Clauset et al's for nested distributions such as power law and truncated power law, where a family of distributions is a subset of the other. Such modified method, which we use as well, allows to state whether the larger family is indeed needed or both distributions are good models.4/14PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63802:1:1:NEW 25 Oct 2021)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='14'>/14 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63802:1:1:NEW 25 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Point by point response to reviewers We appreciate the editor and reviewers’ insightful remarks, which have helped us to improve the manuscript. Taking into account these comments the paper has been reviewed and enhanced. We believe that the present version is clearer than the former one. Below we provide detailed descriptions of the modifications carried out in response to each specific comment. Before that, we would like to propose a change in the manuscript title. The old title is “Participation in wiki communities: A statistical characterization” and we would like to change it to “Participation in wiki communities: reconsidering their statistical characterization” to better highlight our contribution that challenges the power-law assumption and proves it wrong for most cases. Editor's Decision Minor Revisions The paper should become acceptable once the minor revisions suggested by reviewer #2 are addressed. Most importantly, we would like to see a revised submission where the importance and implications of the research are more clearly stated. Reviewer #2 has also outlined a set of minor revisions that should be taken into account in revising the submission, as well as an annotated pdf where more specific suggestions are given. Thank you very much for considering the paper acceptable. We have addressed all the issues raised by both reviewers, paying particular attention to those by Reviewer #2 regarding the importance and implications of our research, as we will detail below. Comments from the reviewers Reviewer 1 Basic reporting The manuscript uses appropriate language, references relevant research, and follows a standard article structure. Experimental design The data collection and analysis strategy is well-motivated, clearly described, builds on prior work, extends into a new domain, and is rigorously applied. Validity of the findings The data was appropriately retrieved and analyzed using constructs/methods, the findings replicate other empirical findings about power laws in social systems, discussion emphasizes truncated power law as appropriate fit and the corresponding mechanisms and interpretations that generate them. We thank the reviewer for the assessment of our paper. Additional comments I would add the following references into the manuscript: Mitzenmacher, M. (2004). A Brief History of Generative Models for Power Law and Lognormal Distributions. Internet Mathematics, 1(2), 226–251. https://doi.org/10.1080/15427951.2004.10129088 Andriani, P., & McKelvey, B. (2009). Perspective—From Gaussian to Paretian Thinking: Causes and Implications of Power Laws in Organizations. Organization Science, 20(6), 1053–1071. https://doi.org/10.1287/orsc.1090.0481 We thank the reviewer for the suggestion, we consider that the references are good for contextualizing our paper among the stream of power-law literature beyond online peer production. The present version also includes the following reference: Johnson, S.L., Faraj, S. and Kudaravalli, S. (2014). Emergence of Power Laws in Online Communities: The Role of Social Mechanisms and Preferential Attachment. MIS Quarterly Vol. 38, No. 3 , pp. 795-808, A1-A13 All the new references appear in the Introduction. Reviewer: Jeremy Foote Basic reporting I am happy to review this article, which examines the distribution of participation in online wiki communities. Overall, the language of this manuscript is adequate. 
There are a number of places where the language feels awkward to a native English speaker, but only rarely do these weaknesses hinder comprehension. I have marked some of the most egregious examples in the attached PDF. All the English suggestions have been tackled. We appreciate that they were highlighted in the PDF. Other minor mistakes throughout the text were also corrected and we also gave answers to other comments annotated in the PDF. There were a few places, most notably at the end of the introduction, where \ref's were broken. It looks like this may have been because the style of the manuscript does not include section numbers? All the broken references were fixed now since, as suggested by the reviewer, the manuscript format not including section numbers broke some. The paper identifies the most important relevant literature and is well-situated from a methodological perspective. I think that the authors could have done more to explain the practical importance of the research and to identify more concretely the practical implications of identifying a context as having distributions which are fit better by one function versus another. In other words, what is the scientific and theoretical benefit of applying Broido and Clauset's approach to all of these new contexts? We thank the reviewer for this comment. We have included an explanation of the benefits in the abstract and the conclusion. For example: “These findings have implications that can inform a better modeling of participation in peer production, and help to produce more accurate predictions of the tail behavior, which represents the activity and frequency of the core contributors.” Furthermore, in the introduction we tried to explain more clearly why the power law should not be considered. In general, the structure of the article was effective. The figures were well done and persuasive. ● I found the multiple colors of Figure 1 a bit confusing. Does the cutoff point represent the mean, for example? This should be clarified, and referenced in the legend. The present version of the figure only uses one color to avoid confusion. The references to the colors in the text have been deleted. ● I found Figure 2 engaging, but I wished that the edges were colored or weighted based on the percentage, to make it easier to distinguish the 'winners' visually. We provide a new version of Figure 2. In the new version, we highlighted the arrow with the higher percentage using a darker color (and the arrow with a lower percentage with a lighter one). The new version also shows a different arrow style to ease visualization. ● At least in my copy, Figure 3 appears to have some compression artifacts and should be produced as a vector image (also Figures 5 and 6). We provide a vector image version of these figures without compression artifacts. ● Finally, I thought that Figure 4 might be more persuasive if it also included empirical data points (although it's possible that this might make the figures too noisy). We omitted and omit the empirical data points because the resulting image is too noisy. Regarding the data, the data and code include appears to be adequate for running the statistical analyses. I was unable to find the code used to actually gather the edits per person, as described in lines 195-204. Nor was this raw data made available. I did not run any of the code to test it. The URLs of the data and code repositories have been anonymized to be compliant with the blind review rules. 
However, they will be included in the final version of the paper. Experimental design The overall design of this paper is appropriate for a computer science journal, well defined, and well executed. The methods are well situated in previous work and well described. We thank the reviewer for the positive judgement of the design of the experiment. As explained above, I do think that the authors could and should do more to identify the knowledge that this approach gives us that we didn't have before, especially in the Introduction / Background section. I have only two, fairly minor suggestions for the methods section. First, the authors claim to consider only communities with 100 contributors at times, and at other times those with 100 registered users. These are different measures, and it should be clear which cutoff is the actual cutoff. If it is registered users, does that mean that unregistered users (i.e., 'IP users') are also removed? If so, this should be made clear and justified. In the paper, we now speak about registered contributors, and it is clear that we work with wikis with over 100 registered contributors excluding bots and anonymous (IP) users. We justified the exclusion of anonymous users in the paper. As is often argued, it is problematic to unambiguously match the IP address to a single anonymous contributor and vice versa. Furthermore, it is also difficult to consider an anonymous contributor as a member of the wiki community. Second, the authors should explain why 8K wiki endpoints were not available. I would guess that this is because the wikis no longer existed, but that should be made clear. Wikis may be unavailable for a number of reasons, e.g. being removed from the platform, or having changed their name. Unavailable wikis represent 3,5% of the total wikis, constituting a small percentage of expected noise that should not compromise the results of the study. We included this justification as a footnote in the paper. Validity of the findings The findings are well-supported and the analysis of this paper appears both reasonable and statistically sound. Overall, the authors make a convincing case that wikis are typically well-described by a truncated power law. Thank you very much, we share the reviewer’s opinion. As mentioned above, I would have liked to have a more substantive understanding of what the conclusions infer for our understanding of how these communities operate, as well as what else we might be able to do with this approach. When these sorts of explanations occurred, I was often unconvinced. We tried to better motivate our research and the implications of the findings, as mentioned above. For example, the manuscript seems to argue that the findings show that high-volume contributors differ from low-volume contributors, but this seems like it would already be very likely. Indeed, if anything a power-law distribution would suggest that they are more different from each other (in the sense that the discrepancy in the number of edits is larger). We tried to clarify this aspect in the paper, as the reviewer is 100% correct in his interpretation. However, we originally intended to say exactly the same. Thus, we have reviewed the paper and rewrote the parts where clearer phrasing was needed. In particular, in the introduction and the conclusions. I believe that the concluding remarks section should be restructured from bullet points to paragraphs. We rewrote the conclusions in paragraph form. I found the description of generalizability at ~398-401 confusing. 
We rewrote that part hopefully explaining in a clearer way what we mean. Additional comments Overall, I found this paper to be convincing in showing that a truncated power law is a reasonable distribution for characterizing wikis. The analysis is narrow but well-executed and while I provided a number of suggestions for improvements to the presentation and discussion of the results, I think that the paper shows a number of strengths We thank the reviewer for his opinion. We hope that he finds satisfactory the new version of the paper. "
Here is a paper. Please give your review comments after reading it.
297
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The growing technological advance is causing constant business changes. The continual uncertainties in project management make requirements engineering essential to ensure the success of projects. The usual exponential increase of stakeholders throughout the project suggests the application of intelligent tools to assist requirements engineers.</ns0:p><ns0:p>Therefore, this article proposes Nhatos, a computational model for ubiquitous requirements management that analyses context histories of projects to recommend reusable requirements. The scientific contribution of this study is the use of the similarity analysis of projects through their context histories to generate the requirement recommendations. The implementation of a prototype allowed to evaluate the proposal through a case study based on real scenarios from the industry. One hundred fifty-three software projects from a large bank institution generated context histories used in the recommendations. The experiment demonstrated that the model achieved more than 70% stakeholder acceptance of the recommendations.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In recent years, the continuous and growing use of new technologies results in a Digital Transformation, bringing disruptive changes across domains <ns0:ref type='bibr' target='#b52'>(Nadkarni and Pr&#252;gl, 2021)</ns0:ref>. The techniques considered crucial to eliciting requirements do not hold up, given the paradigm shifts that have occurred. <ns0:ref type='bibr' target='#b73'>Villela et al. (2019)</ns0:ref> argued that Requirements Engineering (RE) involves various dimensions and thus ubiquitous RE allows an adequate approach for handling the complexity involved.</ns0:p><ns0:p>The software has become present in the vast majority of businesses, with companies that lack some level of automation being rare. Enterprises need to deal with increasingly diverse, complex, and interconnected systems, while the demand for rapid innovations requires ever-shorter feedback loops.</ns0:p><ns0:p>The spread of software in business-to-consumer and business-to-business environments makes it difficult to engage the growing number of stakeholders. Traditional requirements elicitation techniques, such as interviews or focus groups, present problems of scalability and limitation when they need to occur continuously involving the growing number of stakeholders <ns0:ref type='bibr' target='#b74'>(Villela et al., 2018)</ns0:ref>.</ns0:p><ns0:p>RE stands out as one of the most critical areas for software project results. Factors such as goal setting, project planning, involvement, and identification of user needs are key to project success <ns0:ref type='bibr' target='#b30'>(Hastie and Wojewoda, 2015)</ns0:ref>. In the meantime, incorrect application of RE is a primary reason for project failures, increasing development time, and cost <ns0:ref type='bibr' target='#b18'>(Dick et al., 2017;</ns0:ref><ns0:ref type='bibr'>Project Management Institute, 2017b;</ns0:ref><ns0:ref type='bibr' target='#b12'>Bozyigit et al., 2021)</ns0:ref>. When proper requirements management is applied, the chances of project success increase.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:63121:1:1:ACCEPTED 26 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Studies indicate that 30% of project success factors are related to RE processes <ns0:ref type='bibr' target='#b30'>(Hastie and Wojewoda, 2015)</ns0:ref>. Apart from that, reusing requirements can help in the execution of projects, reducing the time for analysis of requirements and identifying reusable code and artifacts, such in case of software development <ns0:ref type='bibr' target='#b33'>(Irshad et al., 2018)</ns0:ref>.</ns0:p><ns0:p>One option for addressing the issues faced by requirements engineers is requirements reuse. Software Engineering Recommendation Systems (SERSs) help teams select information and make decisions when they are inexperienced or unable to consider all available data. However, setting context is a challenge for recommendation systems <ns0:ref type='bibr' target='#b59'>(Robillard et al., 2014)</ns0:ref>.</ns0:p><ns0:p>The use of ubiquitous computing <ns0:ref type='bibr' target='#b42'>(Lopes et al., 2014)</ns0:ref> is an alternative for assisting requirements engineers in their activities. The classical works of <ns0:ref type='bibr' target='#b77'>Weiser (1999)</ns0:ref>, <ns0:ref type='bibr' target='#b63'>Satyanarayanan (2001)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>and Dey et al. (2001)</ns0:ref> defined the ubiquitous computing and context-aware computing. Since then, these concepts have been applied in different knowledge areas such in health <ns0:ref type='bibr'>(Vianna and</ns0:ref><ns0:ref type='bibr'>Barbosa, 2014, 2019;</ns0:ref><ns0:ref type='bibr' target='#b17'>Dias et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b55'>Petry et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b10'>Bavaresco et al., 2020)</ns0:ref>, well-being <ns0:ref type='bibr' target='#b72'>(Vianna et al., 2017)</ns0:ref>, competence management <ns0:ref type='bibr' target='#b61'>(Rosa et al., 2015)</ns0:ref>, learning <ns0:ref type='bibr' target='#b7'>(Barbosa et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b76'>Wagner et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b6'>Barbosa et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b36'>Larentis et al., 2020)</ns0:ref>, commerce <ns0:ref type='bibr' target='#b8'>(Barbosa et al., 2016)</ns0:ref>, accessibility <ns0:ref type='bibr' target='#b68'>(Tavares et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b5'>Barbosa et al., 2018)</ns0:ref>, Smart Cities <ns0:ref type='bibr' target='#b60'>(Rolim et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b53'>Orrego and Barbosa, 2019;</ns0:ref><ns0:ref type='bibr' target='#b49'>Matos et al., 2021)</ns0:ref>, and agriculture <ns0:ref type='bibr' target='#b15'>(de Souza et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Bhanu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Helfer et al., 2020)</ns0:ref>. The application of ubiquitous computing in project management coined the term Ubiquitous Project Management <ns0:ref type='bibr' target='#b25'>(Filippetto et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The ubiquitous computing is aware of contexts and allows to use this information to introduce context awareness in the computational systems. Based on contexts, the systems adapt the execution according to the strategic information obtained in the runtime <ns0:ref type='bibr' target='#b0'>(Abech et al., 2016)</ns0:ref>. 
Recently, the use of context-aware computing to support the development and maintenance of software emerged as a strategic research theme <ns0:ref type='bibr'>(D'Avila et al., 2020a,b)</ns0:ref>. In addition, in disruptive applications the ubiquitous computing has been considered an alternative to develop hygge software <ns0:ref type='bibr' target='#b72'>(Vianna et al., 2017)</ns0:ref>. As a recent evolution, ubiquitous computing has been empowered with the use of temporal series of contexts to organize and analyze the data. This new knowledge research area received the name of Context Histories <ns0:ref type='bibr' target='#b61'>(Rosa et al., 2015;</ns0:ref><ns0:ref type='bibr'>Martini et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b2'>Aranda et al., 2021;</ns0:ref><ns0:ref type='bibr'>Machado et al., 2021)</ns0:ref> or Trails <ns0:ref type='bibr' target='#b64'>(Silva et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b8'>Barbosa et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b5'>Barbosa et al., , 2018))</ns0:ref>. This kind of organization allows the exploration of advance strategies to data analysis, such as, profile management <ns0:ref type='bibr' target='#b76'>(Wagner et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Barbosa et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Leithardt et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Dalmina et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b22'>Ferreira et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Leithardt et al., 2020)</ns0:ref>, pattern analysis <ns0:ref type='bibr' target='#b19'>(Dupont et al., 2020)</ns0:ref>, context prediction (Da <ns0:ref type='bibr'>Rosa et al., 2016)</ns0:ref>, and similarity analysis <ns0:ref type='bibr' target='#b78'>(Wiedmann et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b26'>Filippetto et al., 2021)</ns0:ref>.</ns0:p><ns0:p>This article presents a model for recommending requirements in software projects, called Nhatos. The proposed model differs from previous literature in exploring the similarity of project context histories to assist RE processes by predicting future contexts. Thus, new requirements are recommended both in the early stages and throughout the project life cycle. The study seeks to answer the following research questions: (1) Is it possible to use project context histories to infer requirements in the requirements identification phase, considering the characteristics and similarity of the projects? (2) Does stakeholder collaboration, providing project characteristics and feedback from recommendations contribute throughout the requirements management processes? This article has five sections. The next section discusses related works focusing on the scientific contributions. The third section proposes the model, mainly describing its architecture, the similarity analysis strategy, and the proposed Ontology of Requirements Recommendation. The fourth section describes implementation aspects focusing on prototype characteristics, such as technologies, features, screens, and database model. The section focused on evaluation aspects mainly addresses the application of the prototype in two case studies based on 153 real software projects. 
Finally, last section presents the conclusion, answers the research questions, and suggests future works.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORKS</ns0:head><ns0:p>The selection of related works demanded the identification of studies that involve the development of models for Requirements Management. The criteria adopted for the choice of works prioritized articles that addressed: (i) models or systems for recommending requirements; (ii) similarity analysis of projects or their requirements; (iii) feedback system on recommendations for new requirements. <ns0:ref type='bibr' target='#b34'>Kim et al. (2019)</ns0:ref> <ns0:ref type='formula'>2016</ns0:ref>) presented a semi-automated approach, known as Feature Extraction for Reuse of Natural Language Requirements (FENL), for extracting phrases that may represent software resources.</ns0:p><ns0:p>The authors aim to extract resources from product reviews online, thus allowing the reuse of software requirements. <ns0:ref type='bibr' target='#b56'>Portugal et al. (2017)</ns0:ref> proposed the use of a software versioning repository (GitHub) as a source of information. To deal with large masses of data and provide access to suitable sources, the authors created project profiles with useful attributes for RE. Afterward, they applied clustering and Natural Language Processing (NLP) to recommend projects by identifying similar keywords in their description. <ns0:ref type='bibr' target='#b79'>Williams and Mahmoud (2017)</ns0:ref> used the social network Twitter as a requirements source to allow a data-driven, interactive and adaptable RE process. The authors performed an analysis with 4,000 tweets from 10 software systems sampled from various application domains. The results revealed that about 50% of the tweets collected contained useful technical information. In addition, the results showed that text classifiers like Support Vector Machines and Na&#239;ve Bayes can be useful in capturing and categorizing tweets technically informative. <ns0:ref type='bibr' target='#b27'>Garcia and Paiva (2016)</ns0:ref> presented a recommendation system that collects the history of using a Web service, relates this information to requirements, and generates reports with recommendations that can increase the quality of this service. The proposed approach aims to provide analytical reports in a language close to the business. The system indicates new workflows, navigation paths, identifies potential resources to remove, and correlates the requirements and the proposed changes, helping to keep the specification of the software requirements up to date. <ns0:ref type='bibr' target='#b32'>Hujainah et al. (2021)</ns0:ref> proposed a technique for prioritizing requirements and thus selecting the requirements to be developed. While not directly recommending, prioritization helps the selection of requirements and supports the process. The authors addressed this task focusing on specific challenges in this area, such as scalability, lack of automation, and excessive time consumption. The study presented a semiautomated scalable prioritization technique using a multi-criteria decision-making method, clustering algorithms, and a binary search tree. The technique aims to mitigate the need for expert involvement in this process and increase efficiency. 
<ns0:ref type='bibr' target='#b67'>Swathine and Sumathi (2021)</ns0:ref> worked with requirements traceability and based on this information the proposal indicates which requirements must be considered to support the interested parties in the process.</ns0:p><ns0:p>This study used a meta-heuristic approach to create a novel traceability system for analyzing systems' functional requirements. The authors aimed to identify traceable links for supporting decision-making, solving the inconsistency problem, and generating quality requirements. <ns0:ref type='bibr' target='#b51'>Mougouei and Powers (2021)</ns0:ref> allowed the selection of requirements considering dependencies and value to be delivered by the requirements. The authors proposed the Dependency-Aware Requirements Selection, an intelligent system that analyzes the value dependencies among requirements, aiming to reduce the risk of value loss. This model considered the user preferences for the requirements, showing promising results in reducing value loss, including when applied in large requirement sets.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the characteristics adopted in the comparison between Nhatos and related works. The first item (processes) informs which of the RE processes the articles address -validation (V), elicitation (E), specification (S), or management (M). The second item (recommendation) shows the type of item recommended in the study -requirements (R), wrong definitions (W), or projects (P). The third shows the </ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED MODEL</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b59'>Robillard et al. (2014)</ns0:ref>, an SERS needs to have specific requirements to be considered a In order to measure the applicability of an intelligent tool to support requirements engineers, we conducted a survey involving software design professionals. This research aimed to answer whether the project teams need a proactive tool to support their activities involving the RE processes.</ns0:p></ns0:div> <ns0:div><ns0:head>Principles of Nhatos: Survey with 56 Professionals</ns0:head><ns0:p>A survey involved 56 professionals working in the software development industry, including project managers, analysts, project teams, and teachers. Participants answered an electronic questionnaire with multiple choice and transcribed questions. About 71% of the interviewees had more than five years of experience in projects. More than 70% of respondents worked in companies with more than 100 employees. The main objective of the research was to capture the perception of professionals regarding the support tools in project management currently used in their work environment. This research allowed to identify gaps and possible improvements in the RE area guiding the specification of Nhatos. The following are the research questions and results:</ns0:p><ns0:p>&#8226; Which areas do you consider most critical to the success of the project? 50% of respondents selected the scope as the most critical area. Participants also mentioned the areas of Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; In the projects where problems occurred, what were the areas in which the problems were identified? A total of 39.29% of the participants answered that problems in project management are due to incorrect Scope Management (SM). 
Other project areas, such as Time Management and</ns0:p><ns0:p>Communications, obtained 25 (44.6%) and 21 (37.5%) responses, respectively;</ns0:p><ns0:p>&#8226; What types of suggestions would you like to receive from a proactive project management tool?</ns0:p><ns0:p>According to the interviewees' perception, 32.1% answered that a tool should suggest new requirements for projects;</ns0:p><ns0:p>&#8226; Do you believe that information from other projects already completed could assist in project management? 85.7% of the members confirmed that history contributes to the management of the new projects.</ns0:p><ns0:p>The perception of the teams collected in the survey allowed to conclude that there is interest from the professionals regarding the use of an intelligent tool to support the project teams during the RE processes.</ns0:p><ns0:p>This opportunity stimulated the development of the Nhatos, which aims to assist teams during the RE process life cycle. 5. Agents: Multi-Agent System (MAS) that captures the events related to project's evolution or modification. The capture is triggered when some of these events occur: (a) addition of a new requirement; (b) termination of an activity; or (c) evolution in the percentage of completion of the project. Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> shows the proposed MAS using the the Prometheus methodology <ns0:ref type='bibr' target='#b37'>(Larioui, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Architecture</ns0:head><ns0:p>The MAS has six agents. The Translate agent converts to English the texts from native languages used in the projects. The NLP uses English as the language, so this translation is necessary to Nhatos. Projects Similarity analyzes the similarity of the projects using project size, methodology applied and area of expertise. NLP techniques allow to group projects according to their expertise.</ns0:p><ns0:p>Context Storage stores each event occurrence in the project's history. Recommendation Engine permanently monitors the project's events to orchestrate the execution of the other agents when one event occurs. Requirements Similarity uses semantic analysis to determine the requirements similarity based on texts written in natural language. This analysis is detailed in the Similarity Analysis subsection. The agent also compares requirements to determine if requirements have the same number of actors. Context Similarity performs the similarity analysis in the context histories of the projects. </ns0:p></ns0:div> <ns0:div><ns0:head>Similarity Analysis</ns0:head><ns0:p>The similarity analysis occurs in two moments: (1) similarity analysis based on project characteristics, and (2) similarity analysis based on context histories of projects. The first analysis occurs at each insertion of a new project, while the second analysis occurs during a new evolution of the projects' life cycle. After the insertion of a new project, the multi-agent system identifies this event and initiates the recommendation process. 
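Before detailing that comparison, a minimal sketch (in Python, the language of the console application) of the characteristic-based matching described in the next paragraph may help; the field names, weights, and scoring scheme below are hypothetical illustrations, not values taken from Nhatos, which reads the actual weights from the specialist's configuration.

```python
# Hedged sketch of a characteristic-based project comparison.
# Field names, weights, and the acceptance threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Project:
    area: str          # e.g. "Finance"
    size: str          # e.g. "small" / "medium" / "large"
    methodology: str   # e.g. "SCRUM" / "traditional"
    completion: float  # schedule completion, 0.0-1.0

# Weights supplied by the specialist (illustrative values only).
WEIGHTS = {"area": 0.4, "size": 0.2, "methodology": 0.2, "completion": 0.2}

def project_similarity(a: Project, b: Project) -> float:
    """Weighted match of project characteristics, in the range [0, 1]."""
    score = 0.0
    score += WEIGHTS["area"] * (a.area == b.area)
    score += WEIGHTS["size"] * (a.size == b.size)
    score += WEIGHTS["methodology"] * (a.methodology == b.methodology)
    score += WEIGHTS["completion"] * (1.0 - abs(a.completion - b.completion))
    return score

# Projects scoring above a configured threshold would be grouped as similar.
```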
The multi-agent system goes through the stored histories and compares the variables with characteristic of each project in the history with the same variables as the original project.</ns0:p><ns0:p>The model considers the configurations previously informed by the specialist -these configurations make up a weight system, which will be applied during the calculation.</ns0:p><ns0:p>After the model groups similar projects, the context histories of those projects are analyzed to identify reusable requirements among them. Therefore, for projects in the same group, Nhatos calculates similarity considering the semantic distance between their requirements.</ns0:p><ns0:p>Nhatos defines the semantic distance by analyzing the distance between text documents proposed by <ns0:ref type='bibr' target='#b35'>Kusner et al. (2015)</ns0:ref>. This approach takes advantage of the results of <ns0:ref type='bibr' target='#b50'>Mikolov et al. (2013)</ns0:ref>, whose model word2vec generates combinations of words, on a large-scale ontology, for extensive data sets (for example, we use the training of approximately 100 billion words in this model). In this way, Nhatos compares the texts that describe the objectives of the requirements. Afterward, the results are stored in history, enabling the recommendation of requirements related to the same theme (similar purposes) and similar projects (same area of knowledge).</ns0:p><ns0:p>The recommendation of requirements in the initial phases of the projects aims to bring historical information to the project teams, mainly to the requirements engineers and stakeholders. Then, these users will be able to accept or reject the requirements recommended by the model. Nhatos thus ensures that no requirements of the historical basis are disregarded by those involved during management.</ns0:p><ns0:p>During the life cycle of projects, Nhatos saves the events in context histories. The model identifies Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Nhatos compares each project context with contexts from similar projects by using the semantic distance between the requirements from the previously-stored histories. The recommendation of the next occurrence of the context history occurs for the project in execution when the distance between the requirements is acceptable (according to the specialist's settings), and the number of actors is equal between the requirements compared.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_11'>5</ns0:ref> shows an example of analyzing the context histories of an ongoing project with histories from similar projects. In this example, the entire recommendation flow is elucidated step by step. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science steps 2, 3, and 4 obey both the minimum acceptable semantic distance and the same number of actors involved in the compared requirements. Since the expert has configured recommendations that require at least three similar consecutive steps, Nhatos recommends the fifth requirement of project B for project A.</ns0:p><ns0:p>Assuming that the configured training sample was 70%, Nhatos uses the remaining percentage of the project (30%) to verify that at least one requirement with the same semantic distance and the same number of actors as the recommended requirement occurred throughout the life cycle. 
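As an aside, the semantic distance itself, following Kusner et al. (2015) and Mikolov et al. (2013), can be illustrated with an off-the-shelf Word Mover's Distance implementation. This is a hedged sketch only: the embedding file, tokenization, and example texts are assumptions, and the paper does not state how (or whether) the raw WMD value is normalized into the 0.0-1.0 range reported later.

```python
# Hedged sketch: semantic distance between two requirement descriptions
# using Word Mover's Distance over word2vec embeddings via gensim.
# Requires gensim and its optional WMD dependency (POT or pyemd).
from gensim.models import KeyedVectors

# Assumed embedding file; any word2vec-format model would work here.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)

def requirement_distance(req_a: str, req_b: str) -> float:
    """Lower values indicate semantically closer requirement objectives."""
    tokens_a = [t for t in req_a.lower().split() if t in vectors.key_to_index]
    tokens_b = [t for t in req_b.lower().split() if t in vectors.key_to_index]
    # Raw WMD is not bounded to [0, 1]; Nhatos' normalization is not specified.
    return vectors.wmdistance(tokens_a, tokens_b)

# Hypothetical requirement texts, mirroring the style of Table 4:
print(requirement_distance(
    "The software must allow the manager to request airline tickets",
    "The software must allow the manager to order supplies"))
```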
Once it occurs, the recommendation made is considered assertive.</ns0:p><ns0:p>The analysis of more than one chronological context, which occurred during the project, aims to identify projects that have a similar execution sequence. This analysis contributes to a higher degree of precision in the recommendations made. The most significant number of similar consecutive contexts indicates the proximity between project implementation. However, when Nhatos considers more contexts, the fewer projects must be identified as similar and, therefore, the fewer recommendations made, since each project is unique (Project Management Institute, 2017a). We added a specification document (Requirement Specification Document) to exemplify how the requirement is instantiated when using ontologies. Each document has a scenario, where the actors through the defined objectives identify requirements for a given project.</ns0:p></ns0:div> <ns0:div><ns0:head>Ontology of Requirements Recommendation</ns0:head></ns0:div> <ns0:div><ns0:head>9/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:63121:1:1:ACCEPTED 26 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>IMPLEMENTATION ASPECTS</ns0:head><ns0:p>We developed a software prototype to meet the model definitions. Fig. <ns0:ref type='figure'>7</ns0:ref> shows that the prototype integrates three applications: (1) Console Application 1 ; (2) RESTFul API Application or WebService 2 ; and (3) Hybrid Application 3 , composed by the Web Application and the Mobile App. The first two are back-end applications, which run on a server. The third application operates on mobile devices, acting as front-end software. The users involved in the requirements engineering processes used this application.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 7. Prototype Overview</ns0:head></ns0:div> <ns0:div><ns0:head>Hybrid Application</ns0:head><ns0:p>The Hybrid Application runs both on mobile devices and on conventional computers operating in a browser web. In this way, the application has two interfaces for communication with system users: (1) Web Application; and (2) Mobile App.</ns0:p><ns0:p>This application allows the interaction of project teams with Nhatos, allowing the management of projects, requirements, activities, resources, registration of interested parties, and evaluation of recommendations. The prototype allows to monitor projects, capturing context information to compose the context histories. The software can be used throughout the life cycle of projects. Further, this application presents the recommendations to the user, thus allowing the collection of feedback from interested parties.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_14'>8a</ns0:ref> shows the interface for the project presentation with characteristic information about the project such as size, methodology, percentage of evolution, and area of knowledge. Fig. <ns0:ref type='figure' target='#fig_14'>8b</ns0:ref> presents the list of project requirements, as well as their respective percentage of evolution.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_15'>9a</ns0:ref> shows the settings of the weight variables, which are defined by an expert. These variables define the importance of each aspect of the project and its requirements during the recommendation process. 
The user can define weights related to project area, size, methodology, and level of completion, as well as the acceptable semantic proximity between the requirements that will be considered for a possible recommendation.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_15'>9b</ns0:ref> presents examples of recommendations. The interface contains the recommended requirement, as well as the requirement that raised this recommendation and the semantic distance between these requirements. The interaction area also enables users to provide their feedback, selected from the options to accept or reject the recommendation. This user decision is registered in order to evaluate the acceptability of the recommendations by stakeholders in the future.</ns0:p></ns0:div> <ns0:div><ns0:head>Console Application</ns0:head><ns0:p>This application developed in Python is an encapsulated software that acts in the form of service.</ns0:p><ns0:p>The software uses the concept of multi-agent systems, proposed by <ns0:ref type='bibr' target='#b54'>Padgham and Winikoff (2004)</ns0:ref> Manuscript to be reviewed The Translate agent translates all the contents entered by the user into the English language, enabling the execution of subsequent agents. Since, as a premise, all contents must be registered in English.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Nhatos uses NLP and the corpus of texts obtained for the application of the study is written in this language. The software loads the projects and their requirements from the database, translating them using the API Google Cloud Translate <ns0:ref type='bibr' target='#b29'>(Google, 2021b)</ns0:ref>. We considered the usage of this API does not generate discrepancies in translations since this API has an estimated accuracy of 85% <ns0:ref type='bibr' target='#b1'>(Aiken, 2019)</ns0:ref> and the texts translated are technical, having a technical writing pattern.</ns0:p><ns0:p>The Projects Similarity agent activates after the translation of the content of the projects and their requirements. This agent analyzes the similarity between all projects in the database and groups the characteristics of the projects separately. The software considers the information in the ontology (Fig. <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>), considering all projects in the database according to size, area of knowledge, management methodology, and level of completeness (schedule). After consulting all projects, the agent classifies and labels each project, so that the next step of the algorithm starts.</ns0:p><ns0:p>The Requirements Similarity employs the use of NLP to find requirements that contain equivalent objectives, as well as the same number of actors involved. This agent also considers the similarity analysis between the projects, performed previously. In this way, the algorithm analyzes the similarity between the requirements of projects considered similar.</ns0:p><ns0:p>First, the agent appropriates the new settings for distance, steps, and sampling, thus starting new processing. Then, it removes the previous recommendations (if any), to start a new recommendation process. This method removes all recommendations that originated from the same distance, number of steps, and sampling setting. Because, it is considered that throughout the life cycle, the requirements may have changed regarding their objectives or actors involved (Project Management Institute, 2017a).</ns0:p><ns0:p>Then, the agent retrieves all projects from the database. 
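The comparison loop that the agent then performs (described in the next paragraph) could be sketched roughly as follows. All class, field, and parameter names are hypothetical; the placeholder distance function merely stands in for the WMD-based measure sketched earlier, and the settings mirror the distance and steps parameters configured by the specialist.

```python
# Hedged sketch of the Requirements Similarity agent's comparison loop.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    actors: int

@dataclass
class ProjectHistory:
    requirements: list = field(default_factory=list)  # chronological order

def semantic_distance(a: str, b: str) -> float:
    """Placeholder for the WMD-based measure sketched earlier."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(words_a & words_b) / max(len(words_a | words_b), 1)

def recommend(current: ProjectHistory, similar: ProjectHistory,
              max_distance: float, steps: int) -> list:
    """Suggest the requirement that follows `steps` consecutive matches."""
    consecutive = 0
    pairs = zip(current.requirements, similar.requirements)
    for i, (req_a, req_b) in enumerate(pairs):
        match = (semantic_distance(req_a.text, req_b.text) <= max_distance
                 and req_a.actors == req_b.actors)
        consecutive = consecutive + 1 if match else 0
        if consecutive >= steps and i + 1 < len(similar.requirements):
            return [similar.requirements[i + 1]]
    return []
```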
The requirements of each project are obtained.</ns0:p><ns0:p>Each requirement is compared with the requirements of similar projects. In this step, the agent checks whether the objective of both requirements meets the established distance parameters, as well as the number of actors. Finally, once the number of requirements found sequentially is equal to the number of configured steps, this requirement is considered recommendable to the project. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The Context Storage agent keeps the information in a database with four main entities: (1) projects;</ns0:p><ns0:p>(2) requirements;</ns0:p><ns0:p>(3) requirements distance; and (4) recommendations. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The requirements entity keeps model requirements. The same project can contain several requirements, according to the relationship in the diagram (1..n). The actors of each requirement are stored in the entity actors and an actor can be linked to several requirements and vice versa, as shown in the entity requirements actors (n..n).</ns0:p><ns0:p>The requirements distance entity saves the distance information between the processed requirements.</ns0:p><ns0:p>The entity stores the original requirement (req a id) and the compared requirement (req b id), as well as the respective semantic distance (distance) between them.</ns0:p><ns0:p>Finally, the recommendations entity keeps the recommendations inferred by Nhatos. Each recommendation is directed to a project (project id) and the requirement that generated such a recommendation (requirement id). The assessment of the assertiveness of each recommendation is stored in the is assertive property.</ns0:p></ns0:div> <ns0:div><ns0:head>RESTFull API Application or Web Service</ns0:head><ns0:p>The RESTFull API Application provides a communication channel between the Hybrid Application and the Context Storage application. Using the RESTFull protocol, the application allows data traffic in the JSON format between applications. It enables the exchange of data between the hybrid application, used by users, and the information already processed by Console Application, which stores its information in the Nhatos model database.</ns0:p></ns0:div> <ns0:div><ns0:head>EVALUATION ASPECTS</ns0:head><ns0:p>The application of a case study in a software development company allows answering the research questions. This company develops application solutions for a banking institution. This study aimed to confirm the hypothesis of using the analysis of context histories of projects to recommend requirements for new or ongoing projects.</ns0:p><ns0:p>The study employed a database with the context histories of 153 software development projects. The database has projects with different resources and development methods, such as distributed or local teams.</ns0:p><ns0:p>The use of different characteristics of the 153 projects allowed the analysis of a diversity of contexts to the recommendation of requirements.</ns0:p><ns0:p>The evaluation considered two cases: (1) two teams evaluated the use of the prototype during the implementation of 5 real projects, and (2) 17 completed projects were used to evaluate the recommendations made by Nhatos, comparing the recommendations done with the requirements in the 17 original projects. 
Next subsections describe these evaluations.</ns0:p></ns0:div> <ns0:div><ns0:head>Team Evaluation During Project Execution</ns0:head><ns0:p>The first case involved two teams with a total of 12 professionals. These professionals were asked to validate the recommendations made by the model. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the profile of the teams that participated in this experiment. Initially, project teams inserted information related to the projects in the database (scope and descriptive information, terms of reference, resources, schedule, and tasks/activities). The professionals used the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>integration interface to insert information into the database. After, the project teams identified and registered the requirements using the prototype running on mobile devices.</ns0:p><ns0:p>The similarity analysis of the projects used NLP since approximately 79% of the requirements and project documents are written in natural language <ns0:ref type='bibr' target='#b43'>(Luisa et al., 2004)</ns0:ref>. The algorithm classified each project through the use of NLP according to its respective area of knowledge. The identification of these areas involved the use of the opening statement. In addition, the project charter contains a high-level description of the project. The classification used the Google Natural Language API (GNL), which provides resources for the analysis of unstructured texts, such as content classification and entity identification. The content rating analyzes a document and results in a list of categories that apply to the found text. The classification can still contain several levels, specifying the greater depth of details about the area of knowledge in question <ns0:ref type='bibr' target='#b28'>(Google, 2021a)</ns0:ref>.</ns0:p><ns0:p>Currently, GNL processes English sentences only. The projects had information in Portuguese, so it was necessary to previously translate the descriptive content of the projects before carrying out the classification process. The prototype performed the translation automatically using the Google Cloud</ns0:p><ns0:p>Translation API (GCT) <ns0:ref type='bibr' target='#b29'>(Google, 2021b)</ns0:ref>. The GCT receives a phrase as input, identifies its language and translates it into the language selected by the user. During the study, the agent translated all sentences into English. After translation, Nhatos categorized the projects using GNL based on the 153 projects. This step classified 23 projects as Finance (15.03%), 20 as Business and Industry (13.07%), 10 as Computers and Electronics (6.53%), 6 as Credit and Lending (3.92%), and 4 as Accounting and Auditing (2.61%), having these categories the highest number of classified projects.</ns0:p><ns0:p>In addition, the variables registered in the Configuration module allowed to defining the knowledge area of each project. The similarity analysis used this definition. These variables received weights, which reflect the uniqueness of each project. The weight setting allows the algorithm to generate different recommendations according to the characteristics recorded by the specialist. Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> presents the recommended requirement values for each project and the requirement values added to new projects through the recommendations. 
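As an illustration of the translation-then-classification step described above, the following hedged sketch uses the Google Cloud Translation and Natural Language client libraries. Project credentials are assumed to be configured in the environment, the helper name is hypothetical, and the content classification requires a reasonably long input text.

```python
# Hedged sketch of translating a project charter and classifying it into
# content categories (e.g. "/Finance"), as described in the text.
from google.cloud import translate_v2 as translate
from google.cloud import language_v1

def classify_project(charter_text: str):
    # 1. Translate the charter to English (the classifier handles English).
    translated = translate.Client().translate(
        charter_text, target_language="en")["translatedText"]

    # 2. Classify the translated text into content categories.
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=translated, type_=language_v1.Document.Type.PLAIN_TEXT)
    response = client.classify_text(request={"document": document})
    return [(c.name, c.confidence) for c in response.categories]
```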
During the case study, users included in the projects twenty requirements, which were used to evaluate the recommendations made in the execution of the project. Whenever a user added a new requirement, the Requirements Similarity Agent identifies the event and performs the semantic proximity analysis. The Agent compares the new requirement description with the requirements stored in the context history of the project.</ns0:p><ns0:p>The GNL algorithm analyzes texts in English, performing a semantic analysis. Thus, the model translates the description of the project or requirements which are in another language. Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref> shows an example of a requirement added to a project and the recommended requirements based on this insertion.</ns0:p><ns0:p>Nhatos considers the objective of each requirement and also the number of related actors. The software must allow the manager to request airline tickets (1 actor).</ns0:p><ns0:p>The software must allow the manager to order supplies (1 actor). 0.21</ns0:p><ns0:p>The software must allow the manager to request resource transfers between projects (1 actor).</ns0:p></ns0:div> <ns0:div><ns0:head>0.32</ns0:head><ns0:p>At this stage, Nhatos applies the analysis to the entire database, regardless of whether projects were Manuscript to be reviewed</ns0:p><ns0:p>Computer Science compared in the first stage of the recommendation. The semantics of the included text and the objectives of the requirements may change throughout the project <ns0:ref type='bibr' target='#b18'>(Dick et al., 2017)</ns0:ref>. This step also allows project requirements that were not originally recommended to be analyzed and considered. The analysis is carried out at this point on the requirements, considering the grouping of projects based on their characteristics.</ns0:p><ns0:p>The semantic distance represents the comparison of the requirement objectives, having a floating-point value between 0.0 and 1.0. The distance closer to 0 indicates that the recommended requirement is semantically closer to the original.</ns0:p><ns0:p>Consequently, the closer the semantic distance to 1, the less similar the recommended requirement is considered when compared to the original. The actors of each requirement are also considered during this stage of the analysis. This context information considers the number of actors to which a requirement is related.</ns0:p><ns0:p>During the case study, for each of the five projects, the model analyzed the similarity, recommending the requirements between similar projects. Soon after, the team analyzed the recommended requirements for the five registered projects. Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> shows that the approval rate of the requirements had an average of 71.0%. The average appropriation of the recommendations presented for new projects shows the acceptance of the requirement recommendation model evaluated by the teams, in order to provide more information to the managers since the beginning of the project.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of Recommendations Through Analysis of Context Histories</ns0:head><ns0:p>The second case compared the requirements registered in 147 completed projects with the requirements recommendations made by Nhatos. 
This study allowed us to infer whether the recommendations made, considering a sample of 70% of the progress of each project, were in fact inserted into the remaining percentage of the project.

This scenario evaluated the recommendations made in many situations, considering projects with different characteristics. Most of the projects (81) used an agile methodology based on the SCRUM framework (Sutherland and Coplien, 2019). The others (66) employed a traditional methodology based on the good practices proposed by the Project Management Institute (2017a). We classified the projects into one of three size categories, considering the execution time of each one: small (up to 500 hours), medium (up to 3,000 hours), and large (over 3,000 hours).

The learning phase used a sample of 70% of the execution of each project. Nhatos learning consists of collecting the project's evolutionary events; Figure 4 shows this process in the Recommendation Engine, Context Storage, and Context History Similarity steps. With this learning, Nhatos generated requirement recommendations for the same projects. The remaining 30% of the data from the executed process was the basis for assessing the recommendations made by the model.

The similarity analysis between projects based on their context histories requires that a consecutive sequence of contexts be similar. The model then recommends the next requirements for the project being implemented, based on the sequence of events that generated the context histories.

In all, 9 test scenarios were configured using different parameterizations to find the best recommendation scenario, where each test considered the entire historical database. Table 5 shows the results of the different settings applied; each test scenario addresses a different combination of the variables Distance, Steps, and Sample. Tests 1, 2, and 3 considered configurations with a minimum semantic distance between the requirements of 0.25 (75%). All three step configurations for these tests resulted in an assertiveness rate equal to or lower than 50%. Scenarios 2 and 3 generated few recommendations, 27 and 2, respectively. Test 1 generated a total of 481 recommendations; however, as mentioned, its assertiveness did not reach 50%.

On the other hand, tests 7, 8, and 9 generated a significant number of recommendations, with a high assertiveness rate, above 80% in all three cases. However, the semantic distance proved to be relatively broad, considering requirements that were only 65% similar. As a result, these tests produced a large number of requirement recommendations (a total of 9,214). In these scenarios, Nhatos provided more recommendations than the teams could assess; therefore, the model did not meet a requirement of ubiquitous applications in these cases, since such applications must be minimally intrusive (Satyanarayanan, 2001).

Scenarios 4, 5, and 6 were more promising than the first ones. All three achieved an assertiveness rate close to or higher than 70%; however, only scenarios 4 and 5 had an adequate number of recommendations for analysis in the use case. The configuration of Scenario 4 considered a minimum distance of 0.3 (70%) and three steps, obtaining a hit rate of 70.35% for assertive recommendations: of the 1,872 inferred recommendations, 1,317 were correct, while 555 were incorrect. Scenario 5 considered a minimum distance of 0.3 (70%) and four steps and obtained a hit rate of 69.97%: 452 of the 646 recommendations were correct, while 194 were unsuccessful. The different scenarios allowed a new round of tests to be carried out, obtaining the most accurate configuration for the database.

CONCLUSION

This article proposed Nhatos, a computational model that provides requirement recommendations considering the characteristics of each new project.
In this way, the teams start the life cycle of each project with a broader set of information, making project planning more assertive and increasing the chances of success.

In addition, Nhatos infers new recommendations during the execution of the projects through the analysis of the included requirements. The recommendations benefit from a semantic analysis of the text that captures the requirements' objectives, as well as the number of actors involved; in this sense, new scenarios are considered for the projects during their implementation. The model considers the context histories of the projects when recommending new requirements, taking into account the schedule of similar contexts and the comparison with the original projects.

The research questions allowed us to validate the use of Nhatos in two dimensions: (i) requirement recommendation considering the context histories of the projects; and (ii) elicitation and specification of requirements in a collaborative manner. In this sense, the results demonstrated the adherence of Nhatos to proactive requirements management in projects. A summary of the main conclusions follows:

1. Nhatos achieved an average accuracy of 65.33% across the 9 test scenarios performed, and scenario 7 reached an accuracy of 83.04%.

2. The first research question focused on the suitability of the recommendations made by the model for a new project, considering the team that developed the projects and the projects already executed. The evaluation confirmed the relevance of the projects' context histories for recommending requirements. Case 1 presented an average recommendation approval rate of 71.0%, showing that Nhatos can make suitable recommendations based on the experts' configurations and the characteristics of other projects. Case 2 confirmed the hypothesis of using context histories for requirement recommendation, achieving more than 80% assertiveness in different scenarios through the similarity analysis of the project context histories.

3. The second research question assessed the ability of the model to recommend requirements collaboratively throughout the projects' life cycle. During the project follow-up period in the case study, 20 new requirements were registered in addition to the requirements recommended by Nhatos. All team members participated in a collaborative analysis of each requirement and contributed knowledge during the elicitation, specification, and validation processes, providing more information for the project. This collaboration allowed the evaluation of possible impacts on the projects, and the teams could collaborate during all requirements management processes through the use of the prototype.

4. The answers to the research questions confirm the main scientific contribution of this study: the recommendation of requirements based on the characteristics of the projects and the analysis of their context histories, in addition to monitoring the entire life cycle of the requirements throughout the project. Thus, the model helps in planning by providing requirements engineers with a broader set of information relevant to the project in progress when starting a new project.

5. The collaboration of all interested parties enhanced the model, mainly in the identification and specification of requirements, a feature not present in the related works. This differential enabled the collection of more information during the implementation of the projects and brought all stakeholders technical and practical knowledge about the importance of the requirements.

6. The case studies and the prototype allowed the evaluation of Nhatos, contributing to the observation of gaps in the management of project requirements.
The case studies focused on answering the two research questions presented in the introduction. Based on the results obtained in the case studies, we suggest the following opportunities for future studies to explore:

1. Exploring the use of the model with projects from different companies.

2. Monitoring the use of the prototype over time, since it can provide a more robust project history and more assertive requirement recommendations, considering that the model benefits from a growing database.

3. Exploring pattern analysis of context histories, which can allow the detection of emergent patterns related to requirements and projects.

4. Enhancing the interface among the applications, with a deeper analysis of the API built.

5. Performing a wider comparative analysis with related works to investigate the results obtained in this study, exploring the data used and the time saved through the use of Nhatos.
Figure 1. Nhatos architecture, modeled with the Technical Architecture Module (TAM) specification (SAP, 2021); the components of the model appear in the figure with their respective numbering. The figure also covers component 6, Projects Database, which stores (a) project data; (b) the recommendations made by the model; (c) feedback from stakeholders regarding the recommendations; and (d) the context histories produced throughout the life cycle of the projects.

Figure 2. Multi-Agent System: the three agents that conduct the similarity analysis (Context, Projects, and Requirements).

Figure 3. Similarity Analysis by Project Characteristics.

The context information that is subject to changes of state over the life cycle of the projects comprises (a) the purpose of the requirements and (b) the actors involved in the requirements. Whenever a user inserts a new requirement into the project, or at least one of these context items is modified, the similarity analysis of projects by context histories begins; Nhatos uses the stored histories to complement the characteristic-based similarity analysis, updating the recommendations with the new information.

Figure 4. Similarity Analysis by Project Context Histories: the recommendation flow of this step, which seeks similar contexts by analyzing the context histories of the project and comparing this information with other stored contexts.

Figure 5. Similarity Analysis by Context Histories.

Figure 6. Ontology of Requirements Recommendation: the domain ontology contains Projects, Requirements, and Specification, extending the work of Silver (2014) with the addition of the Projects Ontology. Three ontologies cover the domain considered by Nhatos: (a) the Requirements Specification Ontology, which makes up the requirements specification; (b) the Projects Ontology, characterized by the domain of the project and its contextual information; and (c) the Requirements Ontology, which represents the requirements.

Figure 8. Screenshots with Project Details and Requirements.

Figure 9. Screenshots with Specialist Settings.

Figure 10. Relational Entity of the Nhatos Model.

Table 1. Comparison of Related Works.
Author | Processes | Recommendation | Strategy | Collaboration | Environment | Type
Kim et al. (2019) | V | R | Ontologies | Yes | Academic | Experiment
Liu et al. (2018) | V | R | Description | No | Academic | Experiment
Xie et al. (2017) | V | W | Historic | Yes | Academic | Experiment
Bakar et al. (2016) | E | R | Description | No | Academic | Experiment
Portugal et al. (2017) | E | P | Commits | No | Industry | Use Case
Williams and Mahmoud (2017) | E | R | Reviews | No | Academic | Use Case
Garcia and Paiva (2016) | E | R | Logs | Yes | Academic | Experiment
Hujainah et al. (2021) | V | R | Historic | No | Academic | Experiment
Swathine and Sumathi (2021) | V | W | Historic | Yes | Academic | Experiment
Mougouei and Powers (2021) | E | R | Expert System | No | Academic | Experiment
Nhatos Model | V, E, S, M | R | Context Histories | Yes | Industry | Use Case
Legend: V = Validation, E = Elicitation, S = Specification, M = Management; R = Requirements, W = Redefinitions, P = Projects.

Table 2. Profile of Participating Teams.
Team | Role | Experience (Years) | Mode
Team A | Scrum Master | 15+ | Local
Team A | Product Owner | 10+ | Distributed
Team A | Designer | 5+ | Local
Team A | Developer | 10+ | Local
Team A | Developer | 5+ | Local
Team A | Developer | 5+ | Local
Team A | Test Analyst | 5+ | Local
Team B | Project Manager | 25+ | Distributed
Team B | Developer | 10+ | Distributed
Team B | Developer | 5+ | Local
Team B | Developer | 5+ | Local
Team B | Test Analyst | 5+ | Local

Table 3. Recommendations Made by Project.
Project | Area | Recommendations | Accepted | % Accepted
Renegotiated Operations - Restructured | Finance | 16 | 11 | 68.7
Alteração Renov Autom Cheque Especial PF | Finance | 14 | 8 | 57.1
CDB movements in M-BANKING | Business & Industrial | 9 | 7 | 77.7
Parameterization of Indexers | Finance | 10 | 9 | 90.0
Automatic lock renewal | Finance | 13 | 8 | 61.6
Approval percent (average) | | | | 71.0

Table 4. Semantic Analysis for Requirements Recommendation (columns: Included Requirement | Recommended Requirements | Distance; the example rows are given in the text).

Table 5. Evaluation of Recommendations Made by the Nhatos.
# | Distance | Steps | Sample | Non-Assertive | Non-Assertive (%) | Assertive | Assertive (%) | Total
1 | 0.25 | 3 | 0.7 | 248 | 51.56 | 233 | 48.44 | 481
2 | 0.25 | 4 | 0.7 | 19 | 70.37 | 8 | 29.63 | 27
3 | 0.25 | 5 | 0.7 | 1 | 50.00 | 1 | 50.00 | 2
4 | 0.30 | 3 | 0.7 | 555 | 29.65 | 1,317 | 70.35 | 1,872
5 | 0.30 | 4 | 0.7 | 194 | 30.03 | 452 | 69.97 | 646
6 | 0.30 | 5 | 0.7 | 56 | 28.43 | 141 | 71.57 | 197
7 | 0.35 | 3 | 0.7 | 904 | 16.96 | 4,427 | 83.04 | 5,331
8 | 0.35 | 4 | 0.7 | 443 | 17.51 | 2,087 | 82.49 | 2,530
9 | 0.35 | 5 | 0.7 | 237 | 17.52 | 1,116 | 82.48 | 1,353
Note: all test scenarios used a 0.7 sample value; the training sample used to generate the recommendations therefore contained 70% of the evolution of the projects' schedule (life cycle).
</ns0:body> "
"Reviewer 1 (Anonymous) Experimental design Reviewer Comment: 2. The RESTFull API Application provides a communication channel between the Hybrid Application and the Context Storage application has been implemented using right approach, however, more consistency in analysis is needed Authors: We would like to thank you for your review and suggestions. We registered in this letter only the comments that require answers and/or changes in the text to enhance the readability. We have included this analysis as a suggestion for future studies in the last paragraph in the Conclusion section. Reviewer Comment: 3. Learning phase steps to be explained in better way Authors: We have added more details of this phase in the third paragraph from the Evaluation of Recommendations Through Analysis of Context Histories subsection. We have also added the reference to Figure 4 that shows this step. Validity of the findings Reviewer Comment: the data used and provided is consistent, however, more comparative study is needed Authors: We have included a complement in the last paragraph in the Conclusion section, suggesting this comparative analysis for future studies. Additional comments Reviewer Comment: need to have relook at comparative study Authors: We suggested this comparative study as a future study in the last paragraph in the Conclusion section. Reviewer 2 (Anonymous) Basic reporting Reviewer Comment: # Literature work should be improved in the paper as it is very limited. Add one or Two table of comparison along with line diagram of existing work. Authors: Thank you for your comments and review. This letter contains only those comments that demand an answer and/or an update in the text in order to improve the readability. We enhanced the comparative table of related works in the Related Works section, including the environment in which the studies were observed and the type of analysis. Reviewer Comment: # The proposed methodology and design must be compared with existing work. Dedicated separate sub-section for this. Also, use block/line diagram for such proposed methodology. Authors: We expanded the comparison of related works, considering this suggestion. We have added new information in the Related Works section, citing the methodology and design of the analysis used by the related studies in Table 1. Validity of the findings Reviewer Comment: # Conclusion of this paper can be written in systematic way Authors: We rewrote the Conclusion section, following a systematic way to highlight the main conclusions and suggestions for future works. Reviewer 3 (Anonymous) Basic reporting Reviewer Comment: The figures in the paper are generally in good shape however, Figures 1 and 3 give a blurred look when the pdf of paper is zoomed in for better reading perhaps their quality can be improved. Authors: We appreciated your suggestions and comments. We answered in this letter only those comments that require a response and/or an update in the text. We have improved the quality of these figures to avoid get blurred. Reviewer Comment: The raw data has been shared and the interpretation of the approach and results are self contained. However, I could not open the links (https://github.com/robsonklima/nhatos api and https://github.com/robsonklima/nhatos front end) which is a requirement for this journal that a working software version should be available in an online repository. Authors: The problem when opening the links occurred because the underline character (“_”) is not considered when copying the link. 
The copy eliminates this character, which impacts access to the link. One can insert this character manually after copying the link to access the repositories normally. This issue occurs with both links that have this character. The other link does not require this insertion. Experimental design Reviewer Comment: The research proposed in this paper is definitely within the scope of the journal and the research questions identified are well defined, relevant and meaningful filling the gap of recommending requirements on the basis of context histories of projects. However, I have the following questions for the authors: 1) The basic project information was in Portuguese and it was translated to English using the Google Cloud Translation API. The accuracy of this API is 100%, how did the authors make up for any discrepancies in the translation and how potentially this would have impacted the results currently produced by Nhatos? Authors: The texts translated using the tool were technical, that is, they had a technical writing pattern. In an article published on Scholink (https://bit.ly/2YG4bc8) about the accuracy of the Google Cloud Translation API, it was estimated that the average hit percentage of this tool is 85%. We consider that the sum of these factors should not generate discrepancies in the final results. We included this explanation in the paragraph of line 373. Reviewer Comment: 2) On page 6 of the paper at line 234, the authors talk about the semantic similarity of requirements during the analysis of requirements. However, it will be great if they can shed some light on which semantic similarity measures for NLP have been used and why plus whether there was any syntactic analysis done prior to it or not? Authors: The semantic analysis is described starting at line 267, in the Similarity Analysis subsection. There is a reference to this explanation on page 7 to assist the reader in returning to this explanation if necessary. Validity of the findings Reviewer Comment: Since, I was unable to find and download from these two links (https://github.com/robsonklima/nhatos API and https://github.com/robsonklima/nhatos front end), therefore, it is difficult to comment on the replication and reproducibility of results. In the next version I would expect that the link is live and the files retrievable from these links. Authors: The problem when opening the repositories occurred because the underline character (“_”) is not considered in the copy/paste. The repositories can be accessed normally by inserting this character again after copying the links. Reviewer Comment: On page 14, Table 3, the data provide show acceptance percentages of five different projects none of which is above 68.7% the last row of the table shows the approval percent which is computed to be 75.1%. How is this approval percent computed and how does it go well above the accepted requirements percentage? Authors: Thank you very much for pointing out this inconsistency. We made a mistake when tabulating the results. We corrected and updated both the cited table and its references throughout the article. Reviewer Comment: Second last row of Table 3, shows an acceptance percent value of 61,6 which should be 61.6 Authors: Thanks for this comment. We have updated this value. Reviewer Comment: The conclusions are well stated and linked to research questions along with providing possible future directions. 
However, my final question to the authors is: how much would we save in terms of time by using the Nhatos model, specifically given the fact that it sometimes generates too many recommendations to be assessed by the team members, as mentioned on page 15, lines 529-30? Authors: In this scenario, Nhatos did not meet a requirement of ubiquitous applications, since these applications must be minimally intrusive. We therefore highlighted this information in the text (page 16, lines 533-536). Furthermore, we complemented the suggestions for future work, including the suggestion of an analysis of the time saved by using Nhatos."
Here is a paper. Please give your review comments after reading it.
298
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Integration of legacy and third-party software systems is almost mandatory for enterprises. This fact is based mainly on exchanging information with other entities (banks, suppliers, customers, partners, etc.). That is why it is necessary to guarantee the integrity of the data and keep these integration's up-to-date due to the different global business changes is facing today to reduce the risk in transactions and avoid losing information. This article presents a Systematic Mapping Study (SMS) about integrating software units at the component level. Systematic mapping is a methodology that has been widely used in medical research and has recently begun to be used in Software Engineering to classify and structure the research results that have been published to know the advances in a topic and identify research gaps. This work aims to organize the existing evidence in the current scientific literature on integrating software units for external and data loose coupling. This information can establish lines of research and work that must be addressed to improve the integration of low-level systems.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This article presents a Systematic Mapping Study (SMS) about integrating software units at the component level. Systematic mapping is a methodology that has been widely used in medical research and has recently begun to be used in Software Engineering (SE) to classify and structure the research results that have been published to know the advances in a topic and identify research gaps. This work organizes existing evidence in the current scientific literature on integrating software units for external and data loose coupling. This information can establish lines for future research and work that be addressed to improve the integration of low-level systems.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Enterprises are typically made up of hundreds of home-made (in-house development) applications, purchased from third parties, legacy systems, or a combination of all of them, operating in multiple layers on different operating systems. Currently, the integration of systems acquired from third parties and legacies has become a major concern in companies. As a result, most of the applications used in the enterprise are heterogeneous, autonomous, and operate in a distributed environment. Heterogeneity has been considered one of the most severe problems to solve because it tends to cause interoperability problems. Particularly, semantic conflicts, which occur when applications use different meanings for the same information item. The challenges are integration is not an easy job; the real challenges are made up of several business and technical issues <ns0:ref type='bibr' target='#b50'>Hohpe and Woolf (2004)</ns0:ref>.</ns0:p><ns0:p>In the context of SE, this area is known as Enterprise Application Integration (EAI) <ns0:ref type='bibr' target='#b52'>Irani et al. (2003)</ns0:ref>.</ns0:p><ns0:p>EAI deals with integrating a heterogeneous set of applications and systems in any organization to integrate them and communicate information between the systems facilitating interoperability. 
According to this, enterprise integration is achieved using different sets of integration tools, technologies, and methodologies to ensure that transformation, translation, and communication of information items are accomplished efficiently. Advances in integration technology, mainly concerning middleware, therefore provide new ways to design more agile and responsive business architectures.</ns0:p></ns0:div> <ns0:div><ns0:p>The integration of systems acquired from third parties is a real problem, mainly due to the lack of information exchange between entities such as banks, suppliers, and customers. Continual changes in the information systems environment have become the most important challenge for enterprises.</ns0:p><ns0:p>The applications to be integrated are usually developed by different teams that often do not treat integration as a relevant issue. EAI is proposed as a solution aimed at eliminating these integration challenges. Faced with this situation, the need arises for new EAI architectures, particularly architectures that improve loose coupling when integrating software units.</ns0:p><ns0:p>There is hardly any research work on EAI in the scientific literature beyond studies such as <ns0:ref type='bibr' target='#b81'>Soomro and Awan (2012)</ns0:ref> and <ns0:ref type='bibr' target='#b44'>Gorkhali and Xu (2016)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b81'>Soomro and Awan (2012)</ns0:ref> reviewed industrial challenges and problems only in a general form, so the limitations and research lines they identify are not deep enough. In addition, technological platforms in the EAI field evolve rapidly; since that article was published more than nine years ago, its findings need to be updated to the current state of the art. In contrast, the authors of <ns0:ref type='bibr' target='#b44'>Gorkhali and Xu (2016)</ns0:ref> performed a systematic literature review focused on categorizing EAI by industry. Regrettably, the primary studies were limited to those published in the Science Citation Index (SCI) and Social Science Citation Index (SSCI) databases. Therefore, that study does not include publications from other recognized sources such as Elsevier, PeerJ, IEEE, or Scopus. There is an SMS described by <ns0:ref type='bibr' target='#b22'>Banaeianjahromi and Smolander (2014)</ns0:ref> that surveys and analyses the available literature on the role of enterprise architecture in enterprise integration and also identifies gaps and the state of the art in research. That study covers a very limited area, since it searches exclusively for methodologies, and trends change over time. Other studies focus on a particular integration technique, such as the proposal from <ns0:ref type='bibr' target='#b27'>Cerqueira et al. (2016)</ns0:ref>, an SMS that investigated the use of ontologies to deal with semantics in integration at the process layer level. In this regard, the work from <ns0:ref type='bibr' target='#b39'>Fusco and Aversano (2020)</ns0:ref> describes an ontology-based approach for the semantic integration of heterogeneous data sources named DIF (Data Integration Framework); this work does not belong strictly to the EAI field itself.
These papers allow us to go deeper into related topics but specifically none of them has addressed the specific issue regarding to resolve the external and data loose coupling for the integration at software units level. Therefore, a review is needed since it is conducted based on a scientific search strategy such as SMS. To the best of our knowledge, no systematic mapping study has been conducted in this particular topic.</ns0:p><ns0:p>The goal of this study is to provide the state-of-the-art as well as a map of existing literature in this area. Furthermore, its evolution over time is shown to enable improvement of the practice with the known research results and to identify gaps for future research. For this purpose, this review aims to present a comprehensive summary of the studies in this field during the span of 2008-2021.The contributions of this review are:</ns0:p><ns0:p>&#8226; This SMS includes quality literature from pre-defined resources and based on pre-defined inclusion/exclusion criteria. Therefore, out of the 3178 full-text articles studied, 39 articles were included.</ns0:p><ns0:p>&#8226; The proposals found in the primary studies for loose coupling software unit integration are based in an environment conformed by Service-Oriented Architecture (SOA), Web Services, and Microservices. Usually there are implemented by a pre-defined data types structure at design time, leaving an immovable structure at runtime.</ns0:p><ns0:p>&#8226; A comprehensive discussion on the existing proposals and the research gaps in this area. Furthermore, some suggestions for new research directions are suggested.</ns0:p><ns0:p>The intended audience of this research work refers but not limited to software engineers, information technology managers, business integration practitioners, researchers related to the area of EAI at software unit level as well as novice student researchers. This article is organized as follows: in Section 2 the relevant concepts to the context of this work are presented. Section 3 introduces the research and conduction protocol applied. In Section 4, are presented the mapping results. Section 5 introduces a discussion of the findings arising from research. Finally, Section 6 presents the final considerations and future work. Model-Driven Architecture. These concepts are used thorough this SMS for the sake of understandability and completeness.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Enterprise Application Integration</ns0:head><ns0:p>Enterprise Application Integration (EAI) is a discipline that dates to the beginnings of software engineering and is mainly responsible for software systems interacting with each other without any problem, understanding that this interaction refers mainly to the exchange of information between systems. Basically, it must allow software systems to be able to share data and functions between them, allowing the connection between heterogeneous data sources and applications. All this must be achieved thanks to the implementation of a middleware between the two. There is no limitation between the type of software systems that must share information. These can be open source, in-house developments, or commercial license software systems. The problem in EAI lies mainly in the fact that originally the software systems were not designed to interact with each other or together, which implies a series of situations to be solved to achieve that communication. 
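To make the kind of mediation described above concrete, the following is a minimal, illustrative Java sketch (not taken from any primary study; the record layout, field names, and class name are hypothetical) of a middleware-style translator that converts a legacy, pipe-delimited order record into the JSON representation expected by a partner system.

// Illustrative only: a tiny middleware-style translator between two
// heterogeneous representations of the same business item (an order).
// The legacy pipe-delimited layout and the partner JSON fields are
// hypothetical, not taken from any system surveyed in this SMS.
public final class OrderTranslator {

    // Example legacy record: "ORD|2021-10-07|4711|199.90"
    public static String toPartnerJson(String legacyRecord) {
        String[] fields = legacyRecord.split("\\|");
        String date = fields[1];
        String orderId = fields[2];
        String amount = fields[3];
        // Target format assumed to be agreed with the partner system.
        return String.format(
                "{\"orderId\":\"%s\",\"date\":\"%s\",\"amount\":%s}",
                orderId, date, amount);
    }

    public static void main(String[] args) {
        System.out.println(toPartnerJson("ORD|2021-10-07|4711|199.90"));
        // Prints: {"orderId":"4711","date":"2021-10-07","amount":199.90}
    }
}

In a real EAI setting this translation is performed by the middleware, outside both applications, so that neither side needs to know the other's internal format.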
The scope of the EAI is located mainly in the integration of software systems in business-to-business environment (B2B) <ns0:ref type='bibr' target='#b87'>( Wong (2009)</ns0:ref>). Enterprise application integration can realize an effective combination of various independent systems, as data exchange and data sharing between all processes of an enterprise. Thus it is ensured for all units of a corporation to operate over a database system together with the suppliers and customers to improve enterprises' productivity and efficiency, <ns0:ref type='bibr' target='#b89'>Zhigang and Huiping (2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Middleware</ns0:head><ns0:p>A middleware is software located between an operating system and the applications that run on it.</ns0:p><ns0:p>Middleware enables communication and data interchange in distributed software systems. The term middleware first appeared in a 1968 NATO (North Atlantic Alliance) conference report, which aimed to define the field of software engineering and included software design, production, and distribution.</ns0:p><ns0:p>The goal of that report was the interconnectivity between software systems, particularly those considered older can be connected with the new software in organizations. Using middleware allows users to make requests such as submitting forms in a web browser or allowing a web server to return dynamic web pages based on a user's profile.</ns0:p><ns0:p>Examples of middleware can be found in database, application server and message-oriented middleware. Each of these programs generally provide messaging services and the different applications communicate through messaging frameworks. There are several messaging frameworks: Simple Object Access Protocol (SOAP), Web Services, Representational State Transfer (REST) and JavaScript Object Notation (JSON), those one is explained in this section for a better comprehension of this research. The decision of which one to use depends on the requirements of the enterprise: service to be used or type of information to be communicated.</ns0:p><ns0:p>The middleware can also be used for distributed processing with actions that occur in real time simplifying the development and maintenance of complex distributed software <ns0:ref type='bibr' target='#b20'>Astley et al. (2001)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Software Units</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b29'>Chen Hong and Guo Wen-yue (2010)</ns0:ref>, a software unit is a modular component of a program with well-defined interfaces and dependencies that enable offering or requesting a set of services or functions. It can even be a piece that performs some task, a function, a method, a class, a library (library), an application, a component, among others <ns0:ref type='bibr'>Chen (2009)</ns0:ref>. These often interact with data collections to save, update, delete, and present information.</ns0:p><ns0:p>The elements defining a software component has been widely discussed for more than twenty years (see <ns0:ref type='bibr' target='#b25'>Broy et al. (1998)</ns0:ref>). The most commonly adopted definition of a software component is that issued in <ns0:ref type='bibr' target='#b82'>Szyperski et al. (2002)</ns0:ref>, where it was defined as a unit of composition with contractually specified interfaces and explicit context dependencies only. 
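As a small, illustrative Java sketch of this definition (the interface and class names are hypothetical), a software unit can expose a contractually specified interface while declaring its context dependencies explicitly, for example through its constructor:

// Illustrative sketch of a software unit in the sense described above:
// a contractually specified interface plus explicit context dependencies.
// (In practice each public type would live in its own source file.)

// The contract that other units compile and integrate against.
public interface PaymentGateway {
    String charge(String accountId, long amountInCents); // returns a receipt id
}

// A unit whose only context dependency (the gateway) is declared explicitly,
// so it can be composed, replaced, or tested without widespread changes.
public final class CheckoutService {
    private final PaymentGateway gateway;

    public CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public String checkout(String accountId, long totalInCents) {
        return gateway.charge(accountId, totalInCents);
    }
}

Units of this kind are the composition units that the definition above refers to.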
This can be deployed independently and is subject to the design by third parties.</ns0:p></ns0:div> <ns0:div><ns0:head>3/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Such architecture differs based on levels of integration with the component database systems and the extent of services offered by the federation <ns0:ref type='bibr' target='#b68'>Mu&#241;oz and Jos&#233; (2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.9'>Belief-Desire-Intention Architecture</ns0:head><ns0:p>This architecture, abbreviated as BDI is a reasoning model based on mental constructs used by intelligent agents. It allows the modeling of agents behaviors in an intuitive manner that complements the human intellect. BDI is based on the human reasoning pattern, known as practical reasoning. First, decide what to achieve (deliberation) and then how to do it (reasoning). The agent using this model intends to show a legitimate reasoning to achieve his goals by using his beliefs about the environment. <ns0:ref type='bibr' target='#b75'>Puica and Florea (2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.10'>Hub-and-Spoke</ns0:head><ns0:p>Hub-and-Spoke is an architecture applied as middleware, which uses a central message broker. In this architecture, communication is made between each application (spoke) and the central hub. The broker functionalities include routing and message transformation to the receiver spoke. Hub-and-Spoke aditionally can routing based on content, using information from the message header or some elements of the message body. The hub from the message content can determine the receiver spokes, through rules. <ns0:ref type='bibr' target='#b18'>An et al. (2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.11'>Apache Camel</ns0:head><ns0:p>Apache Camel is an open source Java framework that aims to make software integration easy and accessible, it is used as middleware. Implement EAI business integration patterns using an API to configure routing and mediation rules. It was developed by the Apache Software Foundation and acts as a tool for rule-based data routing and processing. In addition, it has connectivity with a wide variety of transport protocols and supports DSL (Domain Specific Language) to facilitate its implementation by defining classes with the concepts of the domain. Its architect is divided into three modules, the first one is the integration and routing module, in which the processes and components are connected through messages based on criteria defined by the user, these can be defined in Java, Scala, XML or Groovy. The second is the process module is used to manage and mediate messages between endpoints. Business Integration Patterns are implemented in this module. Finally, the third one is the component module provides an interface to communicate with the external world through endpoints that are specified as URI (Uniform Resource Identifier). <ns0:ref type='bibr' target='#b45'>Gosewehr et al. (2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.12'>Message-Oriented Middleware</ns0:head><ns0:p>Message-oriented Middleware (MOM) is a concept that involves the passing of data between applications using a communication channel that carries autonomous units of information called messages. Basically, it is a software infrastructure that supports the sending and receiving of messages between the information systems of a company. 
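To ground the routing behaviour described in the Hub-and-Spoke and Apache Camel subsections above, the following is a minimal, illustrative sketch of a content-based route in Camel's Java DSL (it assumes only camel-core on the classpath; the endpoint and header names are hypothetical):

import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Illustrative content-based routing: the hub receives every message on one
// entry endpoint and decides, from a header, which "spoke" should receive it.
public class OrderRoutingExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:orders")                          // central entry point (the hub)
                    .choice()
                        .when(header("orderType").isEqualTo("invoice"))
                            .to("log:billing")                 // spoke 1
                        .when(header("orderType").isEqualTo("shipment"))
                            .to("log:logistics")               // spoke 2
                        .otherwise()
                            .to("log:manual-review");          // fallback spoke
            }
        });
        context.start();

        ProducerTemplate template = context.createProducerTemplate();
        template.sendBodyAndHeader("direct:orders", "order 1", "orderType", "invoice");
        template.sendBodyAndHeader("direct:orders", "order 2", "orderType", "shipment");

        context.stop();
    }
}

In a broker-backed deployment, the direct: and log: endpoints above would typically be replaced by the queues or topics of the message-oriented middleware discussed in this subsection.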
In a MOM-based communication environment, messages are normally sent and received asynchronously. Through message-based communications, applications are abstractly decoupled; senders and receivers never know each other. Instead, they send and receive messages to and from the messaging system. To achieve this, it is necessary to process the messages in a controlled way in an environment with a client / server architecture. The processing is carried out by means of a program that works as an intermediary between the messages, which is designed to manage several messages from different clients and once forward them to the corresponding server program. The middleware builds a communications blanket that avoids developers from dealing with different operating systems and network protocols. The middleware creates a communications layer that isolates developers from the complexity of different operating systems and network protocols. This middleware is commonly used in scenarios where problems related to interoperability can occur if the network is constantly changing.</ns0:p><ns0:p>MOM is commonly used in IoT (Internet of Things) applications as centralized message brokers facilitate device-to-device communication. This performance is achieved because MOM has special capabilities such as dynamic scaling, secure communication, and facilitates its integration with other tools. Additionally, this architecture provides several features such as i) asynchronous and synchronous messages transmission; ii) the ability to convert the data format according to the data contained in the messages to be compatible with the application who will receive it); iii) loose coupling among applications; iv) parallel processing of messages; v) management of message preference levels <ns0:ref type='bibr' target='#b17'>Albano et al. (2015)</ns0:ref>.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b88'>Yongguo et al. (2019)</ns0:ref> Manuscript to be reviewed Computer Science multi-point interaction and loosely coupling among members is accepted as the most promising solution for communication-interaction between different systems.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.13'>Service Component Architecture</ns0:head><ns0:p>Service Component Architecture (SCA) is a software technology designed to provide a model for applications that follow service-oriented architecture principles. The main concern of this architecture is to provide an open specification allowing multiple vendors to implement support for SCA in their development tools and runtimes. This is why it offers specific support for various component implementation and interface types such as Web Services Description Language (WSDL) interfaces and Java classes with corresponding interfaces <ns0:ref type='bibr' target='#b38'>Fiadeiro et al. (2006)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.14'>Grid Computing</ns0:head><ns0:p>Grid computing is a computer system that coordinates different computers with a hardware and software infrastructure to solve large-scale problems. Generally, a grid is responsible for performing several tasks within a network, however, it can also work in specialized applications. The term used to define grid computing originates from an analogy with the electric power grid: we can connect to the grid to obtain computing power without worrying about where it comes from. Just like we do when we plug in an electrical device <ns0:ref type='bibr' target='#b53'>Jacob et al. 
(2005)</ns0:ref>.</ns0:p><ns0:p>Grid computing is designed to solve problems that are too big for a supercomputer while maintaining the ability to process many small problems. Basically, it is based on virtualization between technologies, platforms and organizations, that is, a distributed computing infrastructure that is evolving in support of the application between organizations and the sharing of resources through the use of open standards.</ns0:p><ns0:p>Within the grid computing hardware and software infrastructure there is a variety of resources, such as programming languages, either on a network or through the use of open standards with specific guidelines to achieve a common goal. Grid computing operations are divided into two:</ns0:p><ns0:p>1. Data Grid: This is a set of services that provides individuals or groups of users with the ability to access, modify and transfer large amounts of geographically distributed data for research purposes.</ns0:p><ns0:p>2. CPU Scavenging Grid: Is a technique that uses instruction cycles in computers to avoid wasted during the time the device waits for input from the user or other slower devices.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.15'>Microservices REST</ns0:head><ns0:p>The microservice architecture emerged as a new paradigm for programming applications employing the composition of small services, each running its processes and communicating via lightweight mechanisms. The term microservices was first introduced in 2011 at an architectural workshop to describe the participants' common ideas in software architecture patterns; it is a fresh concept in software architecture, highlighting the design and development of deeply maintainable and scalable software. Microservices manage growing complexity by functionally decomposing large systems into a set of independent services, <ns0:ref type='bibr' target='#b37'>Dragoni et al. (2017b)</ns0:ref>. A Microservice is a small, single service offered by a company. It derives from the distributed computing architecture that connects many small services rather than having one large service. The Microservice can then be delivered through a Representational State Transfer (REST) API.</ns0:p><ns0:p>REST is a software architectural style that defines the set of rules to be used for creating web services. It allows requesting systems to access and manipulate web resources by using a uniform and predefined set of rules. Interaction in REST-based systems happens through the Internet's HTTP <ns0:ref type='bibr' target='#b85'>Webber et al. (2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.16'>Service-Oriented Architecture (SOA)</ns0:head><ns0:p>The formal definition of the term Service-Oriented Architecture (SOA) was given by the SOA Working Group which is member of The Open Group. It remarks that SOA is an architectural style that created for a special form of thinking in terms of services and service-based development and the outcomes of services called service orientation. In this regard, an architectural style is a set of design decisions that can be applied to a recurring design problem and that can be parameterized for different contexts where that design problem appears and a service is considered a logical representation of a repetitive operation from the business logic, e. g., check order, review expired products. One of the main characteristics of a service is that is self-contained but can be constituted by several services. 
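As a minimal, illustrative sketch of the microservice-over-REST interaction described in the previous subsection (the resource path, port, and JSON payload are hypothetical; only the JDK's built-in HTTP server is used):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Illustrative only: a self-contained "microservice" exposing one REST
// resource over HTTP and answering with a JSON document.
public class PriceService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/prices/ACME", exchange -> {
            byte[] body = "{\"symbol\":\"ACME\",\"price\":34.5}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        // Another software unit would now call:
        //   GET http://localhost:8080/prices/ACME
        // and parse the JSON response, sharing nothing beyond the URI and
        // the agreed message format.
    }
}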
The SOA architectural style has a set of special features that must be applied such as i) it is based on the commercial activities of the company,</ns0:p></ns0:div> <ns0:div><ns0:head>6/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>those used in the real world, that is, it reflects the business processes of the company with the client or with other companies, ii) services are named and represented as business processes or company rules that represent a description of the company. Services are implemented through service orchestration, and iii) it must be implemented through standards that allow maintaining interoperability of services <ns0:ref type='bibr' target='#b47'>Group (2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.17'>Model-Driven Architecture</ns0:head><ns0:p>Model-Driven Architecture (MDA) 1 standard is an approach to software design, development and implementation supported by the OMG (Object Management Group). MDA provides guidelines for structuring software specifications that are expressed as models.</ns0:p><ns0:p>MDA is a three-layer architecture where in the first one, the Computational Independent Model (CIM) specifies the project requirements and through a series of model-to-model transformations (M2M) the models of the second level of the three-layer architecture are obtained. These are the Platform Independent Models (PIM), which lack specifications on the implementation technology, normally are represented using class diagrams. Finally, the PIMs become Platform Specific Models (PSM), which are obtained through M2M transformations. PSM models are converted to source code as specified in the tool that implements it. The difference among PIM and PSM models is that PSM models are represented using the platform implementation technology, e. g., the programming language and data-base management system to use <ns0:ref type='bibr' target='#b15'>Aguilar et al. (2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>SURVEY METHODOLOGY</ns0:head><ns0:p>To know different proposals to improve the loose coupling in the integration of software units around this topic, a Systematic Mapping Study (SMS) was conducted. SMS is a methodology widely used in research in the medical area. Recently, it has begun to be applied to the field of SE to classify and structure the research results published. To learn about advances in a topic and identify gaps in research, there is also the methodology known as Systematic Literature Review (SLR). It seeks to identify best practices (based on empirical evidence) by conducting an in-depth exploration of the studies, describing their methods and results.</ns0:p><ns0:p>The difference from SMS is that it seeks to provide a more general vision, and SLR is focused on gathering and synthesizing evidence <ns0:ref type='bibr' target='#b74'>Petersen et al. (2008)</ns0:ref>. The main goal of SMS is to provide an overview of the research area and identify the amount and type of research and the available results. It is also essential to map published frequencies over time to understand trends and identify forums where research in the area has been presented <ns0:ref type='bibr' target='#b58'>Kitchenham et al. (2010)</ns0:ref>.</ns0:p><ns0:p>In this work, the methodology for SLR described in <ns0:ref type='bibr' target='#b74'>Petersen et al. 
(2008)</ns0:ref> was used. However, the adaptation proposed in <ns0:ref type='bibr' target='#b58'>Kitchenham et al. (2010)</ns0:ref> is applied to adjust to the SMS.</ns0:p><ns0:p>The review procedure to follow is composed of five stages: 1) Definition of research questions, 2) Search for primary studies, 3) Selection of documents for inclusion and exclusion, 4) Classification schema, and 5) Data extraction and systematic mapping. Each stage is detailed next concerning how it was carried out for this research.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Definition of research questions</ns0:head><ns0:p>To bring out the SMS, a total of three research questions (RQs) were designed. These questions allow us to categorize and summarize the current knowledge concerning loose coupling of software units within Enterprise Application Integration. The goal is to identify gaps in current research to suggest areas for further investigation and to provide useful knowledge for software architects practitioners. The RQs are described next. RQ-1. Which are the main proposals to improve the loose coupling in the integration of software units?</ns0:p><ns0:p>It is intended to know the technological contribution that the primary studies make in the challenge of software units integration. The contributions would be, i.e., architecture proposal, framework, architectural pattern, a tool, etc. Manuscript to be reviewed Computer Science RQ-3. Which technology or frameworks have been developed for loose coupling in software unit integration? This question includes libraries, and its goal is to determine whether there is a lack of tools to assist developers in the loose coupling of software unit integrations.</ns0:p><ns0:p>The scope of the review was defined as recommended by <ns0:ref type='bibr' target='#b57'>Kitchenham (2007)</ns0:ref> as follows: Population: researchers, professionals, and entrepreneurs who should improve the integration of software units.</ns0:p><ns0:p>Intervention: any study that contains the description of a software unit integration solution at the software unit (component) level. Study design: applications in industry or academic examples. Result: evolution over time of the use of software unit integration technologies at the component level.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Search of primary studies</ns0:head><ns0:p>Primary studies were identified using search strings in scientific databases. The Springer, IEEE, CON-RICYT 2 , Google Scholar, arXiv, and DOAJ databases were used to select the primary studies in this work. It is important to point out that CONRICYT is a resource supported by the Mexican government that allows researchers to search for scientific articles in databases/editorials such as Web of Science and Elsevier, among others.</ns0:p><ns0:p>To search for the scientific production associated with the concept of Loose Coupling and Enterprise Application Integration, the keywords were defined to construct the search strings to be consulted. To do this, keywording was performed <ns0:ref type='bibr' target='#b74'>Petersen et al. (2008)</ns0:ref>. In the first place, the main concepts of the research were identified such as keywords. Then, similar terms (synonyms) or phrases that might also be used to describe these concepts were defined. Next, a thesaurus was consulted to find synonyms. After that, the search terms were combined using Boolean operators. 
It is essential to mention that a recommended manner to create a search string is structuring them in terms of population, intervention, comparison, and result <ns0:ref type='bibr' target='#b57'>Kitchenham (2007)</ns0:ref>.</ns0:p><ns0:p>The selected search period is from the years 2008 to 2021. The restriction with respect to the time period is to achieve a focused approach, the search was narrowed down to published journal articles from 2008 to 2021. Moreover, since the field of EAI and, especially at the data level integration, is relatively old and it has been during the last decade where these ideas have been most developed with the emergence of web services and cloud computing. Likewise, this period of time has been validated during the development of the systematic mapping, since all the studies in this area are located within that period of time. In this sense, generic search expressions were considered considering the structure of each research question, the essential terms, and identified synonyms. The expressions were constructed using logical operators for searches (AND and OR). Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> shows how the search string was generated concerning the research questions.</ns0:p><ns0:p>Based on the defined search strings detailed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, a bibliometric analysis was performed using Vosviewer 3 software. To identify the tendencies of the literature on loose coupling in the integration of software units, an initial analysis of co-occurrences of keywords was conducted based on articles with at least five occurrences. The research resulted in six clusters (see Fig. <ns0:ref type='figure'>1</ns0:ref>) involving eighty-one keywords. The clusters are (1) loose coupling, (2) enterprise application integration, (3) management, (4) performance, (5) model and ( <ns0:ref type='formula'>6</ns0:ref>) impact.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Selection of documents for inclusion and exclusion</ns0:head><ns0:p>An initial step was completed to remove duplicates. Hence, it was structured in such a way that the name of the technology and all the references of the main study in which it was found were listed using a set of tags with which an initial verification of the information could be performed. In case of tags reporting unreliable information, the primary studies were revised again to solve inconsistencies.</ns0:p><ns0:p>Inclusion and exclusion criteria were established to determine the relevance of the selection process of primary studies.</ns0:p><ns0:p>The defined inclusion criteria consist of i) the search terms appearing at the working title or abstract considering the publication date from 2008 to 2021, ii) research articles written in English language, iii) articles with the full text available in the bibliographic source, and iv) articles with potential to answer some of the RQs from Section 3.1. Likewise, the abstract refers to the problem covered by the corresponding research question.</ns0:p></ns0:div> <ns0:div><ns0:head>8/26</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Most important terms Synonyms, terms and, topics</ns0:p></ns0:div> <ns0:div><ns0:head>Search expression</ns0:head><ns0:p>RQ-1 'Loose Coupling', 'Enterprise Application Integration', 'Software Integration Proposal', 'Coupling', 'Integration' 'EAI', 'Software Unit Integration'</ns0:p><ns0:p>Loose Coupling OR Enterprise Application Integration OR Software Integration Proposal OR (Coupling AND Integration) RQ-2 'Loose Coupling', 'Enterprise Application Integration', 'Software Integration Proposal', 'Coupling', 'Integration', 'Technological architectures' 'EAI', 'Software Unit Integration' (Loose Coupling AND Technological architectures) OR (Enterprise Application Integration AND Technological architectures) OR (Software Integration Proposal AND Technological architectures) OR ((Coupling) AND (Integration) AND (Technological architectures)) RQ-3 'Loose Coupling', 'Enterprise Application Integration', 'Software Integration Proposal', 'Coupling', 'Integration', 'Framework' 'EAI', 'Software Unit Integration', 'Library' Concerning the exclusion criteria, it was decided to exclude publications written in non-English languages, panel discussions, presentation slides, and tutorials. All articles retrieved unrelated to the topic explored were also excluded, also considering duplicate documents of the same study. Articles that highlights the initial draft of work in progress type articles were excluded also.To assess the relevance of each primary study to our topic, an iterative procedure was followed: all primary studies were analyzed on the basis of their title, abstract, and their full text.</ns0:p><ns0:formula xml:id='formula_0'>(</ns0:formula><ns0:p>The selection process consists of three stages conducted sequentially by three reviewers (two researchers and one collaborator). In the first stage, each reviewer applied the inclusion and exclusion criteria for the title, abstract, and keywords from the articles found. In the next stage, each reviewer applied the same criteria to a set of articles assigned to him, which now includes the introduction and conclusion. Afterwards, a set of candidate articles (see the second row of Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>) were obtained. In the third stage, the candidate articles were analyzed. In this form, out of a total of 3095 articles, a total of 39 primary studies were selected for mapping (see the third row of </ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Classification Scheme</ns0:head><ns0:p>For the SMS, a systematic process was followed, which is shown in Businesses must be agile and flexible, and IT managers are being asked to deliver improved functionality while leveraging existing IT investment. Mostly of business organizations nowadays are using packaged software for their key business processes and goals (legacy business applications). Some of them are those who attend activities with respect to Enterprise Resource Planning (ERP), Supply</ns0:p><ns0:p>Chain Management (SCM), Customer Relationship Management (CRM), and Electronic Commerce (EC). 
These systems assist business organizations in supporting their operational and financial goals.</ns0:p><ns0:p>Bearing this considerations in mind, the classification scheme is divided in: Proposals, Architectures and Technologies/Libraries.</ns0:p><ns0:p>The first classification scheme (Proposals) refers to loose coupling software unit integration proposed to integrate these packaged software applications with each other regardless of its conformation. This scheme can include a framework, an architecture, a full computer package, or software abstractions that facilitate the development of loos coupling software unit integration, i.e. systems where elements can be easily added, removed, or replaced without needing widespread changes across the system. The second classification scheme refers to architectures that have been considered to improve loose coupling in software unit integration. The goal is to analyze the technological architectures proposed at loose level components until now, as well as the emerging architectural styles for developing and integrating enterprise applications. The third classification scheme refers to technologies and libraries that have been developed for loose coupling in software unit integration. In this concern, the organizational and technical framework to enable an enterprise to deliver self-describing and platform independent business functionality is considered.</ns0:p><ns0:p>The classification scheme used can be consulted at the cloud 4 , where it is possible to see the classification according to proposals, architectures and technologies.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Data Extraction and Systematic Mapping</ns0:head><ns0:p>Once defined the classification scheme in 3.4, the relevant articles were then classified in order to perform the data extraction. As shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, the classification scheme works while extracting data, such as adding new categories or merging and splitting existing them. In this step, a spreadsheet is used to document the data extraction process which contains each category in the classification scheme (proposals, architectures and technologies). When data is entered into the schema, a brief explanation was provided, detailing why the article should be in a particular category <ns0:ref type='bibr' target='#b52'>Irani et al. (2003)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>MAPPING RESULTS</ns0:head><ns0:p>This section presents and analyzes the results obtained after conducting the data extraction process from the primary studies. Several articles were found in the literature focused on different aspects of EAI. The selected studies provided relevant knowledge on the research questions. It is important to remark that the 39 primary studies found provide an answer to more than one research question. These are answered below:</ns0:p><ns0:p>RQ-1. Which are the main proposals to improve the loose coupling in the integration of software units?</ns0:p><ns0:p>The main proposals obtained from the publications were those based on SOA, Web Services and Microservices. These proposals implemented different forms to perform the loose coupling between the software units, using techniques based on asynchronous messages through middleware queues and topics. Some of them used to make this task standardized service contracts such as WSDL (Web Services Description Language) for Web Services and Microservices. 
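To illustrate what such a design-time contract looks like in code, the following is a hedged, code-first sketch using the standard JAX-WS annotations; the operation and parameter names mirror the classic stock-quote WSDL example rather than any primary study:

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Illustrative code-first service contract: the WSDL generated from this
// interface fixes the operation name, the message parts, and the XSD types
// at design time.
@WebService(name = "StockQuote")
public interface StockQuote {

    // The wire format of request and response is frozen here: changing the
    // return type (for example, from float to a richer structure) changes
    // the published contract and breaks every consumer generated from the
    // old WSDL.
    @WebMethod
    float getLastTradePrice(@WebParam(name = "tickerSymbol") String tickerSymbol);
}

Beyond the contract itself, the surveyed proposals combine such interfaces with the protocols and schema standards described next.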
This is done through implementations of protocols such as SOAP (Simple Object Access Protocol), HTTP, and REST, with the help of data schema standards such as XSD (XML Schema Definition). With XSD, the data travels in XML format (eXtensible Markup Language); in the case of REST, the format used is JSON. Another alternative is the construction and development of frameworks, constraints, and metadata models. The use of Federated Database Systems is another way to establish loose coupling at the data level, through the creation of data models in which information is exchanged based on predefined schemes called Canonical Data Models (CDM). Most recently, SOA implements an orchestration of Web Services and Microservices for loose coupling between software units. Table <ns0:ref type='table'>3</ns0:ref> summarizes the main proposals collected from the primary studies.</ns0:p><ns0:p>From the proposals found in the publications, 34.5% were implemented with SOA and 24.1% through</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Main proposals for the improvement of the loose coupling between software units.</ns0:p><ns0:p>RQ-2. Which technology architectures have been considered to improve loose coupling in software unit integrations?</ns0:p><ns0:p>From the review of the primary studies, it was found that a series of architectures were applied in the integration of software units to improve the loose coupling of these integrations. The architectures most often implemented for this purpose were SOA and Microservices REST. SOA is an orchestration of technologies supported by existing communication protocols such as SOAP and HTTP.</ns0:p><ns0:p>Microservices based on REST are the second most used architecture for this purpose. Table <ns0:ref type='table'>4</ns0:ref> presents each of these proposals found in the primary studies. As shown in Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>, the SOA architecture represents 50% of the implementations found in the primary studies, Microservices represent 15.6%, and the PUB/SUB architecture 6.3%; together these account for 72%. However, other architectures have also been used, such as (<ns0:ref type='formula'>2009</ns0:ref>) and Grid Computing <ns0:ref type='bibr' target='#b41'>García and Montoya (2011)</ns0:ref>, which are not widely adopted; these correspond to the remaining 28% of the primary studies.</ns0:p><ns0:p>In the case of the implemented architectures, Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> shows that the SOA architecture represents 50% of the implementations found in the primary studies, Microservices represent 17.6%, and Web Services 11.8%; together these account for approximately 80%. The remaining 20% corresponds to a series of diverse architectures such as PUB/SUB, Apache Camel, Federated Database Systems <ns0:ref type='bibr' target='#b68'>Muñoz and José (2009)</ns0:ref>, <ns0:ref type='bibr'>BDI Weyns and Georgeff (2009)</ns0:ref>, Intermediate Layer, and SCA <ns0:ref type='bibr' target='#b61'>Ma et al. (2009)</ns0:ref>.</ns0:p><ns0:p>RQ-3. Which technology or frameworks (including libraries) have been developed for loose coupling in software unit integration?
Several technologies were found in the primary studies to address loose coupling in software unit integration.</ns0:p><ns0:p>In total, 79% of the 39 primary studies provide a solution for loose coupling in software unit integration through these technologies. It is important to mention that, since these were selected from the primary studies reviewed, these technologies represent a large percentage of the results obtained in the review. Likewise, their application and publication in scientific articles highlight the fact that they are considered the main technologies used today for the integration of loosely coupled software units.</ns0:p><ns0:p>To provide a solution for loose coupling in software integrations, the scientific community made a prominent effort to solve this problem from 2008 to 2011. Regrettably, this effort began to decline in the period from 2012 to 2017. Nevertheless, it gained momentum again in 2018. Across the period from 2008 to 2021, SOA, Web Services, and Microservices were the technologies primarily implemented for this purpose, leaving aside technologies such as XML (eXtensible Markup Language). The analysis presented in Section 4 shows a trend towards the investigation of approaches and architectures to increase connectivity, since this is the most significant issue for EAI. The classification by year of publication concerning the different research questions is shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>.</ns0:p><ns0:p>Most of the proposals found in the primary studies for loosely coupled software unit integration are based on an environment formed by SOA, Web Services, and Microservices. In this environment, the nodes of the network make their resources available to other participants as independent services that can be accessed in a standardized way. Most of the definitions identify the use of Web Services with SOAP and WSDL in their implementation; however, the approach can be implemented using any service-based technology.</ns0:p><ns0:p>To clarify the limitations found in these most widely implemented proposals, a WSDL structure is introduced for analysis in Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>. It exemplifies how data types are defined at design time, leaving an immovable structure at runtime. In this concern, a WSDL document defines a set of services as collections of network endpoints or ports.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. Main architectures used to improve loose coupling in software units integration.</ns0:p><ns0:p>The abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This separation enables the reuse of abstract definitions such as messages and port types. Messages are structured so that they describe the data being exchanged, and ports are collections of operations to be performed. In that sense, a reusable binding is made up of a protocol and the data format specifications for a particular port type, and a collection of ports defines a service.</ns0:p><ns0:p>The definition of network services in the structure of a WSDL document is made up of a series of elements: Types, Message, Operation, Port Type, Binding, Port, and Service. These are detailed below:</ns0:p><ns0:p>1. Types: the data types used by the service, commonly defined in an XSD structure.</ns0:p><ns0:p>2. Message: the structure that contains the data exchanged in a connection.</ns0:p><ns0:p>3. Operation: describes the actions supported by the service.</ns0:p><ns0:p>4. Port Type: the set of operations supported by one or more endpoints.</ns0:p><ns0:p>5. Binding: specifies the protocol and data format for a particular port type.</ns0:p><ns0:p>6. Port: a combination of a binding and a network address; it is a defined endpoint.</ns0:p><ns0:p>7. Service: a collection of related endpoints.</ns0:p><ns0:p>These elements are divided into two parts: the Concrete part, which defines the 'how' and 'where', and the Abstract part, which defines what the service does through the messages it sends and receives.</ns0:p><ns0:p>Analyzing this structure reveals a deficiency that produces high external and data-level coupling in the integration of applications or software units, because the contract is created before being used. According to Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>, the element highlighted in yellow corresponds to the &lt;message&gt; node; this element generates the coupling issue because it is created under a predefined, fixed XSD structure before being used.</ns0:p><ns0:p>Analyzing the structure of the XSD shown in Figs. <ns0:ref type='figure' target='#fig_11'>7 and 8</ns0:ref>, if a change is considered in the highlighted &lt;TradePrice&gt; element, this contract would impact all the applications or software units integrated through this structure. This causes a tight coupling between them and, consequently, a high cost in maintenance and time. This is because, as Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> exemplifies, data types are defined at design time, leaving an immovable structure at runtime. In Fig. <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>, the label of the &lt;TradePrice&gt; element can be observed, where the tags highlighted in yellow in the code represent the data and the properties of the structure, namely &lt;price&gt; and its type &lt;float&gt;, marking the strict way in which the exchanged information must be received between software units. Therefore, if a change in the &lt;TradePrice&gt; element is considered, this contract would impact all the applications or software units integrated through this structure.</ns0:p><ns0:p>To clarify this, let us assume the following scenario: a Web Service or Microservice is consumed by a hundred clients, each of which uses the same service contract (or WSDL document) to perform an operation. If a change in a data type is then requested by some of those clients, it becomes necessary to create a new contract or WSDL document for each client that has changed. The same situation arises if a client wants to integrate a new data type and therefore needs to send one more data item. This is important, since it entails controlling and maintaining each contract created, which means that up to a hundred contracts must be created in order to keep the integration working. This would represent a high cost in resources such as time and effort, and an increase in the project budget.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>This SMS has brought several insights into the research trends in EAI at the software unit level. These insights are discussed in this section.</ns0:p><ns0:p>Throughout the years, the main problem of EAI has been the communication and exchange of data between heterogeneous systems. In recent years, particularly in the period covered by this systematic mapping study, new technologies have emerged for developing software systems. Moreover, some existing technologies have been used in combination to provide robust frameworks for application development that satisfy the needs of enterprises.</ns0:p><ns0:p>An issue that has been left behind over the years is that of considering an EAI project as an independent project with its own characteristics. No proposals were found in the primary studies for a step-by-step guide for EAI project definition and implementation. This is a critical issue to be studied, since a software development project with a distributed architecture optimized for data exchange is not the same as an EAI project. SE guidelines allow obtaining a product that satisfies the customer's needs in a software development project, but this is not the case in EAI projects: there is no generic methodological approach for enterprises to implement them. This kind of project is treated as one more component of a traditional software development process, which implies a delay in its completion. Therefore, it is not considered an independent project that must be defined within a methodological approach dedicated to EAI. In this sense, enterprises do not prioritize questions such as how to measure the value that an EAI project will bring to them in the near future and how it could help significantly reduce maintenance costs. When they decide on an EAI project, they lack ad hoc planning guided by a methodology for it.</ns0:p><ns0:p>Through the constant evolution of technological platforms for the development of software systems, many proposals for EAI have emerged in the scientific literature according to the primary studies found. Most of them are based on SOA and microservices (see Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>).</ns0:p><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, technology has evolved faster than the proposals needed to solve the problems that EAI has dragged along since its early years. Furthermore, the scene becomes complicated again as novel technologies for developing software systems appear and, as a result, new integration needs are born with them. In this regard, research has dedicated much attention to achieving simple data exchange among software units. Meanwhile, data privacy and security concerns have increased. Even though this area is outside the scope of this SMS, it is considered that research to date has overlooked the data security and privacy problems that emerged with new technological platforms, and thus they have not been adequately studied.</ns0:p><ns0:p>Over time, research in EAI has not provided a general framework for integration projects, whether at the level of data exchange, databases, or interfaces.
In this regard, some efforts have been made, integration proposals have been industrialized, and scientific articles have even been published presenting solutions.</ns0:p><ns0:p>The proposals help along with being universal for EAI because they are based on MDA (see section 2.15).</ns0:p><ns0:p>The advantage proposed by MDA approaches to solve the EAI problem is that the same models created can be converted to the source code of the user's preference. This provides an advantage to EAI projects because by reusing the models, it is possible to generate the exact solution for different technological platforms <ns0:ref type='bibr' target='#b16'>Alahmari et al. (2010)</ns0:ref> and to build help along with new integrations in models to obtain the source-code for the integration. The advantage of this idea regards to improve a vital deficiency that is the re-configuration of business systems and not only the integration. Regardless of the importance of an approach like MDA, there has been not enough research in this field, as have many other topics at EAI.</ns0:p><ns0:p>The scientific literature emphasizes that current EAI solutions face a heterogeneity problem. Therefore, EAI solutions lack a robust and consistent integration approach supported by a methodology designed for that labor. Particularly, dedicated to the integration of heterogeneous enterprise applications that consider Requirements Engineering (RE) activities <ns0:ref type='bibr' target='#b15'>Aguilar et al. (2010)</ns0:ref>. Which can be modeled according to the necessities of the integration bearing in mind important software quality attributes such as security and privacy. Even that permits the requirements to be explicitly modeled on-demand as planned integration can be improved in execution. Nonetheless, it is predictable that the research will be uniformly concentrated Manuscript to be reviewed</ns0:p><ns0:p>Computer Science through all activities of RE with emphasis within different research categories that provide supports to validated solutions such as experience reports, frameworks, and empirical studies that assist their decision-making. Nowadays, this is unaccomplished in EAI at the software unit level.</ns0:p><ns0:p>As mentioned above, enterprise applications are growing in number in different sectors of society.</ns0:p><ns0:p>Almost all the business processes they handle have their software systems. These systems are based on different platforms and recent technologies. For this reason, they include multiple sources that lack interoperability, so dynamic EAI solutions are needed to solve integration problems. Dynamic EAI solutions can be achieved using SOA web services and with recent technology such as REST microservices. An advantage of these new technologies is the ease of integration at a low level that improves data exchange among applications. This enables interoperability through a controlled flow of data.</ns0:p><ns0:p>As new technologies evolve, it is important that EAI solutions adapt to these new technologies, but it is mandatory to increase the research in this area because the evolution is so fast that the problems that exist today will continue to exist tomorrow along with others more formed as a result of: i) new implementation technologies, ii) the growing demand of the telecommunications market, and iii) a rapid change of the enterprise IT department as well as their technological environment. 
In addition, meanwhile, the main application of the EAI is the collaboration among systems of different enterprises (e. g., banks for payments in sales, shipping services, as well as product providers). They dedicate a considerable amount of their budget more than human resources to maintain the exchange of data between the enterprises with which they interact in their business processes operating efficiently. Therefore, by creating interfaces at the software unit level that can integrate their applications, reducing the resources dedicated to maintenance will be possible. According to primary studies, SOA, as described by various authors as an architecture that can help solve not only the integration problem but also optimize integration techniques.</ns0:p><ns0:p>Finally, today, the EAI faces two significant challenges: syntactic and semantic integrations among enterprise applications at the level of software or data units. One reason for this is because each department/area built its software systems, making interoperability difficult. Primary studies show that it is essential to have a coherent semantic integration approach due to analysis. For this, the proposals continue to use a declarative definition for the data types and formats of the field that will facilitate the exchange of information in the integration configuration.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>This work presents the results obtained after conducting an SMS. This research creates a synthesis of the current state of the art concerning loose coupling in software unit integrations in EAI. The goal is to provide scholars and practitioners with a comprehensive summary of recent research on this topic.</ns0:p><ns0:p>Unfortunately, the literature lacks studies that report research work summarizing recent trends in this area.</ns0:p><ns0:p>For this, a total of 3178 articles published in the literature were considered. These were extracted from scientific sources such as Springer, IEEE, CONRICYT, World Wide Web through Google Scholar, arXiv, and DOAJ. In the end, 39 primary studies (see Tables <ns0:ref type='table'>6 and 7</ns0:ref>) were analyzed in-depth because they fulfilled the research questions proposed according to our inclusion and exclusion criteria detailed in Section 3.3.</ns0:p><ns0:p>SMS results shows that, in 13 years period, from 2008 to 2021, the same architectures or architectural patterns and technologies are still being used. Among them are SOA, Web Services, and in recent years Microservices. However, the technology used is limited because they continue using a WSDL data structure or service contract. Therefore, the coupling at the data level becomes a tight coupling.</ns0:p><ns0:p>Additionally, this work remarks that EAI, particularly at loose coupling in software unit integration, has distinct requirements for e-commerce, banking, manufacturing, energy, and healthcare industries. Hence, to perform successful integration, various frameworks, architectures, and approaches are obligatorily needed. Thus, it is necessary to provide solutions within the architectures that offer us the goodness of decoupling the software units when they are integrated.</ns0:p><ns0:p>EAI at the software unit level is more than a technological trend. It is a form to consider structuring the information system to leverage the existing IT investments more effectively when developing new applications. 
Although the EAI has existed since the beginning of IS, it has constantly been evolving.</ns0:p><ns0:p>As a result, integration has become more important than development in creating new applications for enterprises to deal with this problem.</ns0:p></ns0:div> <ns0:div><ns0:head>19/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The paper concludes that, as is well-known, EAI is a technology that helps an enterprise to achieve integration to inter and external software systems for data exchange. However, the integration technology solutions are often brand-named, which present interoperability issues because vendors restrict access to the code level, the complexity of services, and connectivity issues. Nevertheless, the distributed environment of enterprise applications outcomes in a complicated integration system. Consequently, new methodologies, platforms, protocols, technologies, and frameworks are still necessary to accomplish an all-inclusive EAI. In this regard, the future of EAI is based on platform-independent based solutions such as MDA, but lack of standards, robust frameworks for model-to-model (M2M) and model-to-text (M2T) transformations, problems related to tool support lacking usability with a poor user experience.</ns0:p><ns0:p>This must be reviewed in terms of security and performance. Model-Driven technologies have been there</ns0:p><ns0:p>since the 2000 year, and there are still the same issues. Likewise, changing dynamics of the application development process and usage pose a further challenge to achieve the desired result from EAI. Some of the current research suggests that much of EAI research is concentrated on developing a framework for EAI that can be used in different applications domains such as e-commerce, healthcare, and enterprise resource management systems. Additionally, the research on EAI is insufficient. There is an urgent demand to conduct more in-depth and significant research in developing new and enhanced frameworks and methodologies for EAI for cloud computing and IoT because these are technologies that are growing worldwide.</ns0:p><ns0:p>Some directions that future research from this SMS are suggested next:</ns0:p><ns0:p>1. The growing demand of the telecommunications market requires new implementation technologies, especially the development of robust frameworks considering cloud computing because enterprises applications drive their systems to this technology.</ns0:p><ns0:p>2. The emergence of new technologies, particularly technological platforms, demands more research at the EAI focused on the heterogeneity problem. The migration of old systems to new platforms such as cloud computing requires methodological guides and trained human resources who can direct the migration and integration with new technologies within the company.</ns0:p><ns0:p>3. The lack of research in the Requirements Engineering area within a methodological approach to implement an EAI project from the scratch. The idea to consider an EAI project as a different software development project must be adopted in enterprises because both have their particular characteristics. 
However, most of the time, they are considered the same or part of the main development software project because EAI's development cost is higher than a traditional approach.</ns0:p><ns0:p>Furthermore, implementation takes more time and consumes more resources.</ns0:p><ns0:p>4. EAI is for data exchange among different technological platforms. Until now, microservices and SOAP are the most used technological platforms for that goal, according to Table <ns0:ref type='table'>3</ns0:ref>. It is well known that microservices are distributed above several data centers, cloud providers, and host servers.</ns0:p><ns0:p>Therefore, constructing an infrastructure through many cloud locations increases the probability of losing control and visibility of the application components. In this regard, data security and privacy issues must be of greater importance to researchers. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021) Manuscript to be reviewed Computer Science 2 ENTERPRISE APPLICATION INTEGRATION FUNDAMENTALS This section introduces fundamental concepts necessary for a correct understanding of the rest of the article. The concepts addressed are Enterprise Application Integration, Middleware, Software Unit, External Coupling, Data Model, Loose Coupling, PUB/SUB Architecture, Federated Database System, Belief-Desire-Intention Architecture, Hub-and-Spoke, Apache Camel, Message-Oriented Middleware, Service Component Architecture, Grid Computing, Microservices REST, Service-Oriented Architecture and</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>RQ-2. Which technology architectures have been considered to improve loose coupling in software unit integrations? The question tried to analyze the technological architectures proposed until now to improve loose coupling in software unit integrations. 1 https://www.omg.org/mda/ 7/26 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .Figure 1 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Figure 1. Network Visualization.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Building the Classification Scheme.</ns0:figDesc><ns0:graphic coords='12,141.73,63.81,413.22,186.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>-1 Proposals based on Quantity Reference SOA 10 Green (2013); Voican (2012); Cuadrado et al. (2008); Herrera Quintero et al. (2010); Devi et al. (2014); Kim (2009); Chen Hong and Guo Wen-yue (2010); Chen (2009); Coronado-Garc&#237;a et al. (2011); Deng et al. (2009); Ma et al. (2009)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>implemented such as PUB/SUB Green (2013); Antipov et al. (2016), Hub and Spoke Mohan et al. (2013), Camel Apache Cranefield and Ranathunga (2013), MOM (Message-Oriented Middleware) Guti&#233;rrez et al. (2013), Federated Database Systems Mu&#241;oz and Jos&#233; (2009), BDI (Belief Desire Intention) Weyns and Georgeff (2009), Intermediate Layer Lehsten et al. (2011), SCA (Service Component Architecture) Ma et al. (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
Percentage of distribution for the main proposals found in the primary studies.</ns0:figDesc><ns0:graphic coords='14,141.73,63.53,383.66,269.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Percentage by architecture found that has been implemented according to the primary studies obtained. From RQ-2.</ns0:figDesc><ns0:graphic coords='15,141.73,80.66,385.46,269.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Most implemented technologies. RQ-3.</ns0:figDesc><ns0:graphic coords='15,141.73,426.07,413.37,263.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Year of publication per research question.</ns0:figDesc><ns0:graphic coords='17,141.73,88.42,413.15,247.46' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. XML code fragment definition of &lt;message&gt; element in WSDL document for analysis purposes.</ns0:figDesc><ns0:graphic coords='17,141.73,414.74,302.50,255.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Fragment XML code definition of element &lt;xsd1: TradePrice&gt; in WSDL document for analysis purposes.</ns0:figDesc><ns0:graphic coords='19,141.73,63.78,326.48,212.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>5.</ns0:head><ns0:label /><ns0:figDesc>Consider to make more effort in Model-Driven based solutions. Considering MDA for framework development since their advantages are notorious and can be perfectly implemented in EAI ate software unit level. The solution for EAI can be generated in several programming languages just executing the M2M or M2T transformation over the same definition models. As future work, we propose an architecture or pattern using a Dynamic Data Canonical Model Mork et al. (2014) through the management of Agnostic Messages Celar et al. (2017). The messages will create a low external and data level coupling established in the service contracts that help integrate software units.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,63.90,326.79,283.35' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>, the advantages provided by MOM regarding asynchronous and</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62267:1:1:NEW 7 Oct 2021)</ns0:cell><ns0:cell>5/26</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Structuring search strings.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Loose Coupling AND Frame-</ns0:cell></ns0:row><ns0:row><ns0:cell>work) OR (Enterprise Applica-</ns0:cell></ns0:row><ns0:row><ns0:cell>tion Integration AND Frame-</ns0:cell></ns0:row><ns0:row><ns0:cell>work) OR (Software Integration</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposal AND Framework) OR</ns0:cell></ns0:row><ns0:row><ns0:cell>((Coupling) AND (Integration)</ns0:cell></ns0:row><ns0:row><ns0:cell>AND (Framework))</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>).</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>Search engine Springer IEEE CONRICYT Schoolar Google arXiv DOAJ Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Search results</ns0:cell><ns0:cell>593</ns0:cell><ns0:cell>889</ns0:cell><ns0:cell>319</ns0:cell><ns0:cell>900</ns0:cell><ns0:cell>293</ns0:cell><ns0:cell>184</ns0:cell><ns0:cell>3178</ns0:cell></ns0:row><ns0:row><ns0:cell>Candidates</ns0:cell><ns0:cell>583</ns0:cell><ns0:cell>864</ns0:cell><ns0:cell>306</ns0:cell><ns0:cell>881</ns0:cell><ns0:cell>278</ns0:cell><ns0:cell>171</ns0:cell><ns0:cell>2912</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Primary studies 4</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>39</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Search result and filtering divided by source.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>tion, most of them being applied to architectures already validated such as those based on architectural patterns or archetypes. These are SOA, SOAP, WSDL, XSLT, ESB (Enterprise Service Bus), XML, XSD,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>BPEL (Business Process Execution Language), JMS (Java Message Service). For this reason, another set</ns0:cell></ns0:row><ns0:row><ns0:cell>of technologies found for these purposes are the Microservices REST, and JSON. The primary studies also</ns0:cell></ns0:row><ns0:row><ns0:cell>reported the application of Web Services with WSDL, SOAP, XML and XSD for integration's. Further-</ns0:cell></ns0:row><ns0:row><ns0:cell>more, technologies such as PUB/SUB, JMS, Queue, Topics, MDB (Message Driver Bean) were used to</ns0:cell></ns0:row><ns0:row><ns0:cell>for software unit integration. 
Additionally, a set of technologies with fewer implementations with regard</ns0:cell></ns0:row></ns0:table><ns0:note>to software unit integration were found. These are Apache Camel, an open-source integration framework for data production or consumption, HL7 (Health Level Seven) <ns0:ref type='bibr' target='#b41'>García and Montoya (2011)</ns0:ref>, Dublin Core, CORBA (Common Object Request Broker Architecture), RMI (Java Remote Method Invocation), Canonical Model, ODBC/JDBC (Open Database Connectivity/Java Database Connectivity) Muñoz and José (2009), Self Adaptive, AI (Artificial Intelligence), KQML (Knowledge Query Manipulation Language), Neural Network Weyns and Georgeff (2009), and Service Component Architecture Ma et al. (2009). Table 5 summarizes the technologies extracted from the primary studies.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Main technologies used to improve loose coupling.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='2'>Consorcio Nacional de Recursos de Información Científica y Tecnológica of Mexico</ns0:note> <ns0:note place='foot' n='3'>https://www.vosviewer.com</ns0:note> <ns0:note place='foot' n='4'>https://figshare.com/s/a65784c26931b96570fb</ns0:note> </ns0:body> "
"In this document we summarize the changes produced after the review of our article based on the feedback provided by the anonymous reviewers. We respond in detail to their valuable comments and requests. RESPONSE TO REVIEWER 1 Basic reporting The paper presents a systematic mapping study (SMS) aimed at assessing the most recent (2008-2021) proposals, architectures, and technologies supporting the loosely coupled integration of software components, especially, in the field of Enterprise Application Integration (EAI). R. Firstly, thank you for your kind review. - The paper is clear, and well organised in structure, despite the English quality could be definitely improved but letting a native speaker read the paper: there are a number of recurrent typos. R. The corrections made directly in the document were already taken into consideration. However, by the recommendation of the Editor, a full English correction will be done when the paper finishes the major revision; it will be used the technical revision offered by PeerJ Journal. - A number of topics are surveyed in the paper, and for each of them, relevant entry point papers are referenced. However, the amount of information provided for each topic (e.g. in sec 2) is very concise, and barely sufficient for the reader which already has a basic understanding of the topic---therefore, I believe, more information should be provided for the readers which are not confident with those topics. R. The information in Section 2 was complemented in order to improve the compression of the readers. - Images and tables are of adequate quality, despite not all images are vectorial. Figures 7 and 8 are somewhat low quality. Maybe a listing would more adequate. R. The Figures 7 and 8 were enhanced in order to improve the reading. - I bet raw data could not be made available -- as it consists of the surveyed papers, I suppose -- yet, the authors share some intermediate data they have collected to produce the SMS, via a publicly available Google Sheet document. May I suggest sharing the data using a less volatile means? E.g. GitHub or Codeocean (maybe ask the editor what's the most adequate platform in their opinion). R. The platform selected to share the information (figshare) was recommended by the Editor. - Surveys of this sort are very important within the scopes of computer science and software engineering, and they cover a quite large amount of aspects of these disciplines. Furthermore, I'm not aware of any similar survey since the last decade, so I believe the contribution is useful and timely. R. Thank you for your kind comments. - The introduction of the paper is clear, despite I struggle in understanding what does the 'external' word -- which occurs into the title, abstract, and introduction -- refers to. R. Thank you for pointing this out. In order to clarify that word, in Section 2.4 the external coupling concept is introduced, line (146). Experimental design In my opinion, the study design could be improved in a couple of ways. - All the current research questions attempt to identify _which_ proposals/architectures/technologies are available into the literature to deal with loosely coupling and enterprise integration. For this reason, the results (sec. 4) are slightly more than an enumeration of keywords and references. This is for sure an extremely useful starting point, yet does not help the reader in understanding _how_ and to what extent the enumerated proposals/architectures/technologies favour loosely coupled integration. 
Maybe more queries and further analysis may be required to serve this purpose. R. Thank you for your comment, we have decided to make two major changes to the structure of the article. First, in Section 4, now entitled MAPPING RESULTS (line 449), we present the research results. Although the systematic mappings are not precisely aimed at identifying gaps in the study conducted, we have considered including them in a new section. In Section 5 DISCUSSION (line 579), we present some identified gaps that are not yet addressed by the scientific community. By deepening the analysis, the scientific community will use it as a starting point to give continuity to this important line of work. - Also, I am not sure the current study design could be capable of capturing a shift in the meaning of the terms used to select primary studies. In other words, it seems to me that the authors are tailoring their queries to some terms which were extensively used one decade ago. Therefore, it is unsurprising that they are able to find proposals/architectures/technologies from around that period. It seems to me that the current design of the study does a good job in understanding the evolution of technologies of EAI in the last decade, despite it may struggle in figuring out whether new proposals/architectures/technologies have joined the game in the meanwhile. R. Dear reviewer, be grateful for the comments. About that, the search terms were obtained through the technique known as keywording. The technique mentioned consists of obtaining terms from the research questions and improving the accuracy of the search in the different search engines. They obtain synonyms of these terms; perhaps that is why they seem terms that were used a long time ago to the reviewer. This technique is recommended to obtain better results in the data sources consulted. Validity of the findings - At its current state, section 5 (discussion) is essentially useless. It spends a few pages discussing the rigidity of SOA approaches due to syntactical aspects of WDSL and... that's basically it. I believe a more elaborate discussion is needed. An interesting discussion, in my view, should attempt to investigate the actual contribution of the selected proposals/architectures/technologies w.r.t. the problem of software integration (i.e. how they solve it), thus eliciting which problems can be currently considered solved and, possibly, which ones cannot. R. Based on the suggestions made concerning deepening the analysis, we have decided to make two changes to the structure of the article. First, in Section 4, now entitled MAPPING RESULTS (line 449), we present the research results. Although the systematic mappings are not precisely aimed at identifying gaps in the study conducted, we have considered including them in a new section. In Section 5 DISCUSSION (line 579), we present some identified gaps that are not yet addressed by the scientific community. By deepening the analysis, the scientific community will use it as a starting point to give continuity to this important line of work. - Also, I believe the author should elaborate their conclusions (sec 6) better than this. 
Currently, the section consists of 3 paragraphs: the first one summarises the methodology, the second one essentially states that the major issue of integration is WDSL (ok, but it sounds somewhat reductive to me), and the last one briefly summarises the importance of EAI, its reliance on proprietary technologies (which has never been discussed before in the paper) and some future works involving 'Dynamic Data Canonical Model' and 'Agnostic Messages'---both aspects are lacking references and have not been described before in the paper. R. As recommended, Section 6 (CONCLUSIONS, line 658) was improved to present a better approach to the research. Also, it is essential to highlight two points, 1) the reference to 'Dynamic Data Canonical Model' and 'Agnostic Messages' was included, and 2) the information about these topics are not extended due that this line of future research identified is a current research work. RESPONSE TO REVIEWER 2 Basic reporting The paper tackles an interesting problem related to how software units interconnect and focuses on the concept and current body of knowledge related to loose coupling. It performs a systematic mapping study to analyze the existing challenges in this domain. The paper generally reads well, and the topic is very living to the journal. It is also essential for the software engineering community to be up to date with all the challenges exposed by software developers and researchers. I have a few comments about the study design and results, which I will enumerate as I review the paper: R. Firstly, thank you for your kind review. Experimental design - I did not understand why the authors have associated their research questions with the period of 2008 all the way to 2019. R. The restriction concerning the period time to achieve a focused approach narrowed the search to published journal articles from 2008 to 2021. Moreover, since the field of EAI and, especially at the data level integration, is relatively old and it has been during the last decade where these ideas have been most developed with the emergence of web services and cloud computing. Likewise, this period has been validated during the development of the systematic mapping, since all the studies in this area are located within that period of time. - Also, it was not clear to me why the authors have chosen to focus and this period exactly, keeping in mind that most of the references that the authors have used mainly to introduce all the concepts that they are surveying were actually pretty old and, to say the least, are before 2008. So, the choice of the period needs to be justified. R. The articles cited in Section 2 of this work are publications, as the reviewer correctly indicates, from the year 2008 or earlier, this is because the concepts of software engineering that have been handled through the years until today remain current. In this sense, what has evolved is the technology for the implementation and integration of business applications, which in some cases, the new technologies are combinations of others that already exist both conceptually and architecturally, for example the use of XML in microservices today and WSDL (derived from XML) in SOAP. - I think the authors should carefully explain how they performed the classification of the papers going from the selection of the potential venues to explore, all the way to the choice of documents using inclusion and exclusion criteria. R. A better explanation of this topic is presented in Sections 3.2 and 3.3. 
- Yet, it was not clear to me how the authors have moved from a total of 3,095 articles all the way to 95 articles. If this has been done using the procedure outlined in figure number two, then this would be just along for manual analysis to what extent were able to do it without any buttons without missing any studies properly. R. As a SMS (Systematic Mapping Study) indicates, the goal is to provide the state-of-the-art and a map of existing literature in this area. This SMS includes quality literature from pre-defined resources and based on pre-defined inclusion/exclusion criteria. Therefore, out of the 3178 full-text articles studied, 39 articles were included. Each exclusion and inclusion rule used in this investigation is explained in Section 3.3 (line 383). - The authors have stated that the goal of the study is to identify gaps in current research to suggest areas for further investigation. However, the research questions outlined to fulfill this goal are most likely to be just exploratory and reporting numbers related to primary studies related to software unit integration and existing architectures and frameworks. Little is known about how exactly the Gap will be detected and how the authors have framed this Gap in terms of missing technology or with respect to the challenges related to software unit integration. In this context, I was expecting to see more limitations, errors, issues and problems related to each technology or framework and how existing studies are or not covering it. This is also reflected in the results and mainly the discussion section that likes deep analysis of the limitations of existing papers. All the reflections made in the discussion are rather generic and known. Even the example brought by the authors about the limitation of defining data types during design time then dealing with them at the runtime is already a known problem. Besides being already known, this limitation does not seem to be extracted directly from the studies but rather it is a reflection of the authors of what they think it is a problem in these papers. R. The goal of the SMS (Systematic Mapping Study) is to provide the state-of-the-art and a map of existing literature in this area. And in order to improve, we include Section 5 DISCUSSION (line 579) that presents a deep analysis of the results. Also, Section 6 CONCLUSIONS (line 658) clarified the investigation and presented some future research lines identifying. - Section 4 which is called analysis of results seems to be just reporting results without any analysis. I did not see anything deeper than just specifying papers, which can be done using a classifier with proper training. The manual validation of the authors would have proposed a better understanding of these papers and what are the main challenges they are trying to solve. That would be better to improve the paper. R. Based on the suggestions made concerning deepening the analysis, we have decided to make two changes to the structure of the article. First, in Section 4, now entitled MAPPING RESULTS (line 449), we present the research results. Although the systematic mappings are not precisely aimed at identifying gaps in the study conducted, we have considered including them in a new section. In Section 5 DISCUSSION (line 579), we present some identified gaps that are not yet addressed by the scientific community. By deepening the analysis, the scientific community will use it as a starting point to give continuity to this important line of work. 
Additional comments - The authors mentioned the use of an excel sheet for their PS. It would be strongly recommended to share it with the community for replication and extension purposes. R. The data is available in the platform figshare (https://figshare.com/s/a65784c26931b96570fb), The platform selected to share the information (figshare) was recommended by the Editor. "
Here is a paper. Please give your review comments after reading it.
299
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Support vector machine (SVM) is a robust machine learning method and widely used in classification. However, the traditional SVM training methods may reveal personal privacy when the training data contains sensitive information. In the training process of SVMs, working set selection is a vital step for the sequential minimal optimization-type decomposition methods. To avoid complex sensitivity analysis and the influence of highdimensional data on noise of the existing SVM classifiers with privacy protection, we proposed a new differentially private working set selection algorithm (DPWSS) in this paper, which utilized the exponential mechanism to privately select working sets. We theoretically proved that the proposed algorithm satisfied differential privacy. The extended experiments showed that the DPWSS algorithm achieved classification accuracy almost the same as the original non-privacy SVM under different parameters. The errors of optimized objective value between the two algorithms were nearly less than two, meanwhile, the DPWSS algorithm had a higher execution efficiency by comparing iterations on different datasets. To the best of our knowledge, DPWSS is the first private working set selection algorithm based on differential privacy.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>In recent years, with the rapid development of artificial intelligence, cloud computing, and big data technologies, data sharing and analysis are becoming easier and more practical. A large amount of individual information is stored in electronic databases, such as economic records, medical records, web search records, and social network data, which poses a great threat to personal privacy. Support vector machine (SVM) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>[2][3][4] <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> , as one of the most widely used and robust machine learning methods for classification, trains a classification model by solving an optimization problem and requires only as few as a dozen examples for training. However, when the training data sets contain sensitive information, directly releasing the SVM classification model may reveal personal privacy.</ns0:p><ns0:p>Generally speaking, training SVMs is to solve a large optimization problem of quadratic programming (QP).</ns0:p><ns0:p>Sequential minimal optimization (SMO) <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>[7] <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> is currently a commonly used decomposition method for training SVMs by solving the smallest QP optimization problem, and only needs two elements in every iteration. In all kinds of SMOtype decomposition methods, working set selection (WSS) is an important step. Different WSS algorithms determine the convergence efficiency of the SVM training process. Differential privacy (DP) <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref>[10] <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> was proposed by a series of work of Dwork et al. from 2006, is becoming an accepted standard for privacy protection in sensitive data analysis. DP ensures that adding or removing a single item does not affect the analysis outcome too much, and the privacy level is quantified by a privacy budget &#949;. DP is realized by introducing randomness or uncertainty. 
According to the difference of data types, it mainly includes Laplace mechanism, Gaussian mechanism, and exponential mechanism. Among them, the Laplace mechanism and Gaussian mechanism are mostly used for numerical data, while the exponential mechanism is used for non-numerical data.</ns0:p><ns0:p>In this paper, we studied the privacy leakage problem of the traditional SVM training methods. There were some shortcomings in the existing SVM classifiers with privacy protection, such as the low classification accuracy, the requirements on the differentiability of the objective function, the complex sensitivity analysis, and the influence of high-dimensional data on noise. We gave a solution by introducing randomness in the training process of SVMs to privately release the classification model. The main contributions in this paper concluded as follows:</ns0:p><ns0:p>&#61548; We proposed an improved WSS method for training SVMs and designed a simple scoring function for the exponential mechanism, in which the sensitivity was easy to analyze.</ns0:p><ns0:p>&#61548; We proposed a new differentially private working set selection algorithm (DPWSS) based on the exponential mechanism, which was achieved by privately selecting the working set in every iteration.</ns0:p><ns0:p>&#61548; To improve the utilization of the privacy budget, every violating pair was selected only once during the entire training process.</ns0:p><ns0:p>&#61548; We analyzed theoretically that the DPWSS algorithm satisfied the requirement of DP, and evaluated the classification accuracy, algorithm stability, and execution efficiency of the DPWSS algorithm versus the original non-privacy SVM algorithm through extended experiments.</ns0:p><ns0:p>The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 introduces the background knowledge of SVMs, WSS, and DP. Section 4 proposes a novel DPWSS algorithm, DPWSS. Section 5 gives the experimental evaluation of the performance of DPWSS. Lastly, Section 6 concludes the research work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related work</ns0:head><ns0:p>In this section, we briefly review some work related to privacy-preserving SVMs. Mangasarian et al. <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> considered the classification problem of sharing private data by separate agents and proposed using random kernels for horizontally partitioned data. Lin et al. <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref> pointed out an inherent privacy violation problem of support vectors. They proposed a privacy-preserving SVM classifier, PPSVC, which replaced the Gaussian kernel with an approximate decision function.</ns0:p><ns0:p>As DP is becoming an accepted standard for private data analysis, some SVM classification models based on DP produced in the recent two decades. Chaudhuri et al. proposed two popular perturbation-based techniques output perturbation and objective perturbation <ns0:ref type='bibr'>[14][15]</ns0:ref> . Output perturbation introduced randomness into the weight vector w after the optimization process, and the randomness scale was determined by the sensitivity of w. On the contrary, objective perturbation introduced randomness into the objective function before the optimization, and the randomness scale was independent of the sensitivity of w. However, the sensitivity of output perturbation and objective perturbation is difficult to analyze <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> . Rubinstein et al. 
<ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> proposed a private kernel SVM algorithm PrivateSVM for convex loss functions and translation-invariant kernels with Fourier transformation and output perturbation to release private SVM classification model. To alleviate too much noise, Li et al. <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> developed a hybrid private SVM model using a small portion of public data to calculate the Fourier transformation. Zhang et al. <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> constructed a novel private SVM classifier by dual variable perturbation, which added Laplace noise to the corresponding dual variables according to the ratio of errors.</ns0:p><ns0:p>Different from those kinds of perturbation-based techniques mentioned above, which introduced randomness into the output result or objective function, the DPWSS algorithm introduced randomness during the process of WSS. Therefore, it avoided complex sensitivity analysis and the influence of high-dimensional data on noise, meanwhile improved the performance of the classification model to some extent.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Preliminaries</ns0:head><ns0:p>In this section, we introduce some background knowledge of SVM, WSS, and DP. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the notations in the following sections. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>SVMs</ns0:head><ns0:p>The SVM is an efficient classification method in machine learning that originated from structural risk minimization <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> . It finds an optimal separating hyperplane with the maximal margin to train a classification model. Given training instances x i R n and labels y i {1,-1}, the main task for training a SVM is to solve the optimization &#61646; &#61646; problem of quadratic programming as follows <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> :</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_0'>&#61537; &#61537; &#61537; &#61537; &#61537; T T e Q f &#61485; &#61501; 2 1 ) ( min Subject to , l i C i ,..., 1 , 0 &#61501; &#61603; &#61603; &#61537; , 0 &#61501; &#61537; T y</ns0:formula><ns0:p>where Q is a symmetric matrix with Q ij =y i y j K(x i ,x j ), and K is the kernel function, e is a vector with all 1's, C is the upper bound of vector &#945;. Most optimization methods have difficulty in handling the large matrix Q, as the whole vector &#945; should be updated in every iterative step. However, the decomposition methods only update a subset of vector &#945; in every iteration, named the working set, and change from one iteration to another. SMO-type decomposition methods restrict the working set B with only two elements <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> .</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>WSS</ns0:head><ns0:p>WSS is a vital step in SMO-type decomposition methods. There are several common WSS algorithms, which originally derived from the optimality conditions of Karush-Kuhn-Tucker (KKT). The working set B should contain a pair of elements violating the KKT optimality conditions, called 'violating pair' <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> .</ns0:p><ns0:p>Definition 1 (Violating pair <ns0:ref type='bibr'>[7][20]</ns0:ref> ). 
Under the following restrictions: ,</ns0:p><ns0:formula xml:id='formula_1'>} 1 , 0 1 , | { ) ( &#61485; &#61501; &#61502; &#61501; &#61500; &#61626; t t t t up y or y C t I &#61537; &#61537; &#61537; . (<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>) } 1 , 0 1 , | { ) ( &#61501; &#61502; &#61485; &#61501; &#61500; &#61626; t t t t low y or y C t I &#61537; &#61537; &#61537; For the k th iteration, if ,<ns0:label>3</ns0:label></ns0:formula><ns0:p>, and , then {i, j} is a 'violating pair'.</ns0:p><ns0:formula xml:id='formula_3'>) ( k up I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; j k j i k i f y f y ) ( ) ( &#61537; &#61537; &#61649; &#61485; &#61502; &#61649; &#61485;</ns0:formula><ns0:p>Violating pairs are important in WSS. If working set B is a violating pair, the function value in SMO-type decomposition methods strictly decrease <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref> . Under the definition of violating pair, a natural choice of the working set B is the 'maximal violating pair', which most violates the KKT optimality condition.</ns0:p><ns0:p>WSS 1 (WSS via the 'maximal violating pair' <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>[20] <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> ). Under the same restrictions (2) and (3) in Definition 1,</ns0:p><ns0:formula xml:id='formula_4'>1. Select , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; , (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>) )} ( | ) ( { min arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; or . (<ns0:label>6</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>) )} ( | ) ( { max arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61646; 2. Return B = {i, j}.</ns0:formula><ns0:p>Keerthi et al. <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> first proposed the maximal violating pair, which has become a popular way in WSS. Fan et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> pointed out that it was concerned with the first order approximation of f(&#945;) in (1) and gave a detailed explanation. Meanwhile, they proposed a new WSS algorithm by using more accurate second order information. WSS 2 (WSS using second order information <ns0:ref type='bibr'>[20][24]</ns0:ref> ). Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_8'>, (<ns0:label>7</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>) it tt ii it K K K a 2 &#61485; &#61483; &#61626; , (8) 0 ) ( ) ( &#61502; &#61649; &#61483; &#61649; &#61485; &#61626; t k t i k i it f y f y b &#61537; &#61537; (9) . , 0 otherwise a if a a it it it &#61502; &#61678; &#61677; &#61676; &#61626; &#61556; 2. Select , (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; . (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>) &#61694; &#61693; &#61692; &#61649; &#61485; &#61500; &#61649; &#61485; &#61646; &#61678; &#61677; &#61676; &#61485; &#61646; i k i t k t k low it it t f y f y I t a b j ) ( ) ( ), ( | min arg 2 &#61537; &#61537; &#61537; 3. 
Return B = {i, j}.</ns0:formula><ns0:p>WSS 2 checked only O(l) possible working sets to select j through using the same i as in WSS 1, and has been used in the software LIBSVM <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> (since version 2.8). It was valid for all symmetric kernel matrices K, including the nonpositive definite kernel.</ns0:p><ns0:p>Lin <ns0:ref type='bibr'>[22][23]</ns0:ref> pointed out the maximal violating pair was important to SMO-type methods. When the working set B was the maximal violating pair, SMO-type methods converged to a stationary point. Otherwise, it was uncertain whether the convergence would be established. Chen et al. <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> proposed a general WSS method via the 'constant-factor violating pair', which was considered to be a 'sufficiently violated' pair. And they proved the convergence of the WSS method. <ns0:ref type='bibr'>[20][24]</ns0:ref> ).</ns0:p></ns0:div> <ns0:div><ns0:head>WSS 3 (WSS via the 'constant-factor violating pair'</ns0:head><ns0:p>1. Given a fixed 0 &lt; &#963; &#8804; 1 in all iterations.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Compute</ns0:head><ns0:formula xml:id='formula_12'>, (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>) )} ( | ) ( { max ) ( k up t k t t k I t f y m &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; . (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>) )} ( | ) ( { min ) ( k low t k t t k I t f y M &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; 3.Select i, j satisfying , ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>) ( k up I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; . (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>) 0 )) ( ) ( ( ) ( ) ( &#61502; &#61485; &#61619; &#61649; &#61483; &#61649; &#61485; k k j k j i k i M m f y f y &#61537; &#61537; &#61555; &#61537; &#61537; 4. Return B = {i, j}.</ns0:formula><ns0:p>Clearly (15) guaranteed the quality of the working set B if it was related to the maximal violating pair. Fan et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> explained that WSS 2 was a special case of WSS 3 under the special value of &#963;.</ns0:p><ns0:p>Furthermore, Zhao et al. <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref> employed algorithm WSS 2 to test the datasets by LIBSVM and found two interesting phenomena. One was some &#945; not updated in the entire training process. Another was some &#945; updated again and again.</ns0:p><ns0:p>Therefore, they proposed a new method WSS-WR and a certain &#945; was selected only once to improve the efficiency of WSS, especially the reduction of the training time. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Recently, with the advent of the digital age, huge amounts of personal information have been collected by web services and mobile devices. Although data sharing and mining large-scale personal information can help improve the functionality of these services, it also raises privacy concerns for data contributors. DP provides a mathematically rigorous definition of privacy and has become a new accepted standard for private data analysis. It ensures that any possible outcome of an analysis is almost equal regardless of an individual's presence or absence in the dataset, and the output difference is controlled by a relatively small privacy budget. The smaller the budget, the higher the privacy. 
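As a purely illustrative aside (not part of the surveyed method), the following Python sketch shows the budget/noise trade-off on a simple counting query: the Laplace noise added to the true count has scale Δf/ε, so a smaller privacy budget ε produces larger perturbations and hence stronger privacy.

import numpy as np

rng = np.random.default_rng(0)

def private_count(n_records, epsilon, sensitivity=1.0):
    """Release a count under epsilon-DP with Laplace noise of scale sensitivity/epsilon."""
    # Adding or removing one individual changes a count by at most 1, so sensitivity = 1.
    return n_records + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

for eps in (1.0, 0.1, 0.01):  # smaller budget -> larger noise -> higher privacy
    print(eps, round(private_count(1000, eps), 2))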
Therefore, the adversary cannot distinguish whether an individual's in the dataset <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> . Furthermore, DP is compatible with various kinds of data sources, data mining algorithms, and data release models.</ns0:p><ns0:p>In dataset D, each row corresponds to one individual, and each column represents an attribute value. If two datasets D and D' only differ on one element, they are defined as neighboring datasets. DP aims to mask the different results of the query function f in neighboring datasets. The maximal difference of the query results is defined as the sensitivity &#916;f. DP is generally achieved by a randomized mechanism , which returns a random vector from a probability d R D M &#61614; : distribution. A mechanism M satisfies DP if the affection of the outcome probability by adding or removing a single element is controlled within a small multiplicative factor <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> . The formal definition is given as follows.</ns0:p><ns0:p>Definition 2 (&#603;-differential privacy <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> ). A randomized mechanism M gives &#603;-DP if for all datasets D and D' differing on at most one element, and for all subsets of possible outcomes S Range(M),</ns0:p><ns0:formula xml:id='formula_17'>&#61645; . (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>) ] ) ' ( Pr[ ) exp( ] ) ( Pr[ S D M S D M &#61646; &#61620; &#61603; &#61646;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541;</ns0:head><ns0:p>Sensitivity is a vital concept in DP that represents the largest affection of the query function output made by a single element. Meanwhile, sensitivity determines the requirements of how much perturbation by a particular query function <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> . Definition 3 (Sensitivity <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> ). For a given query function , and neighboring datasets D and D',</ns0:p><ns0:formula xml:id='formula_19'>the d R D f &#61614; : sensitivity of f is defined as . (<ns0:label>17</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>) 1 ' , ) ' ( ) ( max D f D f f D D &#61485; &#61501; &#61508;</ns0:formula><ns0:p>The sensitivity depends only on the query function f, and not on the instances in datasets. f &#61508; Any mechanism that meets Definition 2 is considered as satisfying DP <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> . Currently, two principal mechanisms have been used for realizing DP: the Laplace mechanism <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> and the exponential mechanism <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref> . Definition 4 (Laplace mechanism <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> ). For a numeric function on a dataset D, the mechanism M in Eq. (18) provides &#603;-DP.</ns0:p><ns0:formula xml:id='formula_21'>. (<ns0:label>18</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>) d f Lap D f D M ) ( ) ( ) ( &#61541; &#61508; &#61483; &#61501;</ns0:formula><ns0:p>The Laplace mechanism gets the real results from the numerical query and then perturbs it by adding independent random noise. Let Lap(b) represent the random noise sampled from a Laplace distribution according to sensitivity. The Laplace mechanism is usually used for numerical data, while for the non-numerical queries, DP uses the exponential mechanism to randomize results. Definition 5 (Exponential mechanism <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref> ). 
Let be a score function on a dataset D that measures the quality of ) , ( r D q output , represents the sensitivity. The mechanism</ns0:p><ns0:formula xml:id='formula_23'>M satisfies &#603;-DP if R r &#61646; q &#61508; . (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>) &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61508; &#61621; &#61501; q r D q r return D M 2 ) , ( exp ) (</ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541;</ns0:head><ns0:p>The exponential mechanism is useful to select a discrete output in a differentially private manner, which employs a score function q to evaluate the quality of an output r with a nonzero probability.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>DPWSS</ns0:head><ns0:p>In this paper, we study the problem of how to privately release the classification model of SVMs while satisfying DP.</ns0:p><ns0:p>To overcome the shortcomings of the privacy-preserving SVM classification methods, such as low accuracy or complex sensitivity analysis of output perturbation and objective perturbation, we proposed the algorithm DPWSS for training SVM in this section. The DPWSS algorithm was achieved by privately selecting the working set with the exponential mechanism in every iteration. As far as we know, DPWSS is the first private WSS algorithm based on DP.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.'>An improved WSS method</ns0:head><ns0:p>In the process of training SVMs, WSS was an important step in SMO-type decomposition methods. Meanwhile, the special properties of the selection process in WSS was perfectly combined with the exponential mechanism of DP. </ns0:p><ns0:formula xml:id='formula_25'>1. Given a fixed 0 &lt; &#963; &#8804; 1 in all iterations. 2. Compute , (<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_26'>) )} ( | ) ( { max ) ( k up t k t t k I t f y m &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; . (<ns0:label>21</ns0:label></ns0:formula><ns0:formula xml:id='formula_27'>) )} ( | ) ( { max ) ( ' k low t k t t k I t f y M &#61537; &#61537; &#61537; &#61646; &#61649; &#61501; 3. Select i, j satisfying , ,<ns0:label>(22)</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>) ( arg k m i &#61537; &#61646; ) ( k low I j &#61537; &#61646; . (<ns0:label>23</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>) 0 )) ( ' ) ( ( ) ( ) ( &#61502; &#61483; &#61619; &#61649; &#61483; k k j k j k M m f y m &#61537; &#61537; &#61555; &#61537; &#61537; 4. Return B = {i, j}.</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2.'>The score function and sensitivity in the exponential mechanism</ns0:head><ns0:p>In the exponential mechanism, the scoring function was an important guarantee for achieving DP. The rationality of scoring function design was directly related to the execution efficiency of mechanism M. For one output r, the greater the value of the scoring function, the greater the probability that r would be selected. Based on the definition of the 'maximal violating pair', it was obvious that .</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_30'>) j k j k k k f y m M m ) ( ) ( ) ( ' ) ( &#61537; &#61537; &#61537; &#61537; &#61649; &#61483; &#61619; &#61483;<ns0:label>24</ns0:label></ns0:formula><ns0:p>From Neq. 
( <ns0:ref type='formula' target='#formula_28'>23</ns0:ref>) and (24), we concluded that</ns0:p><ns0:formula xml:id='formula_31'>m(\alpha^{k}) + M'(\alpha^{k}) \ge m(\alpha^{k}) + y_{j}\nabla f(\alpha^{k})_{j} \ge \sigma\,\bigl(m(\alpha^{k}) + M'(\alpha^{k})\bigr) > 0. <ns0:label>25</ns0:label></ns0:formula><ns0:p>We designed a simple scoring function q(D, r) for the DPWSS algorithm based on WSS 4 and Neq. ( <ns0:ref type='formula' target='#formula_31'>25</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_32'>1 \ge q(D, r) = \frac{m(\alpha^{k}) + y_{j}\nabla f(\alpha^{k})_{j}}{m(\alpha^{k}) + M'(\alpha^{k})} \ge \sigma, <ns0:label>26</ns0:label></ns0:formula><ns0:p>where r denoted the working set B, which contained the violating pair i and j. The larger the value of the scoring function q(D, r), the closer the selected violating pair was to the maximal violating pair. The sensitivity of the scoring function q(D, r) was</ns0:p><ns0:formula xml:id='formula_33'>\Delta q = 1 - \sigma, <ns0:label>27</ns0:label></ns0:formula><ns0:p>and the value of \Delta q was a small number, less than 1.</ns0:p><ns0:p>In the exponential mechanism, the output r was selected randomly with probability</ns0:p><ns0:formula xml:id='formula_35'>\Pr(r) = \frac{\exp\left(\dfrac{\varepsilon\, q(D, r)}{2\Delta q}\right)}{\sum_{r' \in R} \exp\left(\dfrac{\varepsilon\, q(D, r')}{2\Delta q}\right)}. <ns0:label>28</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='4.3.'>Privacy budget</ns0:head><ns0:p>The privacy budget was a vital parameter in DP, which controlled the privacy level in a randomized mechanism M. The smaller the privacy budget, the higher the privacy level. When the allocated privacy budget ran out, mechanism M would lose privacy protection, especially in an iterative process. To improve the utilization of the privacy budget, every violating pair was selected only once during the entire training process, as in [25]. Meanwhile, in DPWSS every iteration was based on the result of the last iteration, not on the entire original dataset. Therefore, there was no need to split the privacy budget across iterations.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4.'>Description of DPWSS algorithm</ns0:head><ns0:p>In the DPWSS algorithm, DP was achieved by privately selecting the working set with the exponential mechanism in every iteration. We first presented an overview of the DPWSS algorithm and then elaborated on the key steps. 
Finally, we described an SMO-type decomposition method using the DPWSS algorithm in detail.</ns0:p><ns0:p>The description of the DPWSS algorithm is shown below.</ns0:p></ns0:div> <ns0:div><ns0:head>Algorithm 1 DPWSS</ns0:head><ns0:p>Input: G: gradient array; y: array of every instance labels with {+1, -1}; l: number of instances; &#945;: dual vector; I: the violating pair selected flag bool matrix; &#963;: constant-factor; &#603;: privacy budget; eps: stopping tolerance; Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science 8:</ns0:note><ns0:p>if q(D, r t ) &#8805; &#963; then 9: q'(D, r t ) &#8592; q(D, r t ); The DPWSS algorithm selected multiple violating pairs that met the constraints based on WSS 4, and then randomly selected one with a certain probability by the exponential mechanism to satisfy DP. Firstly, the DPWSS algorithm computed m(&#945;) and M'(&#945;) for the scoring function q from Line 1 to Line 4 and determined i as one element of the violating pair. Secondly, it computed the scoring function q from Line 5 to Line 12. The constraints in Line 6 represented that the violating pair {i, j} had not been previously selected, meanwhile the value range of the other element j and the violating pair were valid for the changes of gradient G. The constraints in Line 8 represented that the scoring function value was effective under constant-factor &#963;. Line 14 and Line 15 are key steps in the exponential mechanism, which randomly selected a violating pair with the chosen probability of the scoring function q. Lastly, the DPWSS algorithm outputed the violating pair {i, j} as the working set B in Line 15.</ns0:p><ns0:p>In summary, a SMO method using the DPWSS algorithm is shown below. </ns0:p></ns0:div> <ns0:div><ns0:head>End</ns0:head><ns0:p>Algorithm 2 was an iterative process, which first selected working set B by DPWSS, then updated dual vector &#945; and gradient G in every iteration. After the iterative process, the algorithm outputed the final &#945;. There were three ways to get out of the iterative process. One was that &#945; was a stationary point, another was that all violating pairs had been selected, and the last one was that the number of iterations exceeded the maximum value. Using Algorithm 2, we privately released the classification model of SVMs with dual vector &#945; while satisfying the requirement of DP.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.'>Privacy analysis</ns0:head><ns0:p>In the DPWSS algorithm, randomness was introduced by randomly selecting working sets with the exponential mechanism. According to the definition of DP mentioned in Section 3, we proved that the DPWSS algorithm satisfied DP strictly by theorem 1 as shown below.</ns0:p><ns0:p>Theorem 1 DPWSS algorithm satisfied DP.</ns0:p><ns0:p>Proof Let M(D, q) be to select the output r of the violating pair in one iteration, and &#603; be the allocated privacy budget in DPWSS algorithm. Based on Eq. ( <ns0:ref type='formula' target='#formula_35'>28</ns0:ref>), we randomly selected violating pair r as a working set with the following probability by the exponential mechanism. 
To accord with the standard form of the exponential mechanism, we used q to denote q' in the DPWSS algorithm.

\[
\frac{\Pr(M(D,q)=r)}{\Pr(M(D',q)=r)}
=\frac{\exp\!\left(\frac{\varepsilon\, q(D,r)}{2\Delta q}\right)\Big/\sum_{r'\in O}\exp\!\left(\frac{\varepsilon\, q(D,r')}{2\Delta q}\right)}
      {\exp\!\left(\frac{\varepsilon\, q(D',r)}{2\Delta q}\right)\Big/\sum_{r'\in O}\exp\!\left(\frac{\varepsilon\, q(D',r')}{2\Delta q}\right)}
=\exp\!\left(\frac{\varepsilon\,(q(D,r)-q(D',r))}{2\Delta q}\right)\times
 \frac{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon\, q(D',r')}{2\Delta q}\right)}
      {\sum_{r'\in O}\exp\!\left(\frac{\varepsilon\, q(D,r')}{2\Delta q}\right)}
\]
\[
\le \exp\!\left(\frac{\varepsilon}{2}\right)\times
 \frac{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon}{2}\right)\exp\!\left(\frac{\varepsilon\, q(D,r')}{2\Delta q}\right)}
      {\sum_{r'\in O}\exp\!\left(\frac{\varepsilon\, q(D,r')}{2\Delta q}\right)}
=\exp\!\left(\frac{\varepsilon}{2}\right)\times\exp\!\left(\frac{\varepsilon}{2}\right)
=\exp(\varepsilon).
\]

According to Definition 2, we prove that \(\Pr(M(D,q)=r)\le\exp(\varepsilon)\times\Pr(M(D',q)=r)\). Therefore, the DPWSS algorithm satisfied DP.

Algorithm 2 was an iterative process, in which DPWSS was a vital step to privately select a working set. As the DPWSS algorithm satisfied DP, we performed the steps of updating dual vector α and gradient G in every iteration without access to private data. To improve the utilization of the privacy budget, every pair of working sets was selected only once during the entire training process. Meanwhile, in Algorithm 2 every iteration was based on the result of the last iteration, not on the original datasets. Therefore, Algorithm 2 satisfied DP.

5. Experiments

In this section, we compared the performance of the DPWSS algorithm with WSS 2, which is a classical non-private WSS algorithm and has been used in the software LIBSVM [5]. The comparison between WSS 2 and WSS 1 was done in [20]. We didn't compare the DPWSS algorithm with other private SVMs.
One reason was that randomness was introduced in different ways, and the other reason was that the DPWSS algorithm achieved classification accuracy and optimized objective value almost the same as the original non-privacy SVM algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.'>Datasets and experimental environment</ns0:head><ns0:p>The datasets were partly selected for the experiments as [19], [20], and [25]. All datasets were for binary classification, and available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/. The basic information of the ten datasets is shown in Table <ns0:ref type='table'>2</ns0:ref> below. To make the figures look neater in the experiments, we used breast to denote the breastcancer dataset and german to denote the german.number dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2.</ns0:head><ns0:p>To carry out the contrast experiments efficiently, we used LIBSVM (version 3.24) as an implementation of the DPWSS algorithm in C++ language and GNU Octave (version 5.2). All parameters were set to default values.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>An example of a private classification model</ns0:head></ns0:div> <ns0:div><ns0:head>Unlike other privacy SVMs, which introduced randomness into the objective function or classification result by</ns0:head><ns0:p>Laplace mechanism, our method was achieved by privately selecting the working set with the exponential mechanism in every iteration. We gave an example of a private classification model to show how privacy was protected in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The data used two columns of the heart dataset and moved the positive and negative instances to each end for easier classification. The solid lines represent the original non-private classification model and circles represent support vectors. The dotted lines represent a private classification model by training SVM with the DPWSS algorithm. It was observed that the differences between the private and non-private classification models were very small, and achieved similar accuracy of classification. Randomness was introduced into the training process of SVM. All the classification models generated were different from each other in every training process to protect the training data privacy. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.'>Algorithm performance experiments</ns0:head><ns0:p>In this section, we evaluated the performance of the DPWSS algorithm versus WSS 2 by experiments for the entire training process. The metrics of performance included classification accuracy, algorithm stability, and execution efficiency under different constant-factor &#963; and privacy budget &#603;. The classification accuracy was measured by AUC (the area under a ROC curve). The higher the AUC, the better the usability of the algorithm. The algorithm stability was measured by the error of optimized objective value between DPWSS algorithm and WSS 2, named objError. The smaller the objError, the better the stability of the algorithm. And the execution efficiency of the algorithm was measured by the ratio of iteration between the two algorithms, named iterationRatio. The smaller the iterationRatio, the better the execution efficiency of the algorithm. We didn't compare the training time between the two algorithms as it was a millisecond class for the entire training process to most of the datasets except ijcnn1.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To evaluate the influence of different constant-factor &#963; and privacy budget &#603; to the three metrics for algorithm performance, we set &#963; at 0.1, 0.3, 0.5 and 0.7 under &#603; fixed at 1 and set &#603; at 0.1, 0.5 and 1 under &#963; fixed at 0.5. We didn't set &#963; at 0.9, because under the circumstances most of the violating pairs would be filtered out that the algorithm failed to reach the final objective value.</ns0:p><ns0:p>Firstly, we measured the classification accuracy of the DPWSS algorithm versus WSS 2 by AUC. The experimental results were shown in Figure <ns0:ref type='figure' target='#fig_10'>2</ns0:ref> to Figure <ns0:ref type='figure' target='#fig_17'>9</ns0:ref>. Observed from the results, the DPWSS algorithm achieved almost the same classification accuracy as WSS 2 on ten datasets under different &#963; and &#603;. Due to the repeated execution of the iterative process, the DPWSS algorithm obtained a well private classification model. The classification accuracy was not affected by the randomness of DP and the filtering effect of parameter &#963; on violating pairs. The DPWSS algorithm introduced randomness into the training process of SVMs, not into the objective function or classification result. There were no requirements of the differentiability of the objective function and the complex sensitivity analysis, and the less influence of high-dimensional data on noise. Therefore, the DPWSS algorithm achieved the target extremum through the optimization process and higher classification accuracy under the current condition. Secondly, we compared the optimized objective values and measured the algorithm stability by objError between the DPWSS algorithm and WSS 2. The experimental results were shown in Table <ns0:ref type='table'>3</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_29'>10</ns0:ref> to Figure <ns0:ref type='figure' target='#fig_20'>13</ns0:ref>. Observed from the results, the DPWSS algorithm achieved similar optimized objective values with WSS 2 on ten datasets under different &#963; and &#603;. The errors between the DPWSS algorithm and WSS 2 were very small (within two). Due to the repeated execution of the iterative process, the DPWSS algorithm converged stably to optimized objective values and was not affected by the randomness of DP and the filtering effect of parameter &#963; on violating pairs. With the increase of &#963;, the errors also tended to increase. Table <ns0:ref type='table'>3</ns0:ref>. Lastly, we compared the iterations and measured the execution efficiency by iterationRatio between the two algorithms. The experimental results were shown in Table <ns0:ref type='table'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_29'>14</ns0:ref> to Figure <ns0:ref type='figure' target='#fig_28'>23</ns0:ref>. Observed from the results, the DPWSS algorithm achieved higher execution efficiency with fewer iteration versus WSS 2 on ten datasets under different &#963; and &#603;. Because the DPWSS algorithm introduced randomness into the WSS process, the iterations would increase more or less. However, with the increase of constant-factor &#963;, the iterations were affected by the filtering effect of it on violating pairs larger and larger. When &#963; increased to 0.3, the execution efficiency of the DPWSS algorithm was already higher than WSS 2 for most datasets. 
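The iteration comparison above, together with the AUC and objError results, can be reproduced from raw per-dataset outputs with very little code. The following C++ sketch is an illustration only, not the evaluation scripts used in the experiments: the decision values, labels, objective values, and iteration counts are placeholder data, and it assumes iterationRatio is the DPWSS iteration count divided by the WSS 2 iteration count, which matches "the smaller, the better".

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// AUC via the rank-sum (Mann-Whitney) formulation: the probability that a random
// positive instance receives a higher decision value than a random negative one.
// Ties are ignored for simplicity.
double auc(const std::vector<double>& scores, const std::vector<int>& labels) {
    std::vector<std::size_t> idx(scores.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return scores[a] < scores[b]; });
    double rank_sum_pos = 0.0, n_pos = 0.0, n_neg = 0.0;
    for (std::size_t r = 0; r < idx.size(); ++r) {
        if (labels[idx[r]] == +1) { rank_sum_pos += r + 1.0; n_pos += 1.0; }
        else                      { n_neg += 1.0; }
    }
    return (rank_sum_pos - n_pos * (n_pos + 1.0) / 2.0) / (n_pos * n_neg);
}

int main() {
    // Placeholder decision values and true labels for a test set.
    std::vector<double> scores = {1.2, -0.4, 0.7, -1.1, 0.2, -0.3};
    std::vector<int>    labels = {+1, -1, +1, -1, +1, -1};

    // Placeholder optimized objective values and iteration counts for one dataset.
    double obj_dpwss = -124.35, obj_wss2 = -125.10;
    long   iter_dpwss = 812,    iter_wss2 = 950;

    double objError       = std::fabs(obj_dpwss - obj_wss2);         // algorithm stability
    double iterationRatio = double(iter_dpwss) / double(iter_wss2);  // execution efficiency

    std::printf("AUC = %.3f  objError = %.3f  iterationRatio = %.3f\n",
                auc(scores, labels), objError, iterationRatio);
    return 0;
}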
As σ increased further, the iterations of the DPWSS algorithm fell far below those of WSS 2 for all datasets except ijcnn1. Therefore, our method set a larger σ for big datasets. Meanwhile, the privacy budget ε had little effect on iterations under a fixed constant-factor σ.

Table 4.

6. Conclusions

In this paper, we studied the privacy leakage problem of traditional SVM training methods. The DPWSS algorithm was proposed to release a private classification model of SVM and was theoretically proved to satisfy DP by utilizing the exponential mechanism to privately select working sets in every iteration. The extended experiments showed that the DPWSS algorithm achieved classification accuracy and optimized objective values similar to the original non-private SVM under different parameters. Meanwhile, the DPWSS algorithm had higher execution efficiency, as shown by comparing iterations on different datasets. In the DPWSS algorithm, randomness is introduced in the training process. The most prominent advantages are that, compared with output perturbation or objective perturbation methods, there is no requirement for differentiability of the objective function and no complex sensitivity analysis. Setting the constant-factor σ appropriately for different datasets remains a challenge. The idea of introducing randomness into the optimization process can be easily extended to other privacy-preserving machine learning algorithms, and ensuring that such methods meet the DP requirements is another challenge. Furthermore, the DPWSS algorithm is valid for releasing a private classification model for linear SVM, but not for non-linear kernel SVMs, because of the privacy disclosure problem of the support vectors in the kernel function. In future work, we will study how to release a private classification model for non-linear kernel SVMs.

Figure 2.
Figure 3.
Figure 4. The classification accuracy of DPWSS for different σ (ε=1) versus WSS 2 on dataset 6 to 10 with shrinking.
Figure 6. The classification accuracy of DPWSS for different ε (σ=0.5) versus WSS 2 on dataset 1 to 5 with shrinking.
Figure 7.
Figure 8.
Figure 9. The classification accuracy of DPWSS for different ε (σ=0.5) versus WSS 2 on dataset 6 to 10 without shrinking.
3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>DP PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>WSS 4 (</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>WSS 3 algorithm was a more general algorithm to select a working set by checking nearly possible B's to ) ( 2 l O decide j, although under the restricted condition of parameter &#963;. By using the same as in WSS 2B's, we proposed WSS 4 to select a working set based on WSS 3 as below. To make the ) (l O algorithm easy to understand, we replaced with . An improved WSS via the 'constant-factor violating pair')</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>) as follows PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Output: B: working set; Begin 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>initialize m(&#945;) and M'(&#945;) to -INF; 2: find m(&#945;) by Eq. (20) for t in [0:l-1] and t in I up (&#945;); 3: set i = t; 4: find M'(&#945;) by Eq. (21) for t in [0:l-1] and t in I low (&#945;); 5: for t = 0 to l-1 6: if I[i][t] = =false and t in I low (&#945;) and m(&#945;)+y[t]*G[t] &gt; eps then 7: compute scoring function q(D, r t ) by Neq. (26); PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Figure 19 
.</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_26'><ns0:head>Figure 21 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Figure 21.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_27'><ns0:head>Figure 22 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figure 22.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_28'><ns0:head>Figure 23 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 23.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_29'><ns0:head>Figure 1 An</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_30'><ns0:head /><ns0:label /><ns0:figDesc>The classification accuracy of DPWSS for different &#963; (&#603;=1) versus WSS 2 on dataset 1 to 5 with shrinking. PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_31'><ns0:head /><ns0:label /><ns0:figDesc>The classification accuracy of DPWSS for different &#963; (&#603;=1) versus WSS 2 on dataset 1 to 5 without shrinking. PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_32'><ns0:head>Figure 5 The</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_33'><ns0:head /><ns0:label /><ns0:figDesc>classification accuracy of DPWSS for different &#603; (&#963;=0.5) versus WSS 2 on dataset 1 to 5 without shrinking. PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_34'><ns0:head /><ns0:label /><ns0:figDesc>classification accuracy of DPWSS for different &#603; (&#963;=0.5) versus WSS 2 on dataset 6 to 10 with shrinking. PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:03:59525:1:2:NEW 9 Jun 2021)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> "
"Cover letter Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular, we polished the manuscript and corrected some grammar and typo errors. We also improved the quality of figures with png format. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Zhenlong Sun On behalf of all authors. Reviewer 1 1.What is the base of author claiming that DPWSS may be the first private working set selection algorithm based on differential privacy? In the scope of the literature search related to privacy support vector machines, no papers related to privacy working sets selection were found, especially those based on differential privacy. If the reviewer has any questions about this, we can also change ‘first’ to ‘efficient’, but this is not enough to reflect the innovation and contribution of this paper. 2.Abstract is well-written but can include more info about experimental results. We described the experimental results more briefly in the abstract. 3.Related work section is too short. Authors should include some recent works and list their limitations. We have cited most of the papers on the topic of privacy support vector machines. In the last paragraph of related work, it summarizes that the biggest difference between these papers and our manuscript lies in the different ways of introducing randomness, which is also the innovation of this paper. 4.Unnecessary background has increased the paper length. We believe that detailed background information is necessary, especially for early-career researchers. We introduced the basic concepts of support vector machines and differential privacy as in most articles and described the development process of working set selection in detail so that the readers can better understand how the method in this article came about. 5.Can you compare your approach with an existing approach. In the research field of machine learning and privacy protection, a typical method is to compare the performance between private algorithms and non-private algorithms. We didn’t compare the DPWSS with other private SVMs as the randomness was introduced in different ways. However, we compared it with the classical non-private WSS algorithm and achieved classification accuracy and optimized objective value almost the same between them. This is a distinct advantage over most approaches. 6.Please revise conclusion to conclude the work. We revised the conclusion section. 7.Please proof-read the manuscript. We polished the manuscript and corrected some grammar and typo errors. Reviewer 2 1. Please improve overall readability of the paper. We polished the manuscript and corrected some grammar and typo errors. 2. The objectives of this paper need to be polished. We polished the objectives in the introduction section. 3. Introduction is poorly written. We revised the introduction section. 4. Relevant literature review of latest similar research studies on the topic at hand must be discussed. We have cited most of the papers on the topic of privacy support vector machines. In the last paragraph of related work, it summarizes that the biggest difference between these papers and our manuscript lies in the different ways of introducing randomness, which is also the innovation of this paper. 5. Result section need to be polished. We polished the result section. 6. There are some grammar and typo errors. 
We polished the manuscript and corrected some grammar and typo errors. 7. Improve the quality of figures We improved the quality of figures with png format. 8. Define all the variables before using We defined all the variables in Algorithm 1 and Algorithm 2 except some commonly used loop variables and the variables specified in the previous sections. Reviewer 3 1.Please improve the abstract. It should highlight the background as well and add some factual results that shows how much results have your approach achieved. We described the experimental results in the abstract more briefly. 2. Introduction is too long. Please reduce it. It should be specific. We believe that detailed background information is necessary, especially for early-career researchers. We introduced the basic concepts of support vector machines and differential privacy as in most articles in detail so that the readers can better understand how the idea in this article came about. 3. in introduction.... Support vector machine (SVM)...... quadratic programming (QP......... Sequential Minimal Optimization (SMO).... make it in a single format. We revised these proper nouns in a single format. 4. The quality of the figures can be improved more. We improved the quality of figures with png format. 5. Result section is written badly. Explain results extensively and clearly explain the flow of results. We supplemented the explanation of the experimental results more detailed. 6. All equations, tables and figure should be cited in the text. We cited all the tables, figures, and most of the equations in the text, except for some equations that were used to interpret context or maintain continuity. 7. The author should add more details of the results and improve the mathematical models. We supplemented the explanation of the experimental results more detailed. 8. Future work and Challenges are the important things that lead the researchers to take the current state-of-the-art to further step. The authors should explain What open issues and challenges are needed to be addressed by the researchers in this field? We supplement future work and challenges in the conclusion section. 9. What are the evaluations used for the verification of results (like, roc curve)? we evaluated the performance of the DPWSS algorithm by three metrics included classification accuracy, algorithm stability, and execution efficiency under different parameters. The classification accuracy was measured by AUC values (the area under a ROC curve). We didn’t use the figures of the ROC curve due to the limitation of the length of the article. 10. Abbreviations and Acronyms should be added only the first time. Then only acronyms should be used in the entire paper. We corrected some acronyms when not used for the first time. 11. Few latest references can be added and explained a bit in the related work section. some are given below. We have cited most related papers on the topic of privacy support vector machines. 12. Problem statement must be in the introduction. We supplemented the problem statement in the introduction section. 13. Grammar check is required. We polished the manuscript and corrected some grammar and typo errors. "
Here is a paper. Please give your review comments after reading it.
301
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Support vector machine (SVM) is a robust machine learning method and is widely used in classification. However, the traditional SVM training methods may reveal personal privacy when the training data contains sensitive information. In the training process of SVMs, working set selection is a vital step for the sequential minimal optimization-type decomposition methods. To avoid complex sensitivity analysis and the influence of highdimensional data on the noise of the existing SVM classifiers with privacy protection, we proposed a new differentially private working set selection algorithm (DPWSS) in this paper, which utilized the exponential mechanism to privately select working sets. We theoretically proved that the proposed algorithm satisfied differential privacy. The extended experiments showed that the DPWSS algorithm achieved classification accuracy almost the same as the original non-privacy SVM under different parameters. The errors of optimized objective value between the two algorithms were nearly less than two, meanwhile, the DPWSS algorithm had a higher execution efficiency than the original nonprivacy SVM by comparing iterations on different datasets. To the best of our knowledge, DPWSS is the first private working set selection algorithm based on differential privacy.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>In recent years, with the rapid development of artificial intelligence, cloud computing, and big data technologies, data sharing and analysis are becoming easier and more practical. A large amount of individual information is stored in electronic databases, such as economic records, medical records, web search records, and social network data, which poses a great threat to personal privacy. Support vector machine (SVM) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>[2][3][4] <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> , as one of the most widely used and robust machine learning methods for classification, trains a classification model by solving an optimization problem and requires only as few as a dozen examples for training. However, when the training data sets contain sensitive information, directly releasing the SVM classification model may reveal personal privacy.</ns0:p><ns0:p>Generally speaking, training SVMs is to solve a large optimization problem of quadratic programming (QP).</ns0:p><ns0:p>Sequential minimal optimization (SMO) <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>[7] <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> is currently a commonly used decomposition method for training SVMs by solving the smallest QP optimization problem, and only needs two elements in every iteration. In all kinds of SMOtype decomposition methods, working set selection (WSS) is an important step. Different WSS algorithms determine the convergence efficiency of the SVM training process. Differential privacy (DP) <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>[10] <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> was proposed by a series of work of Dwork et al. from 2006, is becoming an accepted standard for privacy protection in sensitive data analysis. DP ensures that adding or removing a single item does not affect the analysis outcome too much, and the privacy level is quantified by a privacy budget &#949;. DP is realized by introducing randomness or uncertainty. 
According to the difference of data types, it mainly includes Laplace mechanism, Gaussian mechanism, and exponential mechanism. Among them, the Laplace mechanism and Gaussian mechanism are mostly used for numerical data, while the exponential mechanism is used for non-numerical data.</ns0:p><ns0:p>In this paper, we studied the privacy leakage problem of the traditional SVM training methods. There were some shortcomings in the existing SVM classifiers with privacy protection, such as the low classification accuracy, the requirements on the differentiability of the objective function, the complex sensitivity analysis, and the influence of high-dimensional data on noise. We gave a solution by introducing randomness in the training process of SVMs to privately release the classification model. The main contributions in this paper concluded as follows:</ns0:p><ns0:p>&#61548; We proposed an improved WSS method for training SVMs and designed a simple scoring function for the exponential mechanism, in which the sensitivity was easy to analyze.</ns0:p><ns0:p>&#61548; We proposed a new differentially private working set selection algorithm (DPWSS) based on the exponential mechanism, which was achieved by privately selecting the working set in every iteration.</ns0:p><ns0:p>&#61548; To improve the utilization of the privacy budget, every violating pair was selected only once during the entire training process.</ns0:p><ns0:p>&#61548; We analyzed theoretically that the DPWSS algorithm satisfied the requirement of DP, and evaluated the classification accuracy, algorithm stability, and execution efficiency of the DPWSS algorithm versus the original non-privacy SVM algorithm through extended experiments.</ns0:p><ns0:p>The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 introduces the background knowledge of SVMs, WSS, and DP. Section 4 proposes a novel DPWSS algorithm. Section 5 gives the experimental evaluation of the performance of DPWSS. Lastly, Section 6 concludes the research work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related work</ns0:head><ns0:p>In this section, we briefly review some work related to privacy-preserving SVMs. Mangasarian et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> considered the classification problem of sharing private data by separate agents and proposed using random kernels for vertically partitioned data. Lin et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> pointed out an inherent privacy violation problem of support vectors. They proposed a privacy-preserving SVM classifier, PPSVC, which replaced the Gaussian kernel with an approximate decision function. In these two methods, the degree of privacy protection cannot be proved as the private SVMs based on DP.</ns0:p><ns0:p>As DP is becoming an accepted standard for private data analysis, some SVM classification models based on DP produced in the recent two decades. Chaudhuri et al. proposed two popular perturbation-based techniques output perturbation and objective perturbation <ns0:ref type='bibr'>[14][15]</ns0:ref> . Output perturbation introduced randomness into the weight vector w after the optimization process, and the randomness scale was determined by the sensitivity of w. On the contrary, objective perturbation introduced randomness into the objective function before the optimization, and the randomness scale was independent of the sensitivity of w. 
However, the sensitivity of the two perturbation-based techniques is difficult to analyze <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> and the objective perturbation requires the loss function satisfying certain convexity and differentiability criteria. Rubinstein et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> proposed a private kernel SVM algorithm PrivateSVM for convex loss functions with Fourier transformation and output perturbation to release the private SVM classification model. However, the classification model was valid only for the translation-invariant kernels. To alleviate too much noise in the final outputs, Li et al. <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> developed a hybrid private SVM model that used a small portion of public data to calculate the Fourier transformation. However, public data is hard to obtain in the modern private world. Zhang et al. <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> constructed a novel private SVM classifier by dual variable perturbation, which added Laplace noise to the corresponding dual variables according to the ratio of errors.</ns0:p><ns0:p>Different from those kinds of perturbation-based techniques mentioned above, which introduced randomness into the output result or objective function, the DPWSS algorithm introduced randomness during the process of WSS. Therefore, it avoided complex sensitivity analysis and the influence of high-dimensional data on noise, meanwhile improved the performance of the classification model to some extent.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Preliminaries</ns0:head><ns0:p>In this section, we introduce some background knowledge of SVM, WSS, and DP. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the notations in the following sections. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Support vector machines</ns0:head><ns0:p>The SVM is an efficient classification method in machine learning that originated from structural risk minimization <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> . It finds an optimal separating hyperplane with the maximal margin to train a classification model. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Given training instances x i R n and labels y i {1,-1}, the main task for training an SVM is to solve the optimization &#61646; &#61646; problem of quadratic programming as follows <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> :</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#61537; &#61537; &#61537; &#61537; &#61537; T T e Q f &#61485; &#61501; 2 1 ) ( min Subject to , l i C i ,..., 1 , 0 &#61501; &#61603; &#61603; &#61537; , 0 &#61501; &#61537; T y</ns0:formula><ns0:p>where Q is a symmetric matrix with Q ij =y i y j K(x i ,x j ), and K is the kernel function, e is a vector with all 1's, C is the upper bound of vector &#945;. Most optimization methods have difficulty in handling the large matrix Q, as the whole vector &#945; should be updated in every iterative step. However, the decomposition methods only update a subset of vector &#945; in every iteration, named the working set, and change from one iteration to another. 
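To make these quantities concrete, the following minimal C++ sketch builds the matrix Q for a toy two-dimensional problem with an RBF kernel and evaluates the dual objective f(α) = ½αᵀQα − eᵀα together with its gradient ∇f(α) = Qα − e, which is what the selection rules below operate on. The kernel choice, toy data, and variable names are our own illustration, not part of the original formulation or of LIBSVM.

#include <cmath>
#include <cstdio>
#include <vector>

// Q_ij = y_i * y_j * K(x_i, x_j), f(alpha) = 0.5*alpha'*Q*alpha - e'*alpha,
// gradient grad_i = (Q*alpha)_i - 1.  RBF kernel and data are assumptions.
double rbf(const std::vector<double>& a, const std::vector<double>& b, double gamma) {
    double d2 = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) d2 += (a[k] - b[k]) * (a[k] - b[k]);
    return std::exp(-gamma * d2);
}

int main() {
    std::vector<std::vector<double>> x = {{0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0}, {1.0, 1.0}};
    std::vector<int> y = {+1, +1, -1, -1};
    std::vector<double> alpha = {0.5, 0.0, 0.5, 0.0};   // any feasible point (y'*alpha = 0)
    const double gamma = 0.5;
    const std::size_t l = x.size();

    std::vector<std::vector<double>> Q(l, std::vector<double>(l));
    for (std::size_t i = 0; i < l; ++i)
        for (std::size_t j = 0; j < l; ++j)
            Q[i][j] = y[i] * y[j] * rbf(x[i], x[j], gamma);

    double obj = 0.0;
    std::vector<double> grad(l, -1.0);                  // the -e part of the gradient
    for (std::size_t i = 0; i < l; ++i) {
        double Qa_i = 0.0;
        for (std::size_t j = 0; j < l; ++j) Qa_i += Q[i][j] * alpha[j];
        grad[i] += Qa_i;
        obj += 0.5 * alpha[i] * Qa_i - alpha[i];
    }
    std::printf("f(alpha) = %.4f\n", obj);
    for (std::size_t i = 0; i < l; ++i) std::printf("grad[%zu] = %.4f\n", i, grad[i]);
    return 0;
}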
SMO-type decomposition methods restrict the working set B with only two elements <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> .</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>Working set selection</ns0:head><ns0:p>WSS is a vital step in SMO-type decomposition methods which originally derived from the optimality conditions of Karush-Kuhn-Tucker (KKT). WSS is to select the working set B for every iteration in the optimization process by different approaches. The working set B should contain a pair of elements violating the KKT optimality conditions, called 'violating pair' <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> .</ns0:p><ns0:p>Definition 1 (Violating pair <ns0:ref type='bibr'>[7][20]</ns0:ref> ). Under the following restrictions: , </ns0:p><ns0:formula xml:id='formula_1'>} 1 , 0 1 , | { ) ( &#61485; &#61501; &#61502; &#61501; &#61500; &#61626; t t t t up y or y C t I &#61537; &#61537; &#61537; . (<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>) } 1 , 0 1 , | { ) ( &#61501; &#61502; &#61485; &#61501; &#61500; &#61626; t t<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; j k j i k i f y f y ) ( ) ( &#61537; &#61537; &#61649; &#61485; &#61502; &#61649; &#61485;</ns0:formula><ns0:p>Violating pairs are important in WSS. If working set B is a violating pair, the function value in SMO-type decomposition methods strictly decreases <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> . Under the definition of violating pair, a natural choice of the working set B is the 'maximal violating pair', which most violates the KKT optimality condition. WSS 1 (WSS via the 'maximal violating pair' <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>[20] <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> ). Under the same restrictions (2) and (3) in Definition 1,</ns0:p><ns0:formula xml:id='formula_4'>1. Select , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; , (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>) )} ( | ) ( { min arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; or . (<ns0:label>6</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>) )} ( | ) ( { max arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61646; 2. Return B = {i, j}.</ns0:formula><ns0:p>Keerthi et al. <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> first proposed the maximal violating pair, which has become a popular way in WSS. Fan et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> pointed out that it was concerned with the first order approximation of f(&#945;) in (1) and gave a detailed explanation. Meanwhile, they proposed a new WSS algorithm by using more accurate second order information. WSS 2 (WSS using second order information <ns0:ref type='bibr'>[20][24]</ns0:ref> ).</ns0:p><ns0:p>1. Define ait and bit, ,</ns0:p><ns0:formula xml:id='formula_8'>it tt ii it K K K a 2 &#61485; &#61483; &#61626; ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>) ( ) ( &#61502; &#61649; &#61483; &#61649; &#61485; &#61626; t k t i k i it f y f y b &#61537; &#61537; (9) . , 0 otherwise a if a a it it it &#61502; &#61678; &#61677; &#61676; &#61626; &#61556; 2. 
Select , (<ns0:label>(8) 0</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; . (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>) &#61694; &#61693; &#61692; &#61649; &#61485; &#61500; &#61649; &#61485; &#61646; &#61678; &#61677; &#61676; &#61485; &#61646; i k i t k t k low it it t f y f y I t a b j ) ( ) ( ), ( | min arg 2 &#61537; &#61537; &#61537; 3. Return B = {i, j}.<ns0:label>11</ns0:label></ns0:formula><ns0:p>WSS 2 used second order information and checked only O(l) possible working sets to select j through using the same i as in WSS 1. Which made the WSS algorithms achieve faster convergence than existing selection methods using first order information. It has been used in the software LIBSVM <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> (since version 2.8) and is valid for all symmetric kernel matrices K, including the non-positive definite kernel.</ns0:p><ns0:p>Lin <ns0:ref type='bibr'>[22][23]</ns0:ref> pointed out the maximal violating pair was important to SMO-type methods. When the working set B was the maximal violating pair, SMO-type methods converged to a stationary point. Otherwise, it was uncertain whether the convergence would be established. Chen et al. <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> proposed a general WSS method via the 'constant-factor violating pair'. Under a fixed constant-factor &#963; specified by the user, the selected violating pair was linked to the maximal violating pair. The 'constant-factor violating pair' was considered to be a 'sufficiently violated' pair. And they proved the convergence of the WSS method. <ns0:ref type='bibr'>[20][24]</ns0:ref> ).</ns0:p></ns0:div> <ns0:div><ns0:head>WSS 3 (WSS via the 'constant-factor violating pair'</ns0:head><ns0:formula xml:id='formula_12'>1. Given a fixed 0 &lt; &#963; &#8804; 1 in all iterations. 2. Compute , (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>) )} ( | ) ( { max ) ( k up t k t t k I t f y m &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; . (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>) )} ( | ) ( { min ) ( k low t k t t k I t f y M &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; 3.Select i, j satisfying , ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>) ( k up I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; . (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>) 0 )) ( ) ( ( ) ( ) ( &#61502; &#61485; &#61619; &#61649; &#61483; &#61649; &#61485; k k j k j i k i M m f y f y &#61537; &#61537; &#61555; &#61537; &#61537; 4. Return B = {i, j}.</ns0:formula><ns0:p>Clearly (15) guaranteed the quality of the working set B if it was related to the maximal violating pair. Fan et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> explained that WSS 2 was a special case of WSS 3 under the special value of &#963;. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Furthermore, Zhao et al. <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref> employed algorithm WSS 2 to test the datasets by LIBSVM and found two interesting phenomena. One was some &#945; not updated in the entire training process. 
Another was some &#945; updated again and again.</ns0:p><ns0:p>Therefore, they proposed a new method WSS-WR and a certain &#945; was selected only once to improve the efficiency of WSS, especially the reduction of the training time.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.'>Differential privacy</ns0:head><ns0:p>Recently, with the advent of the digital age, huge amounts of personal information have been collected by web services and mobile devices. Although data sharing and mining large-scale personal information can help improve the functionality of these services, it also raises privacy concerns for data contributors. DP provides a mathematically rigorous definition of privacy and has become a new accepted standard for private data analysis. It ensures that any possible outcome of an analysis is almost equal regardless of an individual's presence or absence in the dataset, and the output difference is controlled by a relatively small privacy budget. The smaller the budget, the higher the privacy. Therefore, the adversary cannot distinguish whether an individual's in the dataset <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> . Furthermore, DP is compatible with various kinds of data sources, data mining algorithms, and data release models.</ns0:p><ns0:p>In dataset D, each row corresponds to one individual, and each column represents an attribute value. If two datasets D and D' only differ on one element, they are defined as neighboring datasets. DP aims to mask the different results of the query function f in neighboring datasets. The maximal difference of the query results is defined as the sensitivity &#916;f. DP is generally achieved by a randomized mechanism , which returns a random vector from a probability d R D M &#61614; : distribution. A mechanism M satisfies DP if the affection of the outcome probability by adding or removing a single element is controlled within a small multiplicative factor <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> . The formal definition is given as follows.</ns0:p><ns0:p>Definition 2 (&#603;-differential privacy <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> ). A randomized mechanism M gives &#603;-DP if for all datasets D and D' differing on at most one element, and for all subsets of possible outcomes S Range(M),</ns0:p><ns0:formula xml:id='formula_17'>&#61645; . (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>) ] ) ' ( Pr[ ) exp( ] ) ( Pr[ S D M S D M &#61646; &#61620; &#61603; &#61646;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541;</ns0:head><ns0:p>Sensitivity is a vital concept in DP that represents the largest affection of the query function output made by a single element. Meanwhile, sensitivity determines the requirements of how much perturbation by a particular query function <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> . Definition 3 (Sensitivity <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> ). For a given query function , and neighboring datasets D and D',</ns0:p><ns0:formula xml:id='formula_19'>the d R D f &#61614; : sensitivity of f is defined as . (<ns0:label>17</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>) 1 ' , ) ' ( ) ( max D f D f f D D &#61485; &#61501; &#61508;</ns0:formula><ns0:p>The sensitivity depends only on the query function f, and not on the instances in datasets. f &#61508; Any mechanism that meets Definition 2 is considered as satisfying DP <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> . 
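As a small worked illustration of Definition 3 (our example, not the paper's), the sketch below brute-forces the sensitivity of a simple counting query on a toy dataset by comparing f(D) with f(D') for every neighboring dataset D' obtained by removing one record; for a counting query the maximal change, and hence Δf, is 1.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Example query: how many records carry a positive label.
double count_positive(const std::vector<int>& labels) {
    double c = 0.0;
    for (int y : labels) if (y == +1) c += 1.0;
    return c;
}

int main() {
    std::vector<int> D = {+1, -1, +1, +1, -1};   // toy dataset of labels
    double fD = count_positive(D);

    double sensitivity = 0.0;
    for (std::size_t i = 0; i < D.size(); ++i) {
        std::vector<int> Dprime = D;             // neighboring dataset: drop record i
        Dprime.erase(Dprime.begin() + i);
        sensitivity = std::max(sensitivity, std::fabs(fD - count_positive(Dprime)));
    }
    std::printf("Delta f = %.1f\n", sensitivity); // 1.0 for a counting query
    return 0;
}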
Currently, two principal mechanisms have been used for realizing DP: the Laplace mechanism <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> and the exponential mechanism <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> . Definition 4 (Laplace mechanism <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> ). For a numeric function on a dataset D, the mechanism M in Eq. (18) provides &#603;-DP.</ns0:p><ns0:formula xml:id='formula_21'>. (<ns0:label>18</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>) d f Lap D f D M ) ( ) ( ) ( &#61541; &#61508; &#61483; &#61501;</ns0:formula><ns0:p>The Laplace mechanism gets the real results from the numerical query and then perturbs it by adding independent random noise. Let Lap(b) represent the random noise sampled from a Laplace distribution according to sensitivity. The Laplace mechanism is usually used for numerical data, while for the non-numerical queries, DP uses the exponential mechanism to randomize results. Definition 5 (Exponential mechanism <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> ). Let be a score function on a dataset D that measures the quality of ) , ( r D q output , represents the sensitivity. The mechanism</ns0:p><ns0:formula xml:id='formula_23'>M satisfies &#603;-DP if R r &#61646; q &#61508; . (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>) &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61508; &#61621; &#61501; q r D q r return D M 2 ) , ( exp ) (</ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541;</ns0:head><ns0:p>The exponential mechanism is useful to select a discrete output in a differentially private manner, which employs a score function q to evaluate the quality of an output r with a nonzero probability.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>DPWSS algorithm</ns0:head><ns0:p>In this paper, we study the problem of how to privately release the classification model of SVMs while satisfying DP.</ns0:p><ns0:p>To overcome the shortcomings of the privacy-preserving SVM classification methods, such as low accuracy or complex sensitivity analysis of output perturbation and objective perturbation, we proposed the algorithm DPWSS for training SVM in this section. The DPWSS algorithm was achieved by privately selecting the working set with the exponential mechanism in every iteration. As far as we know, DPWSS is the first private WSS algorithm based on DP.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.'>An improved WSS method</ns0:head><ns0:p>In the process of training SVMs, WSS was an important step in SMO-type decomposition methods. Meanwhile, the special properties of the selection process in WSS were perfectly combined with the exponential mechanism of DP. WSS 3 algorithm was a more general algorithm to select a working set by checking nearly 1. Given a fixed 0 &lt; &#963; &#8804; 1 in all iterations.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Compute</ns0:head><ns0:formula xml:id='formula_25'>, (<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_26'>) )} ( | ) ( { max ) ( k up t k t t k I t f y m &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; . (<ns0:label>21</ns0:label></ns0:formula><ns0:formula xml:id='formula_27'>) )} ( | ) ( { max ) ( ' k low t k t t k I t f y M &#61537; &#61537; &#61537; &#61646; &#61649; &#61501; 3. 
Select i, j satisfying , ,<ns0:label>(22)</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>) ( arg k m i &#61537; &#61646; ) ( k low I j &#61537; &#61646; . (<ns0:label>23</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>) 0 )) ( ' ) ( ( ) ( ) ( &#61502; &#61483; &#61619; &#61649; &#61483; k k j k j k M m f y m &#61537; &#61537; &#61555; &#61537; &#61537; 4. Return B = {i, j}.</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2.'>The score function and sensitivity in the exponential mechanism</ns0:head><ns0:p>In the exponential mechanism, the scoring function was an important guarantee for achieving DP. The rationality of scoring function design was directly related to the execution efficiency of mechanism M. For one output r, the greater the value of the scoring function, the greater the probability that r would be selected. Based on the definition of the 'maximal violating pair', it was obvious that Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science .</ns0:note><ns0:p>(24)</ns0:p><ns0:formula xml:id='formula_30'>j k j k k k f y m M m ) ( ) ( ) ( ' ) ( &#61537; &#61537; &#61537; &#61537; &#61649; &#61483; &#61619; &#61483;</ns0:formula><ns0:p>From Neq. ( <ns0:ref type='formula' target='#formula_28'>23</ns0:ref>) and ( <ns0:ref type='formula'>24</ns0:ref>), we concluded that .</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_31'>) 0 )) ( ' ) ( ( ) ( ) ( ) ( ' ) ( &#61502; &#61483; &#61619; &#61649; &#61483; &#61619; &#61483; k k j k j k k k M m f y m M m &#61537; &#61537; &#61555; &#61537; &#61537; &#61537; &#61537;<ns0:label>25</ns0:label></ns0:formula><ns0:p>We designed a simple scoring function q(D, r) for the DPWSS algorithm based on WSS 4 and Neq. (25) as follows</ns0:p><ns0:formula xml:id='formula_32'>, (<ns0:label>26</ns0:label></ns0:formula><ns0:formula xml:id='formula_33'>) &#61555; &#61537; &#61537; &#61537; &#61537; &#61619; &#61483; &#61649; &#61483; &#61501; &#61619; ) ( ' ) ( ) ( ) ( ) , (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_34'>k k j k j k M m f y m r D q</ns0:formula><ns0:p>where r denoted the working set B, which contained violating pair i and j. The larger the value of scoring function q(D, r), the closer the selected violation pair was to the maximal violation pair. The sensitivity of scoring function q(D, r) was</ns0:p><ns0:formula xml:id='formula_35'>, (<ns0:label>27</ns0:label></ns0:formula><ns0:formula xml:id='formula_36'>) &#61555; &#61485; &#61501; &#61508; 1 q</ns0:formula><ns0:p>and the value of was a small number, less than 1. q &#61508;</ns0:p><ns0:p>In the exponential mechanism, the output r was selected randomly with probability .</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_37'>) &#61669; &#61646; &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61508; &#61687; &#61687; &#61688; &#61686; &#61671; &#61671; &#61672; &#61670; &#61508; &#61501; R r q r D q q r D q r ' 2 ) ' , (<ns0:label>28</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541; &#61541;</ns0:head></ns0:div> <ns0:div><ns0:head n='4.3.'>Privacy budget</ns0:head><ns0:p>Privacy budget was a vital parameter in DP, which controlled the privacy level in a randomized mechanism M. The smaller the privacy budget, the higher the privacy level. When the allocated privacy budget ran out, mechanism M would lose privacy protection, especially for the iteration process. 
4.3. Privacy budget

The privacy budget was a vital parameter in DP, which controlled the privacy level in a randomized mechanism M. The smaller the privacy budget, the higher the privacy level. When the allocated privacy budget ran out, mechanism M would lose privacy protection, especially for an iterative process. To improve the utilization of the privacy budget, every pair of working sets was selected only once during the entire training process as in [25]. Meanwhile, in DPWSS every iteration was based on the result of the last iteration, but not based on the entire original dataset. Therefore, there was no need to split the privacy budget for every iteration.

4.4. Description of DPWSS algorithm

In the DPWSS algorithm, DP was achieved by privately selecting the working set with the exponential mechanism in every iteration. We first presented an overview of the DPWSS algorithm and then elaborated on the key steps. Finally, we described an SMO-type decomposition method using the DPWSS algorithm in detail.

The description of the DPWSS algorithm is shown below.

Algorithm 1 DPWSS
Input: G: gradient array; y: array of every instance labels with {+1, -1}; l: number of instances; α: dual vector; I: the violating pair selected flag bool matrix; σ: constant-factor; ε: privacy budget; eps: stopping tolerance;
Output: B: working set;
Begin
1: initialize m(α) and M'(α) to -INF;
2: find m(α) by Eq. (20) for t in [0:l-1] and t in I_up(α);
3: set i = t;
4: find M'(α) by Eq. (21) for t in [0:l-1] and t in I_low(α);
5: for t = 0 to l-1
6:   if I[i][t] == false ...
...
End

The DPWSS algorithm selected multiple violating pairs that met the constraints based on WSS 4, and then randomly selected one of them with a certain probability by the exponential mechanism to satisfy DP. Firstly, the DPWSS algorithm computed m(α) and M'(α) for the scoring function q from Line 1 to Line 4 and determined i as one element of the violating pair. Secondly, it computed the scoring function q from Line 5 to Line 12. The constraints in Line 6 represented that the violating pair {i, j} had not been previously selected; meanwhile, the value range of the other element j and the violating pair were valid for the changes of gradient G. The constraint in Line 8 represented that the scoring function value was effective under the constant-factor σ. Line 14 and Line 15 were the key steps of the exponential mechanism, which randomly selected a violating pair with a probability given by the scoring function q. Lastly, the DPWSS algorithm output the violating pair {i, j} as the working set B in Line 15. The time and memory complexity of the DPWSS algorithm is O(l).

In summary, a SMO method using the DPWSS algorithm is shown below.

Algorithm 2 A SMO method using DPWSS
Input: Q: kernel symmetric matrix; y: array of every instance labels with {+1, -1}; l: number of instances; C: upper bound of all dual variables;
Output: α: dual vector;
Begin
1: initialize gradient array G to all -1, dual vector α to all 0, and violating pair selected flag bool matrix I to all 0;
2: find α_1 as the initial feasible solution, set k = 1;
3: while k < max_iter
4:   if α_k is a stationary point then
5:     exit the loop;
...
End

Algorithm 2 was an iterative process, which first selected working set B by DPWSS, then updated dual vector α and gradient G in every iteration. After the iterative process, the algorithm output the final α. There were three ways to get out of the iterative process. One was that α was a stationary point, another was that all violating pairs had been selected, and the last one was that the number of iterations exceeded the maximum value. Using Algorithm 2, we privately released the classification model of SVMs with dual vector α while satisfying the requirement of DP.
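For the update step that Algorithm 2 performs after a working set {i, j} is returned, note that ∇f(α) = Qα − e, so only the two changed coordinates of α contribute to the change of the gradient. The fragment below sketches this bookkeeping under our own names; the step sizes Δα_i and Δα_j are assumed to come from the usual two-variable sub-problem, which is not reproduced here.

```python
import numpy as np

def update_alpha_and_gradient(alpha, G, Q, i, j, d_alpha_i, d_alpha_j):
    """One Algorithm-2 style update for working set {i, j}.
    Since grad f(alpha) = Q @ alpha - e, changing only alpha_i and alpha_j
    shifts the gradient by Q[:, i] * d_alpha_i + Q[:, j] * d_alpha_j."""
    alpha = alpha.copy()
    alpha[i] += d_alpha_i
    alpha[j] += d_alpha_j
    G = G + Q[:, i] * d_alpha_i + Q[:, j] * d_alpha_j
    return alpha, G

# Made-up toy data, only to show the call.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
Q = (y[:, None] * y[None, :]) * (X @ X.T)      # linear-kernel Q_ij = y_i y_j x_i . x_j
alpha, G = np.zeros(5), -np.ones(5)            # alpha = 0 and G = -e, as in Algorithm 2, Line 1
# Step chosen so that y_i*d_alpha_i + y_j*d_alpha_j = 0, preserving y^T alpha = 0.
alpha, G = update_alpha_and_gradient(alpha, G, Q, i=0, j=1, d_alpha_i=0.1, d_alpha_j=0.1)
print(alpha, G)
```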
4.5. Privacy analysis

In the DPWSS algorithm, randomness was introduced by randomly selecting working sets with the exponential mechanism. By using the exponential mechanism, a violating pair was selected randomly with a certain probability. The greater the probability, the closer the selected violating pair was to the maximal violating pair. For every iteration, the violating pair in the outputs of the DPWSS algorithm was uncertain. This uncertainty masked the impact of individual record changes on the algorithm results, thus protecting the data privacy.

According to the definition of DP mentioned in Section 3, we prove that the DPWSS algorithm satisfies DP strictly by Theorem 1 as shown below.

Theorem 1. The DPWSS algorithm satisfies DP.

Proof. Let M(D, q) select the output r of the violating pair in one iteration, and let ε be the allocated privacy budget in the DPWSS algorithm. Based on Eq. (28), we randomly select the violating pair r as a working set with the corresponding probability by the exponential mechanism. To accord with the standard form of the exponential mechanism, we use q to denote q' in the DPWSS algorithm. For neighboring datasets D and D' and any output r in the output set O,

Pr( M(D, q) = r ) / Pr( M(D', q) = r )
= [ exp( ε q(D, r) / (2Δq) ) / Σ_{r'∈O} exp( ε q(D, r') / (2Δq) ) ] / [ exp( ε q(D', r) / (2Δq) ) / Σ_{r'∈O} exp( ε q(D', r') / (2Δq) ) ]
= exp( ε ( q(D, r) − q(D', r) ) / (2Δq) ) × [ Σ_{r'∈O} exp( ε q(D', r') / (2Δq) ) / Σ_{r'∈O} exp( ε q(D, r') / (2Δq) ) ]
≤ exp( ε/2 ) × [ Σ_{r'∈O} exp( ε/2 ) exp( ε q(D, r') / (2Δq) ) / Σ_{r'∈O} exp( ε q(D, r') / (2Δq) ) ]
= exp( ε/2 ) × exp( ε/2 )
= exp( ε ),

where the first factor is bounded because |q(D, r) − q(D', r)| ≤ Δq, and the numerator of the second factor is bounded because q(D', r') ≤ q(D, r') + Δq for every r'. According to Definition 2, we prove that Pr( M(D, q) = r ) ≤ exp(ε) × Pr( M(D', q) = r ). Therefore, the DPWSS algorithm satisfies DP.
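As a numerical sanity check of the bound just derived, the snippet below (our illustration, with made-up score vectors) evaluates the selection probabilities of Eq. (28) on two score vectors whose entries differ by at most Δq and verifies that their ratio never exceeds exp(ε).

```python
import numpy as np

def exp_mech_probs(scores, delta_q, epsilon):
    """Selection probabilities of Eq. (28) for a given score vector."""
    w = np.exp(epsilon * np.asarray(scores) / (2.0 * delta_q))
    return w / w.sum()

sigma, epsilon = 0.5, 1.0
delta_q = 1.0 - sigma
rng = np.random.default_rng(1)
# Made-up scores in [sigma, 1] on D, and neighboring scores differing by at most delta_q.
q_D = rng.uniform(sigma, 1.0, size=6)
q_Dp = np.clip(q_D + rng.uniform(-delta_q, delta_q, size=6), sigma, 1.0)
ratio = exp_mech_probs(q_D, delta_q, epsilon) / exp_mech_probs(q_Dp, delta_q, epsilon)
assert np.all(ratio <= np.exp(epsilon) + 1e-12)   # the epsilon-DP bound of Theorem 1
print(ratio.max(), np.exp(epsilon))
```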
Algorithm 2 was an iterative process, in which DPWSS was the vital step to privately select a working set. As the DPWSS algorithm satisfies DP, we performed the steps of updating dual vector α and gradient G in every iteration without access to private data. To improve the utilization of the privacy budget, every pair of working sets was selected only once during the entire training process. Meanwhile, in Algorithm 2 every iteration was based on the result of the last iteration, but not based on the original datasets. Therefore, Algorithm 2 satisfies DP.

5. Experiments

In this section, we compared the performance of the DPWSS algorithm with WSS 2, which is a classical non-private WSS algorithm and has been used in the software LIBSVM [5]. The comparison between WSS 2 and WSS 1 was done in [20]. We did not compare the DPWSS algorithm with other private SVMs. One reason was that randomness was introduced in different ways, and the other reason was that the DPWSS algorithm achieved classification accuracy and optimized objective values almost the same as the original non-private SVM algorithm.

5.1. Datasets and experimental environment

The datasets were partly selected for the experiments following [19], [20], and [25]. All datasets were for binary classification and are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/. The basic information of the datasets is shown in Table 2 below. To make the figures look neater in the experiments, we used breast to denote the breast-cancer dataset and german to denote the german.number dataset.

Table 2.

To carry out the comparison experiments efficiently, we used LIBSVM (version 3.24) as an implementation of the DPWSS algorithm in C++ and GNU Octave (version 5.2). All parameters were set to default values.

5.2. An example of a private classification model

Unlike other private SVMs, which introduce randomness into the objective function or the classification result by the Laplace mechanism, our method was achieved by privately selecting the working set with the exponential mechanism in every iteration. We give an example of a private classification model to show how privacy was protected in Figure 1. The data used two columns of the heart dataset, and the positive and negative instances were moved towards each end for easier classification. The solid lines represent the original non-private classification model and the circles represent support vectors. The dotted lines represent a private classification model obtained by training the SVM with the DPWSS algorithm. It was observed that the differences between the private and non-private classification models were very small, and the two models achieved similar classification accuracy. Randomness was introduced into the training process of the SVM, so all the classification models generated were different from each other in every training process, which protects the training data privacy.

5.3. Algorithm performance experiments

In this section, we evaluated the performance of the DPWSS algorithm versus WSS 2 by experiments for the entire training process. The metrics of performance included classification accuracy, algorithm stability, and execution efficiency. The classification accuracy was measured by the area under the ROC curve (AUC),

AUC = ( Σ_{i ∈ positive class} rank_i − M(M+1)/2 ) / (M × N),

where rank_i denoted the serial number of instance i after sorting by the predicted probability, M was the number of positive instances, and N was the number of negative instances. The higher the AUC, the better the usability of the algorithm. The algorithm stability was measured by the error of the optimized objective value between the DPWSS algorithm and WSS 2, named objError. The smaller the objError, the better the stability of the algorithm. The execution efficiency of the algorithm was measured by the ratio of iterations between the two algorithms, named iterationRatio. The smaller the iterationRatio, the better the execution efficiency of the algorithm. We did not compare the training time between the two algorithms, as the entire training process took only milliseconds for most of the datasets.
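The three metrics can be computed as in the following sketch (our own helper functions with made-up example values). The AUC expression is the standard rank-based formula consistent with the variables defined above; we assume iterationRatio is the DPWSS iteration count divided by the WSS 2 iteration count, matching "the smaller the better", and objError is taken as the absolute difference of the optimized objective values.

```python
import numpy as np

def auc_from_scores(scores, labels):
    """Rank-based AUC: (sum of ranks of positives - M(M+1)/2) / (M*N)."""
    order = np.argsort(scores)                 # ascending by predicted probability
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    M, N = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - M * (M + 1) / 2.0) / (M * N)

def obj_error(obj_dpwss, obj_wss2):
    """Absolute error between the two optimized objective values."""
    return abs(obj_dpwss - obj_wss2)

def iteration_ratio(iters_dpwss, iters_wss2):
    """Ratio of DPWSS iterations to WSS 2 iterations."""
    return iters_dpwss / iters_wss2

# Made-up example values, only to show the calls.
scores = np.array([0.9, 0.8, 0.3, 0.6, 0.1])
labels = np.array([1, 1, -1, 1, -1])
print(auc_from_scores(scores, labels), obj_error(-101.3, -100.1), iteration_ratio(950, 1000))
```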
To evaluate the influence of different constant-factor σ and privacy budget ε on the three performance metrics, we set σ to 0.1, 0.3, 0.5 and 0.7 with ε fixed at 1, and set ε to 0.1, 0.5 and 1 with σ fixed at 0.5. We did not set σ to 0.9, because in that case most of the violating pairs would be filtered out and the algorithm failed to reach the final objective value.

Firstly, we measured the classification accuracy of the DPWSS algorithm versus WSS 2 by AUC. The experimental results are shown in Figure 2 to Figure 9. As observed from the results, the DPWSS algorithm achieved almost the same classification accuracy as WSS 2 on all datasets under different σ and ε. Due to the repeated execution of the iterative process, the DPWSS algorithm obtained a well-performing private classification model. The classification accuracy was not affected by the randomness of DP or by the filtering effect of parameter σ on violating pairs. The DPWSS algorithm introduced randomness into the training process of SVMs, not into the objective function or the classification result. There was no requirement for differentiability of the objective function or for complex sensitivity analysis, and high-dimensional data had less influence on the noise. Therefore, the DPWSS algorithm achieved the target extremum through the optimization process and higher classification accuracy under the current conditions.

Secondly, we compared the optimized objective values and measured the algorithm stability by objError between the DPWSS algorithm and WSS 2. The experimental results are shown in Table 3 and Figure 10 to Figure 13. As observed from the results, the DPWSS algorithm achieved optimized objective values similar to those of WSS 2 on all datasets under different σ and ε. The errors between the DPWSS algorithm and WSS 2 were very small (within two). Due to the repeated execution of the iterative process, the DPWSS algorithm converged stably to the optimized objective values and was not affected by the randomness of DP or by the filtering effect of parameter σ on violating pairs. With the increase of σ, the errors also tended to increase.

Lastly, we compared the iterations and measured the execution efficiency by iterationRatio between the two algorithms. The experimental results are shown in Table 4 and Figure 14 to Figure 25. As observed from the results, the DPWSS algorithm achieved higher execution efficiency with fewer iterations versus WSS 2 on all datasets under different σ and ε.
Because the DPWSS algorithm introduced randomness into the WSS process, the iterations would increase more or less. However, with the increase of the constant-factor σ, the iterations were increasingly affected by its filtering effect on violating pairs. When σ increased to 0.3, the execution efficiency of the DPWSS algorithm was already higher than that of WSS 2 for most datasets. When σ increased to 0.7, the iterations of the DPWSS algorithm were far fewer than those of WSS 2 for all datasets except ijcnn1. Therefore, our method should set a larger σ for big datasets. In contrast, the privacy budget ε had little effect on the iterations under a fixed constant-factor σ.

Table 1.
Table 4.
Figure 1.
Figure 6. The classification accuracy of DPWSS for different ε (σ=0.5) versus WSS 2 on dataset 1 to 6 without shrinking.
Figure 9. The classification accuracy of DPWSS for different ε (σ=0.5) versus WSS 2 on dataset 7 to 12 without shrinking.
The classification accuracy of DPWSS for different ε (σ=0.5) versus WSS 2 on dataset 7 to 12 with shrinking.

6. Conclusions
"Cover letter Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Zhenlong Sun On behalf of all authors. Reviewer 4 1. There are quite a number of vague statements across the manuscript. As an example, what do the authors mean by saying (in the abstract): “The errors of optimized objective value between the two algorithms were nearly less than two”? And then, “the DPWSS algorithm had a higher execution efficiency by comparing iterations on different datasets” – higher than what? Than the “original non-privacy SVM”? Yes, we compared the performance of the DPWSS algorithm with a classical non-private SVM algorithm which has been used in the most popular software LIBSVM. In the research field of machine learning and privacy protection, a typical method is to compare the performance between private algorithms and non-private algorithms. We didn’t compare the DPWSS with other private SVMs as the randomness was introduced in different ways. However, we compared it with the classical non-private WSS algorithm and achieved classification accuracy and optimized objective value almost the same between them. This is a distinct advantage over most approaches. 2. Please avoid using bulked references, such as “Support vector machine (SVM)[1][2][3][4][5]” – it does not bring any useful to the reader. Please unfold such bulked references. We believe that detailed background information is necessary, especially for early-career researchers. However, due to the space limitations, some basic knowledge was not described from the related articles. The early-career researchers can better understand how the method in this article came about through reading these articles in detail. 3. The authors should clearly state what do they mean by the “working set selection”. We added the statement of working set selection in section 3.2. 4. Please avoid naming sections as “DP” etc (I suggest unfolding acronyms in the section/subsection headers). We corrected the acronyms in the section headers. 5. What is the time and memory complexity of the proposed algorithm? The time and memory complexity of DPWSS algorithm is O(l), where l is the number of instances. 6. The authors sort of failed to contextualize their work within the state of the art of training SVMs. There have been a multitude of methods proposed so far, which could potentially impact maintaining privacy of the classifier. As an example, there are a number of training set selection methods around which could be used for selecting such training examples that could help maintain privacy. Are such methods possible to combine with the proposed approach? It would be useful to discuss that to elaborate a bit more comprehensive view of the problem of SVM training. . Yes, these training set selection methods can be easily combined with the proposed approach especially for the large-scale training problems. We added the discuss in the conclusion section. 7. Although the English is acceptable, the manuscript would benefit from proofreading (there are several grammatical errors around). We polished the manuscript and corrected some grammar errors. 8. The characteristics of the datasets used in the experimental study must be expanded. Specifically, what is the imbalance ratio within each dataset? Does it have any impact on the overall performance of the optimizer? 
Also, the number of datasets is fairly small – currently, it is common to test such SVM-related algorithms over 50+ datasets to fully understand their generalization abilities. To this end, I suggest including much more benchmarks in the experimental validation (especially given that only one dataset may be considered “large”). Since SVMs are insensitive to imbalance data and the DPWSS algorithm can be easily combined with the methods that can solve imbalance data, the problem of imbalance data was not considered in this manuscript. And we added two datasets in the experiments. 9. Please present all quality metrics used in the manuscript in a formal (mathematical way) through providing their formulae. We added the mathematical formula of AUC. 10. It would be useful to at least discuss if the proposed method is suitable for multi-class classification using support vector machines (e.g., in the one-vs-all strategy). Because the DPWSS algorithm doesn’t change the training process of the classical non-privacy SVMs, it is also suitable for multi-class classification. We added the discuss in the conclusion section. 11. The authors should present all relevant quality metrics: AUC, accuracy, precision, recall, F1 and MCC. These metrics can represent the classification accuracy of the algorithm from different aspects. We using AUC to measure it as most privacy SVMs articles. The classification accuracy is only one aspect of the performance of the DPWSS algorithm. The algorithm stability and execution efficiency of the algorithm can also reflect the advantages of the DPWSS algorithm. Therefore, due to space limitations, this paper only uses AUC to represent classification accuracy. 12. The authors should back up their conclusions with appropriate statistical testing (here, non-parametric statistical tests) to fully understand if the differences between different optimizers are significant in the statistical sense. We have measured the performance of the proposed algorithm by classification accuracy, algorithm stability, and execution efficiency under different parameters as most privacy SVMs methods. In the research field of privacy SVMs, the usual practice is to test the performance of the algorithm under different privacy parameters. If I fail to understand your meanings? Thank you for your more detailed explanation. 13. In my opinion, the authors should extend the experimental validation (as suggested in my previous comments), to provide clear and unbiased view on the generalization abilities of the proposed techniques. We added two datasets in the experiments to extend the experimental validation. Reviewer 5 1. This paper presents a differentially private working set selection to train support vector machine. They focus on decomposition in the sequential minimal optimization and perturb each iteration with exponential mechanism. The novelty is quite limited to this optimization and no empirical comparison to any existing differentially private SVM. The paper is clearly organized while some of the terms and notations are not consistently explained, such as constant-factor. Recently, the most common used privacy SVMs methods are objective perturbation and output perturbation. Their randomness is introduced into the objective function or the weight vector, while in the DPWSS algorithm, randomness is introduced in the training process. 
The most prominent advantages include that there are no requirements for differentiability of the objective function and complex sensitivity analysis compared with objective perturbation or output perturbation methods. We compared the performance of the DPWSS algorithm with a classical non-private SVM algorithm which has been used in the most popular software LIBSVM. In the research field of machine learning and privacy protection, a typical method is to compare the performance between private algorithms and non-private algorithms. We didn’t compare the DPWSS with other private SVMs as the randomness was introduced in different ways. However, we compared it with the classical non-private WSS algorithm and achieved classification accuracy and optimized objective value almost the same between them. This is a distinct advantage over most approaches. We added the explanation of constant-factor in section 3.2. 2. I notice that the authors emphasize that their scheme is free from the influence of high-dimensional data on noise while their evaluation is mostly on dataset with at most a few hundred dimensions. The author should clarify the definition of “high-dimension” in the case of differential privacy adoption in SVM and address this claim by using a substantially high-dimensional dataset. We added two datasets in the experiments to extend the experimental validation. 3. For Figure 2 to 9, it is very challenging to tell which scheme is superior as they look the same to me. I highly suggest the authors to start the y-axis at a higher value such as 0.6. We improved the quality of figures. 4. Similar issues are on Figure 10-12 given none of the combination gives an error over 2.5. The authors can scale up the value to address the performance difference. We improved the quality of figures. 5. Also it is confusing to use curve chart when plotting the results for different \epsilon and \delta combination as they do not represent any sort of performance trend. If the authors intend to demonstrate the impact of different parameters, they should fix on one and vary the other. Two graphs or matrix tables are more suitable for that purpose. We also noticed that problem, the test results of different parameters were placed in one graph due to space constraints. We have split the data graphs. 6. Minor: Section 3.1: for training a SVM -> an Section 5.3: under the circumstances most of the violating -> under the circumstances, most We corrected these errors. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Support vector machine (SVM) is a robust machine learning method and is widely used in classification. However, the traditional SVM training methods may reveal personal privacy when the training data contains sensitive information. In the training process of SVMs, working set selection is a vital step for the sequential minimal optimization-type decomposition methods. To avoid complex sensitivity analysis and the influence of highdimensional data on the noise of the existing SVM classifiers with privacy protection, we propose a new differentially private working set selection algorithm (DPWSS) in this paper, which utilizes the exponential mechanism to privately select working sets. We theoretically prove that the proposed algorithm satisfies differential privacy. The extended experiments show that the DPWSS algorithm achieves classification capability almost the same as the original non-privacy SVM under different parameters. The errors of optimized objective value between the two algorithms are nearly less than two, meanwhile, the DPWSS algorithm has a higher execution efficiency than the original non-privacy SVM by comparing iterations on different datasets. To the best of our knowledge, DPWSS is the first private working set selection algorithm based on differential privacy.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>In recent years, with the rapid development of artificial intelligence, cloud computing, and big data technologies, data sharing and analysis are becoming easier and more practical. A large amount of individual information is stored in electronic databases, such as economic records, medical records, web search records, and social network data, which poses a great threat to personal privacy. Support vector machine (SVM) is one of the most widely used and robust machine learning methods for classification. Boser et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> proposed the earliest SVM classification idea by maximizing the margin between the training patterns and the decision boundary. Cortes et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> solved the classification problem of non separable training data though non linearly mapping them to a very high dimension feature space. Vapnik <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> considered three different kernels to construct learning machines with different types of nonlinear decision surfaces in the input space. Bureges et al <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref> gave an overview on linear SVMs and kernel SVMs with numerous examples for pattern recognition. Chang and Lin <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> developed a popular library LIBSVM for SVMs and presented all the implementation details. A SVM trains a classification model by solving an optimization problem and requires only as few as a dozen examples for training. 
However, when the training data sets contain sensitive information, directly releasing the SVM classification model may reveal personal privacy.</ns0:p><ns0:p>Generally speaking, training SVMs is to solve a large optimization problem of quadratic programming (QP).</ns0:p><ns0:p>Sequential minimal optimization (SMO) <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> is currently a commonly used decomposition method for training SVMs by solving the smallest QP optimization problem, and only needs two elements in every iteration. Keerthi et al. <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> employed two threshold parameters to derive modifications of SMO and it performed significantly faster than the original SMO algorithm. In all kinds of SMO-type decomposition methods, working set selection (WSS) is an important step. Different WSS algorithms determine the convergence efficiency of the SVM training process. Zuo et al. <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> proposed an improved WSS and a simplified minimization step for the SMO-type decomposition method. Differential privacy (DP) <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> was proposed by a series of work of Dwork et al. from 2006, which has been becoming an accepted standard for privacy protection in sensitive data analysis. DP ensures that adding or removing a single item does not affect the analysis outcome too much, and the privacy level is quantified by a privacy budget &#949;. DP is realized by introducing randomness or uncertainty. According to the difference of data types, it mainly includes Laplace mechanism <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref> , Gaussian mechanism, and exponential mechanism <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> . Among them, the Laplace mechanism and In this section, we introduce some background knowledge of SVM, WSS, and DP. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the notations in the following sections. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Support vector machines</ns0:head><ns0:p>The SVM is an efficient classification method in machine learning that originates from structural risk minimization <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> .</ns0:p><ns0:p>It finds an optimal separating hyperplane with the maximal margin to train a classification model. Given training instances x i R n and labels y i {1,-1}, the main task for training a SVM is to solve the QP optimization problem as &#61646; &#61646; follows <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> :</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#61537; &#61537; &#61537; &#61537; 1 ) ( min Subject to , l i C i ,..., 1 , 0 &#61501; &#61603; &#61603; &#61537; , 0 &#61501; &#61537; T y</ns0:formula><ns0:p>where Q is a symmetric matrix with Q ij =y i y j K(x i ,x j ), and K is the kernel function, e is a vector with all 1's, C is the upper bound of vector &#945;.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.'>Working set selection</ns0:head><ns0:p>Generally, the QP problem is hard to solve in the training process of the SVMs. When the optimization methods handle the large matrix Q, the whole vector &#945; will be updated repeatedly in the iterative process. Nevertheless, the decomposition methods only update a subset of vector &#945; in every iteration to solve the challenge and change from one iteration to another. The subset is called the working set. 
The method for determining the working set is called WSS, which originally derives from the optimality conditions of Karush-Kuhn-Tucker (KKT). Furthermore, SMO-type decomposition methods restrict the working set to only two elements <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> . A pair of elements that violate the KKT optimality conditions are called 'violating pair' <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> .</ns0:p><ns0:p>Definition 1 (Violating pair <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> ). Under the following restrictions:</ns0:p><ns0:formula xml:id='formula_1'>, (2) } 1 , 0 1 , | { ) ( &#61485; &#61501; &#61502; &#61501; &#61500; &#61626; t t t t up y or y C t I &#61537; &#61537; &#61537; . (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>) } 1 , 0 1 , | { ) ( &#61501; &#61502; &#61485; &#61501; &#61500; &#61626; t t t t low y or y C t I &#61537; &#61537; &#61537; For the k th iteration, if ,</ns0:formula><ns0:p>, and , then {i, j} is a 'violating pair'.</ns0:p><ns0:formula xml:id='formula_3'>) ( k up I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; j k j i k i f y f y ) ( ) ( &#61537; &#61537; &#61649; &#61485; &#61502; &#61649; &#61485;</ns0:formula><ns0:p>Violating pairs are important in WSS. If working set B is a violating pair, the function value in SMO-type decomposition methods strictly decreases <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref> . Under the definition of violating pair, a natural choice of the working set B is the 'maximal violating pair', which most violates the KKT optimality condition. WSS 1 (WSS via the 'maximal violating pair' <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>[20] <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> ). Under the same restrictions (2) and (3) in Definition 1, </ns0:p><ns0:formula xml:id='formula_4'>1. Select , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; , (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>) )} ( | ) ( { min arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; PeerJ Comput. Sci.</ns0:formula><ns0:formula xml:id='formula_7'>) )} ( | ) ( { max arg k low t k t t I t f y j &#61537; &#61537; &#61646; &#61649; &#61646; 2. Return B = {i, j}.<ns0:label>6</ns0:label></ns0:formula><ns0:p>Keerthi et al. <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> first proposed the maximal violating pair, which has become a popular way in WSS. Fan et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> pointed out that it was concerned with the first order approximation of f(&#945;) in (1) and gave a detailed explanation. Meanwhile, they proposed a new WSS algorithm by using more accurate second order information.</ns0:p><ns0:p>WSS 2 (WSS using second order information <ns0:ref type='bibr'>[20][24]</ns0:ref> ).</ns0:p><ns0:p>1. Define ait and bit, ,</ns0:p><ns0:formula xml:id='formula_8'>it tt ii it K K K a 2 &#61485; &#61483; &#61626; , (<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>) 0 ) ( ) ( &#61502; &#61649; &#61483; &#61649; &#61485; &#61626; t k t i k i it f y f y b &#61537; &#61537; (9) . , 0 otherwise a if a a it it it &#61502; &#61678; &#61677; &#61676; &#61626; &#61556; 2. 
Select , (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>) )} ( | ) ( { max arg k up t k t t I t f y i &#61537; &#61537; &#61646; &#61649; &#61485; &#61646; . (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>) &#61694; &#61693; &#61692; &#61649; &#61485; &#61500; &#61649; &#61485; &#61646; &#61678; &#61677; &#61676; &#61485; &#61646; i k i t k t k low it it t f y f y I t a b j ) ( ) ( ), ( | min arg 2 &#61537; &#61537; &#61537; 3. Return B = {i, j}.<ns0:label>11</ns0:label></ns0:formula><ns0:p>WSS 2 uses second order information and checks only O(l) possible working sets to select j through using the same i as in WSS 1. The WSS 2 algorithm achieves faster convergence than existing selection methods using first order information. It has been used in the software LIBSVM <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> (since version 2.8) and is valid for all symmetric kernel matrices K, including the non-positive definite kernel.</ns0:p><ns0:p>Lin <ns0:ref type='bibr'>[22][23]</ns0:ref> pointed out the maximal violating pair was important to SMO-type methods. When the working set B is the maximal violating pair, SMO-type methods converge to a stationary point. Otherwise, it is uncertain whether the convergence will be established. Chen et al. <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> proposed a general WSS method via the 'constant-factor violating pair'. Under a fixed constant-factor &#963; specified by the user, the selected violating pair is linked to the maximal violating pair.</ns0:p><ns0:p>The 'constant-factor violating pair' is considered to be a 'sufficiently violated' pair. And they prove the convergence of the WSS method. WSS 3 (WSS via the 'constant-factor violating pair' <ns0:ref type='bibr'>[20][24]</ns0:ref> ). </ns0:p><ns0:formula xml:id='formula_12'>1. Given a fixed 0 &lt; &#963; &#8804; 1 in all iterations. 2. Compute , (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>) )} ( | ) ( { max ) ( k up t k t t k I t f y m &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; . (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>) )} ( | ) ( { min ) ( k low t k t t k I t f y M &#61537; &#61537; &#61537; &#61646; &#61649; &#61485; &#61501; 3.Select i,</ns0:formula><ns0:formula xml:id='formula_15'>) ( k up I i &#61537; &#61646; ) ( k low I j &#61537; &#61646; . (15) 0 )) ( ) ( ( ) ( ) ( &#61502; &#61485; &#61619; &#61649; &#61483; &#61649; &#61485; k k j k j i k i M m f y f y &#61537; &#61537; &#61555; &#61537; &#61537; 4. Return B = {i, j}.<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Clearly (15) guarantees the quality of the working set B if it is related to the maximal violating pair. Fan et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> explained that WSS 2 was a special case of WSS 3 under the special value of &#963;.</ns0:p><ns0:p>Furthermore, Zhao et al. <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref> employed algorithm WSS 2 to test the datasets by LIBSVM. They find two interesting phenomena. One is that some &#945; are not updated in the entire training process. Another is that some &#945; are updated again and again. 
Therefore, they propose a new method WSS-WR and a certain &#945; are selected only once to improve the efficiency of WSS, especially the reduction of the training time.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3.'>Differential privacy</ns0:head><ns0:p>Recently, with the advent of the digital age, huge amounts of personal information have been collected by web services and mobile devices. Although data sharing and mining large-scale personal information can help improve the functionality of these services, it also raises privacy concerns for data contributors. DP provides a mathematically rigorous definition of privacy and has become a new accepted standard for private data analysis. It ensures that any possible outcome of an analysis is almost equal regardless of an individual's presence or absence in the dataset, and the output difference is controlled by a relatively small privacy budget. The smaller the budget, the higher the privacy. Therefore, the adversary cannot distinguish whether an individual's in the dataset <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> . Furthermore, DP is compatible with various kinds of data sources, data mining algorithms, and data release models.</ns0:p><ns0:p>In dataset D, each row corresponds to one individual, and each column represents an attribute value. If two datasets D and D' only differ on one element, they are defined as neighboring datasets. DP aims to mask the different results of the query function f in neighboring datasets. The maximal difference of the query results is defined as the sensitivity &#916;f. DP is generally achieved by a randomized mechanism , which returns a random vector from a probability d R D M &#61614; : distribution. A mechanism M satisfies DP if the affection of the outcome probability by adding or removing a single element is controlled within a small multiplicative factor <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> . The formal definition is given as follows.</ns0:p><ns0:p>Definition 2 (&#603;-differential privacy <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> ). A randomized mechanism M gives &#603;-DP if for all datasets D and D' differing on at most one element, and for all subsets of possible outcomes S Range(M),</ns0:p><ns0:formula xml:id='formula_16'>&#61645; . (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>) ] ) ' ( Pr[ ) exp( ] ) ( Pr[ S D M S D M &#61646; &#61620; &#61603; &#61646;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#61541;</ns0:head><ns0:p>Sensitivity is a vital concept in DP that represents the largest affection of the query function output made by a single element. Meanwhile, sensitivity determines the requirements of how much perturbation by a particular query function <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> . Definition 3 (Sensitivity <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> ). For a given query function , and neighboring datasets D and D',</ns0:p><ns0:formula xml:id='formula_18'>the d R D f &#61614; : sensitivity of f is defined as . (<ns0:label>17</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>) 1 ' , ) ' ( ) ( max D f D f f D D &#61485; &#61501; &#61508;</ns0:formula><ns0:p>The sensitivity depends only on the query function f, and not on the instances in datasets. f &#61508; Any mechanism that meets Definition 2 is considered as satisfying DP <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> . 
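To illustrate Definitions 2 and 3 on a toy example of our own (not taken from the paper), the snippet below computes, by brute force, how much a simple counting query can change between a dataset and its neighbors obtained by removing one record; for a count, this change is at most 1.

```python
from itertools import combinations

def count_query(dataset, threshold=50):
    """Toy numeric query f(D): how many records exceed a threshold."""
    return sum(1 for x in dataset if x > threshold)

def empirical_sensitivity(dataset, query):
    """Max |f(D) - f(D')| over all D' obtained by removing one record (Definition 3)."""
    full = query(dataset)
    return max(abs(full - query(list(d_prime)))
               for d_prime in combinations(dataset, len(dataset) - 1))

D = [23, 67, 81, 45, 90, 12]   # made-up records
print(count_query(D), empirical_sensitivity(D, count_query))   # a count changes by at most 1
```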
Currently, two principal mechanisms have been used for realizing DP: the Laplace mechanism [11] and the exponential mechanism [10].

Definition 4 (Laplace mechanism [11]). For a numeric function f : D → R^d on a dataset D, the mechanism M in Eq. (18) provides ε-DP.

M(D) = f(D) + Lap(Δf/ε)^d.    (18)

The Laplace mechanism gets the real result of the numerical query and then perturbs it by adding independent random noise. Let Lap(b) represent the random noise sampled from a Laplace distribution with scale b, calibrated according to the sensitivity. The Laplace mechanism is usually used for numerical data, while for non-numerical queries, DP uses the exponential mechanism to randomize the results.

Definition 5 (Exponential mechanism [10]). Let q(D, r) be a score function on a dataset D that measures the quality of output r ∈ R, and let Δq represent its sensitivity. The mechanism M satisfies ε-DP if

M(D) = { return r with probability ∝ exp( ε q(D, r) / (2Δq) ) }.    (19)

The exponential mechanism is useful to select a discrete output in a differentially private manner; it employs the score function q to evaluate the quality of an output r and assigns every output a nonzero selection probability.

4. DPWSS algorithm

In this paper, we study the problem of how to privately release the classification model of SVMs while satisfying DP. To overcome the shortcomings of the privacy-preserving SVM classification methods, such as the low accuracy or complex sensitivity analysis of output perturbation and objective perturbation, we propose the algorithm DPWSS for training SVMs in this section. The DPWSS algorithm is achieved by privately selecting the working set with the exponential mechanism in every iteration. As far as we know, DPWSS is the first private WSS algorithm based on DP.

4.1. An improved WSS method

In the process of training SVMs, WSS is an important step in SMO-type decomposition methods. Meanwhile, the special properties of the selection process in WSS are perfectly combined with the exponential mechanism of DP. The WSS 3 algorithm is a more general algorithm that selects a working set by checking nearly l possible B's to decide j, although under the restricted condition of parameter σ. By using the same i as in WSS 2, we propose WSS 4 to select a working set based on WSS 3 as below.

WSS 4 (An improved WSS via the 'constant-factor violating pair').
1. Given a fixed 0 < σ ≤ 1 in all iterations.
2. Compute
m(α_k) = max{ −y_t ∇f(α_k)_t | t ∈ I_up(α_k) },    (20)
M'(α_k) = max{ y_t ∇f(α_k)_t | t ∈ I_low(α_k) }.    (21)
3. Select i, j satisfying
i ∈ arg m(α_k),  j ∈ I_low(α_k),    (22)
m(α_k) + y_j ∇f(α_k)_j ≥ σ ( m(α_k) + M'(α_k) ) > 0.    (23)
4. Return B = {i, j}.

4.2. The score function and sensitivity in the exponential mechanism

In the exponential mechanism, the scoring function is an important guarantee for achieving DP. The rationality of the scoring function design is directly related to the execution efficiency of mechanism M. For one output r, the greater the value of the scoring function, the greater the probability that r will be selected. Based on the definition of the 'maximal violating pair', it is obvious that

m(α_k) + M'(α_k) ≥ m(α_k) + y_j ∇f(α_k)_j.    (24)

From Neq. (23) and (24), we conclude that

m(α_k) + M'(α_k) ≥ m(α_k) + y_j ∇f(α_k)_j ≥ σ ( m(α_k) + M'(α_k) ) > 0.    (25)

We designed a simple scoring function q(D, r) for the DPWSS algorithm based on WSS 4 and Neq. (25) as follows

1 ≥ q(D, r) = ( m(α_k) + y_j ∇f(α_k)_j ) / ( m(α_k) + M'(α_k) ) ≥ σ,    (26)

where r denotes the working set B, which contains the violating pair i and j. The larger the value of the scoring function q(D, r), the closer the selected violating pair is to the maximal violating pair. The sensitivity of the scoring function q(D, r) is

Δq = 1 − σ,    (27)

and the value of Δq is a small number, less than 1. In the exponential mechanism, the output r is selected randomly with probability

Pr(r) = exp( ε q(D, r) / (2Δq) ) / Σ_{r'∈R} exp( ε q(D, r') / (2Δq) ).    (28)

4.3. Privacy budget

The privacy budget is a vital parameter in DP, which controls the privacy level in a randomized mechanism M. The smaller the privacy budget, the higher the privacy level. When the allocated privacy budget runs out, mechanism M will lose privacy protection, especially for an iterative process. To improve the utilization of the privacy budget, every pair of working sets is selected only once during the entire training process as in [25]. Meanwhile, in DPWSS every iteration is based on the result of the last iteration, but not based on the entire original dataset. Therefore, there is no need to split the privacy budget for every iteration.

4.4. Description of DPWSS algorithm

In the DPWSS algorithm, DP is achieved by privately selecting the working set with the exponential mechanism in every iteration.
<ns0:div><ns0:head n='4.4.'>Description of DPWSS algorithm</ns0:head><ns0:p>In the DPWSS algorithm, DP is achieved by privately selecting the working set with the exponential mechanism in every iteration. We first present an overview of the DPWSS algorithm and then elaborate on the key steps. Finally, we describe an SMO-type decomposition method using the DPWSS algorithm in detail.</ns0:p><ns0:p>The description of the DPWSS algorithm is shown below (Algorithm 1). The DPWSS algorithm selects multiple violating pairs that meet the constraints based on WSS 4, and then randomly selects one of them with a certain probability by the exponential mechanism to satisfy DP. Firstly, the DPWSS algorithm computes m(α) and M'(α) for the scoring function q from Line 1 to Line 4 and determines i as one element of the violating pair. Secondly, it computes the scoring function q from Line 5 to Line 12. The constraint in Line 6 (I[i][t] == false) ensures that the violating pair {i, j} has not been selected previously, and that the value range of the other element j and the violating pair are valid for the changes of gradient G. The constraint in Line 8 ensures that the scoring function value is effective under the constant-factor σ. Line 14 and Line 15 are the key steps of the exponential mechanism, which randomly select a violating pair with the probability derived from the scoring function q. Lastly, the DPWSS algorithm outputs the violating pair {i, j} as the working set B in Line 15. The time and memory complexity of the DPWSS algorithm is O(l).</ns0:p><ns0:p>In summary, an SMO method using the DPWSS algorithm is shown below (Algorithm 2). Algorithm 2 is an iterative process, which first selects the working set B by DPWSS and then updates the dual vector α and the gradient G in every iteration. After the iterative process, the algorithm outputs the final α. There are three ways to leave the iterative process: α is a stationary point, all violating pairs have been selected, or the number of iterations exceeds the maximum value. Using Algorithm 2, we privately release the classification model of SVMs with the dual vector α while satisfying the requirement of DP.</ns0:p></ns0:div>
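<ns0:p>The following Python sketch mirrors the flow of Algorithm 1 for illustration: it computes m(α) and M'(α), collects the not-yet-selected violating pairs that satisfy the constraint of Eq. (23), scores them with Eq. (26), and draws one pair with the exponential mechanism using sensitivity 1 - σ. It is a simplified, hypothetical re-implementation; the function name and data layout are our own, and the actual DPWSS code is integrated into an SMO solver.</ns0:p>

```python
import math
import random

def dpwss_select(grad, y, I_up, I_low, selected, sigma, epsilon):
    """Sketch of one DPWSS working-set selection (cf. Algorithm 1).
    grad[t]: gradient entry for variable t; y[t]: its label (+1/-1);
    I_up / I_low: index sets; selected: set of pairs already used."""
    # m(alpha) and the index i attaining it (Eq. (20)).
    i = max(I_up, key=lambda t: -y[t] * grad[t])
    m_alpha = -y[i] * grad[i]
    # M'(alpha) over I_low (Eq. (21)).
    M_alpha = max(y[t] * grad[t] for t in I_low)
    candidates, scores = [], []
    for j in I_low:
        violation = m_alpha + y[j] * grad[j]
        # Constant-factor violating pair (Eq. (23)) that was not selected before.
        if (i, j) not in selected and violation >= sigma * (m_alpha + M_alpha) > 0:
            candidates.append(j)
            scores.append(violation / (m_alpha + M_alpha))  # Eq. (26), in [sigma, 1]
    if not candidates:
        return None  # no admissible violating pair is left
    # Exponential mechanism with sensitivity 1 - sigma (Eqs. (27) and (28)).
    weights = [math.exp(epsilon * s / (2.0 * (1.0 - sigma))) for s in scores]
    j = random.choices(candidates, weights=weights, k=1)[0]
    selected.add((i, j))
    return i, j
```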
<ns0:div><ns0:head n='4.5.'>Privacy analysis</ns0:head><ns0:p>In the DPWSS algorithm, randomness is introduced by randomly selecting working sets with the exponential mechanism. By using the exponential mechanism, a violating pair is selected randomly with a certain probability, and the greater the probability, the closer the selected violating pair is to the maximal violating pair. In every iteration, the violating pair output by the DPWSS algorithm is therefore uncertain. This uncertainty masks the impact of any individual record change on the algorithm results, thus protecting the data privacy.</ns0:p><ns0:p>According to the definition of DP given in Section 3, we prove in Theorem 1 below that the DPWSS algorithm satisfies DP.</ns0:p><ns0:p>Theorem 1. The DPWSS algorithm satisfies DP.</ns0:p><ns0:p>Proof. Let M(D, q) denote the selection of the output r (a violating pair) in one iteration, and let ε be the privacy budget allocated to the DPWSS algorithm. Based on Eq. (28), a violating pair r is randomly selected as the working set with the corresponding probability by the exponential mechanism. To accord with the standard form of the exponential mechanism, we use q to denote q' in the DPWSS algorithm.</ns0:p><ns0:formula>\frac{\Pr(M(D,q)=r)}{\Pr(M(D',q)=r)} = \frac{\exp\!\left(\frac{\varepsilon q(D,r)}{2\Delta q}\right) \Big/ \sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)}{\exp\!\left(\frac{\varepsilon q(D',r)}{2\Delta q}\right) \Big/ \sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D',r')}{2\Delta q}\right)} = \exp\!\left(\frac{\varepsilon\big(q(D,r)-q(D',r)\big)}{2\Delta q}\right) \times \frac{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D',r')}{2\Delta q}\right)}{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)} \le \exp\!\left(\frac{\varepsilon}{2}\right) \times \frac{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon}{2}\right)\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)}{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)} = \exp\!\left(\frac{\varepsilon}{2}\right) \times \exp\!\left(\frac{\varepsilon}{2}\right) \times \frac{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)}{\sum_{r'\in O}\exp\!\left(\frac{\varepsilon q(D,r')}{2\Delta q}\right)} = \exp(\varepsilon)</ns0:formula><ns0:p>According to Definition 2, this shows that Pr(M(D,q)=r) ≤ exp(ε) × Pr(M(D',q)=r). Therefore, the DPWSS algorithm satisfies DP.</ns0:p><ns0:p>Algorithm 2 is an iterative process in which DPWSS is the vital step that privately selects a working set. As the DPWSS algorithm satisfies DP, the steps of updating the dual vector α and the gradient G in every iteration are performed without accessing the private data. To improve the utilization of the privacy budget, every pair of working sets is selected only once during the entire training process. Meanwhile, in Algorithm 2 every iteration is based on the result of the last iteration, not on the original dataset. Therefore, Algorithm 2 satisfies DP.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Experiments</ns0:head><ns0:p>In this section, we compare the performance of the DPWSS algorithm with WSS 2, which is a classical non-private WSS algorithm that is used in the software LIBSVM <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. The comparison between WSS 2 and WSS 1 was done in [20]. We do not compare the DPWSS algorithm with other private SVMs.
One reason is that randomness is introduced in different ways, and the other is that the DPWSS algorithm achieves classification accuracy and optimized objective values almost identical to those of the original non-private SVM algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1.'>Datasets and experimental environment</ns0:head><ns0:p>The datasets used in the experiments are partly the same as those in [19], [20], and [25]. All datasets are for binary classification and are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/. The basic information of the datasets includes the dataset size, value range, number of features, and imbalance ratio, as shown in Table 2 below. To make the figures neater, we use breast to denote the breast-cancer dataset and german to denote the german.number dataset.</ns0:p><ns0:p>Table 2.</ns0:p><ns0:p>To carry out the contrast experiments efficiently, we implement the DPWSS algorithm in C++ based on LIBSVM (version 3.24) and use GNU Octave (version 5.2). All parameters are set to their default values.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.'>An example of a private classification model</ns0:head><ns0:p>Unlike other private SVMs, which introduce randomness into the objective function or the classification result by the Laplace mechanism, our method introduces randomness into the training process of the SVM. This is achieved by privately selecting the working set with the exponential mechanism in every iteration. We give an example of a private classification model to show how privacy is protected in Figure 1. The data uses two columns of the heart dataset, and the positive and negative instances are moved towards opposite ends for easier classification. The solid lines represent the original non-private classification model and the circles represent support vectors. The dotted lines represent a private classification model obtained by training the SVM with the DPWSS algorithm. The differences between the private and non-private classification models are very small, and the private model achieves similar classification accuracy. All the generated classification models differ from each other, which protects the privacy of the training data.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.'>Algorithm performance experiments</ns0:head><ns0:p>In this section, we evaluate the performance of the DPWSS algorithm versus WSS 2 for the entire training process. The performance metrics include classification capability, algorithm stability, and execution efficiency under different values of the constant-factor σ and the privacy budget ε.</ns0:p><ns0:p>The classification capability is measured by AUC, Accuracy, Precision, Recall, F1, and Mcc.</ns0:p><ns0:formula>AUC = \frac{\sum_{i \in positiveClass} rank_{i} - \frac{M(M+1)}{2}}{M \times N} \quad (29)</ns0:formula><ns0:p>Here rank_i denotes the serial number of instance i after sorting by the predicted probability, M is the number of positive instances, and N is the number of negative instances. The higher the AUC, the better the usability of the algorithm.</ns0:p>
<ns0:p>Other metrics are calculated as shown below; they are all based on the confusion matrix.</ns0:p><ns0:formula>Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \quad (30)</ns0:formula><ns0:formula>Precision = \frac{TP}{TP + FP} \quad (31)</ns0:formula><ns0:formula>Recall = \frac{TP}{TP + FN} \quad (32)</ns0:formula><ns0:formula>F1 = \frac{2\,TP}{2\,TP + FP + FN} \quad (33)</ns0:formula><ns0:formula>Mcc = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \quad (34)</ns0:formula><ns0:p>The algorithm stability is measured by the error of the optimized objective value between the DPWSS algorithm and WSS 2, named objError. The smaller the objError, the better the stability of the algorithm.</ns0:p><ns0:p>The execution efficiency of the algorithm is measured by the ratio of the numbers of iterations of the two algorithms, named iterationRatio. The smaller the iterationRatio, the better the execution efficiency of the algorithm. We do not compare the training time of the two algorithms, as the entire training process takes only milliseconds for most of the datasets.</ns0:p>
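<ns0:p>As an illustration of Eqs. (30)-(34), the short Python sketch below computes these confusion-matrix based metrics from true and predicted binary labels. It is a generic helper added here for clarity and is not part of the DPWSS implementation.</ns0:p>

```python
import math

def confusion_metrics(y_true, y_pred, positive=1):
    """Compute Accuracy, Precision, Recall, F1 and Mcc (Eqs. (30)-(34))
    from binary labels; `positive` marks the positive class."""
    TP = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    TN = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    FP = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    FN = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (TP + TN) / (TP + TN + FP + FN)
    precision = TP / (TP + FP) if TP + FP else 0.0
    recall = TP / (TP + FN) if TP + FN else 0.0
    f1 = 2 * TP / (2 * TP + FP + FN) if 2 * TP + FP + FN else 0.0
    denom = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    mcc = (TP * TN - FP * FN) / denom if denom else 0.0
    return {"Accuracy": accuracy, "Precision": precision,
            "Recall": recall, "F1": f1, "Mcc": mcc}

# Hypothetical usage with labels in {+1, -1}:
print(confusion_metrics([1, 1, -1, -1, 1], [1, -1, -1, -1, 1]))
```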
<ns0:p>To evaluate the influence of different values of the constant-factor σ and the privacy budget ε on the three performance metrics, we set σ to 0.1, 0.3, 0.5 and 0.7 with ε fixed at 1, and set ε to 0.1, 0.5 and 1 with σ fixed at 0.7. We do not set σ to 0.9, because in that case most of the violating pairs are filtered out and the algorithm fails to reach the final objective value.</ns0:p><ns0:p>Firstly, we measure the classification capability of the DPWSS algorithm versus WSS 2. The experiments for the DPWSS algorithm were repeated 5 times under different σ and ε, and the averages of the experimental results are shown in Table 3. The results show that the DPWSS algorithm achieves almost the same classification capability as WSS 2 on all datasets; the maximum error between them is no more than 3%. Due to the repeated execution of the iterative process, the DPWSS algorithm obtains a good private classification model. The classification capability is affected neither by the randomness of DP nor by the filtering effect of parameter σ on violating pairs. The DPWSS algorithm introduces randomness into the training process of SVMs, not into the objective function or the classification result. There is no requirement on the differentiability of the objective function, no complex sensitivity analysis, and less influence of high-dimensional data on the noise. Therefore, the DPWSS algorithm reaches the target extremum through the optimization process under the given conditions. Meanwhile, the imbalance of a dataset has little effect on the classification capability of the DPWSS algorithm.</ns0:p><ns0:p>Table 3.</ns0:p><ns0:p>Secondly, we compare the optimized objective values and measure the algorithm stability by objError between the DPWSS algorithm and WSS 2. The experimental results are shown in Figure 2 to Figure 5. The results show that the DPWSS algorithm achieves optimized objective values similar to those of WSS 2 on all datasets under different σ and ε. The errors between the DPWSS algorithm and WSS 2 are very small (nearly within two). Due to the repeated execution of the iterative process, the DPWSS algorithm converges stably to the optimized objective values and is affected neither by the randomness of DP nor by the filtering effect of parameter σ on violating pairs. With the increase of σ, the errors also tend to increase.</ns0:p><ns0:p>Lastly, we compare the numbers of iterations and measure the execution efficiency by iterationRatio between the two algorithms. The experimental results are shown in Figure 6 to Figure 21. The results show that the DPWSS algorithm achieves higher execution efficiency with fewer iterations than WSS 2 on all datasets under different σ and ε. Because the DPWSS algorithm introduces randomness into the WSS process, the number of iterations increases somewhat. However, with the increase of the constant-factor σ, the iterations are increasingly affected by its filtering effect on violating pairs. When σ increases to 0.3, the execution efficiency of the DPWSS algorithm is already higher than that of WSS 2 for most datasets. When σ increases to 0.7, the iterations of the DPWSS algorithm are far fewer than those of WSS 2 for all datasets except ijcnn1. Therefore, our method should set a larger σ for big datasets. The privacy budget ε has little effect on the iterations under a fixed constant-factor σ.</ns0:p><ns0:p>In the above experiments, we compared the average results of 5 runs of the DPWSS algorithm with the WSS 2 algorithm. It can be seen from the experimental results that the two algorithms have similar classification capability and optimized objective values under different parameter combinations. Under the same set of parameters, the results of the DPWSS algorithm differ little from run to run; the main difference lies in the iterations. These slight differences show that the DPWSS algorithm has good usability while satisfying DP. Due to space limitations, we have not listed each individual run result.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>Conclusions</ns0:head><ns0:p>In this paper, we study the privacy leakage problem of traditional SVM training methods. The DPWSS algorithm is proposed to release a private classification model of SVMs, and it is theoretically proved to satisfy DP by utilizing the exponential mechanism to privately select working sets in every iteration. The extensive experiments show that the DPWSS algorithm achieves classification capability and optimized objective values similar to those of the original non-private SVM under different parameters. Meanwhile, the DPWSS algorithm has a higher execution efficiency, as shown by comparing the iterations on different datasets. In the DPWSS algorithm, randomness is introduced in the training process. Its most prominent advantages are that it requires neither differentiability of the objective function nor a complex sensitivity analysis, in contrast to objective perturbation or output perturbation methods, and that a number of training set selection methods can easily be combined with the DPWSS algorithm for large-scale training problems that require large memory and enormous amounts of training time.
Because the DPWSS algorithm does not change the training process of classical non-private SVMs, it is also suitable for multi-class classification. One remaining challenge is the setting of the constant-factor σ for different datasets. The idea of introducing randomness into the optimization process can easily be extended to other privacy-preserving machine learning algorithms, and ensuring that such methods meet the DP requirements is another challenge. Furthermore, the DPWSS algorithm is valid for releasing a private classification model for linear SVMs, but not for SVMs with non-linear kernels, because of the privacy disclosure problem of the support vectors in the kernel function. In future work, we will study how to release a private classification model for non-linear kernel SVMs.</ns0:p></ns0:div><ns0:figure xml:id='fig_3'><ns0:head>WSS 4 (An improved WSS via the 'constant-factor violating pair')</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>To make the algorithm satisfy the restricted condition of parameter σ, and by using the same σ as in WSS 2, we propose WSS 4 to select a working set based on WSS 3 as shown below.</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_4'><ns0:head>Algorithm 1 DPWSS</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Input: G: gradient array; y: array of instance labels in {+1, -1}; l: number of instances; α: dual vector; I: bool matrix of selected violating pairs; σ: constant-factor; ε: privacy budget; eps: stopping tolerance; Output: B: working set; Begin 1: initialize m(α) and M'(α) to -INF; 2: find m(α) by Eq. (20) for t in [0:l-1] and t in I_up(α); 3: set i = t; 4: find M'(α) by Eq. (21) for t in [0:l-1] and t in I_low(α); 5: for t = 0 to l-1 6: if I[i][t] == false</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_5'><ns0:head>Algorithm 2 A SMO method using DPWSS</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Input: Q: kernel symmetric matrix; y: array of instance labels in {+1, -1}; l: number of instances; C: upper bound of all dual variables; Output: α: dual vector; Begin 1: initialize gradient array G to all -1, dual vector α to all 0, and the bool matrix I of selected violating pairs to all 0; 2: find α^1 as the initial feasible solution, set k = 1; 3: while k &lt; max_iter</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_6'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_9'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci.
reviewing PDF | (CS-2021:03:59525:4:0:NEW 29 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11.Figure 12.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 16 .Figure 17 .</ns0:head><ns0:label>1617</ns0:label><ns0:figDesc>Figure 16.Figure 17.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Figure 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic 
coords='26,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> "
"Cover letter Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Zhenlong Sun On behalf of all authors. Reviewer 4 Basic reporting: 2. Please avoid using bulked references, such as “Support vector machine (SVM)[1][2][3][4][5]” – it does not bring any useful to the reader. Please unfold such bulked references. We unfolded these references and gave a detail description in the introduction section. 3. The authors should clearly state what do they mean by the “working set selection”. We added the statement of working set selection in section 3.2. 7. Although the English is acceptable, the manuscript would benefit from proofreading (there are several grammatical errors around). We polished the manuscript and corrected some grammar errors. Experimental design: 1. The characteristics of the datasets used in the experimental study must be expanded. Specifically, what is the imbalance ratio within each dataset? Does it have any impact on the overall performance of the optimizer? Also, the number of datasets is fairly small – currently, it is common to test such SVM-related algorithms over 50+ datasets to fully understand their generalization abilities. To this end, I suggest including much more benchmarks in the experimental validation (especially given that only one dataset may be considered “large”). We added the imbalance ratio as one of the characteristic of the datasets as shown in Table 2. The experimental results show that it has little effect on the classification performance. We also added eight benchmark datasets in the experiments including some large or high-dimensional datasets. However, due to space limitations we did not test more datasets. 4. The authors should present all relevant quality metrics: AUC, accuracy, precision, recall, F1 and MCC. We presented these quality metrics and gave the computational formula. The experimental results are shown in Table3. 5. The authors should back up their conclusions with appropriate statistical testing (here, non-parametric statistical tests) to fully understand if the differences between different optimizers are significant in the statistical sense. We explained this question at the end of the experiment section. In the experiments, we compared the average results of 5 times running of DPWSS algorithm with WSS 2 algorithm. The closer the experimental performance of the two algorithms, the higher the usability of the privacy algorithm. The difference of classification model results from the introduction of the randomness under DP to achieve the purpose of privacy protection. "
Here is a paper. Please give your review comments after reading it.
304
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We propose a multi-scale image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multi-scale fusion process. First, size-selective iterative guided filtering is applied to decompose the source images into approximation and residual layers at multiple spatial scales. Then, frequency-tuned filtering is used to compute saliency maps at successive spatial scales. Next, at each spatial scale binary weighting maps are obtained as the pixelwise maximum of corresponding source saliency maps. Guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. The final fused image is obtained as the weighted recombination of the individual residual layers and the mean of the approximation layers at the coarsest spatial scale. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. The method has a simple implementation and is computationally efficient.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The increasing deployment and availability of co-registered multimodal imagery from different types of sensors has spurred the development of image fusion techniques. The information provided by different sensors registering the same scene can either be (partially) redundant or complementary and may be corrupted with noise. Effective combinations of complementary and partially redundant multispectral imagery can therefore visualize information that is not directly evident from the individual input images. For instance, in nighttime (low-light) outdoor surveillance applications, intensified visual (II) or near-infrared (NIR) imagery often provides a detailed but noisy representation of a scene. While different types of noise may result from several processes associated with the underlying sensor physics, additive noise is typically the predominant noise component encountered in II and NIR imagery <ns0:ref type='bibr' target='#b45'>(Petrovic &amp; Xydeas 2003)</ns0:ref>. Additive noise can be modelled as a random signal that is simply added to the original signal. As a result, additive noise may obscure or distort relevant image details. In addition, targets of interest like persons or cars are sometimes hard to distinguish in II or NIR imagery because of their low luminance contrast. While thermal infrared (IR) imagery typically represents these targets with high contrast, their background (context) is often washed out due to low thermal contrast. In this case, a fused image that clearly represents both the targets and their background enables a user to assess the location of targets relative to landmarks in their surroundings, thus providing more information than either of the input images alone. 
Some potential benefits of image fusion are: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased system robustness. Image fusion has important applications in defense and security for situational awareness <ns0:ref type='bibr' target='#b62'>(Toet et al. 1997)</ns0:ref>, surveillance <ns0:ref type='bibr' target='#b52'>(Shah et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b79'>Zhu &amp; Huang 2007)</ns0:ref>, target tracking <ns0:ref type='bibr' target='#b41'>(Motamed et al. 2005;</ns0:ref><ns0:ref type='bibr' target='#b81'>Zou &amp; Bhanu 2005)</ns0:ref>, intelligence gathering (O'Brien &amp; Irvine 2004), concealed weapon detection <ns0:ref type='bibr' target='#b5'>(Bhatnagar &amp; Wu 2011;</ns0:ref><ns0:ref type='bibr' target='#b40'>Liu et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b59'>Toet 2003;</ns0:ref><ns0:ref type='bibr' target='#b70'>Xue &amp; Blum 2003;</ns0:ref><ns0:ref type='bibr' target='#b71'>Xue et al. 2002;</ns0:ref><ns0:ref type='bibr' target='#b72'>Yajie &amp; Mowu 2009)</ns0:ref>, detection of abandoned packages <ns0:ref type='bibr' target='#b4'>(Beyan et al. 2011</ns0:ref>) and buried explosives <ns0:ref type='bibr' target='#b33'>(Lepley &amp; Averill 2011)</ns0:ref>, and face recognition <ns0:ref type='bibr' target='#b29'>(Kong et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b54'>Singh et al. 2008)</ns0:ref>. Other important image fusion applications are found in industry <ns0:ref type='bibr' target='#b56'>(Tian et al. 2009</ns0:ref>), art analysis <ns0:ref type='bibr' target='#b80'>(Zitov&#225; et al. 2011)</ns0:ref>, agriculture <ns0:ref type='bibr' target='#b9'>(Bulanona et al. 2009)</ns0:ref>, remote sensing <ns0:ref type='bibr' target='#b21'>(Ghassemian 2001;</ns0:ref><ns0:ref type='bibr' target='#b26'>Jacobson &amp; Gupta 2005;</ns0:ref><ns0:ref type='bibr' target='#b27'>Jacobson et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b28'>Jiang et al. 2011</ns0:ref>) and medicine <ns0:ref type='bibr' target='#b1'>(Agarwal &amp; Bedi 2015;</ns0:ref><ns0:ref type='bibr' target='#b6'>Biswas et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Daneshvar &amp; Ghassemian 2010;</ns0:ref><ns0:ref type='bibr' target='#b53'>Singh &amp; Khare 2014;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wang et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b74'>Yang &amp; Liu 2013</ns0:ref>) (for a survey of different applications of image fusion techniques see <ns0:ref type='bibr'>(Blum &amp; Liu 2006)</ns0:ref>). In general, image fusion aims to represent the visual information from any number of input images in a single composite (fused) image that is more informative than each of the input images alone, eliminating noise in the process while preventing both the loss of essential information and the introduction of artefacts. This requires the availability of filters that combine the extraction of relevant image details with noise reduction. To date, a variety of image fusion algorithms have been proposed. A popular class of algorithms are the multi-scale image fusion schemes, which decompose the source images into spatial primitives at multiple spatial scales, then integrate these primitives to form a new ('fused') multi-scale representation, and finally apply an inverse multi-scale transform to reconstruct the fused image. 
Examples of this approach are for instance the Laplacian pyramid <ns0:ref type='bibr' target='#b12'>(Burt &amp; Adelson 1983)</ns0:ref>, the Ratio of Low-Pass pyramid <ns0:ref type='bibr' target='#b58'>(Toet 1989b)</ns0:ref>, the contrast pyramid ( <ns0:ref type='bibr' target='#b63'>(Toet et al. 1989</ns0:ref>)), the filter-subtract-decimate Laplacian pyramid <ns0:ref type='bibr' target='#b10'>(Burt 1988;</ns0:ref><ns0:ref type='bibr' target='#b13'>Burt &amp; Kolczynski 1993)</ns0:ref>, the gradient pyramid <ns0:ref type='bibr' target='#b11'>(Burt 1992;</ns0:ref><ns0:ref type='bibr' target='#b13'>Burt &amp; Kolczynski 1993)</ns0:ref>, the morphological pyramid <ns0:ref type='bibr' target='#b57'>(Toet 1989a)</ns0:ref>, the discrete wavelet transform <ns0:ref type='bibr' target='#b32'>(Lemeshewsky 1999;</ns0:ref><ns0:ref type='bibr' target='#b34'>Li et al. 1995;</ns0:ref><ns0:ref type='bibr' target='#b36'>Li et al. 2002;</ns0:ref><ns0:ref type='bibr' target='#b51'>Scheunders &amp; De Backer 2001)</ns0:ref>, the shift invariant discrete wavelet transform <ns0:ref type='bibr' target='#b32'>(Lemeshewsky 1999;</ns0:ref><ns0:ref type='bibr' target='#b48'>Rockinger 1997;</ns0:ref><ns0:ref type='bibr' target='#b49'>Rockinger 1999;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rockinger &amp; Fechner 1998)</ns0:ref>, the contourlet <ns0:ref type='bibr' target='#b73'>(Yang et al. 2010)</ns0:ref>, the shift-invariant shearlet transform <ns0:ref type='bibr' target='#b66'>(Wang et al. 2014)</ns0:ref>, the non-subsampled shearlet transform <ns0:ref type='bibr' target='#b30'>(Kong et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b38'>Liu et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b76'>Zhang et al. 2015)</ns0:ref>, the ridgelet transform <ns0:ref type='bibr' target='#b55'>(Tao et al. 2005)</ns0:ref>. The filters applied in several of the earlier techniques typically produce halo artefacts near edges. More recent methods like shearlets, contourlets and ridgelets are better capable to preserve local image features but are often complex or time-consuming. Non-linear edge-preserving smoothing filters such as anisotropic diffusion <ns0:ref type='bibr' target='#b44'>(Perona &amp; Malik 1990)</ns0:ref>, robust smoothing <ns0:ref type='bibr' target='#b7'>(Black et al. 1998</ns0:ref>) and the bilateral filter <ns0:ref type='bibr' target='#b65'>(Tomasi &amp; Manduchi 1998)</ns0:ref> may appear effective tools to prevent artefacts that arise from spatial inconsistencies in multiscale image fusion schemes. However, anisotropic diffusion tends to over sharpen edges and is computationally expensive, which makes it less suitable for application in multi-scale fusion schemes <ns0:ref type='bibr' target='#b18'>(Farbman et al. 2008)</ns0:ref>. The non-linear bilateral filter (BLF) assigns each pixel a weighted mean of its neighbors, with the weights decreasing both with spatial distance and with difference in value <ns0:ref type='bibr' target='#b65'>(Tomasi &amp; Manduchi 1998)</ns0:ref>. While the BLF is quite effective at smoothing small intensity changes while preserving strong edges and has efficient implementations, it also tends to blur across edges at larger spatial scales, thereby limiting its value for application in multi-scale image decomposition schemes <ns0:ref type='bibr' target='#b18'>(Farbman et al. 2008</ns0:ref>). 
In addition, the BLF has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: <ns0:ref type='bibr' target='#b24'>He et al. 2013</ns0:ref>). In the joint (or cross) bilateral filter (JBLF) a second or guidance image serves to steer the edge stopping range filter thus preventing over-or under-blur near edges <ns0:ref type='bibr' target='#b46'>(Petschnigg et al. 2004)</ns0:ref>. <ns0:ref type='bibr' target='#b77'>Zhang et al. (2014)</ns0:ref> showed that the application of the JBLF in an iterative framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. The recently introduced Guided Filter (GF: <ns0:ref type='bibr' target='#b24'>He et al. 2013</ns0:ref>) is a computationally efficient, edge-preserving translation-variant operator based on a local linear model which avoids the drawbacks of bilateral filtering and other previous approaches. When the input image also serves as the guidance image, the GF behaves like the edge preserving BLF. Hence, the GF can gracefully eliminate small details while recovering larger scale edges when applied in an iterative framework.</ns0:p><ns0:p>In this paper we propose a multi-scale image fusion scheme, where iterative guided filtering is used to decompose the input images into approximate and residual layers at successive spatial scales, and guided filtering is used to construct the weight maps used in the recombination process. The rest of this paper is organized as follows. Section 2 briefly discusses the principles of edge preserving filtering and introduces (iterative) guided filtering. In Section 3 we discuss related work. Section 4 presents the proposed guided fusion based image fusion scheme. Section 5 presents the imagery and computational methods that were used to assess the performance of the new image fusion scheme. The results of the evaluation study are presented in Section 6. Finally, in Section 7 the results are discussed and some conclusions are presented.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Edge preserving filtering</ns0:head><ns0:p>In this section we briefly introduce the edge preserving bilateral and joint bilateral filters, show how they are related to the guided filter, and how the application of a guided filter in an iterative framework results in size selective filtering of small scale image details combined with the recovery of larger scale edges.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Bilateral filter</ns0:head><ns0:p>Spatial filtering is a common operation in image processing that is typically used to reduce noise or eliminate small spurious details (e.g., texture). In spatial filtering the value of the filtered image at a given location is a function (e.g., a weighted average) of the original pixel values in a small neighborhood of the same location. Although low pass filtering or blurring (e.g., averaging with Gaussian kernel) can effectively reduce image noise, it also seriously degrades the articulation of (blurs) significant image edges. Therefore, edge preserving filters have been developed that reduce small image variations (noise or texture) while preserving large discontinuities (edges). The bilateral filter is a non-linear filter that computes the output at each pixel as a Gaussian weighted average of their spatial and spectral distances. 
It prevents blurring across edges by assigning larger weights to pixels that are spatially close and have similar intensity values <ns0:ref type='bibr' target='#b65'>(Tomasi &amp; Manduchi 1998)</ns0:ref>. It uses a combination of a (typically Gaussian) spatial filter kernel and a range (intensity) filter kernel that perform a blurring in the spatial domain weighted by the local variation in the intensity domain. It combines a classic low-pass filter with an edge-stopping function that attenuates the filter kernel weights at locations where the intensity difference between pixels is large. Bilateral filtering was developed as a fast alternative to the computationally expensive technique of anisotropic diffusion, which uses gradients of the filtered image itself to guide a diffusion process, avoiding edge blurring <ns0:ref type='bibr' target='#b44'>(Perona &amp; Malik 1990)</ns0:ref>. More formally, at a given image location (pixel) i, the filtered output O_i is given by:</ns0:p><ns0:formula>O_{i} = \frac{1}{K_{i}} \sum_{j \in \Omega} I_{j}\, f(\lVert i - j \rVert)\, g(\lVert I_{i} - I_{j} \rVert) \quad (1)</ns0:formula><ns0:p>where f is the spatial filter kernel (e.g., a Gaussian centered at i), g is the range or intensity (edge-stopping) filter kernel (centered at the image value at i), Ω is the spatial support of the kernel, and K_i is a normalizing factor (the sum of the f·g filter weights). Intensity edges are preserved since the bilateral filter decreases not only with the spatial distance but also with the intensity distance. Though the filter is efficient and effectively reduces noise while preserving edges in many situations, it has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: <ns0:ref type='bibr' target='#b24'>He et al. 2013</ns0:ref>).</ns0:p><ns0:p>In the joint (or cross) bilateral filter (JBLF) the range filter is applied to a second or guidance image G <ns0:ref type='bibr' target='#b46'>(Petschnigg et al. 2004)</ns0:ref>:</ns0:p><ns0:formula>O_{i} = \frac{1}{K_{i}} \sum_{j \in \Omega} I_{j}\, f(\lVert i - j \rVert)\, g(\lVert G_{i} - G_{j} \rVert) \quad (2)</ns0:formula><ns0:p>The JBLF can prevent over- or under-blur near edges by using a related image G to guide the edge stopping behavior of the range filter. That is, the JBLF smooths the image I while preserving edges that are also represented in the image G. The JBLF is particularly favored when the edges in the image that is to be filtered are unreliable (e.g., due to noise or distortions) and when a companion image with well-defined edges is available (e.g., in the case of flash/no-flash image pairs). Thus, in the case of filtering an II image for which a companion (registered) IR image is available, the guidance image may either be the II image itself or its IR counterpart.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Guided filtering</ns0:head><ns0:p>A guided image filter <ns0:ref type='bibr' target='#b24'>(He et al. 2013)</ns0:ref> is a translation-variant filter based on a local linear model. Guided image filtering involves an input image I, a guidance image G and an output image O. The two filtering conditions are (i) that the local filter output is a linear transform of the guidance image G and (ii) as similar as possible to the input image I.</ns0:p>
<ns0:p>The first condition implies that</ns0:p><ns0:formula>O_{i} = a_{k} G_{i} + b_{k}, \qquad \forall i \in \omega_{k} \quad (3)</ns0:formula><ns0:p>where ω_k is a square window of size (2r+1)×(2r+1) and (a_k, b_k) are linear coefficients that are constant in ω_k. The second condition is enforced by minimizing the following cost function in the window ω_k:</ns0:p><ns0:formula>E(a_{k}, b_{k}) = \sum_{i \in \omega_{k}} \Big( (a_{k} G_{i} + b_{k} - I_{i})^{2} + \epsilon a_{k}^{2} \Big) \quad (4)</ns0:formula><ns0:p>where ε is a regularization parameter penalizing large a_k. The coefficients a_k and b_k can directly be solved by linear regression <ns0:ref type='bibr' target='#b24'>(He et al. 2013)</ns0:ref>:</ns0:p><ns0:formula>a_{k} = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_{k}} G_{i} I_{i} - \bar{G}_{k}\,\bar{I}_{k}}{\sigma_{k}^{2} + \epsilon} \quad (5)</ns0:formula><ns0:formula>b_{k} = \bar{I}_{k} - a_{k}\bar{G}_{k} \quad (6)</ns0:formula><ns0:p>where \bar{G}_k and \bar{I}_k denote the means of G and I over ω_k, |ω| is the number of pixels in ω_k, and σ_k² is the variance of G over ω_k. Since pixel i is contained in several different (overlapping) windows ω_k, the value of O_i in Equation (3) depends on the window over which it is calculated. This can be accounted for by averaging over all possible values of O_i:</ns0:p><ns0:formula>O_{i} = \frac{1}{|\omega|} \sum_{k \,|\, i \in \omega_{k}} (a_{k} G_{i} + b_{k}) \quad (7)</ns0:formula><ns0:p>Since \sum_{k | i \in \omega_k} a_k = \sum_{k \in \omega_i} a_k due to the symmetry of the box window, Equation (7) can be written as</ns0:p><ns0:formula>O_{i} = \bar{a}_{i} G_{i} + \bar{b}_{i} \quad (8)</ns0:formula><ns0:p>where \bar{a}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} a_k and \bar{b}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} b_k are the average coefficients of all windows overlapping i. Although the linear coefficients (\bar{a}_i, \bar{b}_i) vary spatially, their gradients will be smaller than those of G near strong edges (since they are the output of a mean filter). As a result we have ∇O ≈ \bar{a}∇G, meaning that abrupt intensity changes in the guiding image G are still largely preserved in the output image O. Equations (5), (6) and (8) define the guided filter. When the input image also serves as the guidance image, the guided filter behaves like the edge preserving bilateral filter, with the parameter ε and the window size r having the same effects as respectively the range and the spatial variances of the bilateral filter. Equation (8) can be rewritten as</ns0:p><ns0:formula>O_{i} = \sum_{j} W_{ij}(G)\, I_{j} \quad (9)</ns0:formula><ns0:p>with the weighting kernel W_ij depending only on the guidance image G:</ns0:p><ns0:formula>W_{ij}(G) = \frac{1}{|\omega|^{2}} \sum_{k:(i,j) \in \omega_{k}} \left( 1 + \frac{(G_{i} - \bar{G}_{k})(G_{j} - \bar{G}_{k})}{\sigma_{k}^{2} + \epsilon} \right) \quad (10)</ns0:formula><ns0:p>Since \sum_{j} W_{ij}(G) = 1, this kernel is already normalized. The guided filter is a computationally efficient, edge-preserving operator which avoids the gradient reversal artefacts of the bilateral filter. The local linear condition formulated by Equation (3) implies that its output is locally approximately a scaled version of the guidance image plus an offset.</ns0:p>
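<ns0:p>The box-filter form of Equations (5), (6) and (8) admits a very compact implementation. The following Python/NumPy sketch is our own illustrative implementation of the grayscale guided filter along these lines (it is not code from He et al. 2013); the parameter radius corresponds to r and eps to ε.</ns0:p>

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, G, radius=4, eps=1e-3):
    """Grayscale guided filter: input image I, guidance image G.
    Implements Eqs. (5), (6) and (8) with box (mean) filters of radius `radius`."""
    I = np.asarray(I, dtype=float)
    G = np.asarray(G, dtype=float)
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # local means over the window
    mean_G, mean_I = mean(G), mean(I)
    var_G = mean(G * G) - mean_G * mean_G           # variance of the guidance image
    cov_GI = mean(G * I) - mean_G * mean_I          # covariance between G and I
    a = cov_GI / (var_G + eps)                      # Eq. (5)
    b = mean_I - a * mean_G                         # Eq. (6)
    # Average the coefficients of all windows overlapping each pixel (Eq. (8)).
    return mean(a) * G + mean(b)

# When G = I the filter behaves like an edge-preserving smoother, e.g.:
# smoothed = guided_filter(img, img, radius=8, eps=0.01)
```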
<ns0:p>This local linear behaviour makes it possible to use the guided filter to transfer structure from the guidance image G to the output image O, even in areas where the input image I is smooth (or flat). This structure-transferring filtering is a useful property of the guided filter, which can for instance be applied for feathering/matting and dehazing <ns0:ref type='bibr' target='#b24'>(He et al. 2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Iterative guided filtering</ns0:head><ns0:p>Zhang et al. <ns0:ref type='bibr' target='#b77'>(Zhang et al. 2014)</ns0:ref> showed that the application of the joint bilateral filter (Equation (2)) in an iterative framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. In this scheme the result G^{t+1} of iteration step t+1 is obtained from the joint bilateral filtering of the input image I using the result G^t of the previous iteration step as the guidance image:</ns0:p><ns0:formula>G_{i}^{t+1} = \frac{1}{K_{i}} \sum_{j \in \Omega} I_{j}\, f(\lVert i - j \rVert)\, g(\lVert G_{i}^{t} - G_{j}^{t} \rVert) \quad (11)</ns0:formula><ns0:p>In this scheme details smaller than the Gaussian kernel of the bilateral filter are removed while the edges of the remaining details are iteratively restored. Hence, this scheme allows the selective elimination of small scale details while preserving the remaining image structure. Note that the initial guidance image G^1 can simply be a constant (e.g., zero) valued image since it updates to the Gaussian filtered input image in the first iteration step. Here we propose to replace the bilateral filter in this scheme by a guided filter to avoid any gradient reversal artefacts.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Related work</ns0:head><ns0:p>As mentioned before, most multi-scale transform-based image fusion methods introduce some artefacts because spatial consistency is not well preserved <ns0:ref type='bibr' target='#b35'>(Li et al. 2013)</ns0:ref>. This has led to the use of edge preserving filters to decompose source images into approximate and residual layers while preserving the edge information in the fusion process. Techniques that have been applied include the weighted least squares filter <ns0:ref type='bibr' target='#b75'>(Yong &amp; Minghui 2014)</ns0:ref>, L 1 fidelity using L 0 gradient <ns0:ref type='bibr' target='#b14'>(Cui et al. 2015)</ns0:ref>, L 0 gradient minimization <ns0:ref type='bibr' target='#b78'>(Zhao et al. 2013)</ns0:ref>, the cross bilateral filter <ns0:ref type='bibr' target='#b31'>(Kumar 2013)</ns0:ref> and anisotropic diffusion <ns0:ref type='bibr' target='#b2'>(Bavirisetti &amp; Dhuli 2016a)</ns0:ref>. <ns0:ref type='bibr' target='#b35'>Li et al. (2013)</ns0:ref> proposed to restore spatial consistency by using guided filtering in the weighted recombination stage of the fusion process. In their scheme, the input images are first decomposed into approximate and residual layers using a simple averaging filter. Next, each input image is filtered with a Laplacian kernel followed by blurring with a Gaussian kernel, and the absolute value of the result is adopted as a saliency map that characterizes the local distinctness of the input image details. Then, binary weight maps are obtained by comparing the saliency maps of all input images, and assigning a pixel in an individual weight map the value 1 if it is the pixelwise maximum of all saliency maps, and 0 otherwise.</ns0:p>
<ns0:p>The resulting binary weight maps are typically noisy and not aligned with object boundaries, and may therefore introduce artefacts into the fused image. <ns0:ref type='bibr' target='#b35'>Li et al. (2013)</ns0:ref> performed guided filtering on each weight map with its corresponding source layer as the guidance image, to reduce noise and to restore spatial consistency. The GF guarantees that pixels with similar intensity values have similar weights and that weighting is not performed across edges. Typically a large filter size and a large blur degree are used to fuse the approximation layers, while a small filter size and a small blur degree are used to combine the residual layers. Finally, the fused image is obtained by weighted recombination of the individual source residual layers. Despite the fact that this method is efficient and can achieve state-of-the-art performance in most cases, it does not use edge preserving filtering in the decomposition stage and applies a saliency map that does not relate well to human visual saliency <ns0:ref type='bibr' target='#b20'>(Gan et al. 2015)</ns0:ref>.</ns0:p><ns0:p>In their multi-scale image fusion framework <ns0:ref type='bibr' target='#b20'>Gan et al. (2015)</ns0:ref> apply edge preserving filtering in the decomposition stage to extract well-defined image details (i.e., to preserve their edges) and use guided filtering in the weighted recombination stage to reduce spatial inconsistencies introduced by the weighting maps used in the reconstruction stage (i.e., to prevent edge artefacts like halos). First, a nonlinear weighted least squares edge-preserving filter <ns0:ref type='bibr' target='#b18'>(Farbman et al. 2008)</ns0:ref> is used to decompose the source images into approximate and residual layers. Next, phase congruency is used to calculate saliency maps that characterize the local distinctness of the source image details. The rest of their scheme is similar to that of <ns0:ref type='bibr' target='#b35'>Li et al. (2013)</ns0:ref>: binary weight maps are obtained from pixelwise comparison of the saliency maps corresponding to the individual source images; guided filtering is applied to these binary weight maps to reduce noise and restore spatial consistency, and the fused image is obtained by weighted recombination of the individual source residual layers.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Proposed method</ns0:head><ns0:p>A flow chart of the proposed multi-scale decomposition fusion scheme is shown in Figure 1. The algorithm consists of the following steps:</ns0:p><ns0:p>1. Iterative guided filtering is applied to decompose the source images into approximate layers (representing large scale variations) and residual layers (containing small scale variations); a sketch of this decomposition step is given directly after this list. 2. Frequency-tuned filtering <ns0:ref type='bibr' target='#b0'>(Achanta et al. 2009)</ns0:ref> is used to generate saliency maps for the source images. 3. Binary weighting maps are computed as the pixelwise maximum of the individual source saliency maps. 4. Guided filtering is applied to each binary weighting map with its corresponding source as the guidance image to reduce noise and to restore spatial consistency. 5. The fused image is computed as a weighted recombination of the individual source residual layers.</ns0:p><ns0:p>In a hierarchical framework steps 1-4 are performed at multiple spatial scales. In this paper we used a 4 level decomposition obtained by filtering at 3 different spatial scales (see Figure 1).</ns0:p>
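<ns0:p>As a concrete illustration of step 1, the following Python sketch performs the iterative guided filtering decomposition (cf. Equations (11)-(13) below), reusing a minimal guided filter. It is an illustrative sketch under our own assumptions: the filter radii and regularization values are placeholders, and the helper names are hypothetical.</ns0:p>

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, G, radius, eps):
    # Minimal grayscale guided filter (Eqs. (5), (6), (8)); see Section 2.2.
    I, G = np.asarray(I, dtype=float), np.asarray(G, dtype=float)
    mean = lambda x: uniform_filter(x, size=2 * radius + 1)
    a = (mean(G * I) - mean(G) * mean(I)) / (mean(G * G) - mean(G) ** 2 + eps)
    b = mean(I) - a * mean(G)
    return mean(a) * G + mean(b)

def iterative_guided_filter(I, radius, eps, n_iter=4):
    """Iterative guided filtering (Eq. (11) with the guided filter):
    the result of each pass guides the next pass on the same input."""
    G = np.zeros_like(I, dtype=float)        # constant initial guidance image
    for _ in range(n_iter):
        G = guided_filter(I, G, radius, eps)
    return G

def decompose(X0, radii=(2, 4, 8), epsilons=(0.01, 0.01, 0.01)):
    """Multi-scale decomposition (cf. Eqs. (12)-(13)): three residual layers
    dX_0..dX_2 plus the coarsest approximate layer X_3 (a 4 level decomposition)."""
    residuals, current = [], np.asarray(X0, dtype=float)
    for r, e in zip(radii, epsilons):
        coarser = iterative_guided_filter(current, r, e)
        residuals.append(current - coarser)   # dX_i = X_i - X_{i+1}
        current = coarser                     # X_{i+1}
    return residuals, current
```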
<ns0:p>Figure 2 shows the intensified visual (II) and thermal infrared (IR) or near infrared (NIR) images together with the results of the proposed image fusion scheme, for the 12 different scenes that were used in the present study. We will now discuss the proposed fusion scheme in more detail.</ns0:p><ns0:p>Consider two co-registered source images X_0(x, y) and Y_0(x, y). The proposed scheme applies iterative guided filtering (IGF) to the input images X_i and Y_i to obtain progressively coarser image representations X_{i+1} and Y_{i+1} (i ≥ 0):</ns0:p><ns0:formula>X_{i+1} = \mathrm{IGF}(X_{i}, r_{i}, \epsilon_{i}); \qquad i \in \{0, 1, 2\} \quad (12)</ns0:formula><ns0:p>where the parameters ε_i and r_i represent respectively the range and the spatial variances of the guided filter at decomposition level i. In this study the number of iteration steps is set to 4. By letting each finer scale image serve as the approximate layer for the preceding coarser scale image, the successive size-selective residual layers dX_i are simply obtained by subtraction as follows:</ns0:p><ns0:formula>dX_{i} = X_{i} - X_{i+1}; \qquad i \in \{0, 1, 2\} \quad (13)</ns0:formula><ns0:p>Figure 3 shows the approximate and residual layers that are obtained this way for the tank scene (nr 10 in Figure 2). The edge-preserving properties of the iterative guided filter guarantee a graceful decomposition of the source images into details at different spatial scales.</ns0:p><ns0:p>Visual saliency refers to the physical, bottom-up distinctness of image details <ns0:ref type='bibr' target='#b19'>(Fecteau &amp; Munoz 2006)</ns0:ref>. It is a relative property that depends on the degree to which a detail is visually distinct from its background <ns0:ref type='bibr' target='#b69'>(Wertheim 2010)</ns0:ref>. Since saliency quantifies the relative visual importance of image details, saliency maps are frequently used in the weighted recombination phase of multiscale image fusion schemes <ns0:ref type='bibr' target='#b3'>(Bavirisetti &amp; Dhuli 2016b;</ns0:ref><ns0:ref type='bibr' target='#b14'>Cui et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Gan et al. 2015)</ns0:ref>. Frequency tuned filtering computes bottom-up saliency as local multi-scale luminance contrast <ns0:ref type='bibr' target='#b0'>(Achanta et al. 2009)</ns0:ref>. The saliency map S for an image I is computed as</ns0:p><ns0:formula>S(x, y) = \lVert \mu_{I} - I_{f}(x, y) \rVert \quad (14)</ns0:formula><ns0:p>where μ_I is the arithmetic mean image feature vector, I_f represents a Gaussian blurred version of the original image (using a 5x5 separable binomial kernel), ∥·∥ is the L_2 norm (Euclidean distance), and x, y are the pixel coordinates. A recent and extensive evaluation study comparing 13 state-of-the-art saliency models found that the output of this simple saliency model correlates more strongly with human visual perception than the output produced by any of the other available models <ns0:ref type='bibr' target='#b60'>(Toet 2011)</ns0:ref>.</ns0:p>
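<ns0:p>A minimal sketch of the frequency-tuned saliency computation of Equation (14) is given below, assuming a single-channel image; for colour images the same Euclidean distance would be taken over the Lab channels. It is an illustrative re-implementation, not the original code of Achanta et al. (2009).</ns0:p>

```python
import numpy as np
from scipy.ndimage import convolve

def frequency_tuned_saliency(I):
    """Frequency-tuned saliency (Eq. (14)) for a grayscale image I:
    pixelwise distance between the mean image value and a binomially blurred image."""
    I = np.asarray(I, dtype=float)
    b = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    kernel = np.outer(b, b)                    # separable 5x5 binomial kernel
    I_blur = convolve(I, kernel, mode='nearest')
    return np.abs(I.mean() - I_blur)           # |mu_I - I_f(x, y)| per pixel
```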
In the proposed fusion scheme we first compute saliency maps $S_X^i$ and $S_Y^i$ for the corresponding source layers and obtain binary weight maps from their pointwise maximum:</ns0:p><ns0:formula xml:id='formula_14'>BW_X^i(x,y) = \begin{cases} 1, \text{ if } S_X^i(x,y) > S_Y^i(x,y) \\ 0, \text{ otherwise} \end{cases} \qquad BW_Y^i(x,y) = \begin{cases} 1, \text{ if } S_Y^i(x,y) > S_X^i(x,y) \\ 0, \text{ otherwise} \end{cases}<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>The resulting binary weight maps are noisy and typically not well aligned with object boundaries, which may give rise to artefacts in the final fused image. Spatial consistency is therefore restored through guided filtering (GF) of these binary weight maps with the corresponding source layers as guidance images:</ns0:p><ns0:formula xml:id='formula_16'>W_X^i = \mathrm{GF}(BW_X^i, X_i), \qquad W_Y^i = \mathrm{GF}(BW_Y^i, Y_i)<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>As noted before guided filtering combines noise reduction with edge preservation, while the output is locally approximately a scaled version of the guidance image. In the present scheme these properties are used to transform the binary weight maps into smooth continuous weight maps through guided filtering with the corresponding source images as guidance images. Figure 4 illustrates the process of computing smoothed weight maps by guided filtering of the binary weight maps resulting from the pointwise maximum of the corresponding source layer saliency maps for the tank scene. Fused residual layers are then computed as the normalized weighted mean of the corresponding source residual layers:</ns0:p><ns0:formula xml:id='formula_17'>dF_i = \frac{W_X^i \cdot dX_i + W_Y^i \cdot dY_i}{W_X^i + W_Y^i}<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>The fused image F is finally obtained by adding the fused residual layers to the average value of the coarsest source layers:</ns0:p><ns0:formula xml:id='formula_18'>F = \frac{X_3 + Y_3}{2} + \sum_{i=0}^{2} dF_i<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>By using guided filtering both in the decomposition stage and in the recombination stage, the proposed fusion scheme optimally benefits from both the multi-scale edge-preserving characteristics (in the iterative framework) and the structure restoring capabilities (through guidance by the original source images) of the guided filter. The method is easy to implement and computationally efficient (a compact computational sketch follows below).</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Methods and material</ns0:head><ns0:p>This section presents the test imagery and computational metrics used to assess the performance of the proposed image fusion scheme in comparison to existing multi-scale fusion schemes.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Test imagery</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows the intensified visual (II), thermal infrared (IR) or near infrared (NIR: scene 12) source images together with the result of the proposed fusion scheme (F) for each of the 12 scenes used in this study.
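Returning to Eqs. (15)-(18) above, a compact sketch of the full saliency-weighted recombination is given below. It reuses guided_filter, igf_decompose and ft_saliency from the previous sketches; the weight-map filter parameters are illustrative choices, and ties in the pointwise comparison are assigned to the second image here.

```python
import numpy as np
# reuses guided_filter(), igf_decompose() and ft_saliency() from the previous sketches

def fuse_pair(X, Y, w_radius=8, w_eps=0.1):
    """Eqs. (15)-(18): multi-scale, saliency-weighted fusion of two co-registered images."""
    Xs, dXs = igf_decompose(X)
    Ys, dYs = igf_decompose(Y)
    fused = 0.5 * (Xs[-1] + Ys[-1])                        # Eq. (18): mean of the coarsest layers
    for i, (dx, dy) in enumerate(zip(dXs, dYs)):
        S_x, S_y = ft_saliency(Xs[i]), ft_saliency(Ys[i])  # saliency of the source layers
        BW_x = (S_x > S_y).astype(float)                   # Eq. (15): binary weight maps
        BW_y = 1.0 - BW_x
        W_x = guided_filter(BW_x, Xs[i], w_radius, w_eps)  # Eq. (16): smooth weights, source-guided
        W_y = guided_filter(BW_y, Ys[i], w_radius, w_eps)
        fused += (W_x * dx + W_y * dy) / (W_x + W_y + 1e-12)  # Eq. (17): normalized weighted residuals
    return fused

fused = fuse_pair(np.random.rand(128, 128), np.random.rand(128, 128))
```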
The 12 scenes are part of the TNO Image Fusion Dataset <ns0:ref type='bibr' target='#b61'>(Toet 2014)</ns0:ref> with the following identifiers: airplane_in_trees, Barbed_wire_2, Jeep, Kaptein_1123, Marne_07, Marne_11, Marne_15, Reek, tank, Nato_camp_sequence, soldier_behind_smoke, Vlasakkers.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Multi-scale fusion schemes used for comparison</ns0:head><ns0:p>In this study we compare the performance of our image fusion scheme with seven other popular image fusion methods based on multi-scale decomposition including the Laplacian pyramid <ns0:ref type='bibr' target='#b12'>(Burt &amp; Adelson 1983)</ns0:ref>, the Ratio of Low-Pass pyramid <ns0:ref type='bibr' target='#b58'>(Toet 1989b)</ns0:ref>, the contrast pyramid <ns0:ref type='bibr' target='#b63'>(Toet et al. 1989)</ns0:ref>, the filter-subtract-decimate Laplacian pyramid <ns0:ref type='bibr' target='#b10'>(Burt 1988;</ns0:ref><ns0:ref type='bibr' target='#b13'>Burt &amp; Kolczynski 1993)</ns0:ref>, the gradient pyramid <ns0:ref type='bibr' target='#b11'>(Burt 1992;</ns0:ref><ns0:ref type='bibr' target='#b13'>Burt &amp; Kolczynski 1993)</ns0:ref>, the morphological pyramid ( <ns0:ref type='bibr' target='#b57'>(Toet 1989a</ns0:ref>)), the discrete wavelet transform <ns0:ref type='bibr' target='#b32'>(Lemeshewsky 1999;</ns0:ref><ns0:ref type='bibr' target='#b34'>Li et al. 1995;</ns0:ref><ns0:ref type='bibr' target='#b36'>Li et al. 2002;</ns0:ref><ns0:ref type='bibr' target='#b51'>Scheunders &amp; De Backer 2001)</ns0:ref>, and a shift invariant extension of the discrete wavelet transform <ns0:ref type='bibr' target='#b32'>(Lemeshewsky 1999;</ns0:ref><ns0:ref type='bibr' target='#b48'>Rockinger 1997;</ns0:ref><ns0:ref type='bibr' target='#b49'>Rockinger 1999;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rockinger &amp; Fechner 1998)</ns0:ref>. We used Rockinger's freely available Matlab image fusion toolbox (www.metapix.de/toolbox.htm) to compute these fusion schemes. To allow a straightforward comparison, the number of scale levels is set to 4 in all methods, and simple averaging is used to compute the approximation of the fused image representation at the coarsest spatial scale. Figures <ns0:ref type='figure' target='#fig_14'>5-9</ns0:ref> show the results of the proposed method together with the results of other seven fusion schemes for some of the scenes used in this study (scenes 2-5 and 10).</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>Objective evaluation metrics</ns0:head><ns0:p>Image fusion results can be evaluated using either subjective or objective measures. Subjective methods are based on psycho-visual testing and are typically expensive in terms of time, effort, and equipment required. Also, in most cases, there is only little difference among fusion results. This makes it difficult to subjectively perform the evaluation of fusion results. Therefore, many objective evaluation methods have been developed (for an overview see e.g. <ns0:ref type='bibr' target='#b37'>(Li et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b39'>Liu et al. 2012)</ns0:ref>. However, so far, there is no universally accepted metric to objectively evaluate the image fusion results. In this paper, we use four frequently applied computational metrics to objectively evaluate and compare the performance of different image fusion methods. The metrics we use are Entropy, the Mean Structural Similarity Index (MSSIM), Normalized Mutual Information (NMI), and Normalized Feature Mutual Information (NFMI). 
These metrics will be briefly discussed in the following sections.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.1'>Entropy</ns0:head><ns0:p>Entropy (E) is a measure of the information content in a fused image F. Entropy is defined as</ns0:p><ns0:formula xml:id='formula_19'>E = -\sum_{i=0}^{L-1} P_F(i) \log P_F(i)<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>where $P_F(i)$ indicates the probability that a pixel in the fused image F has a gray value $i$, and the gray values range from 0 to $L$. The larger the entropy is, the more informative the fused image is. A fused image is more informative than either of its source images when its entropy is higher than the entropy of its source images.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.2'>Mean Structural Similarity Index</ns0:head><ns0:p>The Structural Similarity (SSIM: <ns0:ref type='bibr' target='#b68'>(Wang et al. 2004</ns0:ref>)) index is a stabilized version of the Universal Image Quality Index (UIQ: <ns0:ref type='bibr' target='#b67'>(Wang &amp; Bovik 2002)</ns0:ref>) which can be used to quantify the structural similarity between a source image A and a fused image F:</ns0:p><ns0:formula xml:id='formula_20'>SSIM_{x,y} = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \cdot \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \cdot \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>where $x$ and $y$ represent local windows of size $M \times N$ in respectively $A$ and $F$, and</ns0:p><ns0:formula xml:id='formula_21'>\mu_x = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j), \qquad \mu_y = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} y(i,j)<ns0:label>(21)</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>\sigma_x^2 = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} \left(x(i,j) - \mu_x\right)^2, \qquad \sigma_y^2 = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} \left(y(i,j) - \mu_y\right)^2<ns0:label>(22)</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>\sigma_{xy} = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} \left(x(i,j) - \mu_x\right)\left(y(i,j) - \mu_y\right)<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>By default, the stabilizing constants are set to $C_1 = (0.01 \cdot L)^2$, $C_2 = (0.03 \cdot L)^2$ and $C_3 = C_2/2$, where L is the maximal gray value. The value of SSIM is bounded and ranges between -1 and 1 (it is 1 only when both images are identical). The SSIM is typically computed over a sliding window to compare local patterns of pixel intensities that have been normalized for luminance and contrast. The Mean Structural Similarity (MSSIM) index quantifies the overall similarity between a source image A and a fused image F:</ns0:p><ns0:formula xml:id='formula_24'>MSSIM_{A,F} = \frac{1}{N_w}\sum_{i=1}^{N_w} SSIM_{x_i,y_i}<ns0:label>(24)</ns0:label></ns0:formula><ns0:p>where $N_w$ represents the number of local windows of the image.
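The entropy and MSSIM metrics of Sections 5.3.1-5.3.2 can be computed, for example, as follows. This is a minimal sketch: scikit-image's structural_similarity uses the same C1 and C2 constants as Eqs. (20)-(23) but its default windowing differs slightly from the formulation above, and the base of the logarithm in the entropy is our choice.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def entropy(img, levels=256):
    """Eq. (19): Shannon entropy of a gray-level image (log base 2 used here)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mssim(A, F, L=255):
    """Eq. (24): mean SSIM between a source image A and the fused image F."""
    return ssim(A, F, data_range=L)

A = np.random.randint(0, 256, (64, 64)).astype(float)
F = np.random.randint(0, 256, (64, 64)).astype(float)
print(entropy(F), mssim(A, F))
```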
An overall image fusion quality index can then be defined as the mean of the MSSIM values between each of the source images and the fused result:</ns0:p><ns0:formula xml:id='formula_25'>MSSIM_{A,B,F} = \frac{MSSIM_{A,F} + MSSIM_{B,F}}{2}<ns0:label>(25)</ns0:label></ns0:formula><ns0:p>$MSSIM_{A,B,F}$ ranges between -1 and 1 (it is 1 only when both images are identical).</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.3'>Normalized Mutual Information</ns0:head><ns0:p>Mutual Information (MI) measures the amount of information that two images have in common. It can be used to quantify the amount of information from a source image that is transferred to a fused image <ns0:ref type='bibr' target='#b47'>(Qu et al. 2002)</ns0:ref>. The mutual information $MI_{A,F}$ between a source image A and a fused image F is defined as:</ns0:p><ns0:formula xml:id='formula_26'>MI_{A,F} = \sum_{i,j} P_{A,F}(i,j) \log \frac{P_{A,F}(i,j)}{P_A(i)\, P_F(j)}<ns0:label>(26)</ns0:label></ns0:formula><ns0:p>where $P_A(i)$ and $P_F(j)$ are the probability density functions of the individual images, and $P_{A,F}(i,j)$ is the joint probability density function.</ns0:p><ns0:p>The traditional mutual information metric is unstable and may bias the measure towards the source image with the highest entropy. This problem can be resolved by computing the normalized mutual information (NMI) as follows <ns0:ref type='bibr' target='#b25'>(Hossny et al. 2008</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_28'>NMI_{A,B,F} = \frac{MI_{A,F}}{H_A + H_F} + \frac{MI_{B,F}}{H_B + H_F}<ns0:label>(27)</ns0:label></ns0:formula><ns0:p>where $H_A$, $H_B$ and $H_F$ are the marginal entropies of A, B and F, and $MI_{A,F}$ and $MI_{B,F}$ represent the mutual information between respectively the source image A and the fused image F and between the source image B and the fused image F. A higher value of NMI indicates that more information from the source images is transferred to the fused image. The NMI metric varies between 0 and 1.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.4'>Normalized Feature Mutual Information</ns0:head><ns0:p>The Feature Mutual Information (FMI) metric calculates the amount of image features that two images have in common <ns0:ref type='bibr' target='#b22'>(Haghighat &amp; Razian 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Haghighat et al. 2011)</ns0:ref>. This method outperforms other metrics (e.g., E, NMI) in consistency with subjective quality measures. Previously proposed MI-based image fusion quality metrics use the image histograms to compute the amount of information a source and fused image have in common <ns0:ref type='bibr' target='#b15'>(Cvejic et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b47'>Qu et al. 2002)</ns0:ref>. However, image histograms contain no information about local image structure (spatial features or local image quality) and only provide statistical measures of the number of pixels with a specific gray-level. Since meaningful image information is contained in visual features, image fusion quality measures should instead quantify the extent to which these visual features are transferred into the fused image from each of the source images.
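A minimal histogram-based sketch of the MI and NMI measures of Eqs. (26)-(27) is given below; the number of histogram bins is an illustrative choice.

```python
import numpy as np

def _entropy_from_p(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(A, F, bins=256):
    """Eq. (26): histogram-based MI between two images, via H(A) + H(F) - H(A,F)."""
    joint, _, _ = np.histogram2d(A.ravel(), F.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_a, p_f = p_joint.sum(axis=1), p_joint.sum(axis=0)
    h_a, h_f = _entropy_from_p(p_a), _entropy_from_p(p_f)
    mi = h_a + h_f - _entropy_from_p(p_joint.ravel())
    return mi, h_a, h_f

def nmi(A, B, F, bins=256):
    """Eq. (27): normalized mutual information of the fused image with both sources."""
    mi_af, h_a, h_f = mutual_information(A, F, bins)
    mi_bf, h_b, _ = mutual_information(B, F, bins)
    return mi_af / (h_a + h_f) + mi_bf / (h_b + h_f)
```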
The Feature Mutual Information (FMI) metric calculates the mutual information between image feature maps <ns0:ref type='bibr' target='#b22'>(Haghighat &amp; Razian 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Haghighat et al. 2011)</ns0:ref>. A typical image feature map is for instance the gradient map, which contains information about the pixel neighborhoods, edge strength and directions, texture and contrast. Given two source images A and B and their fused image F, the FMI metric first extracts feature maps of the source and fused images using a feature extraction method (e.g., gradient). After feature extraction, the joint distributions of the feature images are estimated from their marginal distributions using Nelsen's method <ns0:ref type='bibr' target='#b42'>(Nelsen 1987</ns0:ref>). The algorithm is described in more detail elsewhere <ns0:ref type='bibr' target='#b23'>(Haghighat et al. 2011)</ns0:ref>. The FMI metric between a source image A and a fused image F is then computed as the mutual information between their feature images.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> lists the entropy of the fused result for the proposed method (IGF) and all seven multiscale comparison methods (Contrast Pyramid, DWT, Gradient Pyramid, Laplace Pyramid, Morphological Pyramid, Ratio Pyramid, SIDWT). It appears that IGF produces a fused image with the highest entropy for 9 of the 12 test scenes. Note that a larger entropy implies more edge information, but it does not mean that the additional edges are indeed meaningful (they may result from over-enhancement or noise). Therefore, we also need to consider structural information metrics. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows that IGF outperforms all other multi-scale methods tested here in terms of MSSIM. This means that the mean overall structural similarity between both source images and the fused image F is largest for the proposed method. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows that IGF also outperforms all other multi-scale methods tested here in terms of NMI. This indicates that the proposed IGF fusion scheme transfers more information from the source images to the fused image than any of the other methods. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> shows that IGF also outperforms the other multi-scale methods tested here in terms of NFMI for 10 of the 12 scenes. IGF is only outperformed by SIDWT for scene 1 and by the Contrast Pyramid for scene 7. This implies that fused images produced by the proposed IGF scheme typically have a larger amount of image features in common with their source images than the results of most other fusion schemes. Summarizing, the proposed IGF fusion scheme appears to outperform the other multi-scale fusion methods investigated here in most of the conditions tested.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.2'>Runtime</ns0:head><ns0:p>In this study we used a Matlab implementation of the GF and IGF written by <ns0:ref type='bibr' target='#b77'>Zhang et al. (2014)</ns0:ref> that is freely available from the authors (at http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance). We made no effort to optimize the code of the algorithms. We conducted a runtime test on a Dell Latitude laptop with an Intel i5 2 GHz CPU and 8 GB memory. The algorithms were implemented in Matlab 2016a. Only a single thread was used without involving any SIMD instructions. For this test we used the set of 12 test images described in Section 5.1.
As noted before, the filter size and regularization parameters used in this study are respectively set to . The mean runtime of the proposed fusion method was 0.61&#177;0.05 seconds.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>Discussion and conclusions</ns0:head><ns0:p>We propose a multi-scale image fusion scheme based on guided filtering. Iterative guided filtering is used to decompose the source images into approximation and residual layers. Initial binary weighting maps are computed as the pixelwise maximum of the individual source saliency maps, obtained from frequency tuned filtering. Spatially consistent and smooth weighting maps are then obtained through guided filtering of the binary weighting maps with their corresponding source layers as guidance images. Saliency weighted recombination of the individual source residual layers and the mean of the coarsest scale source layers finally yields the fused image. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multi-scale fusion process. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. The method has a simple implementation and is computationally efficient. The processing scheme is illustrated for two source images X and Y and 4 resolution levels (0-3). 0 X and 0 Y are the original input images, while i X and i Y represent successively lower resolution versions obtained by iterative guided filtering. 'Saliency' represents the frequencytuned saliency transformation, 'Max' and 'Mean' respectively denote the pointwise maximum and mean operators, '(I)GF' means (Iterative) Guided Filtering, 'dX', 'dY' and 'dF' are respectively the original and fused detail layers, 'BW' the binary weight maps, and 'W' the smooth weight maps. Manuscript to be reviewed Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>local linear model ensures that the output image O has an edge only at locations where the guidance image G has one, because O a G &#61649; &#61501; &#61649; . The linear coefficients k a and k b are constant in k &#61559; . They can be estimated by minimizing the squared difference between the output image O and the input image I (the second filtering condition) in the window k &#61559; , i.e. by minimizing the cost function E :</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>&#61559;</ns0:head><ns0:label /><ns0:figDesc>is the number of pixels in k &#61559; , PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016) Manuscript to be reviewed Computer Science k I and k G represent the means of respectively I and G over k &#61559; , and 2 k</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>1 t G &#61483; of the t-th iteration is PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016) Manuscript to be reviewed Computer Science and regularization parameters used in this study are respectively set to {5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>xy</ns0:head><ns0:label /><ns0:figDesc>represent local windows of size M N &#61620; in respectively and A F , and PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Figures</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1. Flow chart of the proposed image fusion scheme.</ns0:figDesc><ns0:graphic coords='23,103.23,102.60,405.54,428.04' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison with existing multiresolution fusion schemes.Original intensified visual (a) and thermal infrared (b) images for scene nr 10, and the fused results obtained with respectively a Contrast Pyramid (c), Gradient Pyramid (d), Laplace Pyramid (e), Morphological Pyramid (f), Ratio Pyramid (g), DWT (h), SIDWT (i), and the proposed method (j), for scene nr. 10.</ns0:figDesc><ns0:graphic coords='27,72.00,95.80,439.44,129.36' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. As Figure 5, for scene nr. 2.</ns0:figDesc><ns0:graphic coords='27,72.00,351.75,439.44,129.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. As Figure 5, for scene nr. 3.</ns0:figDesc><ns0:graphic coords='27,72.00,538.47,439.44,129.36' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. As Figure 5, for scene nr. 5.</ns0:figDesc><ns0:graphic coords='28,72.00,258.96,439.44,129.36' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,72.00,72.00,451.30,414.10' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,72.00,85.80,429.96,325.71' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,72.00,95.80,509.38,189.51' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 . 
Entropy values for each of the methods tested and for all 12 scenes.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Tables</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>scene</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>nr.</ns0:cell><ns0:cell cols='2'>Contrast DWT</ns0:cell><ns0:cell cols='2'>Gradient Laplace Morph</ns0:cell><ns0:cell>Ratio</ns0:cell><ns0:cell>SIDWT IGF</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>6.4818</ns0:cell><ns0:cell cols='2'>6.4617 6.1931</ns0:cell><ns0:cell cols='2'>6.5935 6.6943 6.5233 6.4406 6.5126</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>6.7744</ns0:cell><ns0:cell cols='2'>6.6731 6.5873</ns0:cell><ns0:cell cols='2'>6.7268 6.9835 6.7268 6.7075 7.4233</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>6.4340</ns0:cell><ns0:cell cols='2'>6.5704 6.4965</ns0:cell><ns0:cell cols='2'>6.6401 6.7032 6.6946 6.5878 6.8589</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>6.8367</ns0:cell><ns0:cell cols='2'>6.8284 6.6756</ns0:cell><ns0:cell cols='2'>7.0041 7.0906 6.7313 6.8547 7.2491</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>6.7549</ns0:cell><ns0:cell cols='2'>6.6642 6.5582</ns0:cell><ns0:cell cols='2'>6.7624 6.8618 6.5129 6.6813 7.1177</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>6.3753</ns0:cell><ns0:cell cols='2'>6.3705 6.2430</ns0:cell><ns0:cell cols='2'>6.5049 6.7608 6.2281 6.4116 6.9044</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>6.7470</ns0:cell><ns0:cell cols='2'>6.3709 6.1890</ns0:cell><ns0:cell cols='2'>6.5106 6.7445 6.3458 6.3817 6.7869</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>6.3229</ns0:cell><ns0:cell cols='2'>7.3503 7.2935</ns0:cell><ns0:cell cols='2'>7.3794 7.3501 7.4873 7.3406 7.4891</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>6.4903</ns0:cell><ns0:cell cols='2'>6.4677 6.3513</ns0:cell><ns0:cell cols='2'>6.5816 6.7295 6.3306 6.4753 6.7796</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>6.9627</ns0:cell><ns0:cell cols='2'>7.0131 6.8390</ns0:cell><ns0:cell cols='2'>7.1073 7.0530 7.0118 7.0224 7.2782</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>6.5442</ns0:cell><ns0:cell cols='2'>6.4554 6.2110</ns0:cell><ns0:cell cols='2'>6.5555 6.8051 6.4053 6.4572 6.2907</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>7.3335</ns0:cell><ns0:cell cols='2'>7.3744 7.3379</ns0:cell><ns0:cell cols='2'>7.3907 7.4251 7.3486 7.3746 7.3568</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 . 
MSSIM values for each of the methods tested and for all 12 scenes.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>scene</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>nr.</ns0:cell><ns0:cell cols='2'>Contrast DWT</ns0:cell><ns0:cell cols='2'>Gradient Laplace Morph</ns0:cell><ns0:cell>Ratio</ns0:cell><ns0:cell>SIDWT IGF</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.7851</ns0:cell><ns0:cell cols='2'>0.7975 0.8326</ns0:cell><ns0:cell cols='2'>0.8050 0.7321 0.8054 0.8114 0.8381</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.6018</ns0:cell><ns0:cell cols='2'>0.6798 0.7130</ns0:cell><ns0:cell cols='2'>0.6406 0.6203 0.6406 0.6935 0.7213</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>0.7206</ns0:cell><ns0:cell cols='2'>0.7493 0.7849</ns0:cell><ns0:cell cols='2'>0.7555 0.6882 0.7468 0.7629 0.7932</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.6401</ns0:cell><ns0:cell cols='2'>0.6790 0.7162</ns0:cell><ns0:cell cols='2'>0.6875 0.6155 0.6668 0.6949 0.7184</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.5856</ns0:cell><ns0:cell cols='2'>0.6649 0.6938</ns0:cell><ns0:cell cols='2'>0.6695 0.6250 0.6270 0.6769 0.7038</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>0.5689</ns0:cell><ns0:cell cols='2'>0.6448 0.6755</ns0:cell><ns0:cell cols='2'>0.6516 0.5961 0.6099 0.6598 0.6921</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>0.3939</ns0:cell><ns0:cell cols='2'>0.5742 0.5994</ns0:cell><ns0:cell cols='2'>0.5809 0.5320 0.4490 0.5889 0.6344</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>0.6474</ns0:cell><ns0:cell cols='2'>0.6272 0.6630</ns0:cell><ns0:cell cols='2'>0.6392 0.5791 0.6291 0.6463 0.6940</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>0.6224</ns0:cell><ns0:cell cols='2'>0.6883 0.7224</ns0:cell><ns0:cell cols='2'>0.6955 0.6445 0.6718 0.7089 0.7405</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>0.3913</ns0:cell><ns0:cell cols='2'>0.5410 0.5715</ns0:cell><ns0:cell cols='2'>0.5430 0.4899 0.4331 0.5513 0.5961</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>0.7174</ns0:cell><ns0:cell cols='2'>0.7307 0.7754</ns0:cell><ns0:cell cols='2'>0.7439 0.6559 0.7419 0.7539 0.7908</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>0.7945</ns0:cell><ns0:cell cols='2'>0.8116 0.8466</ns0:cell><ns0:cell cols='2'>0.8227 0.7815 0.8106 0.8365 0.8646</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 . 
NMI values for each of the methods tested and for all 12 scenes.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>scene</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>nr.</ns0:cell><ns0:cell cols='2'>Contrast DWT</ns0:cell><ns0:cell cols='2'>Gradient Laplace Morph</ns0:cell><ns0:cell>Ratio</ns0:cell><ns0:cell>SIDWT IGF</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.1534</ns0:cell><ns0:cell cols='2'>0.1692 0.2052</ns0:cell><ns0:cell cols='2'>0.1647 0.1699 0.1791 0.1796 0.2818</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.0989</ns0:cell><ns0:cell cols='2'>0.0948 0.1158</ns0:cell><ns0:cell cols='2'>0.0897 0.1028 0.0897 0.1028 0.2994</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>0.0898</ns0:cell><ns0:cell cols='2'>0.1222 0.1493</ns0:cell><ns0:cell cols='2'>0.1252 0.1171 0.1320 0.1280 0.2231</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.1102</ns0:cell><ns0:cell cols='2'>0.1097 0.1322</ns0:cell><ns0:cell cols='2'>0.1189 0.1169 0.1046 0.1177 0.2294</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.1236</ns0:cell><ns0:cell cols='2'>0.1170 0.1379</ns0:cell><ns0:cell cols='2'>0.1252 0.1318 0.1186 0.1251 0.2166</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>0.0857</ns0:cell><ns0:cell cols='2'>0.0943 0.1162</ns0:cell><ns0:cell cols='2'>0.0969 0.1068 0.0902 0.0980 0.2229</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>0.0697</ns0:cell><ns0:cell cols='2'>0.0711 0.0839</ns0:cell><ns0:cell cols='2'>0.0809 0.0888 0.0616 0.0781 0.2147</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>0.2192</ns0:cell><ns0:cell cols='2'>0.1825 0.2198</ns0:cell><ns0:cell cols='2'>0.1832 0.1884 0.2130 0.2021 0.3090</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>0.0692</ns0:cell><ns0:cell cols='2'>0.0679 0.0781</ns0:cell><ns0:cell cols='2'>0.0747 0.0790 0.0690 0.0731 0.2013</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>0.1375</ns0:cell><ns0:cell cols='2'>0.1643 0.2043</ns0:cell><ns0:cell cols='2'>0.1780 0.1761 0.1662 0.1760 0.2962</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>0.1055</ns0:cell><ns0:cell cols='2'>0.1043 0.1177</ns0:cell><ns0:cell cols='2'>0.1100 0.1047 0.1179 0.1115 0.1646</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>0.2572</ns0:cell><ns0:cell cols='2'>0.2511 0.2746</ns0:cell><ns0:cell cols='2'>0.2602 0.2438 0.2660 0.2649 0.2987</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>NFMI values for each of the methods tested and for all 12 scenes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>scene</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>nr.</ns0:cell><ns0:cell cols='2'>Contrast DWT</ns0:cell><ns0:cell cols='2'>Gradient Laplace Morph</ns0:cell><ns0:cell>Ratio</ns0:cell><ns0:cell>SIDWT IGF</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.4064</ns0:cell><ns0:cell cols='2'>0.3812 0.3933</ns0:cell><ns0:cell cols='2'>0.3888 0.3252 0.3498 0.4084 0.4008</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.4354</ns0:cell><ns0:cell cols='2'>0.3876 0.4001</ns0:cell><ns0:cell cols='2'>0.3493 0.3432 0.3493 0.4075 0.4383</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>0.4076</ns0:cell><ns0:cell cols='2'>0.4081 0.4175</ns0:cell><ns0:cell cols='2'>0.4138 0.3758 0.3552 0.4330 0.4454</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.4017</ns0:cell><ns0:cell cols='2'>0.3913 0.4066</ns0:cell><ns0:cell cols='2'>0.4051 0.3655 0.3497 0.4205 0.4490</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.4304</ns0:cell><ns0:cell cols='2'>0.3971 0.4101</ns0:cell><ns0:cell cols='2'>0.4081 0.3758 0.3497 0.4229 0.4580</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>0.4299</ns0:cell><ns0:cell cols='2'>0.4074 0.4203</ns0:cell><ns0:cell cols='2'>0.4164 0.3832 0.3570 0.4295 0.4609</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>0.5050</ns0:cell><ns0:cell cols='2'>0.4383 0.4439</ns0:cell><ns0:cell cols='2'>0.4357 0.3942 0.3779 0.4469 0.4286</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>0.4305</ns0:cell><ns0:cell cols='2'>0.4074 0.4097</ns0:cell><ns0:cell cols='2'>0.4113 0.3806 0.3553 0.4273 0.4325</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>0.4351</ns0:cell><ns0:cell cols='2'>0.3959 0.4105</ns0:cell><ns0:cell cols='2'>0.3995 0.3658 0.3539 0.4130 0.4370</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>0.4439</ns0:cell><ns0:cell cols='2'>0.4251 0.4263</ns0:cell><ns0:cell cols='2'>0.4268 0.3863 0.3465 0.4513 0.5045</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>0.3882</ns0:cell><ns0:cell cols='2'>0.3798 0.3987</ns0:cell><ns0:cell cols='2'>0.3804 0.3131 0.3453 0.4068 0.4206</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>0.4051</ns0:cell><ns0:cell cols='2'>0.3725 0.3973</ns0:cell><ns0:cell cols='2'>0.3820 0.3449 0.3635 0.4111 0.4257</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11114:1:0:NEW 25 Jul 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"RE: (#CS-2016:06:11114:0:0:REVIEW ('Iterative guided image fusion') Dear Professor Klara Kedem, Hereby we submit a revision of our initial PeerJ submission: 'Iterative guided image fusion' (#CS-2016:06:11114:0:0:REVIEW ). In this revised version we  have addressed all the comments of the reviewers. Below we provide a pointwise listing of the reviewers’ and the Editor’s comments together with a description of the actions we have taken in response to these comments. We would like to thank the Editor and the reviewers for spending their valuable time on this review process and for their helpful suggestions. We hope that you will find the revised version of our manuscript acceptable for publication in PeerJ. Yours sincerely, Alexander Toet ____________________________________________________________________________ Comments of Reviewer #1: [we used the comments of Reviewer 1 that were included in an annotated pdf ] Authors’ reply: We adopted the corrections that were suggested in the annotated reviewed pdf. ____________________________________________________________________________ Comments of Reviewer #2: The terms resolution/multiresolution/lower resolution are used without being defined. Commonly multiresolution reference to some kind of image pyramid – for example the Gaussian image pyramid and resolution is commonly used for the numbers of pixel in the image. In this paper resolution is instead used in a rather general Gaussian scale space meaning – resolution is the level of details present in the image (i.e. the sigma in linear Gaussian scale space - inner scale). Authors’ reply: We replaced the term “multiresolution” with “multiscale” which is indeed more appropriate in the given context, since the resolution of the images (the sampling rate) is not altered during the decomposition process. The terms “higher resolution” and lower resolution were replaced by respectively “finer scale” and “coarser scale”. Also the terms base layer and detail layer are less frequently used in the literature. Commonly an image can be decomposed into geometry (base layer) and texture (detail layer). The detail (texture) layer is also commonly referred to as residual image/layer. Authors’ reply: In the literature there are indeed several different terms used for the coarser scale (like background, base, approximation) and finer scale (detail, texture) layers. We decided to adopt the terms “approximate” and “residual” since we believe these terms best reflect the operational function of these images. The GF filter is defined as O = GF(I,G;r,e) where I is an image input, G is a guidance image, r is the window size parameter and e is a regularization parameter. The parameters in the IGF is not explicitly given. As it seems the image input is X_0 in all iteration while X_i is updated in the iteration. Explicitly stating the parameters in the IGF would clarify the issue. Authors’ reply: We now explicitly include the spatial and range parameters of the IGF filter in Equation 12 and explain these parameters in the text following this formula. The key features in the saliency map computation is the local contrast. Combining two images with potentially different contrast magnitude into BW_X and BW_Y can be non-trivial. Does all the test image pairs used in the fusion have the same contrast magnitude or have the images been normalized? Authors’ reply: The images approximately have the same contrast magnitude. 
___________________________________________________________________________ Comments of Reviewer #3: -In the introduction, it might be helpful to provide example (or small description) of what encompasses image noise and/or image artifacts given that fusion aims to eliminate such features. I understand there is not always strict definition for these constructs but some motivating examples may be helpful. Authors’ reply: We added some explanation of the effects of image noise to the Introduction: “While different types of noise may result from several processes associated with the underlying sensor physics, additive noise is typically the predominant noise component encountered in II and NIR imagery (Petrovic & Xydeas 2003). Additive noise can be modelled as a random signal that is simply added to the original signal. As a result, additive noise may obscure or distort relevant image details”. -Section 2.1 into 2.2, there is no example of what can be used as a guidance image G until lines 152-153, although it is talked about in lines 143-144. This may be fixed simply by moving what is described within the parentheses to the earlier mention. Also, are there times when something other than the identical input image is used for G? If so, and if it is of simple enough of a statement to add in, it might be helpful to mention what other type of image might be used. Authors’ reply: To explain this issue in more detail we added the following text at the end of section 2.1: “The JBLF can prevent over- or under- blur near edges by using a related image to guide the edge stopping behavior of the range filter. That is, the JBLF smooths the image while preserving edges that are also represented in the image . The JBLF is particularly favored when the edges in the image that is to be filtered are unreliable (e.g., due to noise or distortions) and when a companion image with well-defined edges is available (e.g., in the case of flash /no-flash image pairs).image. Thus, in the case of filtering an II image for which a companion (registered) IR image is available, the guidance image may either be the II image itself or its IR counterpart.” -If at all possible, it would be nice to view the images at a larger size in the figures for ease of viewing, however because I think it is important to show all image conditions as you do in the paper now, this may be an impossible trade-off between size and space. Authors’ reply: The online version of this paper will enable the reader to view enlarged versions of all images. minor edits: -'artifacts' vs 'artefacts' consistency throughout -Line 415 C2 should be subscripted Authors’ reply: We corrected these errors. "
Here is a paper. Please give your review comments after reading it.
305
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We investigated emotion classification from brief video recordings from the GEMEP database wherein actors portrayed 18 emotions. Vocal features consisted of acoustic parameters related to frequency, intensity, spectral distribution, and durations. Facial features consisted of facial action units. We first performed a series of person-independent supervised classification experiments. Best performance (AUC = 0.88) was obtained by merging the output from the best unimodal vocal (Elastic Net, AUC = 0.82) and facial (Random Forest, AUC = 0.80) classifiers using a late fusion approach and the product rule method. All 18 emotions were recognized with above-chance recall, although recognition rates varied widely across emotions (e.g., high for amusement, anger, and disgust; and low for shame). Multimodal feature patterns for each emotion are described in terms of the vocal and facial features that contributed most to classifier performance. Next, a series of exploratory unsupervised classification experiments were performed to gain more insight into how emotion expressions are organized. Solutions from traditional clustering techniques were interpreted using decision trees in order to explore which features underlie clustering. Another approach utilized various dimensionality reduction techniques paired with inspection of data visualizations. Unsupervised methods did not cluster stimuli in terms of emotion categories, but several explanatory patterns were observed. Some could be interpreted in terms of valence and arousal, but actor and gender specific aspects also contributed to clustering. Identifying explanatory patterns holds great potential as a meta-heuristic when unsupervised methods are used in complex classification tasks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>When people interact, they do not only use words to convey affective information, but also often express emotions through nonverbal channels. Main sources of nonverbal communication include facial expressions, bodily gestures, and tone of voice. Accurate recognition of others' emotions is important for social interactions (e.g., for avoiding conflict and for providing support; <ns0:ref type='bibr' target='#b16'>Ekman, 2003;</ns0:ref><ns0:ref type='bibr' target='#b45'>Russell et al., 2003)</ns0:ref>. Knowledge about how emotions are expressed nonverbally thus has applications in many fields, ranging from psychotherapy (e.g., <ns0:ref type='bibr' target='#b23'>Hofmann, 2016)</ns0:ref> to human-computer interaction (e.g., <ns0:ref type='bibr' target='#b26'>Jeon, 2017)</ns0:ref>. Notably, research on the production and perception of emotional expressions has also been a main source of data for theories of emotion (e.g., <ns0:ref type='bibr' target='#b47'>Scherer, 2009)</ns0:ref>. We employ machine learning methods to classify dynamic multimodal emotion expressions based on vocal and facial features and describe the most important features associated with a range of positive and negative emotions. 
We also compare traditional supervised methods with solutions obtained with unsupervised methods in order to gain new insights into how emotion expressions may be organized.</ns0:p><ns0:p>Meta-analyses of emotion perception studies suggest that human judges are able to accurately infer emotions from nonverbal vocal and facial behavior, also in cross-cultural settings (e.g., <ns0:ref type='bibr' target='#b18'>Elfenbein &amp; Ambady, 2002;</ns0:ref><ns0:ref type='bibr' target='#b33'>Laukka &amp; Elfenbein, 2021)</ns0:ref>. However, it has proved more difficult PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to define the physical features reliably associated with specific emotions. <ns0:ref type='bibr' target='#b27'>Juslin and Laukka (2003)</ns0:ref> proposed that nonverbal communication of emotion through the voice is based on a number of probabilistic and partly redundant acoustic cues. Probabilistic cues are not perfect indicators of the expressed emotion because they are not always associated with that emotion and can also be used in the same way to express different emotions. For example, high mean fundamental frequency (F0) can be associated with both happiness and fear. Several partly redundant cues can, in turn, be associated with the same emotion. For example, anger can be associated with high levels of both voice intensity and high-frequency energy. <ns0:ref type='bibr' target='#b8'>Barrett et al. (2019)</ns0:ref> similarly noted that facial cues (e.g., smiles) are only probabilistically associated with any one emotion (e.g., happiness), and that similar configurations of facial movements can be associated with more than one emotion.</ns0:p><ns0:p>The combination of probabilistic and partly redundant cues entails that there may be several cue combinations that are associated with the same emotion, which leads to a robust and flexible system of communication <ns0:ref type='bibr' target='#b27'>(Juslin &amp; Laukka, 2003)</ns0:ref>. For example, <ns0:ref type='bibr' target='#b55'>Srinivasan and Martinez (2018)</ns0:ref> recently reported that several different facial configurations were used to communicate the same emotion in naturalistic settings (e.g., they reported 17 different configurations for happiness). Machine learning methods are increasingly used to detect patterns in this type of high-dimensional probabilistic data. Recent years have seen much activity in the field of machine-based classification of emotions from facial (e.g., <ns0:ref type='bibr' target='#b34'>Li &amp; Deng, 2020)</ns0:ref> and vocal (e.g., <ns0:ref type='bibr' target='#b48'>Schuller, 2018)</ns0:ref> expressions. Classifiers often perform on par with human judges, although performance also varies across emotions and databases <ns0:ref type='bibr' target='#b31'>(Krumhuber et al., 2021b)</ns0:ref> Manuscript to be reviewed Computer Science combining features from several modalities has been shown to increase classification accuracy (see <ns0:ref type='bibr'>D&#180;Mello &amp; Kory, 2015</ns0:ref>, for a meta-analysis). 
The number of multimodal classification studies is steadily increasing <ns0:ref type='bibr' target='#b43'>(Poria et al., 2017)</ns0:ref>, and recent studies explore a wide variety of approaches (e.g., <ns0:ref type='bibr' target='#b9'>Bhattacharya et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b35'>Lingenfelser et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b38'>Mai et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Siriwardhana et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b56'>Tzirakis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b61'>Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In the current study, we first compare how unimodal and multimodal classifiers perform in the classification of 18 different emotions from brief video recordings. Recordings are taken from the Geneva Multimodal Emotion Portrayal (GEMEP) corpus <ns0:ref type='bibr' target='#b5'>(B&#228;nziger et al., 2012)</ns0:ref>, which contains dynamic audio-video emotion expressions portrayed by professional actors. This approach extends most previous classification studies which have focused on a much smaller number of emotions, but is in line with recent perception studies which suggest that human judges can perceive a wide variety of emotions (e.g., <ns0:ref type='bibr' target='#b11'>Cordaro et al., 2018)</ns0:ref>. Different actors are used for training and testing in all classification experiments, to avoid person bias. We also contribute by providing details of which features are important for classification of which emotions-something only rarely done in machine classification (see <ns0:ref type='bibr'>Krumhuber et al., 2021b, for a recent example)</ns0:ref>.</ns0:p><ns0:p>We analyze the physical properties of vocal expressions by extracting the features included in the Geneva Minimal Acoustic Parameter Set (GeMAPS; <ns0:ref type='bibr' target='#b19'>Eyben et al., 2016)</ns0:ref>. This parameter set is commonly used in affective computing and provides features related to the frequency, intensity, spectral energy, and temporal characteristics of the voice. Facial Action Units (AUs) <ns0:ref type='bibr' target='#b17'>(Ekman &amp; Friesen, 1978</ns0:ref>)-which is one of the most comprehensive and objective ways to describe facial expressions <ns0:ref type='bibr' target='#b40'>(Martinez et al., 2019)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>allows for comparisons with the previous literature on emotion expression. Finally, we compare the results from the supervised classification experiments with results from unsupervised classification. Unsupervised methods may reveal new information about how emotion expressions are organized, because they are not restricted to any pre-defined emotion categories (e.g., <ns0:ref type='bibr' target='#b2'>Azari et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods</ns0:head><ns0:p>Data, code, and additional computational information are openly available on GitHub (see Data Availability statement).</ns0:p></ns0:div> <ns0:div><ns0:head>Emotion expressions</ns0:head><ns0:p>The emotion expressions used in this study were taken from the GEMEP database <ns0:ref type='bibr' target='#b5'>(B&#228;nziger et al., 2012)</ns0:ref> and consist of 1,260 video files in which ten professional actors, coached by a professional director, convey 18 affective states. They do this by uttering two different pseudolinguistic phoneme sequences, or a sustained vowel 'aaa'. 
The emotions portrayed in this dataset are: admiration, amusement, anger, anxiety/worry, contempt, despair, disgust, interest, irritation, joy/elation, panic fear, sensual pleasure, pride, relief, sadness, shame, surprise, and tenderness. The number of files per emotion and actor can be seen in Figure <ns0:ref type='figure' target='#fig_15'>S1</ns0:ref>. The GEMEP dataset was chosen for its high naturalness ratings and wide range of included emotions. It is widely used in classification studies (e.g., <ns0:ref type='bibr' target='#b49'>Schuller et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b57'>Valstar et al., 2012)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Audio features were obtained using openSMILE 2.3.0 <ns0:ref type='bibr' target='#b21'>(Eyben et al., 2013)</ns0:ref>, an open-source toolkit that allows for the extraction of a wide variety of parameter sets. In this study, two different versions of the GeMAPS (Geneva Minimalistic Acoustic Parameter Set) <ns0:ref type='bibr' target='#b19'>(Eyben et al., 2016)</ns0:ref> were evaluated. While GeMAPS contains 62 non-time series parameters with prosodic, excitation, vocal tract, and spectral descriptors, the extended version eGeMAPS adds a small set of cepstral descriptors, reaching a total of 88 features.</ns0:p><ns0:p>Video features were obtained using OpenFace 2.2.0 <ns0:ref type='bibr' target='#b4'>(Baltru&#353;aitis et al., 2018)</ns0:ref>, an open-source facial behavior analysis toolkit. OpenFace offers an extensive range of parameters such as facial landmark detection, head pose estimation, and eye-gaze estimation. However, our study focused on Facial Action Units (AUs) <ns0:ref type='bibr'>(Ekman &amp; Friesen, 1977)</ns0:ref> and the toolkit provides the intensity of 17 AUs per frame <ns0:ref type='bibr'>(i.e., 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, &amp; 45)</ns0:ref>. AU detection is based on pre-trained models for the detection of facial landmarks, and uses dimensionalityreduced histograms of oriented gradients (HOGs) from face image and facial shape features in Support Vector Machine analyses (for details, see <ns0:ref type='bibr' target='#b3'>Baltru&#353;aitis et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b48'>2018)</ns0:ref>.</ns0:p><ns0:p>We removed instances where AU detection was deemed unreliable. The OpenFace toolkit provides two indicators per instance that aided the data cleaning process: the confidence and success rates. The former refers to how reliable the extracted features are (continuous value from zero to one), whereas the latter denotes if the facial tracking is favorable or not (binary value).</ns0:p><ns0:p>Taking this into consideration, instances with a confidence rate lower than 0.98 or an unfavorable success rate were dropped. Ninety percent of instances received a confidence rating of 0.98 or higher (see Figure <ns0:ref type='figure'>S2</ns0:ref> Manuscript to be reviewed Computer Science success rate was very low (0.58%). In total, the number of instances decreased by 9.94% after the cleaning process and caused the deletion of one entire file. Two steps were followed to achieve data consistency. First, the corresponding audio track was deleted. 
Secondly, video instances were grouped by file and the framewise feature intensity scores from OpenFace were used to compute the following functionals for each AU: arithmetic mean; coefficient of variation; 20th, 50th, and 80th percentile; percentile range (20th to 80th percentile); and the number of peaks (using the mean value as an adaptive threshold). Lastly, data was normalized using min-max normalization (a computational sketch of these feature-preparation steps is given below).</ns0:p><ns0:p>After cleaning, data from both modalities was prepared in the following ways. For the supervised approach, data was randomly assembled into five groups ensuring that all stimuli were represented and that actors in the training set were not included in the validation set. This grouping strategy resulted in different pairs of actors (one female-female, one male-male, and three female-male pairs) which facilitated the later use of LOGO CV (Leave-One-Group-Out Cross-Validation).</ns0:p><ns0:p>For the unsupervised approach, the normalized feature vectors from both modalities were concatenated as in an early fusion scenario <ns0:ref type='bibr' target='#b62'>(W&#246;llmer et al., 2013)</ns0:ref>, yielding a dataset with 207 dimensions and 1,259 observations.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head><ns0:p>Supervised learning. We evaluated different multimodal late and early fusion pipelines (see <ns0:ref type='bibr' target='#b1'>Atrey et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b15'>Dong et al., 2015)</ns0:ref>, and compared them with the best audio and video unimodal classifiers. After this, the relations between emotion categories and audiovisual cues were investigated. The multimodal pipelines utilize machine learning algorithms such as Linear Classifiers with Elastic Net regularization, k-NN, Decision Tree, and Random Forest. The first three were used since they are some of the most commonly employed methods for emotion recognition <ns0:ref type='bibr' target='#b39'>(Marechal et al., 2019)</ns0:ref>, whereas Random Forest was used because it is known as one of the best out-of-the-box classifiers <ns0:ref type='bibr' target='#b54'>(Sjardin et al., 2016)</ns0:ref>. We will use the term 'Elastic Net classifiers' to refer to linear classifiers with Elastic Net regularization.</ns0:p><ns0:p>Late fusion. This approach can be summarized into three steps. First, audio and video classifiers were separately subjected to a modeling and selection process. Second, different techniques were tested for fusing the outputs of the audio and video classifiers. Third, the best late fusion pipeline was compared to the best unimodal classifiers. Note that the best unimodal classifiers correspond to the strongest models, in terms of their validation Area Under the Curve (AUC) (see <ns0:ref type='bibr' target='#b25'>Jeni et al., 2013)</ns0:ref>, picked in the first step. Next, a more in-depth description of the previously stated stages is given. The first step can be split, in turn, into two phases, repeated for each modality. First, LOGO CV was employed for hyperparameter tuning over the dataset.</ns0:p><ns0:p>Second, once the best parameters for each classifier (Elastic Net, k-NN, Decision Tree, and Random Forest) were found, the validation AUC was used to choose between types of machine learning classifiers.
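The feature extraction, cleaning, summarization, and normalization steps described above can be sketched as follows. The paper used the openSMILE 2.3.0 and OpenFace 2.2.0 toolkits directly; here the opensmile Python wrapper stands in for the former, the file names are hypothetical, and the peak-count definition is one plausible reading of the adaptive mean threshold.

```python
import numpy as np
import pandas as pd
import opensmile

# --- Acoustic features: GeMAPS/eGeMAPS functionals per audio clip -------------
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,   # e.g. FeatureSet.GeMAPSv01b for the minimal set
    feature_level=opensmile.FeatureLevel.Functionals,
)
audio_feats = smile.process_file("clip_0001.wav")  # hypothetical file name

# --- Facial features: OpenFace AU intensities, cleaned and summarized ---------
of = pd.read_csv("clip_0001.csv")                  # hypothetical OpenFace output
of.columns = of.columns.str.strip()                # OpenFace may pad column names with spaces
of["file_id"] = "clip_0001"
clean = of[(of["confidence"] >= 0.98) & (of["success"] == 1)]
au_cols = [c for c in clean.columns if c.startswith("AU") and c.endswith("_r")]

def au_functionals(x):
    """Per-AU functionals over the retained frames of one file: mean, coefficient
    of variation, 20th/50th/80th percentiles, percentile range, and number of
    peaks (local maxima above the mean as the adaptive threshold)."""
    x = np.asarray(x, dtype=float)
    p20, p50, p80 = np.percentile(x, [20, 50, 80])
    mean = x.mean()
    peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > mean))
    return {"mean": mean, "cv": x.std() / mean if mean else 0.0,
            "p20": p20, "p50": p50, "p80": p80,
            "p_range": p80 - p20, "n_peaks": int(peaks)}

rows = []
for file_id, g in clean.groupby("file_id"):
    feats = {"file_id": file_id}
    for au in au_cols:
        feats.update({f"{au}_{k}": v for k, v in au_functionals(g[au]).items()})
    rows.append(feats)
video_feats = pd.DataFrame(rows).set_index("file_id")

# --- Min-max normalization of every feature column -----------------------------
video_feats = (video_feats - video_feats.min()) / (video_feats.max() - video_feats.min())
```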
The second step followed the same process as the previous one but evaluating different fusion methods, such as the maximum rule, sum rule, product rule, weight criterion, rule-based, and model-based (Elastic Net, k-NN, and Decision Tree) methods <ns0:ref type='bibr' target='#b1'>(Atrey et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b15'>Dong et al., 2015)</ns0:ref>. The last step consisted of comparing the best late fusion pipeline to the best unimodal classifiers in terms of their validation AUC.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Early fusion. This approach can also be divided into three steps. First, audio and video instances were carefully concatenated. Second, different types of machine learning classifiers were subjected to a modeling and selection process. Third, the best early fusion pipeline was compared to the best unimodal classifiers. In more detail, the first step joins audio and video feature instances on the 'file_id' field. The second step has two phases: LOGO CV was used for hyperparameter tuning over the dataset, and then the best parameters for each classifier (Elastic Net, k-NN, Decision Tree, and Random Forest) were obtained. The validation AUC was used to choose between types of machine learning classifiers. The third and final step compared the best early fusion pipeline to the best unimodal classifiers in terms of their validation AUC.</ns0:p><ns0:p>Unsupervised learning. In order to find meaningful patterns in the multimodal data, two different paths were taken. On the one hand, a more traditional method based on k-Means and Hierarchical Clustering, with and without dimensionality reduction. On the other hand, a more exploratory and graph-based method, which included the use of the TensorFlow Embedding Projector <ns0:ref type='bibr' target='#b63'>(Wongsuphasawat et al., 2018)</ns0:ref>, a Web application that allows for visualizations and analyses of high-dimensional data via Principal Component Analysis (PCA; <ns0:ref type='bibr' target='#b51'>Shlens, 2014)</ns0:ref> Traditional approach. Two clustering validation techniques were used to estimate the number of clusters from the k-means and hierarchical clustering analyses. The CH index <ns0:ref type='bibr' target='#b10'>(Calinski &amp; Harabasz, 1974)</ns0:ref> is less sensitive to monotonicity, different cluster densities, subclusters, and skewed distributions. The Silhouette score <ns0:ref type='bibr' target='#b44'>(Rousseeuw, 1987)</ns0:ref> <ns0:ref type='table'>2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science robust when it comes to handling noisy data, but has difficulty with the presence of subclusters <ns0:ref type='bibr' target='#b37'>(Liu et al., 2010)</ns0:ref>. The Manhattan distance was used for hierarchical clustering due to the high dimensionality of the dataset <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>, and three different distance methods were evaluated (simple, complete, and weighted) (SciPy, 2019). Figure <ns0:ref type='figure' target='#fig_16'>S3</ns0:ref> shows the obtained CH(k) and sscore(k) for k-means before dimensionality reduction, and indicates that the estimated number of clusters was 2 for both the CH index and Silhouette score techniques. 
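A minimal sketch of how such CH(k) and silhouette(k) curves can be computed is given below, assuming scikit-learn; the range of k and the KMeans settings are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, silhouette_score

# X: concatenated multimodal feature matrix (min-max normalized)
ks = list(range(2, 11))
ch, sil = [], []
for k in ks:
    labels = KMeans(n_clusters=k, n_init=20, random_state=0).fit_predict(X)
    ch.append(calinski_harabasz_score(X, labels))
    sil.append(silhouette_score(X, labels))

best_k_ch = ks[int(np.argmax(ch))]    # k maximizing the CH index
best_k_sil = ks[int(np.argmax(sil))]  # k maximizing the Silhouette score
```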
For hierarchical clustering, the CH index demonstrated that the best value was 2 for the single and complete distance methods, but 6 for the weighted method, whereas the Silhouette score consistently demonstrated that the best value was 2 (see Figure <ns0:ref type='figure' target='#fig_16'>S3</ns0:ref>). For both k-means and hierarchical clustering we selected the number of clusters that maximized the score.</ns0:p><ns0:p>Once the clustering without dimensionality reduction was done, the dataset was inspected in search of weak and redundant features to mitigate the curse of dimensionality <ns0:ref type='bibr' target='#b24'>(Jain et al., 2000)</ns0:ref>.</ns0:p><ns0:p>To that end, three feature reduction techniques were assessed. First, the PCA revealed that the use of the three strongest singular values would only have explained a modest amount (41%) of the total variance expressed in the data. Second, the standard deviation plot showed that neither were there fields with zero variation, nor was an exaggerated drop of the variance present in the dataset. Third, the correlation matrix demonstrated that there were some highly correlated features. Taking everything into consideration, the dimensionality of the multimodal dataset was diminished by dropping those fields that had a correlation value greater than 0.9, decreasing the number of dimensions from 207 to 161 (22%). The use of this correlation threshold has been applied in many studies <ns0:ref type='bibr' target='#b29'>(Katz, 2011)</ns0:ref> Manuscript to be reviewed Computer Science dimensionality reduction (see Figure <ns0:ref type='figure'>S4</ns0:ref>). Finally, k-means and hierarchical clustering were applied according to the obtained number of clusters. Additionally, to facilitate the interpretation of the clustering results, the problem was addressed in a supervised manner, where the membership of the instances to the clusters corresponded to the target classes. To that end, a simple decision tree was trained, and the first decision nodes were analyzed.</ns0:p><ns0:p>Exploratory approach. The exploratory approach consisted of preparing the input data and exploring the dataset. This meant converting the non-reduced multimodal dataset into a TSV file and creating a metadata file, which enclosed information such as portrayed emotion, valence (positive or negative), actor's ID, and actor's sex. Once both files were loaded into the TensorFlow web application <ns0:ref type='bibr' target='#b63'>(Wongsuphasawat et al., 2018)</ns0:ref>, data was ready for exploration. The system offers three different primary methods of dimensionality reduction (PCA, t-SNE, and UMAP) and can create two-and three-dimensional plots. For each of these techniques, parameters were tuned until meaningful patterns were found by ocular inspection -zooming in and out on the projections and coloring data points according to metadata.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Supervised learning</ns0:head><ns0:p>Unimodality. Table <ns0:ref type='table'>1 lists</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This might be because the dataset was too small for an ensemble learning method. Regarding which audio parameter set performed best, there was no clear indication since none of them consistently presented better results. Therefore, both parameter sets were considered during the early fusion approach. 
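For reference, the fold-wise validation AUCs and confusion matrices reported in this results section could be computed roughly as follows (a sketch assuming scikit-learn; best_estimator, X, y, and groups are placeholders carried over from the model-selection step). Summing the per-fold confusion matrices gives one pooled matrix over all validation instances.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score, confusion_matrix

logo = LeaveOneGroupOut()
aucs, cms = [], []
for train_idx, val_idx in logo.split(X, y, groups):
    model = clone(best_estimator).fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[val_idx])
    aucs.append(roc_auc_score(y[val_idx], proba,
                              multi_class="ovr", labels=model.classes_))
    cms.append(confusion_matrix(y[val_idx], model.predict(X[val_idx]),
                                labels=model.classes_))

mean_val_auc = np.mean(aucs)        # average validation AUC over the 5 folds
pooled_cm = np.sum(cms, axis=0)     # 18 x 18 confusion matrix summed over folds
```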
Figure <ns0:ref type='figure'>S5</ns0:ref> presents how the best unimodal audio classifier coped with the validation set in the form of a confusion matrix. The model performed better than chance for all emotions except irritation (chance level performance in an 18-alternative classification task is 1/18 = 0.056). Additionally, the matrix revealed expected confusion patterns such as confusions between joy and amusement, and between shame and sadness.</ns0:p><ns0:p>Table 2 details the best video classifiers after hyperparameter tuning. Even though Random Forest did a better job than the rest of the classifiers, reaching an average validation AUC of 0.7981, it tended to overfit once again. Regarding its intraclass performance, the classifier struggled to recognize some of the emotions, especially shame, which was most of the times mislabeled as sadness. On the other hand, the model stood out in the prediction of amusement samples. The full confusion matrix is available in Figure <ns0:ref type='figure'>S6</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Multimodality</ns0:head></ns0:div> <ns0:div><ns0:head>Late fusion.</ns0:head><ns0:p>Once the best audio and video unimodal classifiers were identified, their outputs were merged by using different fusion techniques. Table <ns0:ref type='table'>3</ns0:ref> reveals that product rule outperformed the rest of the methods by achieving an average validation AUC of 0.8767. The confusion matrix of the best late fusion pipeline (Figure <ns0:ref type='figure' target='#fig_15'>1</ns0:ref>) shows that the multimodal classifier performed better than chance for all emotion classes, achieving its highest performance for amusement. Furthermore, it also reveals expected confusion patterns such as confusion between PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science panic fear and anger, and between contempt and sadness -emotions that belong to the same valence family.</ns0:p><ns0:p>Early fusion. In this approach, two different input parameter sets were evaluated: an extended set including the eGeMAPS features and AU intensity values and functionals, and a minimal set including the standard GeMAPS features and AU intensity and functionals. These parameter sets were created by concatenating audio and video features and were then used to train distinct classifiers. Table <ns0:ref type='table'>4</ns0:ref> lists the best early fusion multimodal classifiers after hyperparameter tuning. Elastic Net with the extended parameter set performed best, scoring an average validation AUC of 0.8662. As shown in Figure <ns0:ref type='figure'>2</ns0:ref>, its intraclass performance was once again better than random guessing for all emotions, and there were also expected confusion patterns such as confusions between joy and amusement and between anger and panic fear.</ns0:p></ns0:div> <ns0:div><ns0:head>Analyses of feature importance</ns0:head><ns0:p>An in-depth analysis of classifier behavior was conducted by plotting and analyzing the feature importance for each emotion and classifier. Since the input variables were scaled before fitting the model, logistic regression coefficients can be used as feature importance scores for Elastic Nets. A feature is affecting the prediction when its coefficient is significantly different from zero.</ns0:p><ns0:p>The probability of an event (emotion) increases and decreases when the coefficient is greater or lower than zero, respectively. 
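A brief sketch of reading such importance scores off a fitted elastic-net logistic regression follows, assuming scikit-learn and pandas; enet, feature_names, and the "anger" label are placeholders.

```python
import pandas as pd

# enet: fitted multinomial logistic regression with elastic-net penalty (audio model)
# feature_names: e.g. the 88 eGeMAPS descriptors
coef = pd.DataFrame(enet.coef_, index=enet.classes_, columns=feature_names)

# features whose coefficients deviate most from zero for one emotion, e.g. anger;
# positive coefficients raise, negative coefficients lower, its predicted probability
anger = coef.loc["anger"]
top_anger = anger.loc[anger.abs().sort_values(ascending=False).index[:10]]
```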
The behavior of models based on Random Forest was analyzed using the TreeInterpreter package <ns0:ref type='bibr' target='#b46'>(Saabas, 2015)</ns0:ref>, which decomposes each prediction into bias and feature contribution components. These contributions were grouped by the predicted Manuscript to be reviewed Computer Science one explained for Elastic Nets. The contributions of the most important audio features are summarized in Figure <ns0:ref type='figure' target='#fig_16'>3</ns0:ref>, and the full list of importance scores for all audio features and emotions is available in Figure <ns0:ref type='figure'>S7</ns0:ref>. A summary of the most important video features (Facial Action Units, AUs) is shown in Figure <ns0:ref type='figure'>4</ns0:ref> (with full list shown in Figure <ns0:ref type='figure'>S8</ns0:ref>). These figures are based on the best performing audio and video classifiers and give a detailed look into which features were important for classification of which emotions. They also represent the best performing multimodal classifier, which was based on late fusion of the best unimodal classifiers using the product rule technique.</ns0:p><ns0:p>In general, different emotions were associated with different patterns of important features.</ns0:p><ns0:p>Important audio features for anger, for example, included spectral balance and amplitude related features that are associated with a 'harsh' voice quality (e.g., spectral slope from 0-500 Hz, Hammarberg index, harmonics to noise ratio). Whereas for fear, important audio features included the length of unvoiced segments, amplitude of the first formant frequency, and the mean slope of falling amplitude signal parts. We refer to <ns0:ref type='bibr' target='#b19'>Eyben et al. (2016)</ns0:ref> for definitions and calculations of audio features. Important video features included AU12 (lip corner puller) for joy, AU4 (brow lowerer) and AU7 (lid tightener) for disgust, and AU6 (cheek raiser) and AU10 (upper lip raiser) for amusement. For the sake of completeness, the supplementary materials also include a summary (Figure <ns0:ref type='figure'>S9</ns0:ref>) and full description (Figure <ns0:ref type='figure' target='#fig_15'>S10</ns0:ref>) of the importance of features for the best early fusion multimodal classification model.</ns0:p></ns0:div> <ns0:div><ns0:head>Unsupervised learning</ns0:head><ns0:p>Traditional approach. After determining the optimal number of clusters, these parameters were used as input to k-means and hierarchical clustering. Both clustering techniques were evaluated over the multimodal dataset, with and without dimensionality reduction. Results from the clustering analyses yielded a two-dimensional solution and are shown in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>In order to interpret the clusters in terms of underlying features, the problem was analyzed in a supervised manner using Decision Trees. As shown in Figure <ns0:ref type='figure'>6</ns0:ref>, the most relevant feature that distinguished between clusters was AU6 (cheek raiser). According to the first decision node, those instances which had an 80th percentile value greater than 0.328 were classified as cluster one. In the next decision node, AU12 (lip corner puller) also contributed, whereas vocal features became more prominent in the third node. 
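The cluster-interpretation step can be sketched as follows, assuming scikit-learn; the surrogate tree depth and feature_names are illustrative assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# label each instance with its k-means cluster (k = 2, before dimensionality reduction)
cluster_labels = KMeans(n_clusters=2, n_init=20, random_state=0).fit_predict(X)

# fit a shallow surrogate tree on the cluster labels and print its decision rules,
# exposing the most discriminative features at the first nodes
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, cluster_labels)
print(export_text(surrogate, feature_names=list(feature_names)))
```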
These findings were in good agreement with the emotion categories included in cluster one (see Figure <ns0:ref type='figure'>5</ns0:ref>), which mainly comprised emotions that are positive (and are characterized by the use of AU6, e.g., amusement and joy; see Figure <ns0:ref type='figure'>S8</ns0:ref>).</ns0:p><ns0:p>The pattern was consistent across both clustering techniques without dimensionality reduction, but changed when the reduced dataset was used for hierarchical clustering (see Figure <ns0:ref type='figure'>5</ns0:ref>, bottom right).</ns0:p><ns0:p>Exploratory approach. After preparing the features and metadata files of the non-reduced multimodal dataset (as described in the Methods section), the data was explored in search of meaningful patterns. To this end, three different dimensionality reduction techniques were employed via the TensorFlow Embedding Projector <ns0:ref type='bibr' target='#b63'>(Wongsuphasawat et al., 2018)</ns0:ref>. The tunable parameters were manually adjusted until any interesting patterns were found. The reader can interactively visualize the data and inspect the results 1 . Main findings are presented below. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Principal component analysis. Although the projection of data into a two-dimensional space reduced the amount of explained variance to 35%, some interesting patterns were detected.</ns0:p><ns0:p>It is apparent from Figure <ns0:ref type='figure'>7</ns0:ref> that the dataset could be split into two clusters. The left and middle side mainly contained high-arousal emotions, such as portrayals of amusement, anger, joy, panic fear, pride, and despair. The right side instead included low-arousal emotions, such as sadness, irritation, interest, anxiety, and contempt. In addition, representations of the same emotion tended to be close to each other.</ns0:p></ns0:div> <ns0:div><ns0:head>t-SNE. The t-SNE dimensionality reduction technique was run until convergence (6,081</ns0:head><ns0:p>iterations) with a perplexity value of 25 and a learning rate of 10. The data points were then colored by emotion, valence, actor, and actor's sex. The algorithm grouped the data into three main clusters (Figure <ns0:ref type='figure'>8</ns0:ref>), in which emotions characterized by positive valence were grouped together, whereas the rest split into two clusters with high prevalence of panic fear and anger, and of sadness, respectively. Figure <ns0:ref type='figure'>8</ns0:ref> also reveals that emotions portrayed by the same actor tended to be close to each other. Something similar also happened when coloring by sex: female samples tended to group together, as did male samples. UMAP. The UMAP algorithm was run for 500 epochs (not a tunable parameter) and 36 neighbors. As shown in Figure <ns0:ref type='figure'>9</ns0:ref>, the left side mainly contained high-arousal emotions and positive emotions (e.g., amusement, joy, and despair), whereas the right side contained lowarousal emotions and negative emotions (e.g., sadness, irritation, anxiety, contempt, and interest). Similarly to t-SNE, expressions portrayed by the same actor were close to each other (e.g., actor number 3, pink dots, is a case in point). 
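Although the projections reported here were produced interactively in the TensorFlow Embedding Projector, comparable two-dimensional embeddings can be generated offline roughly as follows (a sketch assuming scikit-learn and umap-learn; parameters not quoted in the text are left at illustrative defaults).

```python
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

# X: non-reduced multimodal dataset (1,259 x 207, min-max normalized)
pca_emb = PCA(n_components=2).fit_transform(X)

tsne_emb = TSNE(n_components=2, perplexity=25, learning_rate=10,
                init="pca", random_state=0).fit_transform(X)

umap_emb = umap.UMAP(n_neighbors=36, n_components=2,
                     random_state=0).fit_transform(X)

# each embedding can then be colored by emotion, valence, actor, or actor's sex
```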
Finally, Figure <ns0:ref type='figure'>9</ns0:ref> Manuscript to be reviewed Computer Science sex played a part in the clustering results since most of the female and male samples were on the upper and the lower part of the output, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>We investigated classification of 18 emotions -portrayed by 10 actors through vocal and facial expressions -using person-independent supervised and unsupervised methods. Our study makes three main contributions to the literature. First, results from the supervised experiments showed that multimodal classifiers performed better than unimodal classifiers and were able to classify all emotions, although recognition rates varied widely across emotions. This indicates that the combinations of vocal and facial features that were used for classification varied systematically as a function of emotion, and that the signal was reliable enough to allow for above chance classification of all 18 emotions. Second, we utilized our machine learning approach to present new data on multimodal feature patterns for each emotion, in terms of the features that contributed most to classifier performance. Third, we explored how wholly unsupervised classifiers would organize the emotion expressions, based on the same features that were used for supervised classification. Several meaningful explanatory patterns were observed and interpreted in terms of valence, arousal, and various actor-and sex-specific aspects. The comparison of supervised and unsupervised approaches allowed us to explore how different methodological choices may provide different perspectives on how emotion expressions are organized.</ns0:p><ns0:p>Overall, the multimodal classifiers performed approximately 5-6% better than the unimodal vocal and facial classifiers in our supervised experiments. The magnitude of this improvement is in accordance with previous studies (see <ns0:ref type='bibr'>D&#180;Mello &amp; Kory, 2015</ns0:ref>, for a review). We observed the PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science best performance (AUC = 0.88) for multimodal classifiers that merged the output from the best unimodal vocal (elastic net, AUC = 0.82) and facial (random forest, AUC = 0.80) classifiers using a late fusion approach and the product rule method. A direct comparison with previous classification studies on the GEMEP expression set is difficult because studies have used different classification approaches (e.g., person-independent or person-dependent), algorithms, numbers of emotion categories, and selections of stimuli. Our unimodal vocal and facial classifiers seemed to achieve slightly lower accuracy compared to previous efforts (e.g., <ns0:ref type='bibr' target='#b49'>Schuller et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b57'>Valstar et al., 2012)</ns0:ref>, although it must be noted that earlier studies have classified fewer emotion categories. We also used relatively small feature sets, with the aim of mainly including features that are possible to interpret in terms of human perception. For example, we focused on AUs because they provide a comprehensive and widely used way to describe facial expressions <ns0:ref type='bibr' target='#b40'>(Martinez et al., 2019)</ns0:ref>, that can be used to produce easily interpretable feature patterns for each emotion. 
However, inclusion of additional features such as head pose and gaze direction would likely increase classification performance. Recent studies have also abandoned the use of pre-defined features altogether and instead use deep learning of physical properties with good results (e.g., <ns0:ref type='bibr' target='#b34'>Li &amp; Deng, 2020)</ns0:ref>, although such methods often result in features that are difficult to interpret. <ns0:ref type='bibr' target='#b7'>B&#228;nziger and Scherer (2010)</ns0:ref> provided data on human classification for the same stimulus set as used in the current study. Direct comparison of recognition rates is again difficult because the human judgments were collected using different methodology (e.g., judges were allowed to choose more than one alternative in a forced-choice task), but overall our classifiers had somewhat lower recognition rates than the human judges. However, an inspection of recognition PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science patterns showed many similarities between human judges and classifiers. For example, the human judges in <ns0:ref type='bibr' target='#b7'>B&#228;nziger and Scherer (2010)</ns0:ref>, like our classifiers, received higher accuracy for multimodal vs unimodal expressions. Looking at individual emotions, human judges showed highest accuracy for amusement, anger, and panic fear. Our classifiers also performed best for amusement and anger, and also performed relatively well for panic fear. Shame received the lowest recognition rates from both human judges and our classifiers. Even confusion patterns showed many similarities between human judges and classifiers. For example, joy and amusement, tenderness and pleasure, relief and pleasure, and despair and panic fear were among the most frequent confusions for both humans and classifiers. Similar recognition patterns for human judges and classifiers tentatively suggest that the included features may also be relevant for human perception of emotion.</ns0:p><ns0:p>Traditional emotion recognition studies using machine learning often aim to achieve the highest possible classification performance, and do not pay attention to feature importance. We propose that a more detailed inspection of feature importance presents a promising method for investigating emotion-related patterns of probabilistic and partly redundant vocal and facial features. Such patterns may be difficult to discover using traditional descriptive statistics and linear analyses. We present detailed multimodal feature patterns for each of the 18 included emotions, several of which have rarely been included in previous emotion expression studies (e.g., Figure <ns0:ref type='figure' target='#fig_16'>3</ns0:ref> and Figure <ns0:ref type='figure'>4</ns0:ref>). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We also performed a number of unsupervised classification experiments, guided by the idea that they may provide additional information about how emotion expressions are organized (e.g., <ns0:ref type='bibr' target='#b2'>Azari et al., 2020)</ns0:ref>. Previous studies on the organization of emotion expressions have focused on human perception (e.g., <ns0:ref type='bibr' target='#b12'>Cowen et al., 2019)</ns0:ref>, whereas our study instead investigated the organization of emotion expressions based on their physical vocal and facial properties. 
Results from these experiments did not replicate a structure with 18 emotion categories. This was expected because such a solution would require that almost all of the variance in the included features would be explained by emotion expressions. However, all methods lead to meaningful structures that could often be interpreted in relation to emotion categories. Traditional methods based on k-means and hierarchical clustering proposed a two-factor solution which was interpreted using decision trees. This approach revealed that AU6 (cheek raiser) was the most relevant feature at the first decision node. This guided our interpretation of the two clusters as largely representing positive and negative valence, although expressions of negative emotions that shared key features with positive emotions did also end up in the 'positive' cluster. The exploratory dimension reduction methods gave further insights. PCA results could largely be interpreted in terms of high vs. low arousal expressions, although Figure <ns0:ref type='figure'>7</ns0:ref> also revealed that portrayals of the same emotion tended to be close together. Results from the t-SNE and UMAP analyses could also be interpreted in terms of arousal, valence, and emotion -but they also revealed that person and gender specific aspects contributed to clustering. For example, portrayals from the same actor tended to be close to each other, as did portrayals by male and female actors, respectively (see Figure <ns0:ref type='figure'>8</ns0:ref> and Figure <ns0:ref type='figure'>9</ns0:ref>). One conclusion that can be drawn is that both emotional and non-emotional variability is likely to play a role in unsupervised classification of emotions (see <ns0:ref type='bibr' target='#b34'>Li &amp; Deng, 2020)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where within-person normalization of features is often not a feasible solution. Future research could focus on ways to minimize the impact of feature variability that is not directly related to the expression of emotions.</ns0:p><ns0:p>Our stimuli consisted of actor portrayals recorded in a studio, so future research is required to investigate which of the results will generalize to naturalistic conditions. Studies using spontaneous expressions are important because recent research suggests that there may be small but systematic differences between how emotions are conveyed in actor portrayals vs. spontaneous expressions (e.g., <ns0:ref type='bibr' target='#b28'>Juslin et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Krumhuber et al., 2021a)</ns0:ref>. Another limitation of our study was that we did not fully take advantage of the dynamic nature of the stimuli, and only used such temporal dynamics information that was directly encoded in the features. Future studies could instead track the dynamic changes of features over time and use analysis methods that take advantage of this information (e.g., long short-term memory recurrent neural networks; see <ns0:ref type='bibr' target='#b62'>W&#246;llmer et al., 2013;</ns0:ref><ns0:ref type='bibr'>Zhao et al., 2019)</ns0:ref>. While the GEMEP data set is relatively small, openly available huge data sets could in the future enable modeling of emotion expressions via transformers and attention-based mechanisms (e.g., <ns0:ref type='bibr' target='#b53'>Siriwardhana et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b60'>Vaswani et al., 2017)</ns0:ref>. 
Even for moderately sized data sets, such modeling could be useful, as we have recently shown in another context <ns0:ref type='bibr' target='#b22'>(Gogoulou et al., 2021)</ns0:ref>.</ns0:p><ns0:p>With an increase in the number of human-robot interactions with an emotional component <ns0:ref type='bibr' target='#b52'>(Shum et al., 2018</ns0:ref>) -e.g., recognizing an angry customer in dialog with a chatbot -the future will hold more opportunities for data-driven reasoning. Besides software robots, there is also an increase in the number of filmed interactions between humans and physical robots and Manuscript to be reviewed Computer Science corresponding studies of human-robot cooperation <ns0:ref type='bibr' target='#b13'>(Crandall et al., 2018)</ns0:ref>. Studies into unlabeled data from such sources can be done using self-supervised learning. Such studies may assist in understanding the importance of correctly interpreting emotions, and will likely also become more common and potentially have important societal implications. This further motivates deep dives into the methodology, of the kind that we have attempted here.</ns0:p><ns0:p>To conclude, we propose that research on emotion expression could benefit from augmenting supervised methods with unsupervised clustering. The combination of methods leads to more insight into how data is organized, and we especially note that the investigation of explanatory patterns could be a valuable meta-heuristic that can be applied to several classification areas. We believe that advances in the basic science of understanding how emotions are expressed requires Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:p>Summary of the most important features (Facial Action Units and functionals) for classification of each emotion for the best performing video classifier (random forest).</ns0:p><ns0:p>See Figure <ns0:ref type='figure'>S8</ns0:ref> Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:p>Fragment of the decision tree used to interpret the clustering.</ns0:p><ns0:p>The model was trained on the output of k-Means (k = 2; before dimensionality reduction).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:p>PCA 2D visualization of the multimodal dataset colored by emotion.</ns0:p><ns0:p>Note that 18 non-unique colors were used. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>. The majority of classification studies have been performed on unimodal stimuli (either vocal or facial), but PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>-are also extracted. This selection of vocal and facial features PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>, although few previous studies have included all 18 emotions. Feature extraction and pre-processing PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>for more details). 
The percentage of instances with unfavorable PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>, tdistributed Stochastic Neighbor Embedding (t-SNE; van der Maaten &amp; Hinton, 2008), and Uniform Manifold Approximation and Projection (UMAP; McInnes et al., 2018).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>. When the CH index and the Silhouette score were used once again to determine the number of clusters, results were unchanged from before PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>the best audio classifiers after hyperparameter tuning. Elastic Net with the eGeMAPS parameter set outperformed the rest of the models with an average validation AUC of 0.8196. Most of the classifiers did not suffer from overfitting since their training AUCs were generally close to their correspondent validation ones. However, this was not always the case. Random Forest was prone to reach very high training AUCs and lower validation ones. PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>emotion and then averaged. The interpretation of feature contribution values coincides with the PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>1 https://projector.tensorflow.org/?config=https://raw.githubusercontent.com/marferca/multimodal-emotionrecognition/main/4.unsupervised_learning/exploratory_approach/tf_embedding_projector/projector_config.json PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>also reveals how the actor's PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Future studies are needed to investigate how well the obtained patterns may generalize to other data sets and other classification methods. PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>, especially in person-independent approaches PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>continued efforts regarding both the development of more representative stimulus materials and of more representative vocal and facial features. Machine learning methods could play a vital role in the study of how emotions are expressed nonverbally through the voice and the face. PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>for a complete list that includes all features. Functionals: M = arithmetic mean, CV = coefficient of variation, peaks = number of peaks. PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>k-Means for k = 2 (top left) and hierarchical clustering for k = 2 and complete distance method (top right) before dimensionality reduction, with percentage of files per emotion and cluster. The bottom two graphs show k-Means for k = 2 (bottom left) and hierarchical clustering for k = 2 and weighted distance method (bottom right) after dimensionality reduction. adm = admiration, amu = amusement, ang = anger, anx = anxiety/worry, con = contempt, des = despair, dis = disgust, fea = panic fear, int = interest, irr = irritation, joy = joy/elation, ple = sensual pleasure, pri = pride, rel = relief, sad = sadness, sha = shame, sur = surprise, ten = tenderness. PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,204.37,525.00,159.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Manuscript to be reviewed</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell>AUC (train)</ns0:cell><ns0:cell>AUC (validation)</ns0:cell></ns0:row><ns0:row><ns0:cell>Elastic Net (ext.)</ns0:cell><ns0:cell>0.9634 &#177; 0.0024</ns0:cell><ns0:cell>0.8662 &#177; 0.0150</ns0:cell></ns0:row><ns0:row><ns0:cell>Elastic Net (min.)</ns0:cell><ns0:cell>0.9401 &#177; 0.0029</ns0:cell><ns0:cell>0.8600 &#177; 0.0174</ns0:cell></ns0:row><ns0:row><ns0:cell>k-NN (ext.)</ns0:cell><ns0:cell>0.8764 &#177; 0.0057</ns0:cell><ns0:cell>0.8334 &#177; 0.0255</ns0:cell></ns0:row><ns0:row><ns0:cell>k-NN (min.)</ns0:cell><ns0:cell>0.8476 &#177; 0.0077</ns0:cell><ns0:cell>0.8285 &#177; 0.0259</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree (ext.)</ns0:cell><ns0:cell>0.7772 &#177; 0.0093</ns0:cell><ns0:cell>0.7320 &#177; 0.0280</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree (min.)</ns0:cell><ns0:cell>0.7936 &#177; 0.0130</ns0:cell><ns0:cell>0.7321 &#177; 0.0376</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest (ext.)</ns0:cell><ns0:cell>0.9998 &#177; 0.0000</ns0:cell><ns0:cell>0.8550 &#177; 0.0222</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest (min.)</ns0:cell><ns0:cell>1.0000 &#177; 0.0000</ns0:cell><ns0:cell>0.8555 &#177; 0.0225</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Note. ext = extended feature set (eGeMAPS + AUs); min = minimal feature set (GeMAPS +</ns0:cell></ns0:row><ns0:row><ns0:cell>AUs).</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62753:1:1:NEW 3 Sep 2021)</ns0:note> </ns0:body> "
"Stockholm, August 31, 2021 Dear Dr. Ben, This letter accompanies the revised submission to PeerJ Computer Science of the manuscript “Comparing supervised and unsupervised approaches to multimodal emotion recognition” (CS-2021:06:62753:0:1) which received an editorial decision that minor revisions were required. We are grateful for the opportunity to revise this manuscript. We thank all 3 reviewers for their constructive comments, and believe that the manuscript has improved during the revision process. Below, we include point-by-point responses to all reviewer comments. For easier reference, we have marked all changes in the manuscript files with red font color (in the version with tracked changes). We hope that by addressing the comments, our paper is now suitable for publication in PeerJ Computer Science. Sincerely The authors ---------------- Below we provide detail on the changes we made to the manuscript. For reference, we include the original comments in italic typescript. Reviewer 1 R1:1 Basic reporting The writing of the paper manuscript is clear and well organized. But there are some problems to be addressed. The citation of some references should be complete, including pages (e.g. Srinivasan, R., & Martinez, A. M. (2018).) Some of the figures should be resized (e.g. Figure 6). Thank you for the positive evaluation! We have added information in the reference list, where needed, and all references should now be complete. Note that some papers (e.g., Srinivasan & Martinez, 2018) are published online ahead of print and thus do not yet have page numbers, and yet others (e.g., Azari et al., 2020) have article numbers instead of page numbers. Doi-links are provided when applicable. Regarding figures, we have uploaded high-definition png-files in the submission portal. Unfortunately, the automatic pdf-conversion did not manage to preserve the high definition of all figures. However, we believe that the figures will preserve the required quality (so that the reader is able to zoom in on details) when the paper moves into the production stage. R1:2. Experimental design - The implementation of the methods should be described with sufficient detail for replication, including the setting of parameters. More recently state-of-the-art methods on supervised and unsupervised learning should be considered for comparison. We have added additional details in the Methods section (based on the reviewers comments, detailed below). We have also added a sentence right at the beginning of the Methods section which clarifies that additional information about the methods is available on GitHub: “Data, code, and additional computational information are openly available on GitHub (see Data Availability statement).” This is further clarified in the Data Availability statement: “The data and code that support the findings of this study are openly available on GitHub: https://github.com/marferca/multimodal-emotion-recognition. 
Data consists of the extracted vocal and facial features, and we provide the code needed to reproduce the machine learning analyses.” Regarding the point about “inclusion of more recent state-of-the art” methods, we have added several new references in the Introduction which we believe give a representative picture of the state-of-the-art in multimodal emotion recognition: “The majority of classification studies have been performed on unimodal stimuli (either vocal or facial), but combining features from several modalities has been shown to increase classification accuracy (see D´Mello & Kory, 2015, for a meta-analysis). The number of multimodal classification studies is steadily increasing (Poria et al., 2017), and recent studies explore a wide variety of approaches (e.g., Bhattacharya et al., 2021; Lingenfelser et al., 2018; Mai et al., 2020; Siriwardhana et al., 2020; Tzirakis et al., 2017; Wang et al., 2020).” We now also discuss state-of-the-art methods in the Discussion section: “While the GEMEP data set is relatively small, openly available huge data sets could in the future enable modeling of emotion expressions via transformers and attention-based mechanisms (e.g., Siriwardhana et al., 2020; Vaswani et al., 2017). Even for moderately sized data sets, such modeling could be useful, as we have recently shown in another context (Gogoulou et al., 2021).” R1:3. Validity of the findings - The experimental results are provided. But the contribution and novelty of the article are not clear. We have rephrased and added content in the Discussion section. It now begins with a paragraph that both clarifies the main contributions of the study and provides a summary of the main findings: “We investigated classification of 18 emotions – portrayed by 10 actors through vocal and facial expressions – using person-independent supervised and unsupervised methods. Our study makes three main contributions to the literature. First, results from the supervised experiments showed that multimodal classifiers performed better than unimodal classifiers and were able to classify all emotions, although recognition rates varied widely across emotions. This indicates that the combinations of vocal and facial features that were used for classification varied systematically as a function of emotion, and that the signal was reliable enough to allow for above chance classification of all 18 emotions. Second, we utilized our machine learning approach to present new data on multimodal feature patterns for each emotion, in terms of the features that contributed most to classifier performance. Third, we explored how wholly unsupervised classifiers would organize the emotion expressions, based on the same features that were used for supervised classification. Several meaningful explanatory patterns were observed and interpreted in terms of valence, arousal, and various actor- and sex-specific aspects. The comparison of supervised and unsupervised approaches allowed us to explore how different methodological choices may provide different perspectives on how emotion expressions are organized.” We have also rephrased the concluding paragraph, so that the take-home message is presented in a clearer way: “To conclude, we propose that research on emotion expression could benefit from augmenting supervised methods with unsupervised clustering. 
The combination of methods leads to more insight into how data is organized, and we especially note that the investigation of explanatory patterns could be a valuable meta-heuristic that can be applied to several classification areas.” R1:4. Additional comments The authors investigated the supervised and unsupervised emotion classifier performance. The comments should be addressed before further processing. No response required. Reviewer 2 R2:1. Basic reporting The content of this article is acceptable on the general with novel research topic and abundant detailed experimental results. Yet it still requires certain revisions to meet the publication standard, which are listed below in several aspects. 1. Language description: The descriptions in this article are unclear and hard to unsderstand at multiple places, such as Lines 56-57, 105-110, 229-231, 237-239, 372-374, etc. Please check the language used in the article. We have corrected typos and rephrased the sentences mentioned above and hope that the language is now clearer. For example, there was a typo in Lines 372-374, where we had mixed up left and right, and incorrectly referred to the left side of Figure 9 twice – which left the meaning of the sentence unclear. We have also clarified the language in a number of other places through word replacements, or a new and better phrasing (all changes are marked in red in the manuscript file). R2:2. Literature references: This article provided various prior works and references, however, the methods compared in the experiments are not fully included. For example, the late fusion methods mentioned (maximum rule, sum rule, product rule, weight criterion, rule-based, and model-based methods) have not been referenced, and the early fusion methods have not been explicited presented. Please properly cite all evaluated works in this paper. We have added new references that explain the various fusion methods: Atrey, P. K., Hossain, M. A., El Saddik, A., & Kankanhalli, M. S. (2010). Multimodal fusion for multimedia analysis: A survey. Multimedia Systems, 16(6), 345-379. 10.1007/s00530-010-0182-0 Dong, X. L., Gabrilovich, E., Heitz, G., Horn, W., Murphy, K., Sun, S., & Zhang, W. (2015). From data fusion to knowledge fusion. Proceedings of the VLDB Endowment, 7(10), 881-892. 10.14778/2732951.2732962 R2:3. Article structure: This article is organized in clear and reasonable structure. However, the discussion section should be more compact and conclusive, highlighting the major findings and contributions of this article. We have rephrased and added content in the Discussion section, see our response to Reviewer 1 (R1:3) for details. We believe that the Discussion now highlights the main findings and contributions of our study. R2:4. Experimental design Various experiments have been conducted for this research. Minor improvements are suggested as follows: 1. The optimal numbers of clusters for the traditional unsupervised approaches are said to be 2 or 6 through CH index and Silhouette score analysis (Lines 220-225). Experimental results should be presented to validate this statement. We have rewritten this section and now clarify that the experimental results are presented in supplemental Figure S3: “Figure S3 shows the obtained CH(k) and sscore(k) for k-means before dimensionality reduction, and indicates that the estimated number of clusters was 2 for both the CH index and Silhouette score techniques. 
For hierarchical clustering, the CH index demonstrated that the best value was 2 for the single and complete distance methods, but 6 for the weighted method, whereas the Silhouette score consistently demonstrated that the best value was 2 (see Figure S3). For both k-means and hierarchical clustering we selected the number of clusters that maximized the score.” R2:5. In feature extraction and pre-processing section, certain data cleaning process has been conducted (Line 148). Please explain the reason for such process. We have added more details about the data cleaning procedure, and also added a new figure (Figure S2) that shows the proportion of instances per confidence rate before data cleaning: “We removed instances where AU detection was deemed unreliable. The OpenFace toolkit provides two indicators per instance that aided the data cleaning process: the confidence and success rates. The former refers to how reliable the extracted features are (continuous value from zero to one), whereas the latter denotes if the facial tracking is favorable or not (binary value). Taking this into consideration, instances with a confidence rate lower than 0.98 or an unfavorable success rate were dropped. Ninety percent of instances received a confidence rating of 0.98 or higher (see Figure S2 for more details). The percentage of instances with unfavorable success rate was very low (0.58%). In total, the number of instances decreased by 9.94% after the cleaning process and caused the deletion of one entire file.” R2:6. Multiple features for AUs are computed as video features (Lines 154-157), please give explicit definition or computational detail of each feature. We have added more details about how the OpenFace AU detection works: “OpenFace offers an extensive range of parameters such as facial landmark detection, head pose estimation, and eye-gaze estimation. However, our study focused on Facial Action Units (AUs) (Ekman & Friesen, 1977) and the toolkit provides the intensity of 17 AUs per frame (i.e., 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, & 45). AU detection is based on pre-trained models for the detection of facial landmarks, and uses dimensionality-reduced histograms of oriented gradients (HOGs) from face image and facial shape features in Support Vector Machine analyses (for details, see Baltrušaitis et al., 2015; 2018).” We also clarify that the functionals were calculated based on the framewise intensity scores provided by OpenFace: “Secondly, video instances were grouped by file and the framewise feature intensity scores from OpenFace were used to compute the following functionals for each AU …” Additional computational details are available on GitHub, for the interested reader (and the link to GitHub is provided in the Associated Data section). For your reference, the direct link to the relevant GitHub page is: https://github.com/marferca/multimodal-emotion-recognition/blob/main/2.data_preparation/code/video_data_preparation.ipynb R2:7. Validity of the findings no comment Thank you for your valuable comments! Reviewer 3 R3:1 Basic reporting Very good paper. A few slight comments: I understand that one of the strength of this study is to investigate the variety of emotional categories and to show the importance of each features (voice + face). Thank you for the positive evaluation! R3:2 A little more introduction should be discussed in comparison with previous studies (see https://github.com/EvelynFan/AWESOME-MER) that adopted a multimodal approach. 
Thank you for the link, this is a very nice collection of papers which will be very useful for us! We have added several new references to recent multimodal emotion recognition studies (found through the link and elsewhere), see our response to the previous comment by Reviewer 1 (R1:2). R3:3 Knowledge of features goes to application to robots. I felt that it would be good if there was a discussion about Human-robot interaction in the Discussion section. We agree and have added a brief discussion about human-robot interaction: “With an increase in the number of human-robot interactions with an emotional component (Shum et al., 2018) – e.g., recognizing an angry customer in dialog with a chatbot – the future will hold more opportunities for data-driven reasoning. Besides software robots, there is also an increase in the number of filmed interactions between humans and physical robots and corresponding studies of human-robot cooperation (Crandall et al., 2018). Studies into unlabeled data from such sources can be done using self-supervised learning. Such studies may assist in understanding the importance of correctly interpreting emotions, and will likely also become more common and potentially have important societal implications. This further motivates deep dives into the methodology, of the kind that we have attempted here.” R3:4 Barrett et al (2019: Line 80-81) strictly criticizes emotion recognition in emotional categories like this paper. Citing Keltner is preferable as a position, so I recommend to cite it (e.g., https://psycnet.apa.org/record/2017-30838-004; http://emotionresearcher.com/the-great-expressions-debate/) In this particular citation, we refer specifically to the observation drawn in the review paper by Barrett et al. (2019) that “facial cues (e.g., smiles) are only probabilistically associated with any one emotion (e.g., happiness), and that similar configurations of facial movements can be associated with more than one emotion”. While the paper in general is critical regarding the value of emotion category classification, we cite it for the sake of the above observation. We have previously made the similar argument for vocal expressions (Juslin & Laukka, 2003), and the point we wish to make is that both facial and vocal cues may be only probabilistically linked to any specific emotion. So, in short, we would like to keep this citation. By the way, we do cite papers by Keltner elsewhere in the manuscript :-) R3:5 Experimental design no comment Validity of the findings 'actor and gender specific aspects also contributed to clustering' What are the implications of this finding? The Unsupervised method seems just to report the characteristics of the stimulus (GEMEP). I want the author to clarify that necessity. This is an interesting point for discussion, and we have tried to address it throughout the Discussion section (see below). In a sense, both the supervised and the unsupervised methods report the characteristics of the stimuli – because they are both based on the same set of features. 
However, they do provide different perspectives and we now try to clarify what each method contribute, in the Discussion section: The results from the supervised analyses indicated that: “… the combinations of vocal and facial features that were used for classification varied systematically as a function of emotion, and that the signal was reliable enough to allow for above chance classification of all 18 emotions.” Whereas the findings from the unsupervised methods: “did not replicate a structure with 18 emotion categories. This was expected because such a solution would require that almost all of the variance in the included features would be explained by emotion expressions. However, all methods lead to meaningful structures that could often be interpreted in relation to emotion categories.” Thus our conclusion is that the: “comparison of supervised and unsupervised approaches allowed us to explore how different methodological choices may provide different perspectives on how emotion expressions are organized.” We further discuss the implications of the different solutions provided by the supervised and unsupervised methods: ”One conclusion that can be drawn is that both emotional and non-emotional variability is likely to play a role in unsupervised classification of emotions (see Li & Deng, 2020), especially in person-independent approaches where within-person normalization of features is often not a feasible solution. Future research could focus on ways to minimize the impact of feature variability that is not directly related the expression of emotions.” "
Here is a paper. Please give your review comments after reading it.
306
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Epicardial fat (ECF) is localized fat surrounding the heart muscle or myocardial and enclosed by a thin-layer membrane of pericardium. Segmenting the ECF is one of the most difficult medical image segmentation tasks. Since the epicardial fat is infiltrated into the groove between cardiac chambers and is contiguous with cardiac muscle, segmentation requires location and voxel intensity. Recently, deep learning methods have been effectively used to solve medical image segmentation problems in several domains with state-of-the-art performance. This paper presents a novel approach to 3D segmentation of ECF by integrating attention gates and deep supervision into the 3D U-Net deep learning architecture. The proposed method shows significant improvement of the segmentation performance, when compared with standard 3D U-Net. The experiments show excellent performance on non-contrast CT datasets with average Dice scores of 90.06%. Transfer learning from a pre-trained model of a non-contrast CT to contrast-enhanced CT dataset was also performed. The segmentation accuracy of contrast-enhanced CT dataset achieved Dice score of 88.16%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Epicardial fat (ECF) is localized fat surrounding the heart muscle and enclosed by the thin-layer pericardium membrane. The adipose tissue located outside pericardium is called pericardial fat that is contiguous with other mediastinal fat (Fig <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>). ECF is the source of proinflammatory mediators and promotes the development of atherosclerosis of coronary arteries. The clinical significance of the ECF volume lies in its relation to major adverse cardiovascular events. Thus, measuring its volume is important in diagnosis and prognosis of cardiac conditions. ECF volume can be measured in non-contrast CT images (NCCT) with coronary calcium scoring and in contrast-enhanced CT images (CECT) with coronary CT angiography (CCTA). However, accurate measurement of ECF is challenging. The ECF is separated from other mediastinal fat by thin layer pericardium. The pericardium is often not fully visible in CT images, which makes the detection of the boundaries of ECF difficult. ECF can also be infiltrated into grooves between cardiac chambers and is contiguous to the heart muscle. These technical challenges not only make accurate volume estimation difficult but make manual measurement a time consuming process that is not practical in routine use. Therefore, computer-assisted tools are essential to reduce the processing time for ECF volume measurement. Automated segmentation could potentially make ECF volume estimation more practical on a routine basis. Several approaches based on prior medical knowledge or non-machine learning techniques have been proposed for ECF segmentation, including genetic algorithms, region-of-interest selection with thresholding, and fuzzy c-mean clustering <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref><ns0:ref type='bibr' target='#b1'>[2]</ns0:ref><ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Deep learning techniques have been applied to a wide variety of medical image segmentation problems with great success <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref><ns0:ref type='bibr' target='#b4'>[5]</ns0:ref><ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. 
A recent article <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> demonstrates that deep learning algorithms outperform conventional methods for medical image segmentation in terms of accuracy. But most previous studies involved large solid organs or tumor segmentation <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref><ns0:ref type='bibr' target='#b8'>[9]</ns0:ref><ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. The segmentation of relatively small and complex structures with high inter-patient variability, such as ECF, has been far less successful. Recently, a few deep learning approaches to ECF segmentation have made progress on this problem <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref><ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>. In this paper we build upon the previous work by presenting a novel deep learning model for 3D segmentation of ECF.</ns0:p><ns0:p>In this paper, we propose a solution of automatic segmentation of ECF volume using a deep learning based approach, in non-contrast and contrast-enhanced CT datasets. The NCCT dataset uses coronary calcium scoring and the CECT dataset uses contrast-enhanced coronary CT angiography (CE-CCTA). The model is first learned from scratch on the NCCT dataset with coronary calcium scoring CT. To cover the entire heart, it is scanned in 64 slices with 2.5 mm thickness on each acquisition. Then, the model pre-trained on that NCCT dataset is transferred to the CECT dataset which uses CE-CCTA. The CE-CCTA study is performed in 256 slices with 0.625 mm thickness One of the key contributions of this paper is to validate the performance of our new developed 3D CNN-based approach on these difficult tasks. Since segmentation of ECF requires utilization of both voxel intensity and location information, we integrate two attention gate (AG) and deep supervision modules (DSV) on a standard 3D U-Net. Our proposed model has better performance than the recent state-of-the-art approaches because of the integration of AG and DSV modules. The AG module is used to focus on the target structures by suppressing irrelevant regions in the input image. The DSV module is used to increase the number of learned features by generating a secondary segmentation map combining from different resolution levels of network layers. The second main contribution is the use of transfer learning, taking a model pretrained on NCCT data, and applying it to CECT data, using only a small amount of data for the re-training. This approach has benefits in clinical applications for both NCCT and CECT data for ECF segmentation. Furthermore, our proposed solution is 3D-based and does not require preprocessing and postprocessing steps, thus it can easily integrate into the clinical workflow of CT acquisition to rapidly generate ECF volume results for the physician in clinical practice.</ns0:p></ns0:div> <ns0:div><ns0:head>Related works</ns0:head><ns0:p>Conventional non-machine learning methods have been proposed for ECF segmentation. Rodrigues et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> proposed a genetic algorithm to recognize the pericardium contour on CT images. Militello et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> proposed a semi-automatic approach using manual region-of-interest selection followed by thresholding segmentation. Zlokolica et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed local adaptive morphology and fuzzy c-means clustering. 
However, these conventional methods required many preprocessing steps before entering the segmentation algorithm. The next evolution of ECF segmentations were performed with a machine learning approach. Rodrigues et al. <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> proposed ECF segmentation in CECT images using the Weka library (an open-source collection of machine learning algorithms) with Random-Forest as the classifier. The experiment, performed on 20 patients, yielded a Dice score of 97.7%. Commandeur et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> proposed ECF segmentation from non-contrast coronary artery calcium computed tomography using ConvNets. They reported the Dice score of 82.3%.</ns0:p><ns0:p>To improve the performance of medical image segmentation, several modifications of U-Net have been proposed. The spatial attention gate has been proposed to focus on the spatial and detailed structure of the important region varying in shape and size <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. Schlemper et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> demonstrate the performance of the attention U-Net on real-time fetal detection on 2D images and pancreas detection on 3D CT images. He et al. <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> proposed ECF segmentation from CE-CCTA using a modified 3D U-Net approach by adding attention gates (AG). AGs are commonly used in classification tasks <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref><ns0:ref type='bibr' target='#b16'>[17]</ns0:ref><ns0:ref type='bibr' target='#b17'>[18]</ns0:ref><ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> and have been applied for various medical image problems such as image classification <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref><ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>, image segmentation <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref><ns0:ref type='bibr' target='#b15'>[16]</ns0:ref><ns0:ref type='bibr' target='#b16'>[17]</ns0:ref><ns0:ref type='bibr' target='#b17'>[18]</ns0:ref><ns0:ref type='bibr' target='#b18'>[19]</ns0:ref><ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>, and image captioning <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>. AG is used to focus on the relevant portion of the image by suppressing irrelevant regions <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. The integration of AG on the standard U-Net <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b10'>11,</ns0:ref><ns0:ref type='bibr' target='#b14'>15,</ns0:ref><ns0:ref type='bibr' target='#b20'>21]</ns0:ref> or V-Net <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr' target='#b21'>22]</ns0:ref> has been demonstrated to have benefits for region localization.</ns0:p><ns0:p>As mentioned above, the ECF has a complex-shaped structure. Some parts contain a thin layer adjacent to the cardiac muscle, which is similar to the microvasculature of the retinal vascular image visualized as small linear structures. In order to improve the performance of segmentation of small structures, several modules have been integrated into the main architecture of U-Net and V-Net such as dense-layer and deep supervision modules <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref><ns0:ref type='bibr' target='#b21'>[22]</ns0:ref><ns0:ref type='bibr' target='#b22'>[23]</ns0:ref><ns0:ref type='bibr' target='#b24'>[24]</ns0:ref><ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. 
The Dense-layer <ns0:ref type='bibr' target='#b22'>[23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref> has been used to enhance the segmentation result instead of the traditional convolution in the U-Net model. Deep supervision <ns0:ref type='bibr' target='#b20'>[21,</ns0:ref><ns0:ref type='bibr' target='#b21'>22,</ns0:ref><ns0:ref type='bibr' target='#b25'>25]</ns0:ref> was used to improve local minimal traps during the training. The deep supervision helps to improve model convergence and increase the number of learned features <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. Kearney et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> showed that addition of deep supervision added to the U-Net model could improve the performance of 3D segmentation in CT image of prostate gland, rectum, and penile bulb.</ns0:p><ns0:p>While 2D and 3D deep learning approaches have been used for medical image segmentation, 3D approaches have typically shown better performance than the 2D approaches <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b8'>9,</ns0:ref><ns0:ref type='bibr' target='#b26'>26]</ns0:ref>. For example, Zhou et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> demonstrated the better performance of 3D CNN approaches on multiple organs on 3D CT images, when compared to the 2D based method. Starke et al. <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> also demonstrated that 3D CNN achieved better performance on segmentation of head and neck squamous cell carcinoma on CT images. Woo et al. <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> demonstrated that 3D U-Net provided better performance on brain tissue MRI images, compared with 2D U-Net, on a smaller training dataset. Therefore, in this paper we use a 3D CNN for segmenting epicardial fat in cardiac CT images.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>CNN architecture</ns0:head><ns0:p>The model architecture is based on a 3D U-Net model composed of multiple levels of encoding and decoding paths. The initial number of features at the highest layers of the model is 32. The numbers of feature maps are doubled with each downsampling path. In addition to the original U-Net architecture, we added an attention gate connecting the encoding and decoding paths and deep supervision at the final step of the network. The model is created on a fully 3D structure at each network level. The final layer is an element-wise sum of feature maps of two last decoding paths. The segmentation map of two classes (epicardial fat and background) is the output layer with threshold of 0.5 to generate the binary classification of the epicardial fat. The architecture of the proposed network is shown in Fig 2 <ns0:ref type='figure'>.</ns0:ref> Starting with the standard 3D U-Net architecture, the attention gate module connects each layer of encoding and decoding paths. The gating signal (g) is chosen from the encoding path and the input features (x) are collected from the decoding path. To generate the attention map, g and x go through a 1x1x1 convolution layer and element-wise sum, followed by rectified linear unit (ReLu) activation, a channel-wise 1x1x1 convolutional layer, batch normalization and sigmoid activation layer. 
The output of sigmoid activation is concatenated to the input x to get the output of the attention gate module <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b20'>21]</ns0:ref>.</ns0:p><ns0:p>Deep supervision <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr' target='#b21'>22]</ns0:ref> is the module at the final step of the network where it generates the multiple segmentation maps at different resolution levels, which are then combined together. The secondary segmentation maps are created from each level of decoding paths which are then transposed by 1x1x1 convolution. All feature maps are combined by element-wise sum. The lower resolution map is upsampled by 3D transposed convolution to have the same size as the second-lower resolution. Two maps are combined with element-wise sum then upsampled and added to the next level of segmentation map, until reaching the highest resolution level.</ns0:p></ns0:div> <ns0:div><ns0:head>CT imaging data</ns0:head><ns0:p>This experimental study was approved and participant consent was waived by the institutional review board of Siriraj Hospital, Mahidol University (certificate of approval number Si 766/2020). The experimental datasets were acquired from 220 patients with non-contrast enhanced calcium scoring and 40 patients with CE-CCTA. The exclusion criteria were post open surgery of the chest wall. All CT acquisition was performed with the 256-slice multi-detector row CT scanner (Revolution CT; GE Medical Systems, Milwaukee, Wisconsin, United States). The original CT datasets of NCCT and CECT studies were 64 slices in 2.5 mm slice thickness and 256 slices in 0.625 mm slice thickness, respectively. All DICOM images were incorporated into a single 3D CT volume file with preserved original pixel intensity. Due to limitation of GPU memory, the 256 slices of CE-CCTA were pre-processing with rescaling to 64 images in the volume dataset. The final 3D volume dataset in all experiments was 512x512x64. The dataset was raw 12 bits grayscale in each voxel. The area of pericardial fat was defined by fat tissue attenuation inside the pericardium, ranging from -200 HU to -30 HU <ns0:ref type='bibr' target='#b13'>[14,</ns0:ref><ns0:ref type='bibr' target='#b27'>27,</ns0:ref><ns0:ref type='bibr' target='#b29'>28]</ns0:ref>. The groundtruth segmentation of ECF in all axial slices was performed using the 3D slicer software version 4.10.0 by a cardiovascular radiologist with more than 15 years of experience. No additional feature map or augmentation was performed in the pre-processing step. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The experiments were implemented using the pytorch (v1.8.0) deep learning library with Tensorflow backend in Python (v3.6.9). The workflow for network training is illustrated in Fig 3 <ns0:ref type='figure'>.</ns0:ref> The training and testing processes were performed on a cuda-enabled GPU (Nvidia DGX-A100) with 40 GB RAM. The experiments were divided into three scenarios: model validity assessment, NCCT, and CECE experiments. The parameters were the same for all three experiments. The networks were trained with RMSprop optimizer and mean squared error loss.</ns0:p><ns0:p>The training parameters of learning rate, weight decay, and momentum were le-3, le-8 and 0.9, respectively. The initial random seed was set to be 0. 
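To make the attention gate described in the CNN architecture section above more concrete, the following is a minimal PyTorch sketch (PyTorch being the library the authors report using); it is not the authors' implementation. The paper states that the sigmoid output is concatenated to the input x, whereas this sketch applies the more common multiplicative gating of the attention-U-Net literature; the channel sizes, the trilinear resizing of the gating signal, and the module interface are illustrative assumptions.

```python
# Minimal sketch of an additive 3D attention gate (not the authors' code).
# Tensors are assumed to have shape (batch, channels, depth, height, width).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate3D(nn.Module):
    def __init__(self, in_channels_x, in_channels_g, inter_channels):
        super().__init__()
        # 1x1x1 projections of the input features (x) and the gating signal (g)
        self.theta_x = nn.Conv3d(in_channels_x, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv3d(in_channels_g, inter_channels, kernel_size=1)
        # channel-wise 1x1x1 convolution producing a single attention map
        self.psi = nn.Conv3d(inter_channels, 1, kernel_size=1)
        self.bn = nn.BatchNorm3d(1)

    def forward(self, x, g):
        # resize the projected gating signal to the spatial size of x before the element-wise sum
        g_proj = F.interpolate(self.phi_g(g), size=x.shape[2:],
                               mode="trilinear", align_corners=False)
        f = F.relu(self.theta_x(x) + g_proj)
        # sigmoid attention coefficients in [0, 1], one per voxel
        alpha = torch.sigmoid(self.bn(self.psi(f)))
        # gate the features; alpha broadcasts across the channel dimension
        return x * alpha
```

In use, one such gate would sit on each skip connection of the 3D U-Net, so that encoder features are re-weighted by the attention coefficients before being merged into the corresponding decoder level.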
The illustration of the experimental framework is shown in Fig <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>.</ns0:p><ns0:p>The first experiment was the assessment of the model validity, for which we used 5-fold cross validation. The total dataset consisted of 200 volume-sets (12,800 images), divided into five independent folds. Each fold contained 160 volume-sets (10,240 images) for training and 40 volume-sets (2,560 images) for validation, without repeated validation data between folds. The other 20 volume-sets (1,280 images) were left for testing in second and third experiments. The volume matrix of each dataset was 512x512x64 pixels. Then the 5-fold cross validation was performed on standard U-Net, AG-U-Net, DSV-U-Net and the proposed method (AG-DSV-U-Net). For each fold of validation, the model with the best training accuracy after 150 epochs was selected for the validation.</ns0:p><ns0:p>The second experiment was to assess segmentation performance by training the network from scratch with the NCCT dataset. The volume matrix of each dataset was 512x512x64 pixels. To compare the performance of segmentation, this experiment was performed with four model architectures: standard U-Net, AG-U-Net, DSV-U-Net and proposed method (AG-DSV-U-Net). The network was trained with a hold-out method, in which a total of 220 volume-sets (14,080 images) were split into 200 volume-sets (12,800 images) for training and 20 volume-sets (1,280 images) for testing. The model output on the training data was collected at the best accuracy of total 300 epochs, named model-A.</ns0:p><ns0:p>The third experiment was to assess segmentation performance in CECT dataset and to evaluate the effectiveness of transferring the learning from NCCT to CECT datasets. The pretraining 3D model (model-A) was trained on large calcium scoring NCCT datasets. The key success of the transfer learning on 3D U-Net is to fine-tune only the shallow layers (contracting path) <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref> instead of the whole network. This contracting path represents a more low-level feature of the network <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. The retraining dataset requires only a small amount of data -in our case only 20 volume-sets of CECT data. These retraining datasets are not from the same cases as used in the pre-trained model. The original volume matrix of each dataset was 512x512x256 pixels. Due to the limitation of GPU memory, the pre-processing step is voxel rescaling from 256 to 64 slices in the z plane. To compare the performance of segmentation, this experiment was performed with four model architectures: standard U-Net, AG-U-Net, DSV-U-Net and proposed method (AG-DSV-U-Net). The network was trained with a hold-out method, in which the total 40 volume-sets <ns0:ref type='bibr'>(</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>The performance of our proposed CNN segmentation is compared with the performance of the existing methods. The evaluation was quantitatively evaluated by comparison with the reference standard using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC) and Hausdroff distance (HD). An average HD value was calculated using the insight toolkit library of 3D slicer. Differences in the comparison coefficient among the four groups of experiments (standard U-Net, AG-U-Net, DSV-U-Net and AG-DSV-U-Net) were assessed with a paired Student's t-test. P values &lt;0.05 indicated a statistically significant difference. 
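For reference, the overlap metrics used in the evaluation can be computed directly from binary masks. The snippet below is an illustrative NumPy implementation of the Dice and Jaccard coefficients, not the evaluation code used by the authors; the Hausdorff distance, which the paper obtains from the Insight Toolkit via 3D Slicer, is omitted here.

```python
# Illustrative computation of Dice (DSC) and Jaccard (JSC) on binary 3D masks;
# not the authors' evaluation code. Inputs are 0/1 volumes of identical shape.
import numpy as np

def dice_and_jaccard(pred, gt, eps=1e-7):
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    jsc = intersection / (union + eps)
    return 100.0 * dsc, 100.0 * jsc  # reported as percentages in the paper

# Example usage on a predicted and a ground-truth 512x512x64 mask:
# dsc, jsc = dice_and_jaccard(predicted_mask, ground_truth_mask)
```

Since DSC = 2J/(1+J), the two scores move together; reporting both mainly eases comparison with prior work that uses one or the other.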
Differences in the comparison between DSC of segmentation result and ECF volume were assessed with Pearson's correlation coefficient. The Pearson's values of &lt; 0.3 indicated poor correlation, 0.3 to 0.7 indicated moderate correlation, and &gt; 0.7 indicated good correlation.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>The patient demographics are shown in Table <ns0:ref type='table'>1</ns0:ref>. The training dataset of NCCT has an average age of 61.43 years and an average volume of 135.75 ml. The testing dataset of noncontrast CT has a similar distribution, with an average age of 67.80 years and an average volume of 127.59 ml. For the contrast-enhanced dataset, the average ages of training and testing datasets were 65.85 and 60.85 years, respectively. The average volumes of epicardial fat of training and testing datasets were 117.13 and 121.43 ml, respectively. Five-fold cross validation experiments on our NCCT dataset were used to evaluate the validity and repeatability performance of the proposed method. The dataset was split into training (80%) and validation (10%) for each fold. On each model architecture, the validation data exhibits good results across each fold. The proposed method also demonstrates the best average performance (DSC = 89.02), when compared with other methods (p&lt;0.05). (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The experimental result of the NCCT dataset is shown in Table <ns0:ref type='table'>3</ns0:ref>. The proposed CNNbased method for ECF segmentation on the non-contrast dataset demonstrates excellent results, achieving average DSC, JSC, HD values of 90.06&#177;4.60, 82.42&#177;6.91 and 0.25&#177;0.14, respectively. The baseline of the experiment is the standard 3D U-Net which demonstrates good results with DSC, JSC and HD values of 84.87&#177;5.73, 74.12&#177;8.0.8 and 0.34&#177;0.18, respectively. The segmentation results of the modified U-Net models (AG-U-Net, DSV-U-Net and the proposed method) demonstrate statistically significant improvement compared with the standard U-Net (p&lt;0.05). The difference between segmentation results of AG-U-Net and DSV-U-Net is not statistically significant (p&gt;0.05). The DSC, JSC, HD values of AG-U-Net are 89.59+4.45, 81.41+6.77 and 0.27+0.12, respectively. The proposed method statistically improved the segmentation result (p&lt;0.05) compared with the standard U-Net and AG-U-Net. While the proposed method is better than DSV-U-Net, the difference is not statistically significant (p&gt;0.05). Examples of segmentation results of the proposed method are shown in Fig <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> The experimental result of the proposed method on the CECT dataset is shown in Table <ns0:ref type='table'>4</ns0:ref>. This transfer learning approach achieved average DSC, JSC and HD values of 88.16&#177;4.57, 79.10&#177;6.75 and 0.28&#177;0.20, respectively (Table <ns0:ref type='table'>4</ns0:ref>). The segmentation result of the proposed method demonstrates statistically significant improvement, when compared with other methods (p&lt;0.0. </ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Segmentation of ECF is a difficult image segmentation task because of the thin layer and complex structures at the outer surface sulci of the heart. The ECF is also variable in distribution, depending on body habitus. In general, obese patients have larger amounts of ECF than do thin patients. 
Segmentation of ECF is more challenging than the segmentation of other cardiac structures.</ns0:p><ns0:p>Most CNN approaches work on 2D images whereas in clinical practice, 3D volume segmentation is used <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. The 2D-based CNN approaches such as ResNet and VGG are not applicable for 3D datasets. The model architectures for 2D CNN and 3D CNN are different <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b8'>9,</ns0:ref><ns0:ref type='bibr' target='#b26'>26]</ns0:ref>. 3D CNN has an advantage over 2D-CNN by extracting both spectral and spatial features simultaneously, while 2D CNN can extract only spatial features from the input data <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. For this reason in general, the 3D CNN is more accurate than 2D <ns0:ref type='bibr' target='#b3'>[4,</ns0:ref><ns0:ref type='bibr' target='#b32'>31]</ns0:ref>. 2.5D CNN has been developed to solve the memory consumption problem of 3D models <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref>. 2.5D CNN has at least three approaches <ns0:ref type='bibr' target='#b33'>[32,</ns0:ref><ns0:ref type='bibr' target='#b34'>33]</ns0:ref>. The first is a combination of output of 2D CNN in three orthogonal planes (axial, coronal and sagittal) with majority voting. The second is 2D CNN with 3 or 5 channels from adjacent 3 or 5 slices. Third is 2D CNN with randomly oriented 2D cross sections. In the final step, 2.5D segmentation requires an additional post-processing step to generate 3D output <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>. Recent studies demonstrated that the 3D CNN provides a higher accuracy for image segmentation, when compared with the 2D CNN <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b8'>9,</ns0:ref><ns0:ref type='bibr' target='#b26'>26]</ns0:ref>. However, the 3D CNN requires more resources and time for the model training. For the best performance, we use the 3D CNN in our implementation. The best performing methods for 3D volume segmentation of medical data are U-Net and V-Net. V-Net has more trainable parameters in its network architecture. Recent experimental comparisons of U-Net and V-Net on medical data have not shown statistically significant differences in performance <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b36'>35]</ns0:ref>. However, U-Net is less complex and easier to modify so that additional modules can be used to integrate to the standard U-Net in order to improve the performance.</ns0:p><ns0:p>Several state-of-the-art approaches for CNN-based segmentation of ECF have recently been proposed. Commandeur et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> proposed the first CNN-based method for ECF segmentation in a non-contrast 2D CT dataset using a multi-task convolutional neural network called ConvNets. They reported a Dice score of 82.3% for the segmentation result. He et al. <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> proposed another CNN-based method on a 3D CECT dataset using AG integrated into 3D U-Net. The segmentation result was reported to have a DSC of 88.7% <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. We repeated the experiment by implementing the AG in 3D U-Net on our NCCT dataset by hold-out test. The amount of hold-out train on our dataset (200 volume-sets) is more than the one used in the previous article (150 volume-sets) <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. 
Our 4 layer AG-U-Net method demonstrates significant improved performance with DSC of 89.59%, as compared with 3 layer AG-U-Net of 86.54% (p&lt;0.05). That should be due to more layers of our network. In our implementation, our AG-U-Net has deeper convolutional layers (4 layers) and removes sigmoid at the end of the network. However, the AG integration provides significantly better performance (p&lt;0.05) as compared with standard 3D U-Net (DSC of 84.87%). To the best of our knowledge, our experiment uses the largest volume size of the dataset (512x512x64). We try to improve the accuracy of the segmentation by modifying the standard U-Net architecture. We introduce a novel approach to 3D segmentation of ECF by integrating both AG and DSV modules into all layers of 3D U-Net deep learning architecture. The AGs are commonly used in natural image analysis and natural language processing <ns0:ref type='bibr' target='#b37'>[36,</ns0:ref><ns0:ref type='bibr' target='#b38'>37]</ns0:ref>, which can generate attention-awareness features. The AG module is beneficial for organ localization, which can improve organ segmentation <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. During CNN training, AG is automatically learned to focus on the target without additional supervision <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref>. The AG module can improve model accuracy by suppressing feature activation in irrelevant regions of an input image <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. The AG module is used to make connections between encoding and decoding paths on the standard U-Net. The DSV module is used to deal with the vanishing gradient problem in the deeper layer of CNN <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr' target='#b25'>25]</ns0:ref>. The standard approach provides the supervision only at the output layer. But the DSV module propagates the supervision back to the earlier layer by generating a secondary segmentation map combining from different resolution levels. The losses of this segmentation map is weighted and added to the final loss function that can effectively increase the performance <ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>. The DSV module is used by adding into the decoding path of 3D U-Net. The AG-DSV modules had been implemented in previous work <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr' target='#b21'>22]</ns0:ref> for kidney <ns0:ref type='bibr' target='#b21'>[22]</ns0:ref> tumor segmentation (Kidney Tumor Segmentation Challenge 2019), as well as for liver <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> and pancreas <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> tumor segmentation (Medical Decathlon Challenge 2018).</ns0:p><ns0:p>The experiment demonstrated that our proposed method (AG-DSV-U-Net) achieves excellent performance with average and max DSC values of 90.06% and 95.32%, respectively. Our proposed method also shows a significant improvement of performance (90.06%), when compared with the previous state-of-the-art network (86.54%) on the same dataset (p&lt;0.05). 
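The deep-supervision idea described above, in which losses from coarser auxiliary segmentation maps are weighted and added to the final loss, can be sketched as follows. The number of auxiliary outputs and their weights are illustrative assumptions rather than the authors' exact configuration; the mean squared error criterion follows the loss reported in the training setup.

```python
# Sketch of a deeply supervised loss: auxiliary maps from coarser decoder levels are
# upsampled, compared against the ground truth, and their weighted losses are added
# to the main loss. Weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, target,
                          aux_weights=(0.5, 0.25, 0.125)):
    """main_logits: (B, 1, D, H, W); aux_logits_list: coarser maps; target: (B, 1, D, H, W)."""
    loss = F.mse_loss(torch.sigmoid(main_logits), target)
    for w, aux in zip(aux_weights, aux_logits_list):
        aux_up = F.interpolate(aux, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        loss = loss + w * F.mse_loss(torch.sigmoid(aux_up), target)
    return loss
```

Because the auxiliary terms inject gradient signal directly into the intermediate decoder levels, the deeper layers receive stronger supervision early in training, which is the mechanism behind the improved convergence discussed above.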
The example of the results was shown in Fig 4 <ns0:ref type='figure'>.</ns0:ref> While one might expect the segmentation performance to improve with fat volume of the dataset, unexpectedly, the statistical analysis demonstrates that there is poor correlation between segmentation performance and fat volume (Pearson's correlation 0.2).</ns0:p><ns0:p>The 3D volume size of our dataset is larger (512x512x64) compared with the previous work (512x512x32) <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. For comparative analysis on different numbers of slices and image resolution of dataset, the previous work demonstrates that a 40-slice of volume dataset achieves 1% higher DSC than 32-slice and 24-slice <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. However, the training time is also increased. We set up additional experiments to test the effect of different numbers of slices and image resolution on segmentation performance with our proposed model (AG-DSC-U-Net).The number of training, testing datasets and hyperparameters are the same as defined in the NCCT experiment. The experiment of effect of the number of slices was performed by rescaling the slices with 64, 32 and 16 slices and fixed image resolution with 512x512 pixels. The segmentation results of 64, 32 and 16 slices are DSC 90.06%, 81.76%, 78.93%, respectively. The experiment to determine the effect of different image resolution scales was performed by rescaling the resolution with 512x512, 256x256 and 128x128 pixels, with the number of slices fixed at 64. The segmentation results of 512x512, 256x256 and 128x128 resolution are DSC 90.06%, 86.19% and 83.73% respectively. The 512x512 image resolution and 64 slices still give the best performance, with significant improvement over lower resolution (p&lt;0.05). More slices and higher image resolution of the dataset let the network extract more spatial information that can help to improve segmentation accuracy. Furthermore, because the ECF somewhere is a thin layer along the sulcus of the heart contour, more spatial resolution will improve segmentation accuracy. To give the best performance, we choose the 64 slices for our implementation which is a perfect fit with the original NCCT dataset, having 64 images in each dataset. In the CECT, the original CT dataset had 256 slices and needed to be rescaled to 64 slices. Due to this limitation of the proposed model and current GPU architecture, the voxel size of train and test datasets cannot be extended beyond 64 slices. The other limitation of this study is the size of the dataset: 220 volume-sets for NCCT experiment and 40 volume-sets for CECT experiment. However, the experiment demonstrates the excellent result of the testing.</ns0:p><ns0:p>In clinical practice, the cardiac CT scan can be performed in NCCT or CECT or both studies. For this reason, the ECF can also be either segmentation from NCCT or CECT dataset. To the best of our knowledge, ours is the first implementation of ECF segmentation on NCCT and CECT datasets. In our experiment, we start to train with the NCCT dataset (200 volumesets). We use the concept of transfer learning to re-train with a similar dataset by taking a small amount of the dataset (Fig <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). We re-train the pre-trained NCCT model with a small amount of CECT data (20 volume-sets). We test the model with additional testing of 20 volume-sets. The experimental result achieves good performance with a DSC value of 88.16%. 
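The transfer-learning step described above, fine-tuning only the shallow contracting path of the NCCT-pretrained model on the small CECT set, can be expressed roughly as in the sketch below. The class name, the `encoder` attribute, and the checkpoint file name are hypothetical placeholders introduced for illustration; only the optimizer hyperparameters follow the values reported in the training framework.

```python
# Sketch of the transfer-learning step: load the NCCT-pretrained weights ("model-A"),
# freeze all parameters, then unfreeze only the contracting (encoder) path before
# re-training on the 20 CECT volume-sets. Identifiers below are illustrative assumptions.
import torch

model = AGDSVUNet3D()  # hypothetical class implementing the proposed AG-DSV-U-Net
model.load_state_dict(torch.load("model_A_ncct.pth"))  # hypothetical checkpoint name

for p in model.parameters():
    p.requires_grad = False            # freeze the whole network
for p in model.encoder.parameters():   # unfreeze only the shallow/contracting layers
    p.requires_grad = True

optimizer = torch.optim.RMSprop(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-3, weight_decay=1e-8, momentum=0.9)  # hyperparameters as reported in the paper
```

Restricting the update to the low-level feature extractors keeps the decoder, which already encodes the pericardial shape prior learned from NCCT, intact while adapting the intensity statistics to contrast-enhanced data.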
The performance result is also significantly better than the standard U-Net and AG-U-Net (Table <ns0:ref type='table'>4</ns0:ref>). Our proposed re-trained model demonstrates a good performance as compared with the previous training from scratch (88.7%) <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>.</ns0:p><ns0:p>Future studies could include investigations in more data diversity from multiple CT venders, larger patient variation, and testing the model across different healthcare centers. Further investigation in clinical correlation between CNN segmentation of ECF volume and occurrence of cardiovascular disease would be also interesting research questions. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>The proposed network of epicardial fat segmentation. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Training frameworkPeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:1:1:NEW 6 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>2,560 images) were split into 20 volume-sets (1,280 images) for training and 20 volume-sets (1,280 images) for testing. The output model is collected at the best training accuracy of total 300 epochs, named model-B.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>5). The segmentation results of the standard 3D U-Net and DSV-U-Net demonstrate good similar performance (p&gt;0.05). The segmentation results of the standard 3D U-Net and DSV-U-Net are statistically significantly better than AG-U-Net (p&lt;0.05). Examples of segmentation results of transfer learning with the proposed methods are shown in Fig 5.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 The</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 Example</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 Example</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,280.87,525.00,296.25' type='bitmap' /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:1:1:NEW 6 Oct 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:1:1:NEW 6 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" Sep 28th, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns In particular all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories. We believe the manuscript is now suitable for publication in Peer J. Dr. Thanongchai Siriapisith Professor in Radiology On behalf of all authors. Reviewer 1 Basic reporting 1. Reviewer suggest the authors that the materials and Methods part should have some sub sections for readability. -We split the Material and Method into subsection. 2. Reviewer not sure why the author presented the experimental result of 5-fold cross validation. Are the other experimental results not the cross validation results? Then, the author had better report how to divide the training set and the test set. Currently, it does not seem to be clear whether all the experimental results were cross validation based results or not. - To assess model validity, we use 5-fold cross validation with 200 volume-sets of non-contrast CT data by splitting the dataset into training 160 volume-sets and validation 40 volume-sets for each fold. To get the best performance for model segmentation, we use as much sample data as possible, with the hold-out method, 220-volume sets by splitting the dataset into training 200 volume-sets and testing 20 volume-sets. (page 6, lines 214-230). 3. The author had better describe why those gate and module made the performance improved precisely. - We added sentences to explain why gate and deep supervision module could increase the segmentation performance, in the discussion section of this revised version. (page 9 line 347-358). Experiment design 4. Why the final report in Table 3 are different from table 2. - Table 2 is a report of a hold-out method with the non-contrast CT dataset. Table 3 shows 5-fold cross validation for model validity assessment. For easier understanding, we swapped the order of tables 2 and 3. Finally, table 2 is 5-fold cross validation and tables 3 and 4 are based on a hold-out method of non-contrast and contrast-enhanced CT datasets, respectively. Validity of the finding 5. it should be also noted that the number of the training case is not large enough. - In this revised version, this limitation is explained in the discussion section. (page 10, line 393-395) Reviewer2 Basic reporting 1. Reviewer would encourage the authors to clearly list the 'Clinical Significance' and 'Technical significance' of their work at somewhere within the 'Introduction'. -“Furthermore, our proposed solution is 3D based and does not require preprocessing and postprocessing steps, thus it can easily integrate into the clinical workflow of CT acquisition to rapidly generate ECF volume results for the physician in clinical practice.” (page 3, line 104-106) -“Our proposed model has better performance than the recent state-of-the-art approaches because of integration of AG and DSV modules. Both AG and DSV modules can help to enhance the performance of segmentation. The AG module is used to focus the target structures by suppressing irrelevant regions in the input image. The DSV module is used to increase the number of learned features by generating a secondary segmentation map combining from different resolution levels of network layers.” (page 3, line 96-101) - The introduction section for clinical and technical significance is revised accordingly. 2. 
3D UNet and even 3D attention-based UNet have been around for several years in the community. What kind of novelties this papers is carrying? -The 3D UNet has been shown to be a good tool for medical image segmentation. The novelty of this paper is to propose a revised UNet by adding an attention gate and deep supervision model to all layers of standard Unet. (page 3, line 96-101) 3. While for 2D images the UNet network(s) are computationally efficient, we know that 3D convolutions have huge storage requirements and therefore, end-to-end training is limited by GPU memory and data size. Have authors tried 2.5 UNet? If not, why? - The 2.5D UNet was introduced to solve the GPU memory shortage. However, the overall accuracy of the 3D UNet has been reported to be better. In this work, since we have sufficient GPU memory, the 3D version is chosen. An explanation has been added in this revised version. (page 8, line 310-318) 4. The current work reports that it employed 15 radiologist to manually segment the data. However, I was not able to find the analysis of inter-rater agreement! I mean, the level of agreement among annotators. I would very much like to see the IoU and/or Dice coefficient to better understand the level of agreement. -For data consistency, we use only 1 radiologist specialized in cardiovascular imaging with more than 15 years-experience. So we don’t have an inter-rater agreement. (page 5, line 197-198) 5. How the authors did split the 220 patients to build a training, testing, and validation set? - The 220 cases in the dataset was split into 200 volume-sets for training and 20 volume-sets for testing. (page 6, line 227-229) 6. I would encourage the authors to provide another comparative analysis using different number of slices plus different image scaling. - In this revised version, an additional experiment was conducted to test the different number of slices (64, 32, 16 slices) and image scaling (512, 256, 128 pixels) on the NCCE dataset. The results show that the 512x512 pixel resolution and 64 slices are still the best performing (p<0.05). (page 10, line 375-389) Experimental design 7. Inter-rater agreement among those 15 radiologist is missing - We did not report inter-rater agreement because this work uses only one expert radiologist with 15 years experience for ground truth segmentation. (page 5, line 197-198) 8. Data stratification is missing. - In the NCCT experiment, we use the hold-out method. We split 220 cases into 200 cases for training and 20 cases for testing. (page 6, line 223-231) -In the CECT experiment, we also used the hold-out method, to train the dataset with a pre-trained model learned from the NCCT dataset. We split 40 cases into 20 cases for training and 20 cases for testing. (page 7, line 242-246) -The material and method section was revised accordingly for clear understanding. 9. In general, the amount of data seems to be few, particularly to tackle one of the most difficult medical image segmentation tasks. Have they tried any type of data augmentation, at least for the training set? - This study uses 200 volume CT data for training that is equivalent to 12,800 2D images. We don’t use any data augmentation in preprocessing. The experiment shows a good result on the testing data. However, the amount of data is a limitation of this study. We discuss the limitation in the discussion section. (page 10, line 393-395) 10. Is the improvement level statistically significance? -A discussion of this is provided in the discussion section. 
“Our proposed method also shows a significant improvement of performance (90.06%), when compared with the previous state-of-the-art network (86.54%) on the same dataset (p<0.05)”. (page 10, line 364-365) Validity of the findings 11. To me, this problem is one of the most challenging problems in medical image analysis and interpretation. I expect to see a deeper implementation, more data covering greater diversity, and a generalizable model that can be applied across different healthcare centers. - We agree with the reviewer. However, this would need further extensive collaboration with other healthcare centers and is beyond the scope of the current study. Therefore, this has been added as a point for future study in the discussion section. (page 10, line 407-410) "
Here is a paper. Please give your review comments after reading it.
307
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Epicardial fat (ECF) is localized fat surrounding the heart muscle or myocardium and enclosed by the thin-layer pericardium membrane. Segmenting the ECF is one of the most difficult medical image segmentation tasks. Since the epicardial fat is infiltrated into the groove between cardiac chambers and is contiguous with cardiac muscle, segmentation requires location and voxel intensity. Recently, deep learning methods have been effectively used to solve medical image segmentation problems in several domains with state-of-the-art performance. This paper presents a novel approach to 3D segmentation of ECF by integrating attention gates and deep supervision into the 3D U-Net deep learning architecture. The proposed method shows significant improvement of the segmentation performance, when compared with standard 3D U-Net. The experiments show excellent performance on non-contrast CT datasets with average Dice scores of 90.06%. Transfer learning from a pre-trained model of a non-contrast CT to contrast-enhanced CT dataset was also performed. The segmentation accuracy on the contrast-enhanced CT dataset achieved a Dice score of 88.16%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Epicardial fat (ECF) is localized fat surrounding the heart muscle and enclosed by and located inside the thin-layer pericardium membrane. The adipose tissue located outside the pericardium is called paracardial fat <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> and is contiguous with other mediastinal fat (Fig <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>). ECF is the source of pro-inflammatory mediators and promotes the development of atherosclerosis of coronary arteries. The clinical significance of the ECF volume lies in its relation to major adverse cardiovascular events. Thus, measuring its volume is important in diagnosis and prognosis of cardiac conditions. ECF volume can be measured in non-contrast CT images (NCCT) with coronary calcium scoring and in contrast-enhanced CT images (CECT) with coronary CT angiography (CCTA). However, accurate measurement of ECF is challenging. The ECF is separated from other mediastinal fat by the thin pericardium. The pericardium is often not fully visible in CT images, which makes the detection of the boundaries of ECF difficult. ECF can also be infiltrated into grooves between cardiac chambers and is contiguous to the heart muscle. These technical challenges not only make accurate volume estimation difficult but make manual measurement a time-consuming process that is not practical in routine use. Therefore, computer-assisted tools are essential to reduce the processing time for ECF volume measurement.</ns0:p><ns0:p>Automated segmentation could potentially make ECF volume estimation more practical on a routine basis. Several approaches based on prior medical knowledge or non-deep learning techniques have been proposed for ECF segmentation, including genetic algorithms, region-ofinterest selection with thresholding, and fuzzy c-means clustering <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref><ns0:ref type='bibr' target='#b2'>[3]</ns0:ref><ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. 
Deep learning techniques have been applied to a wide variety of medical image segmentation problems with great success <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref><ns0:ref type='bibr' target='#b5'>[6]</ns0:ref><ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. A recent article <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> demonstrates that deep learning algorithms outperform conventional methods for medical image segmentation in terms of accuracy. But most previous studies involved large solid organs or tumor segmentation <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref><ns0:ref type='bibr' target='#b9'>[10]</ns0:ref><ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. The segmentation of relatively small and complex structures with high inter-patient variability, such as ECF, has been far less successful. Recently, a few deep learning approaches to ECF segmentation have made progress on this problem <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b12'>[13]</ns0:ref><ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. In this paper, we build upon the previous work by presenting a novel deep learning model for 3D segmentation of ECF, We propose a solution of automatic segmentation of ECF volume using a deep learning based approach that is evaluated on both non-contrast and contrast-enhanced CT datasets. The NCCT dataset uses coronary calcium scoring and the CECT dataset uses contrast-enhanced coronary CT angiography (CE-CCTA). The model is first learned from scratch on the NCCT dataset with coronary calcium scoring CT. To cover the entire heart, it is scanned in 64 slices with 2.5 mm thickness on each acquisition. Then, the model pre-trained on that NCCT dataset is transferred to the CECT dataset which uses CE-CCTA. The CE-CCTA study is performed in 256 slices with 0.625 mm thickness One of the key contributions of this paper is that we validate the performance of our newly developed 3D CNN-based approach on these difficult tasks. Since segmentation of ECF requires utilization of both voxel intensity and location information, we integrate two attention gate (AG) and deep supervision modules (DSV)into a standard 3D U-Net architecture. Our proposed model has better performance than the recent state-of-the-art approaches we evaluate because of the integration of AG and DSV modules. The AG module is used to focus on the target structures by suppressing irrelevant regions in the input image. The DSV module is used to increase the number of learned features by generating a secondary segmentation map combining the different resolution levels of network layers. The second main contribution is the use of transfer learning, taking a model pre-trained on NCCT data, and applying it to CECT data, using only a small amount of data for the re-training. This approach has benefits in clinical applications for both NCCT and CECT data for ECF segmentation. Furthermore, our proposed solution is 3D-based and does not require preprocessing and postprocessing steps, thus it can easily integrate into the clinical workflow of CT acquisition to rapidly generate ECF volume results for the physician in clinical practice.</ns0:p></ns0:div> <ns0:div><ns0:head>Related works</ns0:head><ns0:p>Conventional non-deep learning methods have been proposed for ECF segmentation. Rodrigues et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> proposed a genetic algorithm to recognize the pericardium contour on CT images. Militello et al. 
<ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed a semi-automatic approach using manual region-of-interest selection followed by thresholding segmentation. Zlokolica et al. <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> proposed local adaptive morphology and fuzzy c-means clustering. Rodrigues et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> proposed ECF segmentation in CECT images using the Weka library (an open-source collection of machine learning algorithms) with Random-Forest as the classifier. The experiment, performed on 20 patients, yielded a Dice score of 97.7%. However, these conventional methods required many preprocessing steps before entering the segmentation algorithm. The next evolution of ECF segmentations was performed with a deep learning approach. Commandeur et al. <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> proposed ECF segmentation from non-contrast coronary artery calcium computed tomography using ConvNets. They reported the Dice score of 82.3%.</ns0:p><ns0:p>To improve the performance of medical image segmentation, several modifications of U-Net <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> have been proposed. The spatial attention gate has been proposed to focus on the spatial and detailed structure of the important region varying in shape and size <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. Schlemper et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> demonstrate the performance of the attention U-Net on real-time fetal detection on 2D images and pancreas detection on 3D CT images. He et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> proposed ECF segmentation from CE-CCTA using a modified 3D U-Net approach by adding attention gates (AG). AGs are commonly used in classification tasks <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref><ns0:ref type='bibr' target='#b18'>[19]</ns0:ref><ns0:ref type='bibr' target='#b19'>[20]</ns0:ref><ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> and have been applied for various medical image problems such as image classification <ns0:ref type='bibr' target='#b20'>[21,</ns0:ref><ns0:ref type='bibr' target='#b21'>22]</ns0:ref>, image segmentation <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref><ns0:ref type='bibr' target='#b17'>[18]</ns0:ref><ns0:ref type='bibr' target='#b18'>[19]</ns0:ref><ns0:ref type='bibr' target='#b19'>[20]</ns0:ref><ns0:ref type='bibr' target='#b20'>[21]</ns0:ref><ns0:ref type='bibr' target='#b21'>[22]</ns0:ref>, and image captioning <ns0:ref type='bibr' target='#b21'>[22]</ns0:ref>. AG are used to focus on the relevant portion of the image by suppressing irrelevant regions <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. The integration of AG into the standard U-Net <ns0:ref type='bibr' target='#b6'>[7,</ns0:ref><ns0:ref type='bibr' target='#b10'>11,</ns0:ref><ns0:ref type='bibr' target='#b11'>12,</ns0:ref><ns0:ref type='bibr' target='#b16'>17,</ns0:ref><ns0:ref type='bibr' target='#b22'>23]</ns0:ref> or V-Net <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b23'>24]</ns0:ref> has been demonstrated to have benefits for region localization.</ns0:p><ns0:p>As mentioned above, the ECF has a complex structure. Some parts contain a thin layer adjacent to the cardiac muscle, which is similar to the microvasculature of the retinal vascular image visualized as small linear structures. 
In order to improve the performance of segmentation of small structures, several modules have been integrated into the main architecture of U-Net and V-Net, such as dense-layer and deep supervision modules <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref><ns0:ref type='bibr' target='#b23'>[24]</ns0:ref><ns0:ref type='bibr' target='#b24'>[25]</ns0:ref><ns0:ref type='bibr' target='#b26'>[26]</ns0:ref><ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>. The dense-layer <ns0:ref type='bibr' target='#b24'>[25,</ns0:ref><ns0:ref type='bibr' target='#b26'>26]</ns0:ref> has been used to enhance the segmentation result instead of the traditional convolution in the U-Net model. Deep supervision <ns0:ref type='bibr' target='#b22'>[23,</ns0:ref><ns0:ref type='bibr' target='#b23'>24,</ns0:ref><ns0:ref type='bibr' target='#b27'>27]</ns0:ref> was used to avoid local minimal traps during the training. The deep supervision helps to improve model convergence and increases the number of learned features <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref>. Kearney et al. <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref> showed that addition of deep supervision added to the U-Net model can improve the performance of 3D segmentation in CT images of the prostate gland, rectum, and penile bulb.</ns0:p><ns0:p>While 2D and 3D deep learning approaches have been used for medical image segmentation, 3D approaches have typically shown better performance than the 2D approaches <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b28'>28]</ns0:ref>. For example, Zhou et al. <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> demonstrated the better performance of 3D CNN approaches on multiple organs on 3D CT images, when compared to the 2D based method. Starke et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> also demonstrated that 3D CNN achieved better performance on segmentation of head and neck squamous cell carcinoma on CT images. Woo et al. <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> demonstrated that 3D U-Net provided better performance on brain tissue MRI images, compared with 2D U-Net, on a smaller training dataset. Therefore, in this paper we use a 3D CNN for segmenting epicardial fat in cardiac CT images.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>CNN architecture</ns0:head><ns0:p>The model architecture is based on a 3D U-Net model composed of multiple levels of encoding and decoding paths. The initial number of features at the highest layers of the model is 32. The numbers of feature maps are doubled with each downsampling path. In addition to the original U-Net architecture, we added an attention gate connecting the encoding and decoding paths and deep supervision at the final step of the network. The model is created on a fully 3D structure at each network level. The final layer is an element-wise sum of feature maps of the two last decoding paths. The segmentation map of two classes (epicardial fat and background) is obtained using the output layer with threshold 0.5 to generate the binary classification of the epicardial fat. The architecture of the proposed network is shown in Fig 2 <ns0:ref type='figure'>.</ns0:ref> Starting with the standard 3D U-Net architecture, the attention gate module connects each layer of encoding and decoding paths. The gating signal (g) is chosen from the encoding path and the input features (x) are collected from the decoding path. 
To generate the attention map, g and x go through a 1x1x1 convolution layer and element-wise sum, followed by rectified linear unit (ReLu) activation, a channel-wise 1x1x1 convolutional layer, batch normalization and a sigmoid activation layer. The output of sigmoid activation is concatenated to the input x to get the output of the attention gate module <ns0:ref type='bibr' target='#b11'>[12,</ns0:ref><ns0:ref type='bibr' target='#b22'>23]</ns0:ref>.</ns0:p><ns0:p>Deep supervision <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b23'>24]</ns0:ref> is the module at the final step of the network where it generates the multiple segmentation maps at different resolution levels, which are then combined together. The secondary segmentation maps are created from each level of decoding paths which are then transposed by 1x1x1 convolution. All feature maps are combined by element-wise sum. The lower resolution map is upsampled by 3D transposed convolution to have the same size as the second-lower resolution. Two maps are combined with element-wise sum then upsampled and added to the next level of segmentation map, until reaching the highest resolution level.</ns0:p></ns0:div> <ns0:div><ns0:head>CT imaging data</ns0:head><ns0:p>This experimental study was approved and participant consent was waived by the institutional review board of Siriraj Hospital, Mahidol University (certificate of approval number Si 766/2020). The experimental datasets were acquired from 220 patients with non-contrast enhanced calcium scoring and 40 patients with CE-CCTA. The exclusion criteria were post open surgery of the chest wall. All CT acquisition was performed with the 256-slice multi-detector row CT scanner (Revolution CT; GE Medical Systems, Milwaukee, Wisconsin, United States). The original CT datasets of NCCT and CECT studies were 64 slices in 2.5 mm slice thickness and 256 slices in 0.625 mm slice thickness, respectively. All DICOM images were incorporated into a single 3D CT volume file with preserved original pixel intensity. Due to limitation of GPU memory, the 256 slices of CE-CCTA were pre-processed with rescaling to 64 images in the volume dataset. The final 3D volume dataset in all experiments was 512x512x64. The dataset was raw 12 bits grayscale in each voxel. The area of pericardial fat was defined by fat tissue attenuation inside the pericardium, ranging from -200 HU to -30 HU <ns0:ref type='bibr' target='#b14'>[15,</ns0:ref><ns0:ref type='bibr' target='#b29'>29,</ns0:ref><ns0:ref type='bibr' target='#b31'>30]</ns0:ref>. The groundtruth segmentation of ECF in all axial slices was performed using the 3D slicer software version 4.10.0 <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref> by a cardiovascular radiologist with 18 years of experience. No additional feature map or augmentation was performed in the pre-processing step. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Training framework</ns0:head><ns0:p>The experiments were implemented using the PyTorch (v1.8.0) deep learning library in Python (v3.6.9). The workflow for network training is illustrated in Then the 5-fold cross validation was performed on standard U-Net, AG-U-Net, DSV-U-Net, and the proposed method (AG-DSV-U-Net). 
For each fold of validation, the model with the best training accuracy after 150 epochs was selected for the validation.</ns0:p><ns0:p>The second experiment was to assess segmentation performance by training the network from scratch with the NCCT dataset. The volume matrix of each dataset was 512x512x64 pixels. To compare the performance of segmentation, this experiment was performed with four model architectures: standard U-Net, AG-U-Net, DSV-U-Net, and the proposed method (AG-DSV-U-Net). The network was evaluated with the hold-out method, in which a total of 220 volume-sets (14,080 images) were split into 200 volume-sets (12,800 images) for training (the same used in the first experiment) and 20 volume-sets (1,280 images) for testing. The model output was collected at the maximum number of iterations at the 300th epoch, named model-A.</ns0:p><ns0:p>The third experiment was to assess segmentation performance on the CECT dataset and to evaluate the effectiveness of transferring the learning from NCCT to the CECT datasets. The pre-trained 3D model (model-A) was trained on the large calcium scoring NCCT datasets. The key to the success of the transfer learning with 3D U-Net is to fine-tune only the shallow layers (contracting path) <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref> instead of the whole network. This contracting path represents the more low-level features in the network <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref>. The retraining dataset requires only a small amount of data -in our case only 20 volume-sets of CECT data. Note that these retraining cases are not from the same cases as used in the pre-trained model. To compare the performance of segmentation, this experiment was performed with four model architectures: standard U-Net, AG-U-Net, DSV-U-Net and proposed method (AG-DSV-U-Net). The network was evaluated with the hold-out method, in which the total Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>The performance of our proposed CNN segmentation is compared with the performance of the existing methods. The evaluation was quantitatively performed by comparison with the reference standard using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), and Hausdorff distance (HD). An average HD value was calculated using the insight toolkit library of 3D slicer. Differences in the comparison coefficient among the four groups of experiments (standard U-Net, AG-U-Net, DSV-U-Net, and AG-DSV-U-Net) were assessed with a paired Student's t-test. P values &lt;0.05 indicated a statistically significant difference. Differences in the comparison between DSC of segmentation result and ECF volume were assessed with Pearson's correlation coefficient. The Pearson's values of &lt; 0.3 indicated poor correlation, 0.3 to 0.7 indicated moderate correlation, and &gt; 0.7 indicated good correlation. To assess the consistency of the ground-truth, the inter-rater agreement was obtained by pixel-based correlation of DSC using two experienced cardiovascular radiologists with 18 and 14 years of experience. Ten volume sets (640 images) were randomly selected from NCCT dataset for the process of inter-rater correlation.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>The patient demographics are shown in Table <ns0:ref type='table'>1</ns0:ref>. 
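For reference, the two overlap coefficients used in the evaluation can be computed directly from binary masks, as in the minimal NumPy sketch below; the Hausdorff distance is not reproduced here, since the authors computed it with the insight toolkit library of 3D Slicer.

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray):
    """DSC and JSC between two non-empty binary masks of identical shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dsc = 2.0 * intersection / (pred.sum() + truth.sum())      # Dice similarity coefficient
    jsc = intersection / np.logical_or(pred, truth).sum()      # Jaccard similarity coefficient
    return dsc, jsc
```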
The training dataset, consisting of 200 NCCT scans, had an average age of 61.43 years and an average ECF volume of 135.75 ml. The NCCT testing dataset had a similar distribution, with an average age of 67.80 years and an average ECF volume of 127.59 ml. For the contrast-enhanced dataset, the average ages of the training and testing datasets were 65.85 and 60.85 years, respectively. The average ECF volumes of the training and testing datasets were 117.13 and 121.43 ml, respectively. The inter-rater agreement of the ECF ground-truth by two cardiovascular radiologists is about 91.73&#177;1.27%, which indicates excellent correlation.</ns0:p><ns0:p>Five-fold cross-validation experiments on our NCCT dataset were used to evaluate the validity and repeatability of the proposed method. The dataset was split into training (80%) and validation (20%) for each fold. For each model architecture, the validation data exhibit good results across all folds. The proposed method also demonstrates the best average performance (DSC = 89.02) when compared with the other methods (p&lt;0.05, paired t-test on the validation data obtained in the five-fold cross-validation), see Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>The experimental result on the hold-out NCCT test dataset is shown in Table <ns0:ref type='table'>3</ns0:ref>. The proposed method demonstrates excellent results, achieving average DSC, JSC, HD values of 90.06&#177;4.60, 82.42&#177;6.91 and 0.25&#177;0.14, respectively. The baseline of the experiment is the standard 3D U-Net, which demonstrates good results with DSC, JSC and HD values of 84.87&#177;5.73, 74.12&#177;8.08 and 0.34&#177;0.18, respectively. The segmentation results of the modified U-Net models (AG-U-Net, DSV-U-Net, and the proposed method) demonstrate statistically significant improvement compared with the standard U-Net based on DSC (p&lt;0.05). The difference between the segmentation results of AG-U-Net and DSV-U-Net is not statistically significant (p&gt;0.05). The DSC, JSC, HD values of AG-U-Net are 89.59&#177;4.45, 81.41&#177;6.77 and 0.27&#177;0.12, respectively. The proposed method also statistically significantly improved the segmentation result compared with AG-U-Net (p&lt;0.05). While the proposed method is better than DSV-U-Net, the difference is not statistically significant (p&gt;0.05). Examples of segmentation results of the proposed method are shown in Fig <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>.</ns0:p><ns0:p>The experimental result of the proposed method on the CECT dataset is shown in Table <ns0:ref type='table'>4</ns0:ref>. This transfer learning approach achieved average DSC, JSC and HD values of 88.16&#177;4.57, 79.10&#177;6.75 and 0.28&#177;0.20, respectively. The segmentation result of the proposed method demonstrates statistically significant improvement when compared with the other methods based on DSC (p&lt;0.05). The segmentation results of the standard 3D U-Net and DSV-U-Net demonstrate similarly good performance (p&gt;0.05). The segmentation results of the standard 3D U-Net and DSV-U-Net are statistically significantly better than those of AG-U-Net (p&lt;0.05). Examples of segmentation results of transfer learning with the proposed method are shown in Fig <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Segmentation of ECF is a difficult image segmentation task because of the thin layer and complex structures at the outer surface of the heart.
The ECF is also variable in distribution, depending on body habitus. In general, obese patients have larger amounts of ECF than do thin patients. Segmentation of ECF is more challenging than the segmentation of other cardiac structures.</ns0:p><ns0:p>Most CNN approaches work on 2D images whereas in clinical practice, 3D volume segmentation is used <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>. The 2D-based CNN approaches such as ResNet and VGG are not applicable for 3D datasets. The model architectures for 2D CNN and 3D CNN are different <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b28'>28]</ns0:ref>. 3D CNN has an advantage over 2D-CNN by extracting both spectral and spatial features simultaneously, while 2D CNN can extract only spatial features from the input data <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. For this reason in general, the 3D CNN is more accurate than a 2D one <ns0:ref type='bibr' target='#b4'>[5,</ns0:ref><ns0:ref type='bibr' target='#b35'>34]</ns0:ref>. 2.5D CNN has been developed to solve the memory consumption problem of 3D models <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. 2.5D CNN has at least three approaches <ns0:ref type='bibr' target='#b36'>[35,</ns0:ref><ns0:ref type='bibr' target='#b37'>36]</ns0:ref>. The first is a combination of outputs of 2D CNNs in three orthogonal planes (axial, coronal, and sagittal) with majority voting. The second is to use 2D CNNs with 3 or 5 channels from adjacent 3 or 5 slices. Third is to apply 2D CNNs with randomly oriented 2D cross sections. In the final step, 2.5D segmentation requires an additional post-processing step to generate 3D output <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref>. Although, the 3D CNN requires more resources and time for the model training, for the best performance, we use the 3D CNN in our implementation. The best performing methods for 3D volume segmentation of medical data are U-Net and V-Net. V-Net has more trainable parameters in its network architecture. Recent experimental comparisons of U-Net and V-Net on medical data have not shown statistically significant differences in performance <ns0:ref type='bibr' target='#b11'>[12,</ns0:ref><ns0:ref type='bibr' target='#b39'>38]</ns0:ref>. However, U-Net is less complex and easier to Manuscript to be reviewed Computer Science modify so that additional modules can be integrated into the standard U-Net in order to improve the performance.</ns0:p><ns0:p>Several state-of-the-art approaches for CNN-based segmentation of ECF have recently been proposed. Commandeur et al. <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> proposed the first CNN-based method for ECF segmentation in a non-contrast 2D CT dataset using a multi-task convolutional neural network called ConvNets. They reported a Dice score of 82.3% for the segmentation result. He et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> proposed another CNN-based method on a 3D CECT dataset using AG integrated into 3D U-Net. The segmentation result was reported to have a DSC of 88.7% <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. We repeated the experiment by implementing the AG in 3D U-Net on our NCCT dataset by hold-out testing. The amount of training data (200 volume-sets) is more than the one used in the previous article (150 volume-sets) <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. 
Our 4 layer AG-U-Net method demonstrates significantly improved performance with DSC of 89.59%, as compared with 3 layer AG-U-Net of 86.54% (p&lt;0.05). That should be due to more layers of our network. In our implementation, the AG-U-Net has more convolutional layers (4 layers) and removes the sigmoid at the end of the network. However, the AG integration provides significantly better performance (p&lt;0.05) as compared with standard 3D U-Net (DSC of 84.87%). To the best of our knowledge, our experiment uses the largest volume size of any 3D CT dataset (512x512x64).</ns0:p><ns0:p>We introduce a novel approach to 3D segmentation of ECF by integrating both AG and DSV modules into all layers of the 3D U-Net deep learning architecture. The AGs are commonly used in natural image analysis and natural language processing <ns0:ref type='bibr' target='#b40'>[39,</ns0:ref><ns0:ref type='bibr' target='#b41'>40]</ns0:ref>, which can generate attention-awareness features. The AG module is beneficial for organ localization, which can improve organ segmentation <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. During CNN training, AG is automatically learned to focus on the target without additional supervision <ns0:ref type='bibr' target='#b42'>[41]</ns0:ref>. The AG module can improve model accuracy by suppressing feature activation in irrelevant regions of an input image <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. The AG module is used to make connections between encoding and decoding paths on the standard U-Net. The DSV module is used to deal with the vanishing gradient problem in the deeper layers of a CNN <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b27'>27]</ns0:ref>. The standard approach provides the supervision only at the output layer. But the DSV module propagates the supervision back to the earlier layer by generating a secondary segmentation map combining information from different resolution levels. The losses of this segmentation map are weighted and added to the final loss function and this can effectively increase the performance <ns0:ref type='bibr' target='#b43'>[42]</ns0:ref>. The DSV module is used by adding it into the decoding path of the 3D U-Net. The AG-DSV modules had been implemented in previous work <ns0:ref type='bibr' target='#b10'>[11,</ns0:ref><ns0:ref type='bibr' target='#b23'>24]</ns0:ref> for kidney <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> tumor segmentation (Kidney Tumor Segmentation Challenge 2019), as well as for liver <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> and pancreas <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> tumor segmentation (Medical Decathlon Challenge 2018).</ns0:p><ns0:p>The experiment demonstrated that our proposed method (AG-DSV-U-Net) achieves excellent performance with average and max DSC values of 90.06% and 95.32%, respectively. Our proposed method also shows a significant improvement of performance (90.06%), when compared with the previous state-of-the-art network (86.54%) on the same dataset (p&lt;0.05). 
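The deep-supervision scheme discussed above can be sketched in PyTorch as follows; the module only illustrates the side-output combination (1x1x1 convolutions, transposed-convolution upsampling and element-wise sums), and all names are ours rather than the authors' code.

```python
import torch
import torch.nn as nn

class DeepSupervision3D(nn.Module):
    """Sketch of a deep-supervision head: each decoder level yields a one-channel map,
    coarser maps are upsampled by 3D transposed convolutions and summed element-wise
    with the next finer level, up to full resolution."""
    def __init__(self, decoder_channels):
        super().__init__()
        # decoder_channels: channel counts of the decoder levels, ordered coarse to fine
        self.to_map = nn.ModuleList([nn.Conv3d(c, 1, kernel_size=1) for c in decoder_channels])
        self.up = nn.ModuleList(
            [nn.ConvTranspose3d(1, 1, kernel_size=2, stride=2) for _ in decoder_channels[:-1]]
        )

    def forward(self, decoder_feats):
        out = self.to_map[0](decoder_feats[0])
        for up, conv, feat in zip(self.up, self.to_map[1:], decoder_feats[1:]):
            out = up(out) + conv(feat)             # upsample, then element-wise sum
        return torch.sigmoid(out)                  # probabilities, thresholded at 0.5 downstream
```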
Examples of the results are shown in Fig 4 <ns0:ref type='figure'>.</ns0:ref> While one might expect the segmentation performance to improve with the fat volume of the dataset, unexpectedly, the statistical analysis demonstrates that there is poor correlation between segmentation performance and fat volume (Pearson's correlation 0.2).</ns0:p><ns0:p>The 3D volume size of our dataset is larger (512x512x64) compared with the previous work (512x512x32) <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. Regarding comparative analyses of different numbers of slices and image resolutions, the previous work demonstrates that a 40-slice volume dataset achieves 1% higher DSC than 32-slice and 24-slice datasets <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. However, the training time is also increased. We set up additional experiments to test the effect of different numbers of slices and image resolutions on segmentation performance with our proposed model (AG-DSV-U-Net). The training and testing datasets and the hyperparameters are the same as defined in the NCCT experiment. The experiment on the effect of the number of slices was performed by rescaling the volumes to 64, 32 and 16 slices at a fixed image resolution of 512x512 pixels. The segmentation results are DSC 90.06%, 81.76% and 78.93%, respectively. The experiment to determine the effect of image resolution was performed by rescaling the resolution to 512x512, 256x256 and 128x128 pixels, with the number of slices fixed at 64. The segmentation results are DSC 90.06%, 86.19% and 83.73%, respectively. The 512x512 image resolution with 64 slices still gives the best performance, with significant improvement over the lower resolutions (p&lt;0.05). More slices and higher image resolution let the network extract more spatial information, which helps to improve segmentation accuracy. Furthermore, because the ECF is a thin layer along the heart contour, more spatial resolution will improve segmentation accuracy. To obtain the best performance, we chose 64 slices for our implementation, which matches the original NCCT dataset of 64 images per volume. For CECT, the original CT dataset had 256 slices and needed to be rescaled to 64 slices. Due to the limitations of the proposed model and current GPU architecture, the volume size of the training and test datasets cannot be extended beyond 64 slices. The other limitation of this study is the size of the dataset: 220 volume-sets for the NCCT experiment and 40 volume-sets for the CECT experiment.</ns0:p><ns0:p>In clinical practice, a cardiac CT scan can be performed as NCCT, CECT, or both. For this reason, the ECF may need to be segmented from either the NCCT or the CECT dataset. To the best of our knowledge, ours is the first implementation of ECF segmentation on both NCCT and CECT datasets. In our experiment, we started by training with the NCCT dataset (200 volume-sets). We used transfer learning to re-train on a similar dataset with only a small amount of data (Fig <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>). We re-trained the pre-trained NCCT model with a small amount of CECT data (20 volume-sets). We then tested the model on an additional 20 volume-sets. The experimental result shows good performance with a DSC value of 88.16%. The performance is also significantly better than that obtained with the standard U-Net and AG-U-Net (Table <ns0:ref type='table'>4</ns0:ref>).
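The transfer-learning step just described, reusing the NCCT-pretrained weights (model-A) and fine-tuning only the contracting path on the small CECT set, can be expressed in PyTorch roughly as below; the attribute naming (an "encoder" prefix for the contracting path, the checkpoint file name) is ours and purely illustrative.

```python
import torch

model = AGDSVUNet3D()                                    # hypothetical model class, as above
model.load_state_dict(torch.load("model_A.pt"))          # NCCT-pretrained weights; file name is illustrative

for name, param in model.named_parameters():
    param.requires_grad = name.startswith("encoder")     # assumes contracting-path modules are named "encoder..."

optimizer = torch.optim.RMSprop(
    [p for p in model.parameters() if p.requires_grad],  # update only the unfrozen (encoder) parameters
    lr=1e-3, weight_decay=1e-8, momentum=0.9,
)
```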
Training from scratch is more generally applicable but requires a large amount of training samples. Both NCCT and CECT datasets are similar in appearance but differ in color.</ns0:p><ns0:p>The training from scratch approach of both datasets required a large amount of data samples. The transfer learning method is more practical because it uses smaller amounts of training data and still yields good performance. Additionally, our proposed re-trained model demonstrates good performance as compared with the previous training from scratch (88.7%) <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>The proposed network for epicardial fat segmentation. Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:2:0:NEW 8 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Fig 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The training and testing processes were performed on a CUDA-enabled GPU (Nvidia DGX-A100) with 40 GB RAM. The experiments were divided into three scenarios: model validity assessment, NCCT, and CECT experiments. The parameters were the same for all three experiments. The networks were trained with RMSprop optimizer and mean squared error loss. The training parameters of learning rate, weight decay, and momentum were le-3, le-8 and 0.9, respectively. The initial random seed was set to be 0. The illustration of the experimental framework is shown in Fig 3. The first experiment was the assessment of the model validity, for which we used 5-fold cross validation. The total dataset consisted of 200 volume-sets (12,800 images), divided into five independent folds. Each fold contained 160 volume-sets (10,240 images) for training and 40 volume-sets (2,560 images) for validation, without repeated validation data between folds. The other 20 volume-sets (1,280 images) were left for testing in the second and third experiments.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>40 volume-sets (2,560 images) were split into 20 volume-sets (1,280 images) for training and 20 volume-sets (1,280 images) for testing. The output model is collected at the maximum number of iterations at of 300thh epoch, named model-B. PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:2:0:NEW 8 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61572:2:0:NEW 8 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1 The</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Fig 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Fig 4. Example of segmentation result of proposed method on non-contrast CT images.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Fig 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig 5. 
Example of segmentation result of proposed method on contrast-enhanced CT images.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,255.37,525.00,296.25' type='bitmap' /></ns0:figure> </ns0:body> "
" Nov 8th, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns In particular all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories. We believe the manuscript is now suitable for publication in Peer J. Dr. Thanongchai Siriapisith Professor in Radiology On behalf of all authors. Reviewer 1 Experimental design 1. The reviewer wishes the authors could include the analysis of inter-rater agreement. - In this revised version, the analysis of inter-rater agreement has been added in the Materials and Methods section (page 7, lines 251-255) and the Result section (page 8, lines 264-266). Editor comments Discussion 1. Are you sure this is what you want to say? Does [9] tackle the same problem (ECF segmentation)? Training from scratch is more generally applicable, so it seems that [9] has a preferable method then? Why would one want to use your method, which requires another dataset for pre-training? -Corresponding explanations have been added in the Discussion section of this revised version. “Training from scratch is more generally applicable but requires a large amount of training samples. Both NCCT and CECT datasets are similar in appearance but differ in color. The training from scratch approach of both datasets required a large amount of data samples. The transfer learning method is more practical because it uses smaller amounts of training data and still yields good performance. Additionally, our proposed re-trained model demonstrates good performance as compared with the previous training from scratch (88.7%) [12].” (page 11 lines 394-398) -We thank the editor for the very helpful annotations to the manuscript. All suggested changes have been made. "
Here is a paper. Please give your review comments after reading it.
308
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Computer science education (CSEd) research within K-12 makes extensive use of empirical studies in which children participate. Insight in thedemographics of these children is important for the purpose of understanding the representativeness of the populations included. This literature review studies the demographics of subjects included in K-12 CSEd studies. We have manually inspected the proceedings of three of the main international CSEd conferences: SIGCSE, ITiCSE and ICER, of five years (2014)(2015)(2016)(2017)(2018), and selected all papers pertaining to K-12 CSEd experiments. This led to a sample of 134 papers describing 143 studies. We manually read these papers to determine the demographic information that was reported on, investigating the following categories: age/grade, gender, race/ethnic background, location, prior computer science experience, socio-economic status (SES), and disability. Our findings show that children from the United States, boys and childrenwithout computer science experience are included most frequently. Race and SES are frequently not reported on, and for race as well as for disabilities there appears a tendency to report these categories only when they deviate from the majority. Further, for several demographic categories different criteria are used to determine them. Finally, most studies take place within schools. These insights can be valuable to correctly interpret current knowledge from K-12 CSEd research, and furthermore can be helpful in developing standards for consistent collection andreporting of demographic information in this community.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Computer Science Education (CSEd) is being increasingly taught in schools, starting as early as kindergarten. As a result, CSEd research, aimed at children under the age of 18, has also been on the rise.</ns0:p><ns0:p>While several papers study teachers, programming environments, or curricula, many CSEd papers observe children as participants, using a variety of research methods from case studies to controlled experiments.</ns0:p><ns0:p>These studies might have a large impact on policy, since many countries are currently in the process of implementing mandatory programming and computer science curricula <ns0:ref type='bibr' target='#b1'>(Barendsen et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Hong et al., 2016)</ns0:ref>. Further, reporting complete data on the participants and the settings of K12 CSEd activities is also required for activity comparison and replication across studies. Previous work focusing on CSEd studies and their reporting of activity components, including student demographic information, found that race and socio-economic (SES) status are rarely reported <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018)</ns0:ref>. We build upon this, by focusing specifically on participant demographic data, including information which has previously not been analyzed, such as the presence of students' disabilities. Although data on disabilities is of an especially sensitive nature and it might not always be possible or appropriate to obtain this information, we believe it is both relevant and important to consider as a demographic factor. 
An estimated 15% of the world population has a form of disability, 1 and inclusivity in CSEd can be helped forward if it is better understood how specific subpopulations of children can, when needed, be additionally supported.</ns0:p><ns0:p>Further, we examine in depth the types of data that are reported or omitted, by assessing who is reported on (for instance, how many boys and girls for the category gender) and how a category is reported on (for instance, which indicators are used to represent SES).</ns0:p><ns0:p>We are interested in gaining an understanding of the representativeness of the children who are participating in K-12 CSEd studies. To that end, we have conducted a two-phase literature review of the proceedings of five years <ns0:ref type='bibr'>(2014)</ns0:ref><ns0:ref type='bibr'>(2015)</ns0:ref><ns0:ref type='bibr'>(2016)</ns0:ref><ns0:ref type='bibr'>(2017)</ns0:ref><ns0:ref type='bibr'>(2018)</ns0:ref> of the SIGCSE, ITiCSE and ICER conferences. In Phase 1, we manually inspected the abstracts of all papers of at least 6 pages in length (953 papers) and then identified all papers that A) concerned K-12 (pre-university) subjects and B) involved subjects taking part in CSEd activities. 134 papers remained. Then in Phase 2, we read all the full papers, and gathered the reported demographic information of the participants in the categories: age/grade, gender, race/ethnicity, location, prior experience, SES and disabilities.</ns0:p><ns0:p>The aim of this literature review is to gain insight in the demographics of subjects participating in K-12 CSEd studies. These insights can contribute to first of all understanding the representativeness of the population included in these studies at the moment, thus correctly informing policy and enabling comparisons and replications. Second, our paper can help researchers identify which characteristics are important to report on when conducting CSEd studies.</ns0:p></ns0:div> <ns0:div><ns0:head>K-12 Computer Science Education</ns0:head><ns0:p>CSEd, and the broader computational thinking <ns0:ref type='bibr' target='#b67'>(Wing, 2006)</ns0:ref>, has recently been made part of the curriculum in many countries <ns0:ref type='bibr' target='#b1'>(Barendsen et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Hong et al., 2016)</ns0:ref>. Especially block-based programming languages are currently being used in K-12 programming education extensively, most commonly using Scratch <ns0:ref type='bibr' target='#b51'>(Resnick et al., 2009)</ns0:ref> and Alice <ns0:ref type='bibr' target='#b7'>(Conway et al., 1994)</ns0:ref>. In addition to blocks-based languages, robots <ns0:ref type='bibr' target='#b36'>(Kazakoff et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b38'>Ludi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b60'>Swidan and Hermans, 2017)</ns0:ref> and textual programming languages <ns0:ref type='bibr' target='#b22'>(Hermans, 2020;</ns0:ref><ns0:ref type='bibr' target='#b49'>Price and Barnes, 2015;</ns0:ref><ns0:ref type='bibr' target='#b61'>Swidan and Hermans, 2019)</ns0:ref> are also frequently used.</ns0:p><ns0:p>Other approaches use no computers at all, referred to as unplugged computing <ns0:ref type='bibr' target='#b23'>(Hermans and Aivaloglou, 2017)</ns0:ref>.</ns0:p><ns0:p>A SIGCSE paper <ns0:ref type='bibr' target='#b0'>(Al-Zubidy et al., 2016)</ns0:ref> showed that empirical validation is common among SIGCSE papers, with about 70% of papers using some form of empiricism. 
There is a large variety in methods however: studies are performed both in the classroom <ns0:ref type='bibr' target='#b19'>(Grover et al., 2016b;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hickmott et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Kazakoff and Bers, 2012)</ns0:ref> as well as in extracurricular settings <ns0:ref type='bibr' target='#b39'>(Maloney et al., 2008)</ns0:ref>. Not all studies involving K-12 CSEd include subjects: in addition to small scale studies in classrooms, programs created by learners have also been analyzed by researchers. Scratch projects were used for example to explore the learning patterns of programmers in their first 50 projects <ns0:ref type='bibr' target='#b68'>(Yang et al., 2015)</ns0:ref>. <ns0:ref type='bibr'>Aivaloglou and Hermans (2016)</ns0:ref> performed an analysis on 250.000 Scratch programs investigating the occurrence of code smells. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Research on Demographics in Computer Science</ns0:head><ns0:p>There is some previous research aimed at understanding who participates in K-12 computing. Ericson and Guzdial, for example, in their exploration of the CS Advanced Placement test, specifically investigate women and non-white students <ns0:ref type='bibr' target='#b10'>(Ericson and Guzdial, 2014)</ns0:ref>. Some papers also specifically looked at the reporting of demographics of students in computer science papers. In a meta-analysis from 2018 <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018)</ns0:ref> 92 articles from SIGCSE, ICER, and ToCE between 2012 and 2016 were studied, analyzing which components or elements of activities are reported in K-12 studies, including data on the activities, the learning objectives, the instructors and the participants. Regarding participant demographics, they found that, while age and grade of participants are often reported on (98% and 74%), other factors are reported less, such as gender (64%), race (45%) and SES (13%). A similar study on CHI papers between 1982-2016 <ns0:ref type='bibr' target='#b53'>(Schlesinger et al., 2017)</ns0:ref> found that gender is reported in 63% of papers, race in 13% and class in 23%. The reporting of combined demographics is even more rare; only 2% of papers in the field of HCI report gender, race and class of subjects.</ns0:p><ns0:p>Our study expands these prior, related, works in two ways. First, our focus is on the content of the demographics, studying in depth who participates in CSEd studies, in addition to the information on whether the demographics are being reported on. Second, we performed a full manual analysis of an expanded set of 134 papers of the three main international conferences within the CSEd community from 2014 to 2018, in order to also gain insights in the way in which demographic information is presented and possible reasons why this information is omitted. This includes the factor 'disabilities', which previous reviews <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Schlesinger et al., 2017)</ns0:ref> did not address as a demographic factor. A high prevalence of individuals with disabilities exists worldwide 2 , yet this groups remain underrepresented in fields of computing and software engineering <ns0:ref type='bibr' target='#b5'>(Burgstahler and Ladner, 2006)</ns0:ref>, and accessible technology remains lacking <ns0:ref type='bibr' target='#b47'>(Patel et al., 2020)</ns0:ref>. 
Attention for these issues in CSEd research is rising, and several research lines on young learners with disabilities have started, often with focused questions and target groups <ns0:ref type='bibr' target='#b20'>(Hadwen-Bennett et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Israel et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b48'>Powell et al., 2004)</ns0:ref>. Insights on support these learners need to be fully included in CSEd is growing, and can concern adapted or assistive technologies as well instructional approaches. Especially since the majority of children with disabilities takes place within classes in regular education <ns0:ref type='bibr' target='#b64'>(Van Mieghem et al., 2020)</ns0:ref>, it is important to assess their representativeness within CSEd studies to further understand how inclusive CSEd education can be fostered. Together, these insights are valuable to understand the representativeness of the populations included in K-12 CSEd studies, to interpret previous and future findings, and to help guide reporting on future work.</ns0:p><ns0:p>Finally, one point to address concerns our choice of the three conferences. We believe that the proceedings of the SIGCSE, ITiCSE and ICER conferences reflect the global CSEd community well as they are often referred to as the 'main international conferences', and consequently provide a valid starting point to explore the demographics of it's studies subjects. It is likely that the US will be over-represented in our findings, however, it can be considered part of our assessment to determine this and to reflect on the implications. A related point concerns the demographic category race or ethnicity. This category can be common to include, as the previous studies on demographics of K-12 computing participants confirm <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Schlesinger et al., 2017)</ns0:ref>, however, this strongly depends on the context and country. We will follow the terminology used in the studies we include, taking this together under the term 'race/ethnic background'. In addition, we will also explore whether the inclusion of the category of disabilities is related to context and country.</ns0:p></ns0:div> <ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head></ns0:div> <ns0:div><ns0:head>Setup of Literature Review</ns0:head><ns0:p>The research question of this paper is: What are the demographics of subjects who participate in CSEd studies in <ns0:ref type='bibr'>K-12?</ns0:ref> In order to answer this question, a two-phase literature review was conducted. In Phase 1, all 953 papers that appeared in the proceedings of one of the three main international CSEd conferences: SIGCSE, ITiCSE and ICER between 2014 and 2018, were inspected using the procedure and criteria described below to determine whether they qualified for Phase 2. The resulting 134 papers were manually analyzed to collect the reported demographic information of each paper. The distribution of papers in each phase is shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>[insert Table <ns0:ref type='table'>1</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Paper Inspection (Phase 1) and Paper Analysis (Phase 2)</ns0:head><ns0:p>In Phase 1 all full papers (6 pages or longer) which appeared in the proceedings of SIGCSE, ITiCSE and ICER in five years (2014-2018) were inspected. 
Phase 1 included 953 papers in total (561, 267 and 125 respectively for the three conferences). These papers were inspected by reading their title and abstract.</ns0:p><ns0:p>The first papers were inspected together by the group of nine authors to evaluate the inspection process, after which the rest of the papers were divided among all authors. Papers were inspected individually, but uncertainties concerning whether a paper qualified for Phase 2 were noted down and discussed among the group of authors together until agreement was reached. For papers to be included in Phase 2, we set two criteria: K-12 Participants in the reported study should be K-12 level: in kindergarten, elementary school, middle school or high school. Papers that described studies of which some of the participants were K-12 while others were already in university were also considered.</ns0:p><ns0:p>Computer Science Education activity Participants in the reported study should be actively involved in CSEd activities. We excluded studies in which children did not actively engage in programming or other CSEd activities, for example, studies in which only children's attitude towards programming or computers was measured without any intervention.</ns0:p><ns0:p>In Phase 2 the papers selected during the first phase were analyzed. This concerned 134 papers (13% of all SIGCSE, ITiCSE and ICER 2014 to 2018 papers). Each paper was carefully read in full to gather the demographic information that was reported on or to note the lack of report of information. The specific demographic categories are described in the next section. The same process as in Phase 1 was followed, where the first three papers were read together by all authors to practice the process. Next, the papers were divided among the nine authors. Uncertainties were noted down and discussed by the authors in a weekly group meeting. In these meetings, demographic information that was inferred and not explicitly stated in the papers, such as information on location, was checked and agreed upon by the authors. Moreover, coding conventions were set for demographic information that was not uniformly reported on, such as age and school system levels, race/ethnicity and socio-economic status.</ns0:p></ns0:div> <ns0:div><ns0:head>Demographic Categories</ns0:head><ns0:p>The demographic categories were determined at the start of Phase 2, observing the categories commonly reported in the included papers. We confirmed the demographic categories that <ns0:ref type='bibr'>McGill et al. found (McGill et al., 2018)</ns0:ref>: ages/grades, gender, race/ethnic background, location, prior experience and SES, and included the factor disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>This section presents the results of our analysis of the 134 papers reporting on CSEd studies using K-12 students as subjects. We found 9 papers in the sample that reported on multiple studies, and we decided to classify them separately, because they occassionally reported on different samples. As such in the remainder of the paper, we report on the number of studies, not the number of papers. The 134 papers in total report on 143 studies. Table <ns0:ref type='table'>2</ns0:ref> presents an overview of the number of studies reporting on the different demographic categories. Not all conferences were represented equally, as can be seen in Table <ns0:ref type='table'>1</ns0:ref>. 
This is mainly due to the fact that SIGCSE is bigger in terms of papers than the other two conferences.</ns0:p><ns0:p>Our sample represents the conferences quite equally: 16% of SIGCSE papers were included, 9% of ITiCSE papers and 14% of ICER papers.</ns0:p><ns0:p>Before we present the results of the demographic categories, we briefly refer to the context in which the studies are performed. All but two studies report on the context where the experiment was run.</ns0:p><ns0:p>The majority of studies (88 or 62%) took place within a school, while 49 (34%) took place outside of school, for example in a coding club or summer camp. Four studies report performing experiments or interventions both in a school and in another context.</ns0:p></ns0:div> <ns0:div><ns0:head>[insert Table 2 here]</ns0:head></ns0:div> <ns0:div><ns0:head>Age and Grade</ns0:head><ns0:p>The ages of the participants, or their grade, is reported in almost all papers in our sample: 134 studies (94%) report this information. Since the majority of papers refer to the classification of the US school Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>system of elementary, middle, and high-school, we followed this classification as well in order to obtain an overview. Information from papers that reported on a different grade system or only on age was manually classified. We classified Kindergarten to grade 6 as elementary school, grades 6 to 8 as middle school and grades 9 to 12 as high school. Grade 6 is within the US classified as either elementary or middle school, depending on the specific school. Consequently, where school was provided in a study for grade 6 we follow this and noted either elementary or middle school. Where school level was not provided, we classified grade 6 as elementary school unless it was indicated as part of a range that includes higher grades a well (for instance, participants being from grades 6-8), in which case we classified the whole range as middle school. Middle school and high school participants are most common in the papers, representing 37% and 31% of studies respectively. Some papers report on a variety of subjects in their studies, for example one paper reported on a study with participants from pre-school to university <ns0:ref type='bibr' target='#b40'>(Martin et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Gender</ns0:head><ns0:p>Gender is relatively well-reported, with 92 studies (64%) providing the gender of their participants. As Figure <ns0:ref type='figure'>1</ns0:ref> shows, 60 studies (44 + 16), which together represent 65% of studies that report on gender, have over 55% male participants. Even though boys are over-represented in general, girls are over-represented in studies focusing specifically on gender: out of the 12 studies with less than 25% male participants, 6 consisted exclusively of female students. Further, the majority (10) of the studies with less than 25% male participants took place in camps or after school. In the studies with less than 25% female participants this pattern was not visible. Besides the studies that included just one gender, there are also studies that included gender in their research purpose (22% of the studies that reported gender). Overall, male participants appear over-represented in the studies which report gender, even though there are several papers with a special-focus on female participants. 
Finally, two studies report in addition to male and female the category other, which includes 1% <ns0:ref type='bibr' target='#b45'>(Paspallis et al., 2018)</ns0:ref> and 5% <ns0:ref type='bibr' target='#b52'>(Schanzer et al., 2018)</ns0:ref>.</ns0:p><ns0:p>[insert Figure <ns0:ref type='figure'>1</ns0:ref> here]</ns0:p></ns0:div> <ns0:div><ns0:head>Race/Ethnic background</ns0:head><ns0:p>With 46 studies (34%) reporting on the race or ethnic background of their participants, this category is reported on in approximately half as many studies as gender. Out of these 46, 37 studies provide concrete quantifiable information. The remaining studies for instance indicate race about a proportion of their participants but leave a large proportion as 'unknown' <ns0:ref type='bibr' target='#b66'>(Wang and Hejazi Moghadam, 2017)</ns0:ref> or refer merely to different countries <ns0:ref type='bibr' target='#b57'>(Srikant and Aggarwal, 2017)</ns0:ref>. The quantifiable information from the 37 studies is represented in Figure <ns0:ref type='figure'>2</ns0:ref>, showing that white students form the minority of participants, with 13 studies (28%) reporting 0 to 25% white students. Figure <ns0:ref type='figure'>3</ns0:ref> depicts the number of races or ethnicity's included in the studies that provide this information.</ns0:p><ns0:p>As is the case for gender observations, there are also a number of papers reporting on interventions, tools or programs specifically aimed at ethnic minority students. Out of the 46 studies that reported race/ethnicity, 15 (or 33%) studies included race/ethnicity in their research purpose. Some of these studies that target a specific race/ethnic group also target a specific gender, focusing on African-American girls <ns0:ref type='bibr' target='#b62'>(Thomas, 2018;</ns0:ref><ns0:ref type='bibr' target='#b65'>Van Wart et al., 2014)</ns0:ref>, or American-Indian boys <ns0:ref type='bibr' target='#b55'>(Searle and Kafai, 2015)</ns0:ref>.</ns0:p><ns0:p>[insert Figure <ns0:ref type='figure'>2</ns0:ref> here] [insert Figure <ns0:ref type='figure'>3</ns0:ref> here]</ns0:p></ns0:div> <ns0:div><ns0:head>Socio-economic status (SES)</ns0:head><ns0:p>SES is reported in 28 studies (20% of studies). Figure <ns0:ref type='figure'>4</ns0:ref> gives an overview of what is being reported in these studies. The figure shows that there is no clear focus on either low or high SES students, and that many papers explicitly stated the SES of subjects was mixed. Out of the 28 studies that report on SES, 8 (29%) included SES within their research purpose.</ns0:p><ns0:p>We also found that SES was approached differently across the studies. Various indicators were used, including income or poverty rate, as well as eligibility of the sample or the population for free lunches or other forms of extra funding, which all represent slightly different aspects of SES.</ns0:p><ns0:p>[insert Figure <ns0:ref type='figure'>4</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Disabilities</ns0:head><ns0:p>With only 6 studies (4%) reporting on disabilities of their participants, this category is the least reported on. Out of the 6, two studies named visually impaired children <ns0:ref type='bibr' target='#b33'>(Kane and Bigham, 2014;</ns0:ref><ns0:ref type='bibr' target='#b38'>Ludi et al., 2018)</ns0:ref>, and one named children with autism <ns0:ref type='bibr' target='#b30'>(Johnson, 2017)</ns0:ref>. 
In one of the studies that included children with visual impairments <ns0:ref type='bibr' target='#b33'>(Kane and Bigham, 2014)</ns0:ref> and in the study that mentioned children with autism <ns0:ref type='bibr' target='#b30'>(Johnson, 2017)</ns0:ref>, these target groups are part of the context in which the research takes place (a camp for students with autism and a camp for visually impaired children, resp.). Consequently, the information on the prevalence of these disabilities seems inferred from the research context and not explicitly acquired. In addition, no other details (such as specific types within these syndromes) are provided. The other study on children with visual impairments <ns0:ref type='bibr' target='#b38'>(Ludi et al., 2018)</ns0:ref> provides somewhat more information, indicating that the participants were recruited across the US, from various educational situations (homeschooled, public school, schools for the blinds), and furthermore that 58% had low vision and the remaining children were blind.</ns0:p><ns0:p>The other three papers, out of the 6 that reported on disabilities, remarked at a more general level that 'students with various disabilities' participated in their study <ns0:ref type='bibr' target='#b44'>(Paramasivam et al., 2017)</ns0:ref> or that there were children with 'special needs or learning difficulties' <ns0:ref type='bibr' target='#b19'>(Grover et al., 2016b)</ns0:ref> or from 'special education' <ns0:ref type='bibr' target='#b18'>(Grover et al., 2016a)</ns0:ref>. The paper by Paravisam et al. ( <ns0:ref type='formula'>2017</ns0:ref>) provided a wide range of disabilities included in their sample ('deafness, low vision or blindness, Cerebral Palsy, Muscular Dystrophy, Ollier's disease, Attention Deficit Disorder, Asperger's Syndrome, and other autism spectrum disorders or learning disabilities'), without however specifying which disability was present for how many students. All 6 studies that report on disabilities concern research conducted in the US.</ns0:p><ns0:p>Two additional papers reported comparable cognitive background information on their participants by describing that the children in the study were gifted <ns0:ref type='bibr' target='#b17'>(Friend et al., 2018)</ns0:ref>, or were students in a school which is also a center for gifted children <ns0:ref type='bibr' target='#b8'>(Daily et al., 2014)</ns0:ref> (which is usually not seen as a disability, consequently we did not include these 2 papers in the 6 papers of this category).</ns0:p></ns0:div> <ns0:div><ns0:head>Location</ns0:head><ns0:p>Location is a multi-faceted factor, which <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018)</ns0:ref> describes as 'Specific locations, including city, state, and country'. Country is reported on relatively often, with 96 studies (67%) explicitly stating the country in which the study is being executed. For 42 studies (29%), the location of the study can be deduced with reasonable certainty from the description of the setup in the paper, for example when the authors work at a university in the US and state they have executed the study themselves.</ns0:p><ns0:p>Combining papers that explicitly report subjects from the United States (65) and papers where we deduced the US as location from context (36), we find 101 studies (71% of all studies) in which children from the US participate. This overshadows studies conducted in Europe (19 or 13%) and Asia &amp; Australia (7 or 5%). 
Location more specific than the country of the study is reported on in about one in three papers (46 studies). When it is reported, however, it is often with different terms focused on different aspects.</ns0:p><ns0:p>For example, some papers state the location along with more information about the residents, such as 'two tribes of an American Indian community outside of Phoenix, Arizona' <ns0:ref type='bibr' target='#b32'>(Kafai et al., 2014)</ns0:ref>, while others are more generic and use broad locations such as 'Western Germany' <ns0:ref type='bibr' target='#b46'>(Pasternak, 2016)</ns0:ref> or 'a large Northeast city' <ns0:ref type='bibr' target='#b9'>(Deitrick et al., 2015)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Prior Experience</ns0:head><ns0:p>Prior experience with an aspect of computer science is reported in 52 studies (37%). However, it is typically reported on loosely using phrases such as 'some children had coding experience' <ns0:ref type='bibr' target='#b2'>(Broll et al., 2017)</ns0:ref>, 'students did not begin the class with strong experience in computers and computing' <ns0:ref type='bibr' target='#b15'>(Freeman et al., 2014)</ns0:ref> or 'most had little to no experience with robotics design or programming.' <ns0:ref type='bibr' target='#b38'>(Ludi et al., 2018)</ns0:ref> which leaves room for interpretation as to what some, strong or most means exactly. Figure <ns0:ref type='figure'>5</ns0:ref> shows the distribution of the studies over different categories of the experience of subjects. The category 'mixed' pertains to studies for which it was explicitly reported that some of the participants have experience while others do not. As can be seen in Figure <ns0:ref type='figure'>5</ns0:ref>, the largest category of studies (22, 42% of studies reporting on prior experience) reports on subjects without any experience. In some cases, papers reported on excluding children with prior knowledge of computer science through a pretest or self-assessment, for example <ns0:ref type='bibr' target='#b70'>(Zhi et al., 2018)</ns0:ref>.</ns0:p><ns0:p>[insert Figure <ns0:ref type='figure'>5</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>With this literature review we aimed to gain insight in the demographics of subjects participating in K-12 CSEd studies, in order to understand the representativeness of the populations included, and in order to identify which characteristics empirical studies can report on. Our insights are discussed below.</ns0:p></ns0:div> <ns0:div><ns0:head>Main Overview of Reported Information and Included Populations</ns0:head><ns0:p>First, the reported demographic information shows that children from the United States, boys and children without experience appear over-represented. Second, we found that demographic information in general and specific categories is often missing. Especially race or ethnic background, disabilities and SES of participants are frequently not reported on, as previous studies also showed <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Schlesinger et al., 2017)</ns0:ref>. Finally, most studies take place within schools. 
These insights are further discussed below, together with other observations on the inclusion and report of demographic factors.</ns0:p></ns0:div> <ns0:div><ns0:head>US Focus</ns0:head><ns0:p>As we expected, the category location showed that children from the US were over-represented in the studies in SIGCSE, ITiCSE and ICER we reviewed. This limits the representativeness of the findings of these studies first of all because of curriculum differences world-wide in the organization of K-12 education. In addition, in many European countries (including Sweden, Germany, Finland and the Netherlands) there are no or very few private schools, and homeschooling is in European countries often prohibited (in Greece, Germany and the Netherlands) or hardly practiced (in France). These differences can make it difficult to generalize findings. A second implication of the over-representativeness of the US is that the reporting is also distinctly US flavored. For example, when discussing SES, a paper stated that a school has 'eligibility for Title I funding' <ns0:ref type='bibr' target='#b26'>(Ibe et al., 2018)</ns0:ref> or that children received 'free or reduced lunch' <ns0:ref type='bibr' target='#b3'>(Buffum et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hansen et al., 2017)</ns0:ref> referring to distinct US policies that people outside the US might not be familiar with. Similarly, a concept such as an 'urban' area <ns0:ref type='bibr' target='#b19'>(Grover et al., 2016b;</ns0:ref><ns0:ref type='bibr' target='#b63'>Tsan et al., 2018)</ns0:ref> does not have the same connotation in all countries; it can refer to a school in a poor inner city area or a well-educated high-income neighborhood.</ns0:p><ns0:p>Interpretation and generalization could be strengthened if empirical studies (including those from the US) give a clear report on factors of location and SES. This can be done either in way that is easy to understand without an in-depth knowledge of a specific school system and context, or by providing a brief description of the educational system in a country. Some of the included papers provide examples of this approach, for example when describing the specific setting in France <ns0:ref type='bibr' target='#b6'>(Chiprianov and Gallon, 2016)</ns0:ref>.</ns0:p><ns0:p>Further, it is important to be aware of the high presence of children from the US in studies in SIGCSE, ITiCSE and ICER, and to interpret findings with this in mind or explicitly look for studies from other locations.</ns0:p></ns0:div> <ns0:div><ns0:head>Inconsistent Criteria and School-level reporting</ns0:head><ns0:p>We observed that, for several demographic categories, different criteria were being used by different papers. This was most visible for SES, where some papers reported low SES based on eligibility for lunch programs <ns0:ref type='bibr' target='#b3'>(Buffum et al., 2016)</ns0:ref> while others used poverty rates <ns0:ref type='bibr' target='#b11'>(Feaster et al., 2014)</ns0:ref> or the economic and educational level of parents <ns0:ref type='bibr' target='#b37'>(Ko et al., 2018)</ns0:ref>. The same is true for prior experience. 
Some papers excluded children with too much prior knowledge <ns0:ref type='bibr' target='#b70'>(Zhi et al., 2018)</ns0:ref>, some papers named experience with the tool or language being used <ns0:ref type='bibr' target='#b29'>(Joentausta and Hellas, 2018)</ns0:ref> while others explicitly separated experience with the tool from computer science experience in general <ns0:ref type='bibr' target='#b56'>(Smith et al., 2014)</ns0:ref>. To increase the clarity of future studies, it would be beneficial for our community to establish recommended guidelines for the criteria to be used for SES or prior experience in K-12 studies. Further, both in the case of race/ethnic background and disabilities it can be considered whether a bias is occurring in reporting this information. First, in the case of race/ethnic background, the studies that provide quantifiable information on this demographic together suggests that white students form the minority, which is not in line with the American population (to which a large percentages of our studies pertain), where about 50% of children enrolled in public elementary and secondary schools are white. 3 It is possible that race or ethnic background is more often reported when participants do not belong to the racial/ethnic majority, or, specific to the current research setting, when their background is not congruent with expectations of computer science classroom populations. This is further confirmed by the large number of studies that report on mixed-racial experimental groups. Here as well, a higher number of mixed-racial groups is reported compared to what can be expected from typical classrooms in the US <ns0:ref type='bibr' target='#b59'>(Stroub and Richards, 2013)</ns0:ref>. Second, a similar type of reporting bias might be occurring for the factor disabilities, which are mainly being reported when the study targets or focuses on children with disabilities and as such all children in the study undoubtedly have a disability (such as in a camp for blind children). Here as well, it seems unlikely that only 6 studies reporting on disabilities is an accurate representation of the population, where worldwide approximately 15% has a disability 4 . However, for both of these factors (race/ethnic background and disabilities) the lack of consistent reporting might also be related to the especially sensitive nature of these factors, which is further discussed below.</ns0:p><ns0:p>Finally, some papers reported the demographics of the study not at the level of the study participants, but at the level of the school. For example: 'At the elementary school where we collected the data, the student body is roughly 53% African-American, 33% Caucasian, and 14% Hispanic, Latino, Native American, Asian, or mixed race. Approximately 47.4% of the students receive free or reduced cost lunch' <ns0:ref type='bibr' target='#b3'>(Buffum et al., 2016)</ns0:ref>. This presents only an indication of the exact study sample, which might differ because of factors such as selection bias, with some students or teachers being more willing or available to participate. At the same time, school level reporting is preferable over no reporting at all. 
Further, as discussed in the section below on ethical and legal considerations, it can be a good alternative when dealing with practical or ethical constraints.</ns0:p></ns0:div> <ns0:div><ns0:head>Disabilities as a Demographic Factor</ns0:head><ns0:p>In this review we explored the inclusion of 'disabilities' as a demographic factor, which was not included in previous reviews on CSEd research populations <ns0:ref type='bibr' target='#b41'>(McGill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Schlesinger et al., 2017)</ns0:ref>. Only 6 studies reported on the presence of disabilities, referring for instance in a relatively general manner to children with visual impairments or to learners with special needs. Usually how this information was obtained was not indicated, though in some cases this was inferred from the targeted research context (such as a camp for children with autism). It is likely that the low and relatively general reporting on disabilities can be explained by a lack of acknowledgment of this factor as a standard demographic (as the absence of mentioning it in the previous reviews also suggests), and, related, the sensitivity of these data <ns0:ref type='bibr' target='#b12'>(Fernandez et al., 2016)</ns0:ref>. The need to increase inclusivity of CSEd, especially concerning learners with impairments <ns0:ref type='bibr' target='#b5'>(Burgstahler and Ladner, 2006)</ns0:ref>, stresses the importance of considering how a balance can be found in reporting on this information in a more consistent, but also feasible, way. A starting point might be to consistently refer to the consideration of the presence of disabilities within a sample, adding information where possible at an appropriate level of detail.</ns0:p><ns0:p>Further assessment of how studies outside of the conferences of our focus handle and report on the presence of disabilities can be helpful. Although a thorough exploration falls outside of the scope of this review, some suggestions can be found in other CSEd journals or conferences within the topic of learners with visual impairments. Similar to the paper by <ns0:ref type='bibr' target='#b38'>Ludi et al (2018)</ns0:ref> included in our review, some of the often small scale studies within this topic tend to elaborate both on the specific type of impairment (including for example being blind from birth, or partially sighted) and on the recruitment of the participants (through special education schools or personal contacts) <ns0:ref type='bibr' target='#b28'>(Ja&#353;kov&#225; and Kaliakov&#225;, 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>Kab&#225;tov&#225; et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b42'>Milne and Ladner, 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Morrison et al., 2018</ns0:ref>). An interesting option that is being applied is to elaborate on a participants' disability in terms of use of computer, assistive technologies, or tools <ns0:ref type='bibr' target='#b28'>(Ja&#353;kov&#225; and Kaliakov&#225;, 2014;</ns0:ref><ns0:ref type='bibr' target='#b42'>Milne and Ladner, 2018)</ns0:ref>. Although such level of detail can be highly useful to interpret the findings, it is likely that, in studies taking place in larger groups and within mainstream education, obtaining this information is not always feasible or practical. 
In that case, more general (indicating the presence of disabilities) or school level (indicating a percentage) report can be valid alternative options, that still provide an indication of the population included. Authors could then also explain or substantiate why they report in this manner. For instance, Koushik and <ns0:ref type='bibr'>Kane (2019)</ns0:ref> indicate in their study on training computer science concepts that their participants have cognitive disabilities. They further describe : 'We did not collect individual diagnoses from our participants as we did not believe this personal information was relevant to our research goals', and provide a summary of different types of cognitive impairments included in the club of which the participants were members. Overall, a tailored approach might be needed for the report on disability, yet mentioning the factor and substantiating the information that is (not) provided can be very helpful for the field. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Separate Subjects Section</ns0:head><ns0:p>While examining the papers, we found that papers do not only differ in which demographic data were reported, but also in how this was presented in the paper. 53 papers (40% of 134 papers) have a dedicated section or subsection where all information on participants was placed, called 'subjects', 'participants', or something similar. In 35 papers (26%), information about participants was placed in larger sections, such as 'experimental design', 'research setup' or 'methods'. In the remaining 38% of the papers there was no central place in the paper for demographic information. Instead, it was placed across different sections, or was only stated in the introduction or abstract. We would advise journals and conferences that are considering to adopt guidelines for the disclosure of participant demographics to also consider suggesting a default name for the section in which this information should be placed.</ns0:p></ns0:div> <ns0:div><ns0:head>Ethical Considerations and Legal Consideration</ns0:head><ns0:p>Above the ethical issues in collecting and reporting demographic information have already been touched upon. All demographic information, but some especially (such as on ethnic background, SES, and disabilities) concern sensitive data that might not be easy to obtain due to objections from parents or schools. Further, if obtained, these data should be handled and reported carefully without the possibility of making the subjects of a study traceable. In behavioral sciences, the collection and report of demographic information of its' subjects, which also often concerns children, is standard and required. This can be helpful in identifying safe procedures for this type of data (for instance, using careful informed consent procedures and pseudonymisation of the obtained data). Furthermore, the example of behavioral sciences can also clarify the purpose of reporting demographic information that is in itself not part of the research question. The discussion on the representativeness of study samples exists in psychology as well, and one recommendation here is also to consistently report demographics in empirical studies <ns0:ref type='bibr' target='#b50'>(Rad et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The collection and reporting of detailed demographic data could also be hindered by legal concerns about the privacy of the study participants. 
This is an increasingly important issue, because interest in privacy and personal data protection has intensified, leading to the adoption of new laws, in particular in the European Union. Data sanitation techniques, including randomisation and generalisation, can be applied, aiming to prevent the re-identification of data subjects. The application of these techniques when reporting on demographic data on study participants is challenging, because the more detailed the reported data, the more identifiable the subjects become. For researchers reporting this data, it could understandably be a challenge to achieve a proper balance between on the one hand providing replicable and representative data and on the other hand protecting the privacy of the participants and complying with privacy laws. Regulations and customs from behavioral sciences could be helpful here.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of Current Research and Directions for Future Work</ns0:head><ns0:p>This literature review focused specifically on proceedings of the SIGCSE, ITiCSE and ICER conferences.</ns0:p><ns0:p>We believe these venues reflect the global CSEd community and provide the valid starting point to explore the demographics of it's studies' subjects. At the same time, there is some limitation because of this focus.</ns0:p><ns0:p>Future work can look further into papers from CSEd journals, as well as focus on the specific case of local conferences. Moreover, for our analyses we assumed that each of the 143 studies has been reported on in exactly one paper in the dataset; a publication of a study in more than one papers in the three conferences would be a threat to the validity of our findings. Further, the analyses of specific demographic factors in the current research are somewhat limited, not looking for example at relations between different factors. Our goal was to provide an overview of different factors, future studies could focus on specific factors such as ethnic background to gain further insight. Finally, a threat to validity in the method of analyzing the papers should be mentioned. This was done manually by nine different researchers, without applying a method of double-coding. However, we attempted to reduce this threat by first practicing the analysis process with all researchers together and agreeing on common coding conventions, and second by regularly meeting and discussing the general experiences as well as specific doubts about the reported demographic information and especially about information that was not explicitly stated but deduced from the study context. In addition, we make the resulting dataset available to the research community for cross validating our findings. 5 5 Link to be provided through our institution's repository.</ns0:p></ns0:div> <ns0:div><ns0:head>9/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63800:1:0:NEW 22 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>1 https://www.un.org/development/desa/disabilities/resources/factsheet-on-persons-with-disabilities.html2/13PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63800:1:0:NEW 22 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63800:1:0:NEW 22 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
</ns0:figDesc></ns0:figure>
"21-10-2021 Dear editors, We would like to thank you and the reviewers for the constructive feedback, which we used to improve our paper. In particular, through the paper we have added substantiation, extended the results, and enriched the discussion on our inclusion of disability as a demographic factor. Below we respond to the reviewers’ comments and explain how we addressed each comment in the paper. We would like to point out that we originally created our manuscript in LaTeX, consequently we had to convert our pdf files to Word files required for the tracked changes manuscript and clean manuscript within the revision. Although we tried our best here, the resulting Word files are slightly less optimal in terms of style formatting. We believe this does not impact the readability of the files but apologize for any inconvenience. We look forward to your response on the revised version of our paper. Comments from Reviewer 1 Basic reporting Comment #1 The authors note that similar review work has been completed (McGill et al., 2018; Schlesinger et al., 2017), although they seem to indicate that their review is distinct in that it evaluates additional demographic information, disability status in particular. I am not certain that the authors have convinced me that this sensitive information can be widely and appropriately asked across K-12 studies, at least those that occur in the United States Response #1: We agree that the disability factor requires more background and discussion. Consequently we added information on this factor through the paper, in the introduction, results, and discussion section, including on the feasibility of obtaining this information. Below (in response to comments #5 and #9) we provide more detail on these changes. Experimental design Comment #2 The authors do a commendable job of detailing their process and justifying their use of the three conference proceedings. They do note in Limitations that there was no attempt at inter-rater reliability, or double-coding. This does seem potentially problematic as the authors indicated they deduced information context, rather than from details overtly stated in the studies. It may be that one researcher missed vital information or misunderstood. Such double checking is important to the validity of the authors’ claims. Response #2: Thank you for bringing this point to our attention. In the previous version we had omitted to add details about our collaboration process, that could indeed leave room for doubt on the validity of the encoded information. After practicing with reading and encoding the first papers together, we held working sessions where all papers and information flagged as uncertain or inferred was checked and agreed upon. Coding conventions for demographic information we encountered being reported on in a diverse manner were also set during these sessions. As a result of this process, the pieces of information that we did not cross check were the ones explicitly stated in the papers that we examined. In this version, we have updated the methods (page 4, lines 141-144) and limitations (page 9, lines 421-425) sections with these points. Comment #3 Additionally, the authors do not address what safeguards were in place to ensure that an intervention was not submitted to and accepted at more than one of these conferences and therefore those data were considered more than once. Response #3: We agree to this point and we have updated the limitations section accordingly (page 9, lines 413-415). 
Comment #4 The authors cite appropriately, although I am used to alphabetizing in-text citations when there is more than one cite. Relatedly, there are several places (lines 61, 69, and 74) where the formatting of the citation is incorrect. Response #4: We alphabetized the order of all multiple in-text citations, as well as corrected the citations on lines 61, 69, and 74 (in the new document lines 65, 71, 77). Validity of the findings Comment #5 I provide more specific information below in General Comments, but overall I am struggling to see the feasibility of including disability status as a socio-demographic metric. I understand the authors’ contention that such a characteristic needs to be considered for generalizability purposes; however, the reality of receiving such information, at least in the US, is highly complicated and would not lend itself to generalizability. This aside, the authors complete their literature review as they set out in the Introduction. I feel the authors could strengthen their argument and the paper overall by delving more deeply into the articles that report on disability status, as this seems to be their major interest, especially those not focused solely on a certain population of students (ie., blind students). How did these articles report on these disabilities? What was their institutional review policy? How did they obtain this information (ie., from the students, teachers, or parents)? What major findings/conclusions did they offer? The number of articles that fall into this category is quite small, so more closely exploring them may help draw out and support your contention that disability status is a metric essential to report. Given this small number, perhaps it would be wise to broaden your search beyond the flagship conferences. In other words, the authors could share their literature review findings, including the small number of articles that report on disability status, and then provide a detailed overview of articles in CSEd at large (conferences and journals) that do report on disability status. Response #5 As indicated in our response to Comment #1, we agree that the disability factor requires more information and discussion. We made the following adaptations: First, in the introduction (on page 2, lines 29-33) we briefly provide more context on our consideration to include disability as a demographic factor. In addition to providing the prevalence of disabilities worldwide we indicate how understanding the support of specific subpopulations of learners can increase inclusivity of CSEd. We also acknowledge the especially sensitive nature of this information here. We further elaborate on this in the section on ‘Research on Demographics in Computer Science’ (on page 3, lines 85-95), describing the underrepresentedness of individuals with disabilities in the fields of computing and software engineering as well as the emerging research lines in inclusive CSEd research. Together, we provide more substantiation of the importance of, when possible, including disability as a demographic factor. We fully agree that the feasibility of obtaining this information also requires further attention, which we added in the discussion section (see below). Second, we examined the papers in our review that report on disability more closely and added more elaborate information in the results section ‘Disabilities’ (page 6). 
We added specificities, or the lack of such specificities, on how the information provided on disabilities was obtained and which specific syndromes were present. This includes the observation that in two studies the target groups (learners with visual impairments or with autism) are part of the research context (a camp specifically for these learners), which suggests the prevalence of disability in these participants was not explicitly obtained. Further, we note that all studies that report on disabilities take place within the US. Third, in the discussion section we now pay attention to the interpretation of our (extended) findings on the disabilities factor, as well as to the sensitive nature of this information and to practical considerations for future empirical studies: • We moved some of the information from the results section on what we interpret as a ‘reporting bias’ that might take place for the factors race/ethnic background, SES, and disabilities, to the discussion section (on page 7-8, lines 314-331). As reviewer 2 pointed out, this information goes beyond describing the results and consequently should be placed in the discussion. Furthermore, by placing the information on the reporting on these three factors in the discussion, we formed a combined discussion point on this ‘reporting bias’, where we also consider that for all three factors this bias could be related to the sensitive nature of this information. • We added a section in this discussion ‘Disability as a demographic factor’ (page 8, lines 341-375). Here, we discuss our findings of the papers in our review that included disability, and consider different appropriate ways to refer to this sensitive information. We also briefly look into options from studies from other venues or journals outside of our three conferences, in order to gain a more elaborate perspective on how disability is currently being treated and could be included in the future. Additional comments Comments #6 Line 5 (Abstract): if space permits, it may be helpful to include the year range Lines 25, 38, and 50: authors appear to use ‘pre-college,’ ‘pre-university,’ and ‘K-12’ interchangeably. I think it would read better if one phrase were used; K-12 is most common. These phrases appear throughout the article. Line 99: it seems the RQ should end as “... in K-12 programming” and not into Line 156: grade 6 is included twice in both elementary and middle school; where did you ultimately include it? This does not seem to be consistently handled. Response #6: Line 5: we added the year range (2014-2018) in the abstract. Line 25, 38, and 50: we agree K-12 is the most common term, and replaced all occurrences of ‘pre-college’ and ‘pre-university’ by ‘K-12’. We clarify this term once for readers who are not familiar with it in line 41 by indicating: ‘K-12’ (pre-university)’ Line 99: we changed ‘into’ into ‘in’ Line 156: we clarified how we handled ‘grade 6’. As explained, specifically grade 6 is known to be considered either elementary or middle school, depending on the state or specific school even. Consequently, it can differ per study whether grade 6 is considered elementary school or middle school. 
We follow specific studies’ grouping of grade 6 into either of these levels, and where school level was not provided, we classified grade 6 as elementary school unless it was indicated as part of a range that includes higher grades a well (for instance, participants being from grades 6-8), in which case we classified the whole range as middle school (current version this is on page 5, in lines 172-177). Comment #7 Gender: is it important to include the gender options the studies permitted the participants to select, or that --% of studies permitted students to opt out or select Other? Also, the finding that males are overrepresented is not terribly surprising, but I feel this interacts with the context of the studies. For example, did the studies where males were overrepresented take place in elective classes or optional experiences such as camps? Response #7: First, we agree that it is important to indicate how many studies refer to the option ‘other’. We included this in the section on gender in the results (page 5, lines 201-202), referring to the two studies that report on other as a category of gender as well as the percentage of participants in both of these studies included in this category. Second, we explored the role of context by specifically looking at studies where the majority where boys or girls. As we describe in the result section on gender, the few studies where the majority are girls mostly take place in code clubs or after schools, whereas when the majority is boys this is not the case (page 5, lines 186-188). Comment #8 Race/Ethnic Background: Line 178: the sentence that begins on line 177 and includes “studies is represented in,” is poorly worded or missing a reference to a figure. “This is further confirmed by the large number of studies that report on mixed-racial experimental groups, as Figure 3 depicts, which does not correspond with how mixed classrooms typically are in the US (Stroub and Richards, 2013)”: what does this mean? What are mixed classrooms and how are they typically organized/studied in the US? Response #8: First, the sentence on 178 was indeed missing a reference to a figure. We added this reference in the sentence, which now reads: ‘The quantifiable information from the 37 studies is represented in Figure 2’ (line 199 in current version). Second, the interpretation on the large number of mixed-racial groups has been moved to the discussion and rephrased to clarify that more studies refer to ‘mixed-racial’ groups than can be expected from the general population within US classrooms, as the reference by Stroub et al. (2013) indicates. Comment #9 SES and Disabilities: I have combined these two subsections as my concerns with them are similar. As a US researcher, I have always used NSLP (free and reduced school lunch eligibility) as a proxy for SES. This follows because we have never found it appropriate to ask students about their family’s SES and this information is publicly available at the school level. When working with a single class or a subset of students in class, it feels inappropriate to report out the school level SES. No university IRB would approve US researchers asking K-12 students, deemed a vulnerable population, about their disability status. FERPA regulations prohibit school personnel from disclosing such information at an individual level to outside researchers. 
It may be that a teacher could tell a researcher something generic, such as “of the 30 students in this classroom, 5 have IEPs and 4 have 504 plans.” Line 216: “However, given the aforementioned prevalence of individuals with a disability, it seems unlikely that the participant groups of the other 128 papers did not include any of these children”: of course these classrooms included students with disabilities, but the practice of mainstreaming and not delineating students by ability, likely contributed to the small number of studies that report on this characteristic. Response #9: We agree that school level reporting is a very valid option for both SES and disability. As explained above in our response to Comment #5, we included especially for the disability factor the need for a tailored approach for such sensitive information which include general and/or school level reporting. Comment #10 Lines 273-4: the wording of the final sentence is problematic. You have set up a juxtaposition whereby poor inner city cannot include well-educated. Don’t conflate education and income here. Something like “it could be a school in a poor inner city area or a high-income neighborhood” would be better. Response #10: We have rephrased this sentence, which now reads as ‘it can refer to a school in a poor inner city area or a well-educated high-income neighborhood’ (lines 295-296 in current version). Comments from Reviewer 2 Basic reporting Comment #1 In the “Pre-university Programming Education” section, two improvements should be made to the citation style: (1) (Yang et al., 2015) for example […] -> Yang et al. (2015) for example; (2) Aivaloglou and Hermans (Aivaloglou and Hermans, 2016) performed an […] -> Aivaloglou and Hermans (2016) performed an. Response #1: We rephrased these citations, which now read as: ‘Scratch projects were used for example to explore the learning patterns of programmers in their first 50 projects (Yang et al., 2015)’, and ‘Aivaloglou and Hermans (2016) performed an analysis on 250.000 Scratch programs investigating the occurrence of code smells.’ (page 2, line 65) Comment #2 Authors should consider submitting the figures in a better resolution as the diagrams provided are slightly blurred. Please, add a dot after each of the figure labels. Response #3: We provided versions of the figures with a better resolution, and added a dot after all the figure labels. Comment #3 Please, reference Figure 2 in the text. Currently, there is no reference to it in the text. Response #3: The reference to figure 2 was missing in the section ‘Race/ethnic background’ of the results, and has been added in the sentence ‘The quantifiable information from the 37 studies is represented in Figure 2’ (line 199). Comment #4 Please, use CSed abbreviation consequently. Response #4: We replaced all other terms (CS education) by the consistent abbreviation CSEd. Experimental design Comment #5 Although the authors refer to computer science education in the title of the paper, the literature used for the review is limited to programming education only. However, computer science education is much broader than learning programming. Please, include this information as a limitation and explain in the introduction why you focused on programming education as one of the aspects of computer science education. Response #5: We realized that our use of the terms computer science education and programming education was inconsistent and not clear. 
Our review is in fact more broadly directed at computer science education, since we did not apply any criteria where only studies focusing specifically on programming where included. Our inclusion factor of children actively participating in what we referred to previously as ‘programming activities’ is focused on their active participation (thereby excluding studies where for instance attitudes were measures). The nature of the activity being specifically programming was not an inclusion criterion. Consequently, we now consistently apply the term ‘computer science education’ when referring to our topic in general as well as to the activities the children within our included studies participated in. In the section on ‘Prior experience’ where we previously spoke of ‘prior experience with programming’, we now refer to ‘prior experience with an aspect of computer science’, which becomes more concrete by the information provided in this section which explicitly refers to for instance experience with coding or with computers. Comment #6 In the introduction, the authors justify the selection of SIGCSE, ITiCSE and ICER by arguing that these are “flagship conferences” in the CSEd community. Please, support this argument with references. Response #6: We rephrased our wording, and now refer to the three conferences we focus on as ‘the three main international conferences within the CSEd community’. Comment #7 In presenting the results in the “Race/Ethnic Background” section, the authors go beyond describing the results (“This leads us to hypothesize that reporting bias is at play here”). However, since this argument is already about evaluating the results, I recommend moving it to the “Ethical and Legal Considerations” section. Response #7: We agree this interpretation belongs in the discussion section. We have moved it there to the section ‘Inconsistent Criteria and School-level reporting’ as part of a larger discussion point on the possible bias in reporting race/ethnic background as well as disabilities (page 7-8, lines 315-331). "
Here is a paper. Please give your review comments after reading it.
309
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Due to the rapid development of information technology, Internet has become part of everyday life gradually. People would like to communicate with friends to share their opinions on social networks. The diverse social network behavior is an ideal users' personality traits reflection. Existing behavior analysis methods for personality prediction mostly extract behavior attributes with heuristic. Although they work fairly well, but it is hard to extend and maintain. In this paper, for personality prediction, we utilize deep learning algorithm to build feature learning model, which could unsupervisedly extract Linguistic Representation Feature Vector (LRFV) from text published on Sina Micro-blog actively. Compared with other feature extraction methods, LRFV, as an abstract representation of Micro-blog content, could describe use's semantic information more objectively and comprehensively. In the experiments, the personality prediction model is built using linear regression algorithm, and different attributes obtained through different feature extraction methods are taken as input of prediction model respectively. The results</ns0:p><ns0:p>show that LRFV performs more excellently in micro-blog behavior description and improve the performance of personality prediction model.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Personality can be defined as a set of traits in behaviour, cognition and emotion which is distinctive among people <ns0:ref type='bibr' target='#b18'>[16]</ns0:ref>. In recent years, researchers have formed a consensus on personality structure, and proposed the Big Five factor model <ns0:ref type='bibr' target='#b20'>[18]</ns0:ref>, which uses five broad domains or factors to describe human personality, including openness(O), conscientiousness(C), extraversion(E), agreeableness(A) and neuroticism(N) <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>Traditionally, questionnaire has been widely used for personality assessment, especially the Big Five personality questionnaire. But the form of questionnaire may be inefficient on large population.</ns0:p><ns0:p>Due to the rapid development of information technology, Internet becomes part of everyday life nowadays. People prefer expressing their thoughts and interacting with friends on social network platform. So researchers pay more and more attention to figuring out the correlation between users' behaviors on social network and their personality traits in order to realize automatical personality prediction by machine learning methods. Nowadays, Internet is not just for communication, but also a platform for users to express their thoughts, ideas and feelings. Personality is expressed by users' behavior on the social network indirectly, which refers to a variety of operation on social network, such as comment, follow and like. In addition, text, punctuation and emoticon published by users can be regarded as one kind of social behavior. So, for automatic personality prediction, how to abstract these diverse and complex behaviors and acquire the digital representation of social network behaviors has become an critic problem. Existing behavior analysis methods are mostly based on some statistics rules, but artificial means have some disadvantages in objectivity and integrity. 
Generally, attributes are especially important for the performance of a prediction model. A set of proper feature vectors can improve the effectiveness of the prediction model to a certain extent. It is therefore required that the attributes not only give a comprehensive and abstract description of an individual's behavioral characteristics, but also reflect the diversity of different individuals' behaviors.</ns0:p><ns0:p>In this paper, we use a deep learning algorithm to extract the LRFV from users' content published on Sina Micro-blog in an unsupervised manner. Compared with attributes obtained by artificial means, the LRFV can represent users' linguistic behavior more objectively and comprehensively. There are two reasons for utilizing a deep learning algorithm to investigate the correlation between users' linguistic behavior on social media and their personality traits. One is that deep learning algorithms can extract high-level abstract representations of data layer by layer by exploiting arithmetical operations and the architecture of the model; they have been successfully applied in computer vision, object recognition and other research areas. The other is that the scale of social network data is huge, and deep learning algorithms can meet the computational demands of big data. Given all this, we carry out a preliminary study on constructing a micro-blog linguistic representation for personality prediction based on a deep learning algorithm in this paper.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Related Work</ns0:head><ns0:p>At present, many researchers have paid attention to the correlation between users' Internet behaviors and their personality traits. Qiu et al. <ns0:ref type='bibr' target='#b21'>[19]</ns0:ref> investigated the relationship between tweets delivered on Twitter and users' personality, and found that some personality characteristics such as openness (O), extraversion (E) and agreeableness (A) are related to specific words used in tweets.</ns0:p><ns0:p>Similarly, Vazire et al. <ns0:ref type='bibr' target='#b26'>[23]</ns0:ref> discovered a strong relationship between users' specific Internet behaviors and their personality by studying users' behaviors on personal websites. These conclusions can be explained as personality not only influencing people's daily behaviors, but also playing an important role in users' Internet behaviors. With the rise of social media, more and more researchers have begun to analyse users' personality traits through social network data with the help of computer technology. Sibel et al. <ns0:ref type='bibr' target='#b23'>[21]</ns0:ref> predicted users' personality based on operational behaviors on Twitter using a linear regression model. Similarly, in <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, Jennifer et al. also used a regression algorithm to build a personality prediction model, but they considered both operational behaviors and linguistic behaviors. Ana et al. <ns0:ref type='bibr' target='#b15'>[14]</ns0:ref> used a semi-supervised method to predict personality based on the attributes of linguistic behaviors extracted from tweets. Alvaro et al. <ns0:ref type='bibr' target='#b19'>[17]</ns0:ref> built a personality prediction model from users' social interactions on Facebook with machine-learning methods such as classification trees.</ns0:p><ns0:p>Although many researchers have utilized machine learning methods to build personality prediction models and have obtained some achievements, there are still some disadvantages that need to be addressed. 
First, in state of art methods, the behavior analysis method and behavior attributes extraction methods are mostly based on some experiential heuristic rules which are set artificially. The behavior attributes extracted manually by statistical methods may not be able to describe characteristics of behaviors comprehensively and objectively. Second, supervised and semi-supervised behavior feature extraction methods need a certain number of labeled data, but in the actual application, obtaining a large number of labeled data is difficult, time-consuming and high cost. So supervised and semi-supervised feature extraction methods are not suitable for a wide range of application.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Deep Learning</ns0:head><ns0:p>Deep learning is a set of algorithms in machine learning <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, which owns a hierarchical structure in accordance with the biological characteristics of human brain. Deep learning algorithm is originated in artificial neural network, and it has been applied successfully in many artificial intelligence applications, such as face recognition <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, image classification <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>, natural language processing <ns0:ref type='bibr' target='#b24'>[22]</ns0:ref> etc.. Recently, researchers are attempting to apply deep learning algorithm to other research field.</ns0:p></ns0:div> <ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Lin at el. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> [13] used Cross-media Auto-Encoder (CAE) to extract feature vector and identified users' psychological stress based on social network data. Due to the multi-layer structure and mathematics algorithm designed, deep learning algorithm can extract more abstract high-level representation from low-level feature through multiple non-linear transformations, and discover the distribution characteristics of data. In this paper, based on deep learning algorithm, we could train unsupervised linguistic behavior feature learning models for five factors of personality respectively.</ns0:p><ns0:p>Through the feature learning models, the LRFV corresponding to each trait of personality can be learned actively from users' contents published on Sina Micro-blog. The LRFV could describe the users' linguistic behavior more objectively and comprehensively, and improve the accuracy of the personality prediction model.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>DATASET</ns0:head><ns0:p>In this paper, we utilize deep learning algorithm to construct unsupervised feature learning model which can extract Linguistic Representation Feature Vector (LRFV) from users' contents published on Sina Micro-blog actively and objectively. Next, five personality prediction models corresponding to five personality traits are built using linear regression algorithm based on LRFV. We conduct preliminary experiments on relatively small data as pre-study of exploring the feasibility of using deep learning algorithm to investigate the correlation between user's social network behaviors and his personality.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Data collection</ns0:head><ns0:p>Nowadays, users prefer to expressing their attitudes and feelings through social network. 
Therefore, the linguistic information on social network is more significant for analysing users' personality characteristics. In this paper, we pay more attention to the correlation between users' linguistic behaviors on Sina Micro-blog and their personalities. According to the latest statistics, by the end of Dec. 2014, the total number of registered users of Sina Micro-blog has exceeded 500 million. On the 2015 spring festival's eve, the number of daily active users is more than 1 billion firstly. It can be said that Sina Micro-blog is one of the most popular social network platforms in China currently.</ns0:p><ns0:p>Similar to Facebook and Twitter, Sina Micro-blog users can post blogs to share what they saw and heard. Through Sina Micro-blog, people express their inner thoughts and ideas, follow friends or someone they want to pay attention to, and comment or repost blogs they interested in or agreed with.</ns0:p><ns0:p>For data collection, we firstly released the experiment recruitment information on Sina Microblog. Based on the assumption that the users are often express themselves on social media platform, we try to construct personality prediction model. So, it is required that for one person, there have to be enough Sina Micro-blog data. On the other hand, some participants might provide their deprecated or deputy account of social network rather than the commonly used and actual accounts when participating our experiment. Such data are unfaithful. Considering that, we set an 'active users' selection criteria for choosing the effective and authentic samples.</ns0:p><ns0:p>Our human study has been reviewed and approved by the Institutional Review Board, and the Protocol Number is 'H09036'. In totally, 2385 volunteers were recruited to participate in our experiments. They have to accomplished the Big Five questionnaire <ns0:ref type='bibr' target='#b27'>[24]</ns0:ref> Manuscript to be reviewed Computer Science is consist of the users' all blogs and their basic status information, such as age, gender, province, personal description and so on. The whole process of subjects recruitment and data collection lasted nearly two months. Through the preliminary screening, we obtained 1552 valid samples finally.</ns0:p><ns0:p>When filtering invalid and noisy data, we designed some heuristic rules as follows:</ns0:p><ns0:p>&#8226; If the total number of one's micro-blogs is more than 500, this volunteer is a valid sample.</ns0:p><ns0:p>This rule can ensure that the volunteer is an active user.</ns0:p><ns0:p>&#8226; In order to ensure the authenticity of the results of questionnaire, we set several polygraph questions in the questionnaire. The samples with unqualified questionnaires were removed.</ns0:p><ns0:p>&#8226; When the volunteers filled out the questionnaire online, the time they costed on each question were recorded. If the answering time was too short, the corresponding volunteer was considered as an invalid sample. 
In our experiments, we set the the answering time should longer than 2 seconds.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Data for linguistic behavior feature learning</ns0:head><ns0:p>Through iteration and calculation layer by layer, deep learning algorithm can mine the internal connection and intrinsical characteristics of linguistic information on social network platform.</ns0:p><ns0:p>Assuming the text in micro-blogs could reflect users' personality characteristics, for each trait of personality, we build a linguistic behavior feature learning model based on deep learning algorithm to extract the corresponding LRFV from users' expressions in Sina Micro-blog. Linguistic Inquiry and Word Count (LIWC) is a kind of language statistical analysis software, which has been widely used by many researches to extract attributes of English contents from Twitter and Facebook <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. In order to meet the demands of simple Chinese semantic analysis, we developed a simplified Chinese psychological linguistic analysis dictionary for Sina Micro-blog (SCLIWC) <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. This dictionary was built based on LIWC 2007 <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref> and the traditional Chinese version of LIWC (CLIWC) <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. Besides referring to the original LIWC, we added five thousand words which are most frequently used in Sina Micro-blog into this dictionary. The words in dictionary are classified into 88 categories according to emotion and meaning, such as positive word, negative word, family, money, punctuation etc. Through analysis and observation, we found that in some factors of personality, users of different scores show great differences in the number of using words belonging to positive emotion, negative emotion and some other categories in the dictionary. According to SCLIWC <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, the users' usage degree of words in blogs could be computed in 88 categories. In order to obtain the usage characteristics of social media text in the temporal domain, we divide the time by week firstly. For the i th word category of SCLIWC, the usage frequency within the j th week f i j (i=1,2,. . . ,88) is calculated by Equation <ns0:ref type='formula'>1</ns0:ref>, in which, i denotes the serial number of category, and j denotes the serial number of week. We collect all the text published in Sina Micro-blog during recent three years (Jun.2012&#732;Jun.2015), and there are 156 weeks in total.</ns0:p><ns0:p>So, corresponding to each category of SCLIWC, the vector</ns0:p><ns0:formula xml:id='formula_0'>f i = { f i 1 , f i 2 , . . . , f i 156 } is the digital</ns0:formula><ns0:p>representation of the i th category in temporal domain.</ns0:p><ns0:formula xml:id='formula_1'>f i j =</ns0:formula><ns0:p>T he number o f words belongs to the i th category o f SCLIWC in j th week T he total number o f words in blogs in j th week (1)</ns0:p></ns0:div> <ns0:div><ns0:head>4/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Then, we utilize Fast Fourier Transform(FFT) <ns0:ref type='bibr' target='#b17'>[15]</ns0:ref> to obtain the varying characteristics of social media text usage in temporal space. 
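To make these two preprocessing steps concrete, the sketch below (Python with NumPy, which the paper does not specify; the function names and the layout of the SCLIWC lexicon are hypothetical) shows the weekly category-frequency computation of Equation 1 together with the FFT amplitude selection described in the following paragraphs.

```python
import numpy as np

def weekly_category_frequencies(week_tokens, category_lexicon, n_categories=88):
    """Equation 1: f_ij = (words of SCLIWC category i in week j) / (total words in week j).
    week_tokens: one token list per week (156 weeks in the paper's setup).
    category_lexicon: dict word -> iterable of category indices (hypothetical layout)."""
    n_weeks = len(week_tokens)
    f = np.zeros((n_categories, n_weeks))
    for j, tokens in enumerate(week_tokens):
        total = max(len(tokens), 1)              # guard against empty weeks
        for word in tokens:
            for i in category_lexicon.get(word, ()):
                f[i, j] += 1.0
        f[:, j] /= total
    return f

def fft_amplitude_features(f, k=8):
    """Keep the k largest FFT amplitudes of each category's weekly series and
    concatenate them (88 categories x 8 amplitudes = 704 values per user)."""
    amplitudes = np.abs(np.fft.fft(f, axis=1))   # amplitude spectrum per category
    top_k = -np.sort(-amplitudes, axis=1)[:, :k] # k largest amplitudes, descending
    return top_k.ravel()
```

Under these assumptions each user yields an 88 x 156 frequency matrix, and keeping the eight largest amplitudes per category gives the 704-dimensional linguistic vector described below.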
Fourier Transform is a special integral transform, which could convert the original temporal signal into frequency domain signal which is easily analyzed. FFT is the fast algorithm of Discrete Fourier Transform (DFT), defined by</ns0:p><ns0:formula xml:id='formula_2'>X(k) = DFT [x(n)] = N&#8722;1 &#8721; n=0 x(n)W kn N , k = 0, 1, . . . , N &#8722; 1<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>W N = e &#8722; j 2&#960; N<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In order to extract the temporal information from massive high-dimensional digital vectors, Fourier time-series analysis is considered. Concretely, we conduct FFT for each vector. Through FFT, the amplitudes calculated include frequency information, and former 8 maximum amplitudes are selected to constitute a vector as the representation of each word category. Finally, linking the vectors of each category in series, we can obtained a linguistic vector of 704 length corresponding to each user ID.</ns0:p><ns0:p>In our experiment, we use 1552 users' blogs published in 3 years as data for preliminary study.</ns0:p><ns0:p>Each user's linguistic behavior is represented as vector form through FFT based on SCLIWC.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Data for personality prediction</ns0:head><ns0:p>In order to verify the deep learning algorithm is an effective method for extracting the representation of user's Sina Micro-blog linguistic behaviors, we build personality prediction model based on linguistic behavior feature vectors. The personality prediction model is constructed by linear regression algorithm. For each volunteer, five linguistic behavior feature vectors corresponding to five traits of personality are obtained by feature learning models respectively. The training process of personality prediction model is supervised, so users' five scores of five personality traits in the Big Five questionnaire are taken as their labels of the corresponding linguistic behavior feature vectors.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Unsupervised feature learning based on Stacked Autoencoders</ns0:head><ns0:p>Feature learning can be seen as a process of dimensionality reduction. In order to improve the computational efficiency, for all traits of personality, we utilize the relatively simpler form of artificial neural network, autoencoder <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Fig <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows the basic structure of an autoencoder. Basiclly, for an autoencoder, the input and output own the same dimensions, both of them can be taken as X, but through mathematical transformation, the input and output may be not completely equal. In Fig <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>,</ns0:p><ns0:p>X denotes input and X denotes output. The variable in hidden layer Y is encoded through X by Equation <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>. Manuscript to be reviewed In Equation <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>, {W, b} are parameters which can be obtained through training. s(z) is the Sigmoid activation function of hidden layers which is defined in Equation <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>. 
In addition, a reconstructed vector X in input vector space could be obtained by mapping the result of hidden layer Y back through a mapping function,</ns0:p><ns0:formula xml:id='formula_4'>Y = f &#952; (X) = s(W T X + b) = s( n &#8721; i=1 W i x i + b)<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>s(z) = 1 1 + exp(&#8722;z)<ns0:label>(5</ns0:label></ns0:formula><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_6'>X = g &#952; (Y ) = s (W T Y + b ) = s( n &#8721; i=1 W i y i + b )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>For an autoencoder, if we want the mapping result Y is another representation of input X, it is assumed that the input X and the reconstructed X are the same. According to this assumption, the training process of an autoencoder could be conducted and the parameters of autoencoder are adjusted according to minimize the error value L between X and X , as shown in the Equation <ns0:ref type='formula' target='#formula_7'>7</ns0:ref>.</ns0:p><ns0:p>Due to the error is directly computed based on the comparison between the original input and the reconstruction obtained, so the whole training process is unsupervised. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_7'>L(X;W, b) = X &#8722; X 2<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Computer Science performance of prediction model would be set as the optimal number of layer. Then, we take the output of the last layer as the abstract representation of the original linguistic behavior information. Finally, based on the Big Five questionnaire, for each user, we could obtained five scores (S A , S C , S E , S N , S O ) corresponding to 'A', 'C', 'E', 'N', 'O' five factors respectively. These scores are used to label corresponding linguistic behavior feature vectors for personality prediction models.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Personality prediction model based on linear regression</ns0:head><ns0:p>Personality prediction is a supervised process. The linguistic behavior feature vectors are labeled by the corresponding scores of the Big Five questionnaire. For five traits of personality, we utilized the linear regression algorithm to build five personality prediction models in totally.</ns0:p><ns0:p>Take one trait of personality as an example, the linguistic behavior feature vectors are represented by</ns0:p><ns0:formula xml:id='formula_8'>X = {X i | X i = (x i1 , x i2 , . . . , x im )} n i=1 ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>in which, n is the number of samples, n = 1552, and m denotes the number of dimensions of the input vector. The scores of the Big Five questionnaire are taken as the labels, Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_9'>Y = {y i } n i=1<ns0:label>(</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The general form of linear regression is</ns0:p><ns0:formula xml:id='formula_10'>y i = &#969; 1 x i1 + &#969; 2 x i2 + . . . + &#969; m x im + &#949; i , (i = 1, 2, . . . , n)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>We trained five personality prediction models based on linear regression algorithm using corresponding linguistic behavior feature vectors and labels.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>In Experiments, we collect 1552 users' Sina Micro-blog data in total. Users' linguistic behaviors are quantified based on SCLIWC, and the temporal characteristics are calculated through FFT. 
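As a concrete reference for the unsupervised training objective of Equations 4-7 summarized above, the following is a minimal single-layer autoencoder sketch in Python/NumPy with sigmoid activations and one full-batch gradient step on the reconstruction error. It is an illustrative sketch only, using a row-per-sample convention and assuming inputs scaled to [0, 1]; it is not the authors' implementation, and in the paper such layers are stacked, each trained on the hidden output of the previous one, with the depth chosen by prediction performance.

```python
import numpy as np

def sigmoid(z):                                     # Equation 5
    return 1.0 / (1.0 + np.exp(-z))

class AutoencoderLayer:
    """One autoencoder layer: encode (Eq. 4), decode (Eq. 6), reconstruction loss (Eq. 7)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.01, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)

    def encode(self, X):
        return sigmoid(X @ self.W1 + self.b1)       # hidden representation Y

    def decode(self, Y):
        return sigmoid(Y @ self.W2 + self.b2)       # reconstruction X'

    def train_step(self, X, lr=0.1):
        """One full-batch gradient step that reduces ||X - X'||^2 (Eq. 7);
        X is assumed to be min-max scaled to [0, 1]."""
        Y = self.encode(X)
        Xr = self.decode(Y)
        # backpropagate the squared reconstruction error through both sigmoids
        delta2 = 2.0 * (Xr - X) * Xr * (1.0 - Xr)
        delta1 = (delta2 @ self.W2.T) * Y * (1.0 - Y)
        n = X.shape[0]
        self.W2 -= lr * (Y.T @ delta2) / n
        self.b2 -= lr * delta2.mean(axis=0)
        self.W1 -= lr * (X.T @ delta1) / n
        self.b1 -= lr * delta1.mean(axis=0)
        return np.sum((X - Xr) ** 2)                # current loss L
```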
Then, we utilize deep learning algorithm to construct feature learning models, which could extract objective and comprehensive representation of linguistic behaviors from the temporal sequence. Finally, personality prediction model is trained by linear regression algorithm based on linguistic behavior feature vectors.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Evaluation measures</ns0:head><ns0:p>In this paper, we conducted preliminary study about constructing Micro-blog behavior representation for predicting social media user's personality. The five factors of personality are all tested. We use In Equation <ns0:ref type='formula'>12</ns0:ref>, i is the sequence number of sample and n is the total number of samples, n = 1552.</ns0:p><ns0:p>In the Big Five questionnaire used in our experiments, there are 44 questions in all. The score ranges of 'A', 'C', 'E', 'N', 'O' are <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr'>45]</ns0:ref>, <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr'>40]</ns0:ref>, <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr'>45]</ns0:ref>, <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr'>40]</ns0:ref>, <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr'>50]</ns0:ref> respectively. The value of RMSE shows the average difference between our prediction results and the scores of questionnaire.</ns0:p><ns0:p>The smaller is the value of RMSE, the better is the performance of prediction model.</ns0:p><ns0:formula xml:id='formula_11'>r = Cor(Y,Y ) = Cov(Y,Y ) Var(Y )Var(Y )<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>RMSE = &#8721; n i=1 (y i &#8722; y i ) 2 n (12)</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2'>Experiment results</ns0:head><ns0:p>In comparison experiments, we utilized five different kinds of attributes to train and build the personality prediction model respectively. The five kinds of attributes including the attributes selected Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>by artificial statistical method without feature selection (denoted by Attribute 1), the attributes selected from Attribute 1 by Principal Component Analysis (PCA) <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> (denoted by Attribute 2), the attributes selected from Attribute 1 by Stepwise(denoted by Attribute 3), the attributes selected from Attribute 1 by Lasso (denoted by Attribute 4) and linguistic behavior feature vector obtained based on Stacked Antoencoders (SAE) (denoted by Attribute SAE). PCA is a kind of unsupervised feature dimension reduction method, and Stepwise is usually used as a kind of supervised feature selection method. LASSO is a regression analysis method which also perform feature selection.</ns0:p><ns0:p>For different kinds of attributes, the personality prediction models are all built by linear regression algorithm. In order to obtain the stable model and prevent occurrence of overfitting, for each factor of personality, we use 10-fold cross validation and run over 10 randomized experiments. Finally, the mean of 10 randomized experiments' results is recorded as the final prediction result. The comparison of prediction results of five personality factors using three kinds of attributes are shown in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref>. Tables <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the dimensionality of different kinds of feature vectors. 
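To make the evaluation protocol concrete, the sketch below scores a 10-fold cross-validated linear regression with Pearson's r (Equation 11) and RMSE (Equation 12); the random feature matrix and scores are placeholders for the 1552 real samples, and scikit-learn is assumed here only as convenient tooling, not as the authors' implementation.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def pearson_r(y_true, y_pred):
    # Equation 11: covariance normalized by the two standard deviations
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def rmse(y_true, y_pred):
    # Equation 12: root mean squared error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rng = np.random.default_rng(0)
X = rng.random((1552, 400))          # placeholder SAE feature vectors (e.g. factor "A")
y = rng.uniform(9, 45, size=1552)    # placeholder questionnaire scores in the "A" range [9, 45]

rs, errs = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rs.append(pearson_r(y[test_idx], pred))
    errs.append(rmse(y[test_idx], pred))

print("mean r:", np.mean(rs), "mean RMSE:", np.mean(errs))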
The letters in subscript 'a', 'c', 'e', 'n', 'o' indicate different personality factors respectively. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>This study explore the relevance between users' personality and their social network behaviors. The feature learning models are built to unsupervisedly extract the representations of social network linguistic behaviors. Compared with the attributes obtained by some supervised behavior feature extraction methods, the LRFV is more objective, efficient, comprehensive and universal. In addition, based on LRFV, the accuracy of the personality prediction model could be improved.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>The performance of personality prediction model</ns0:head><ns0:p>The results in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref> show that the linguistic behavior feature vectors learned through Stacked Autoencoders perform better than other attributes in both Pearson correlation coefficient and RMSE. When using Attribute SAE, the Pearson correlation coefficients r e = 0.2583, which represent a small correlation. For'E', 'N', 'C' and 'O', r e = 0.3503, r n = 0.3245, r c = 0.4001 and r o = 0.4238, which means that the prediction results of 'E', 'N', 'C' and 'O' correlate with the results of questionnaire moderately. It is concluded that personality prediction based on the linguistic behavior in social network is feasible. Besides, the traits of conscientiousness and openness could be reflected in the network linguistic behavior more obviously.</ns0:p><ns0:p>Compared with other feature extraction methods, our proposed method performs better. When using the original feature vector (Attributes 1), the prediction results r are all less than 0.2. When using another kind of unsupervised feature dimension reduction method (Attributes 2), except for 'C', others are also less than 0.2. Attributes 3, which is obtained by using a kind of supervised feature selection method, the prediction results r are also not ideal. Similarly, considering RMSE of every personality traits, the prediction model also obtain better results based on the linguistic behavior feature vectors.</ns0:p><ns0:p>Besides, we compared the time and memory consuming of prediction when using SAE and PCA to reduce the dimensionality of features respectively in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The experiments were conducted on a DELL desktop with an Intel Core 3.30 GHz CPU and 12G memories. The average time consuming denotes the average time cost for predicting one personality factor of one sample. The average memory consuming denotes the memory usage percentage when running the prediction model.</ns0:p><ns0:p>Although PCA performed better in the time and memory consuming, but the prediction results of linguistic behavior feature vectors were outstanding. Usually, the high-powered computing server could offset the deficiency of time and memory consuming. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Parameters selection</ns0:head></ns0:div> <ns0:div><ns0:head n='5.2.1'>Activation function</ns0:head><ns0:p>There are many kinds of activation function in neural network, such as Sigmoid, Tanh, Softmax, Softplus, ReLU and Linear. Among them, Sigmoid and Tanh are used commonly. 
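For reference, the two candidate activations differ only in the nonlinearity applied after the affine map of the hidden layer, as the short sketch below illustrates; the weights and inputs are made up for the example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # outputs in (0, 1)

def hidden_layer(X, W, b, activation):
    # Same affine transform, swappable nonlinearity: sigmoid or np.tanh (outputs in (-1, 1))
    return activation(X @ W + b)

rng = np.random.default_rng(0)
X = rng.random((4, 704))
W, b = rng.normal(scale=0.01, size=(704, 400)), np.zeros(400)

Y_sigmoid = hidden_layer(X, W, b, sigmoid)
Y_tanh = hidden_layer(X, W, b, np.tanh)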
In experiment, we utilized both of them to construct the feature learning model, and the comparative results (Tables 5) Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>showed that when using Sigmoid as activation function of hidden layers, the prediction results are a bit better. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.2'>the dimensionality of linguistic behavior feature vector</ns0:head><ns0:p>For each personality trait, the dimensionality of linguistic behavior feature vector is set according to the optimal result of prediction model obtained from repeated experiments, and the comparison of r and RMSE when using linguistic behavior feature vectors with different dimensionality are presented in Figs 4(a) and 4(b) respectively. Pearson correlation coefficient reflects the correlation degree between two variables. If the change tendencies of two variables are more similar, the correlation coefficient is higher. Root Mean Square Error reflects the bias between the real value and prediction value. For a dataset, the Pearson correlation coefficient and Root Mean Square Error may not be direct ratio. In practical applications, the trend of the psychological changes is more necessary. So, when adjusting the optimal parameters, we give priority to Pearson correlation coefficient. For 'A', 'C' and 'N', prediction models perform better when the dimensionality of feature vector is 400. For 'E' and 'O', we could obtain the better results when the dimensionality of feature vector is 300. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>Differences in modeling performance across personality traits</ns0:head><ns0:p>Through analysing the results of experiments, we summarize that Agreeableness correlate with users' social network linguistic behaviors relative weakly than the other personality traits. The correlation between openness and users' social network linguistic behaviors is highest of all. We could identify whether the users own higher scores in openness or not through their blogs published in social network platform. Probably because the person with high scores in openness usually prefer expressing their thoughts and feelings publicly. Similarly, conscientiousness is moderately correlate with social network linguistic behaviors. And for conscientiousness, there are significant differences of using the words belonging to the categories of family, positive emotion and so on. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>In this paper, we utilized deep learning algorithm to investigate the correlations between users' personality traits and their social network linguistic behaviors. Firstly, the linguistic behavior feature vectors are unsupervisedly extracted using Stacked Autoencoders models actively. Then, the personality prediction models are built based on the linguistic behavior feature vectors by linear regression algorithm. Our comparison experiments are conducted on five different kinds of attributes, and the results show that the linguistic behavior feature vectors could improve the performance of personality prediction models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>online and authorized us to obtain the public personal information and all blogs. Collecting volunteers' IDs of Sina Micro-blog, we obtained their micro-blog data through Sina Micro-blog API. The micro-blog data collected 3/13 PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>) 5/ 13 PeerJ</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The basic structure of an autoencoder.</ns0:figDesc><ns0:graphic coords='8,216.00,70.86,180.00,145.03' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 ./ 13 PeerJ</ns0:head><ns0:label>213</ns0:label><ns0:figDesc>Figure 2. The training principle diagram of an autoencoder.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Fig 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig 3 shows the structure of our model. For different personality factors, the number of layers and the number of units in each layer are different. The details are presented on the left of Fig 3. For 'A', 'C' and 'N', there are one hidden layers in the SAE, and the feature learning model are 3 layers in total. For 'E' and 'O', there are two hidden layers in the SAE. In our experiments, 1552 users' content information of Sina Micro-blog are used as training dataset, and the unsupervised feature learning models corresponding different personality traits are trained respectively. That is, we could obtain five feature learning models in total. For each trait, there will be corresponding linguistic behavior feature vectors extracted from social network behavior data actively.</ns0:figDesc><ns0:graphic coords='9,90.01,227.88,431.99,216.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The deep structure of our prediction model. The left table shows the details of SAE of different personality factors.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>9 ) 7 / 13 PeerJ</ns0:head><ns0:label>9713</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Pearson product-moment correlation coefficient (r) and Root Mean Square Error (RMSE) to measure the quality of different behavior feature representation methods. The computational formulas of two measurements are shown in Equation 11 and 12 respectively. In Equation 11, Cov(Y,Y ) denotes the covariance of Y and Y , and Var(Y ) and Var(Y ) represents the variances of the real score Y and prediction score Y respectively. when r &gt; 0, it means the results of questionnaire and prediction model are positive correlation. On the contrary, r &lt; 0 means negative correlation. The absolute value is greater, the higher is the degree of correlation. In psychology research, we use Cohen's conventions [4] to interpret Pearson product-moment correlation coefficient. r &#8712; [0.1, 0.3) represent a weak or small association and r &#8712; [0.3, 0.5) indicates a moderate correlation between two variables.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The comparison of prediction results using linguistic feature vectors with different dimensionality. (a)The comparison of r. 
(b)The comparison of RMSE.</ns0:figDesc><ns0:graphic coords='13,90.00,388.52,432.01,144.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>11 / 13 PeerJ</ns0:head><ns0:label>1113</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The comparison of prediction results in Pearson correlation coefficient</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell>r a</ns0:cell><ns0:cell>r c</ns0:cell><ns0:cell>r e</ns0:cell><ns0:cell>r n</ns0:cell><ns0:cell>r o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='2'>0.1012 0.1849</ns0:cell><ns0:cell cols='2'>0.1044 0.0832</ns0:cell><ns0:cell>0.181</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='5'>0.1106 0.2166 0.1049 0.1235 0.1871</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>Attributes 3 (Stepwise) 0.1223 0.2639 0.1698 0.1298 0.2246</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 4 (Lasso)</ns0:cell><ns0:cell cols='5'>0.1209 0.2068 0.0788 0.0934 0.1136</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell cols='3'>0.2583 0.4001 0.3503</ns0:cell><ns0:cell cols='2'>0.3245 0.4238</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The comparison of prediction results in RMSE</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell cols='4'>RMSE a RMSE c RMSE e RMSE n RMSE o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='4'>5.6538 6.1335 4.9197 6.5591 7.0195</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='4'>5.1628 5.6181 5.6781 5.9426 6.4579</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 3 (Stepwise)</ns0:cell><ns0:cell cols='2'>4.8421 5.3495</ns0:cell><ns0:cell>5.276</ns0:cell><ns0:cell>5.6904 6.1079</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 4 (Lasso)</ns0:cell><ns0:cell cols='4'>5.8976 6.7471 6.4940 5.4241 6.0938</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell>4.7753</ns0:cell><ns0:cell>5.339</ns0:cell><ns0:cell cols='2'>4.8043 5.6188 5.1587</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The comparison of dimensionality of different feature vector</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell>D a</ns0:cell><ns0:cell>D c</ns0:cell><ns0:cell>D e</ns0:cell><ns0:cell>D n</ns0:cell><ns0:cell>D o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='5'>704 704 704 704 704</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='5'>250 203 250 310 250</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 3 (Stepwise)</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell cols='5'>400 400 300 400 300</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>9/13</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The comparison of time and memory consuming of different feature vector</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell cols='2'>Average time consuming Average memory consuming</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell>3ms</ns0:cell><ns0:cell>56%</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell>12ms</ns0:cell><ns0:cell>81%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The Comparison of prediction results when using different activation function</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell>r a</ns0:cell><ns0:cell>r c</ns0:cell><ns0:cell>r e</ns0:cell><ns0:cell>r n</ns0:cell><ns0:cell>r o</ns0:cell></ns0:row><ns0:row><ns0:cell>Sigmoid</ns0:cell><ns0:cell cols='5'>0.2583 0.4001 0.3503 0.3245 0.4238</ns0:cell></ns0:row><ns0:row><ns0:cell>Tanh</ns0:cell><ns0:cell cols='5'>0.2207 0.3338 0.3216 0.2696 0.3503</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='13'>/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:1:0:NEW 12 Jun 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"12th JUN, 2016 RE:Manuscript Number: #CS-2016:03:9825:0:1:REVIEW Manuscript Title: Deep learning for constructing microblog behavior representation to identify social media user's personality Dear Editor, Thank you for your letter regarding our manuscript titled “Deep learning for constructing microblog behavior representation to identify social media user's personality”. We also thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. And in the tracked changes manuscript, the changes are highlighted in bold. Outlined below is our response to the reviewer’s suggestion. Reviewer 1 Basic reporting: The paper is basically well-organized and well-written. However, there are several grammar mistakes I noticed that lead to unclarity of readers' understanding. They are listed as follows: 1. In abstract, 'which could unsupervised extract Linguistic Representation Feature Vector (LRFV) from text published on Sina Micro-blog actively.' where 'unsupervised' should be 'unsupervisedly', or the authors can change the structure of the sentence. Same goes to others. Agreed. I have corrected this in the revised manuscript. 2. 'Collecting volunteers’ IDs of Sina Micro-blog, we crawled their micro 119 blog data through Sina Micro-blog API. The micro-blog data collected consists the users’ all blogs 120 and their basic status information, such as...' Obviously 'crawled' should be replaced with words like 'obtained' and 'consists' should be 'is consist of' since consist is an intransitive verb. Agreed. I have corrected this in the revised manuscript. 3. In 5.1 'there are weak correlation...' should be uncorrected use. Agreed. I have modified this sentence in the revised version. 4. I think you need to ask a native for revision of English. Thanks for your suggestion. We have further elaborated in English in our revised manuscript. Experimental design: 1. The authors describe their methods of study pretty clearly and successfully. The key point of this study lies largely on how good is the 'dictionary'. I actually looked into the 'LIWC' 1 years ago. This software is based on English key word counting process and neglects so many important grammars. Thus the correct rate of it is just so so. In our experimental design, we quantified the semantic information based on SCLIWC, which is a simplified Chinese psychological linguistic analysis dictionary for Sina Micro-blog. SCLIWC is developed according to LIWC dictionary and CLIWC dictionary by our research team. Considering there are lots of neologism and cyber-words in microblog, so we also added the high frequency words appeared in Sina Micro-blog into SCLIWC. The validity of SCLIWC has been verified in our previous psychological research work. The testing was conducted on some key categories, such as emotional words, cognitive, personal concerns and so on. Concretely, we randomly selected 100 users of Sina Micro-blog, and took their microblogs within one month as experimental samples. Three judges (native speaker of Simplified Chinese) were asked to evaluate individual levels of each categories through reading these posts by a 7-point Likert-type scale (1=Extremely low to 7=Extremely high). For each categories, the Pearson product-moment correlation coefficient between artificial evaluation results and the statistical results based on SCLIWC are calculated and the results showed they are high degree of correlation. This part of the work have been submitted to psychological journal. 
Your comment is valuable. It is indeed defective to represent semantic information just using key word counting. The important grammars are ignored. This paper reports our first stage of the research, and shows that it is feasible to constructing microblog behavior representation to identify personality based on deep learning algorithm preliminary. In the future, the accurate and comprehensive semantic feature extraction method will be one of our concerns. 2. I noticed your decryption of your 'dictionary'. My question is how precise your algorithm performs, and in what way is yours different with previous methods? The semantic information quantization was conducted based on SCLIWC. It is a simplified Chinese psychological linguistic analysis dictionary for Sina Micro-blog developed by our research team. The validity of SCLIWC has been verified in our previous psychological research work. And the results obtained based on SCLIWC are highly correlated with the results of artificial evaluation. The specific details of verification are described in the previous answer. Comparing with previous methods, the differences of ours could be summarized as the following: 1. We utilize deep learning algorithm to construct Linguistic Representation Feature Vector (LRFV) for personality identification. In the existing psychology research, behavior and linguistic features extraction mostly depends on artificial means by setting heuristic rules of feature extraction based on experience and observation. In our method, we applied machine learning method in the traditional psychology research, and attempted to utilize deep learning to construct an unsupervised micro-blog linguistic feature learning model. 2. The temporal attributes are considered in feature extraction. In order to describe the changes of usages of characters in the temporal domain,the semantic information during three years were quantified and each week was taken as a quantized period. Then, Fast Fourier Transform (FFT) is used to obtain the varying characteristics of social media text usage in temporal space. 3. Also I would like to know why you choose linear regression for prediction. First, when analyzing multi-factor model, regression analysis is more simple and convenient. And the computation efficiency of Linear Regression is higher. Besides, other methods which could be used in continuous value prediction including Logistic Regression, SVM and Classification and Regression Tree (CART) were tried in the experimental design phase. These models were of about the same performance. So, we choose Linear Regression with higher efficiency finally. Validity of the findings: How you would consider the performance of the proposed algorithm valid? I have seen enough description on this part. In the traditional psychology research, the major method of personality measurement is self-report method. This kind of methods rely on the professionals too much, so it is difficult to satisfy “generalizability”, “objectivity” and “timeliness”. In this paper, we tried to predict the Big Five personality based on machine learning method. In experiment, we took the Pearson product-moment correlation coefficient to measure the linear correlation between the results obtained by prediction model and the results obtained by questionnaire. We use Cohen's conventions [cohen1988] to interpret Pearson product-moment correlation coefficient. 
A correlation coefficient of .10 is thought to represent a weak or small association; a correlation coefficient of .30 is considered a moderate correlation; and a correlation coefficient of .50 or larger is thought to represent a strong or large correlation. For“E”, “N”, “C” and “O” factors, the results of our prediction model moderately correlate with the results of questionnaire. It is shown that the personality prediction model could be used in automatically personality identification on a large scale instead of questionnaire, which provides a new perspective on psychometrics. [cohen1988] Cohen, J. (1988). Statistical power analysis for the behavioral sciences. vol. 2. Lawrence Earlbaum Associates, Hillsdale, NJ. Comments for the Author: The paper is basically well-organized and well-written with interesting results and findings. Still it need some kind of revision. Details can be found in separated parts. We are truly grateful for your comments and suggestions. I have revised the manuscript to address your concerns. Reviewer 2 Basic reporting: 1. The notation might be ambiguous: author first use the word 'dimension' for the 5 traits (line 208), then use 'dimension' to represent the number of the features for prediction. It would be better if two term could be distinguished. Agreed. I have corrected this in the revised manuscript. ‘Factor’ is used to represent 5 traits of personality instead of ‘dimension’. 2. Figure 3 is only a deep architecture of stacked autoencoders. However, authors should include the exact structure that used in their study, instead of a general structure (for example, number of units in each layer, and number of layers). Or at least a table of the network used should be represented. You can find examples in the famous Alex net paper. Thank you for your valuable suggestion. Figure 3 has been modified in the revised manuscript as following. 3. Sample size is relatively small in this setting (1552). With this sample size, deep learning is not so necessary. It is better to explain how and why deep learning works in such small size, compared to other algorithms. That's true. Our sample size is relatively small. During the data collection, we have to recruit the volunteers to participate in our experiments and finish the Big Five questionnaire online. The scores of the questionnaire would be taken as the label of each samples. 1552 Valid data are obtained in three months. So, in the preliminary study of constructing microblog linguistic representation for personality prediction, the dataset is relatively small. In our method, SAE was used to abstract the semantic information and reduce the feature dimension, and the experimental results show that it is feasible and effective. Recently, the unlabeled weibo data of 100,000 users have been downloaded. In the future, our research work would focus on the atomic level quantization method for user's unstructured micro-blog behavior and text, and the improvement of feature learning model on a large-scale dataset. Experimental design: 1. In line 215-216, Authors trained five personality prediction models based on linear regression algorithm using corresponding linguistic behavior feature vectors and labels. Does these mean the five linear models are trained separately? This might lose the information between five dimensions. Yes, the five linear models are trained separately corresponding to five personality factor respectively. Five factor theory is developed by statistical methods (factor analysis). 
The theory required that the correlation between each other personality traits must be small and the correlation degree is very large between each parts within the same traits. So, in Big Five model, there are few influence relation among five personality factors. But your common is reasonable. In the future, we could try to construct personality prediction model based on multi-task learning model, and compare the performances. 2. Line 275 mentions sigmoid function is used as activation function of hidden layers. There are many choice for the nonlinearity units in the network. Is there any experiment with other units, or are there any literatures suggest the preference? There are many kinds of activation function in neural network, such as Sigmoid, Tanh, Softmax, Softplus, ReLU and Linear. Among them, Sigmoid and Tanh are used commonly. In experiment, we utilized both of them to construct the feature learning model, and the comparative results showed that when using Sigmoid as activation function of hidden layers, the prediction results are a bit better. The comparative results are shown in (Table 4). The corresponding description also have been added in the Parameters selection. Recently, ReLU and its variants are popular. We would focus on exploring optimal activation function in the future. 3. Line 124 mentions: If the total number of one’s micro-blogs is more than 500, this volunteer is a valid sample. This is reasonable, but might cause bias in the experiment. Authors might be more careful of this step and make more discussion for the potential outcome of this step. In psychology research, many researchers have demonstrated that users’ social network behaviors and text are correlated with their personality. That is to say, users’ text published on Sina Micro-blog could reflect their personality traits. Based on the findings, in this paper, we assumed the users are often express themselves on social media platform and try to predicting personality through Sina Micro-blog text data based on machine learning methods. So, it is required that for one person, there have to be enough weibo data. Based on this consideration, we set a data selection criteria for data and select “active users”. On the other hand, in the experimental design, we released the experiment recruitment information on Sina Micro-blog and recruited volunteers to participate in our experiments. The volunteers were required to accomplish the Big Five questionnaire online and authorized us to obtain the public personal information and all blogs. But, some participants might provide their deprecated or deputy account of social network rather than the commonly used and actual accounts. Such data are unfaithful. So, we choose the “active users” (one’s number of posts is more than 500) as the valid samples. In this method, if the total number of one’s micro-blogs is less than 500, we assume that it is inadequate or incomplete data for personality prediction, and should be removed. But, your common is valuable. The further study of active users’ selection criteria should be considered in our future work. We have added the explanation and discussion about selecting “active users” in the revised manuscript. 4. The dimensionality of linguistic behavior feature vector is set according to the optimal result of prediction model. Did authors try first generate high dimensional data, then use some statistical method for feature selection, e.g. lasso? 
We have compared the personality prediction results of different attributes, including linguistic behavior feature vector, the original high dimensional data and the attributes obtained by conducting PCA on the generate high dimensional data in Table 1 and Table 2. Thanks for your commons, we supplement our comparison experiments and add the prediction result by using lasso and Stepwise for feature selection in Table 1 and Table 2 in the revised manuscript. The comparison results shows that the performance using linguistic behavior feature vector is better. Validity of the findings: 1. In the section 'Evaluation measures', authors did not mention explicitly what is 'prediction'. Does the data split into training and testing data? Did testing data used in feature extraction? In our method, “Prediction” means inferring one’s personality traits based on his text published on Sina Micro-blog through constructing prediction model. The model is trained using small amounts of data which have been labeled by scores of personality traits. And the labeled data are obtained by acquiring the participants to fill out the Big Five questionnaire. The scores calculated from the answers of Big Five questionnaire are taken as the real value of one’ personality assessment. In feature extraction, the feature learning model constructed is unsupervised. All data are used to train the feature learning model and obtained the optimal parameters. Simultaneously, the output of the feature learning model are the feature extraction results. After obtaining the linguistic behavior feature vector, prediction model is construct. In “Experiment results”, we mentioned that in order to obtain the stable model and prevent occurrence of overfitting, we use 10-fold cross validation to measure the performance of prediction model. The dataset are randomly split into 10 parts. For each such split, when it is used to training the model, the other 9 splits are used as testing data. Finally, the mean of 10 randomized experiments' results is recorded as the final prediction result. So, for constructing prediction model, there are training data and testing data. 2. Authors study the relationship of the number of features and the prediction performance. There is an interesting pattern in figure 4b that 300 is a local minimum, while does not shown in the 4a. Authors might add more discussion of why that happens. In Figure 4a, the vertical axis denotes Pearson correlation coefficients, and in Figure 4b, the vertical axis denotes Root Mean Square Error. The calculation formula of both measurements are shown in Eq(11) and Eq(12) respectively. Pearson correlation coefficient reflects the correlation degree between two variables. If the change tendencies of two variables are more similar, the correlation coefficient is higher. Root Mean Square Error reflects the bias between the real value and prediction value. For a dataset, the Pearson correlation coefficient and Root Mean Square Error may not be direct ratio. We have added the explanation and discussion about Fig4 in the revised manuscript. Reviewer 3 Comments for the Author: In this paper, the authors use deep learning methods to study the social media user's personality problem. Both the topic and technique is quite new. My major comments are as follows: 1. For data collection, linguistic behavior is extracted based on the FFT methods. By applying FFT, we always assume the signal is stationary. However, whether micro-blog data is stationary signal or not is still highly questionable. 
Since micro-blog data does not only depend on user's thoughts, ideas and feelings. It is also strongly influenced by public events, like recently death of college student Zexi Wei raises questions on Baidu's ethics, which typical make the data non-stationary. In psychology research, many researchers have demonstrated that users’ social network behaviors and text are correlated with their personality. That is to say, users’ text published on Sina Micro-blog could reflect their personality traits. For each user of social media, whether he is willing to participate in the discussion about popular events, or whether he prefers to express his ideas and opinions frequently on Sina Micro-blog are also associated with his personality traits. On social media platform, public events or hot news appear frequently, such as the “military parade”, “Qingdao prawns events”, etc. User's responds to different popular events could be taken as his personality traits implied in one’s Sina Micro-blog data. In order to attempt to quantify fluctuations of users on some psychological characteristics, we use FFT to deal with the time series data. In previous studies, we have done some experiments. The Figure (a) showed a user’s usage of a certain category within 35 weeks, and the data are collected in weeks. So, the length of the time series data is 35. After using FFT, 17 spectrum coefficients are obtained as shown in Figure (b). (a) (b) 2. In method, Stacked Autoencoders is used as a deep learning method for dimension reduction, which is an unsupervised method. The author mentioned that compared with supervised methods unsupervised method is more objective, efficient, comprehensive and universal. However, normally supervised dimension reduction method would achieve a better performance than unsupervised methods. It is better to make a further discussion and compare different unsupervised and supervised dimension reduction methods in result. Thank you for your valuable suggestion. Both of the supervised dimension reduction method and the unsupervised dimension reduction methods have respective advantages and disadvantages. For unsupervised methods, it isn’t need to prepare a certain amount of labeled data. For supervised methods, the directivity of dimension reduction results is stronger. In the traditional psychology research, the major methods of extracting linguistic behavior feature is based on the heuristic rules which are usually set according to the experiences artificially. Constructing feature learning model based on machine learning algorithm could improve the objective, comprehensive and universal of features. Usually, we take the results of questionnaires as the real individual’s personality or psychological characteristics, but it is difficult to recruit lots of people to participate in the experiment. So there are some limitations of obtaining labeled data. In this case, unsupervised dimension reduction method is more applicable. In the revised manuscript, we have supplemented some experiments. We utilize Stepwise to reduce the dimensionality of features. Stepwise is a kind of supervised feature selection method. The prediction results of the supervised dimension reduction method and the unsupervised dimension reduction methods are compared in Table 1 and Table 2, and the analysis are detailed in Discussion. 3. In result, the authors use different parameters for the demission of feature vector. What kind of principle has been used here for parameter selection? 
If the authors select the parameters corresponding to the best result, it would be unfair to the other methods in the comparison and also unable to be used in practical. And also, how many components are used in the PCA? Again, what principle has been used for parameter selection in PCA? Furthermore, the introduction of the SAE is not sufficient? There is no definition of function f and g. What does \theta and \theta' stand for in Eq. (4) and Eq. (5)? How to determine the demission of x'? How to training the W, b, W', b in Eq. (4) and (5)? As the authors mentioned in line 200, the number of layer would be decided according to the optimal value of many experiments? Which kind of experiments will be done? Is it time consuming or did we need additional data in these experiments. Finally, it should also mentioned the time and memory consuming for the method of Stacked Autoencoders in this application, and make a comparison with the others, like PCA. In all, a lot of technique detail has not been described, which makes the readers unable to get a comprehensive understanding of the proposed method. The parameter selection of SAE is a process of finding the optimal parameters. By adjusting the value of parameters, we could get the optimal prediction results. Likewise, when using PCA to reduce the dimensionality of features, the contribution rate of PCA are tuned according to the better prediction results. Usually, the value range of contribution rate of PCA is 0.8~0.9. 0.8, 0.85, 0.9 are taken in experiments respectively. When the contribution rate is 0.85, the Pearson correlation coefficients of “A”, “E”, “O” are higher, and the number of principal component is 250. When the contribution rate is 0.8, the Pearson correlation coefficients of “C” is better, and when the contribution rate is 0.9, the Pearson correlation coefficients of “N” is better. The comparison results are shown in Table 1, Table2 and Table3 in the revised version. We are sorry that there are some mistakes and ambiguity in Eq. (4) and Eq. (5). In the revised manuscript, Eq. (4) has been modified as . And Eq. (5) has been modified as . In which, s is sigmoid function which is used as the activation function of hidden layers. denotes the parameter set {W,b}, and denotes {W',b'}. We used greedy layer-wise training to obtain the optimal parameters {W,b} for a Stacked Autoencoder model. That is, the parameters of each layer are trained individually while freezing parameters for the remainder of the model. The output of the layer is used as the input of the subsequent layer to trained the parameters. For an autoencoder, if we want the mapping result Y is another representation of input X, it is assumed that the input X and the reconstructed X' are the same. According to this assumption, the training process of an autoencoder could be conducted and the parameters of autoencoder are adjusted according to minimize the error value L between X and X', as shown in the following Equation. Due to the error is directly computed based on the comparison between the original input and the reconstruction obtained, so the whole training process is unsupervised. In our method, the number of layers of the SAE is optimized according to performance of prediction model. Adjusting the number of layers, and the number of layer corresponding the better performance of prediction model would be set as the optimal number of layer. Although it is time consuming in the training phase, the time consumption of testing would be not affected. 
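For clarity, the greedy layer-wise procedure described above can be outlined schematically as follows; this is only a sketch, and train_single_autoencoder is a hypothetical helper standing in for the per-layer training of Eq. (4)-(7), not the actual training script.

# Schematic outline of greedy layer-wise pretraining of a Stacked Autoencoder.
# train_single_autoencoder is a hypothetical helper: it fits one autoencoder on its
# input and returns the learned parameters (W, b) together with the encoded output.
def pretrain_stacked_autoencoder(X, hidden_layer_sizes, train_single_autoencoder):
    learned_params = []
    current_input = X
    for n_hidden in hidden_layer_sizes:
        # Train this layer on its own while the previously trained layers stay frozen.
        W, b, encoded = train_single_autoencoder(current_input, n_hidden)
        learned_params.append((W, b))
        # The output of this layer is used as the input of the subsequent layer.
        current_input = encoded
    # current_input is the final abstract representation of the original input.
    return learned_params, current_input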
And the time and memory consuming of Stacked Autoencoders are compared with PCA. The comparison results are supplemented in the revised manuscript. Thanks for your valuable suggestion, we have describe the technique detail of SAE in the revised manuscript also. Minor questions: 1. Does all the users have their micro-blogs account 3 years ago? Yes. When recruiting the volunteers, we required that the participants must start using Sina Micro-blog since before Dec.2011. But we didn’t required they published blogs every day. In data collection, all the text published in Sina Micro-blog from Jun.2012 to Jun.2015 were collected. 2. Eq. (9) and (10) is not so strict, Eq. (9) is in a vector form and Eq. (10) is in a scalar form. Eq. (9) is used to calculate the Pearson product-moment correlation coefficient. The result could measure the linear correlation between two variables Y and Y’. Y denotes the real scores of personality and Y’ denotes the scores calculated by prediction model. So, Eq. (9) is in a vector form. Eq. (10) is used to calculate Root Mean Square Error (RMSE), in which, denotes one person’s real scores of one personality factor, and denotes one person’s prediction scores of one personality factor. For each sample, the square of the difference between and is calculated. Then, the average of the results of n samples are calculated. So, Eq. (10) is in a scalar form. "
Here is a paper. Please give your review comments after reading it.
310
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Due to the rapid development of information technology, Internet has become part of everyday life gradually. People would like to communicate with friends to share their opinions on social networks. The diverse social network behavior is an ideal users' personality traits reflection. Existing behavior analysis methods for personality prediction mostly extract behavior attributes with heuristic. Although they work fairly well, but it is hard to extend and maintain. In this paper, for personality prediction, we utilize deep learning algorithm to build feature learning model, which could unsupervisedly extract Linguistic Representation Feature Vector (LRFV) from text published on Sina Micro-blog actively. Compared with other feature extraction methods, LRFV, as an abstract representation of Micro-blog content, could describe use's semantic information more objectively and comprehensively. In the experiments, the personality prediction model is built using linear regression algorithm, and different attributes obtained through different feature extraction methods are taken as input of prediction model respectively. The results</ns0:p><ns0:p>show that LRFV performs more excellently in micro-blog behavior description and improve the performance of personality prediction model.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Personality can be defined as a set of traits in behaviour, cognition and emotion which is distinctive among people <ns0:ref type='bibr' target='#b19'>[17]</ns0:ref>. In recent years, researchers have formed a consensus on personality structure, and proposed the Big Five factor model <ns0:ref type='bibr' target='#b21'>[19]</ns0:ref>, which uses five broad domains or factors to describe human personality, including openness(O), conscientiousness(C), extraversion(E), agreeableness(A) and neuroticism(N) <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>.</ns0:p><ns0:p>Traditionally, questionnaire has been widely used for personality assessment, especially the Big Five personality questionnaire. But the form of questionnaire may be inefficient on large population.</ns0:p><ns0:p>Due to the rapid development of information technology, Internet becomes part of everyday life nowadays. People prefer expressing their thoughts and interacting with friends on social network platform. So researchers pay more and more attention to figuring out the correlation between users' behaviors on social network and their personality traits in order to realize automatical personality prediction by machine learning methods. Nowadays, Internet is not just for communication, but also a platform for users to express their thoughts, ideas and feelings. Personality is expressed by users' behavior on the social network indirectly, which refers to a variety of operation on social network, such as comment, follow and like. In addition, text, punctuation and emoticon published by users can be regarded as one kind of social behavior. So, for automatic personality prediction, how to abstract these diverse and complex behaviors and acquire the digital representation of social network behaviors has become an critic problem. Existing behavior analysis methods are mostly based on some statistics rules, but artificial means have some disadvantages in objectivity and integrity. 
Generally, attributes are especially important for the performance of prediction model. A set of proper feature vectors could improve the effectiveness of prediction model to a certain extent. So, it is required that the attributes are not only the comprehensive and abstract description of individual's behavior characteristic, but also could reflect the diversity of different individuals' behaviors.</ns0:p><ns0:p>In this paper, we use deep learning algorithm to unsupervisedly extract LRFV actively from users' content published on Sina Micro-blog. Compared with other attributes obtained by artificially means, LRFV could represent users' linguistic behavior more objectively and comprehensively. There are two reasons of utilizing deep learning algorithm to investigate the correlation between users' linguistic behavior on social media and their personality traits. One is that deep learning algorithm could extract high-level abstract representation of data layer by layer by exploiting arithmetical operation and the architecture of model. It has been successfully applied in computer vision, object recognition and other research regions. Another is, the scale of social network data is huge and deep learning alg orithm can meet the computational demand of big data. Given all this, we do some preliminary study on constructing microblog linguistic representation for personality prediction based on deep learning algorithm in this paper.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Related Work</ns0:head><ns0:p>At present, many researchers have paid attentions to the correlation between users' Internet behaviors and their personality traits. Qiu et al. <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref> figured out the relationship between tweets delivered on Twitter and users' personality, and they found that some personality characteristics such as openness(O), extraversion(E) and agreeableness(A) are related to specific words used in tweets.</ns0:p><ns0:p>Similarly, Vazire et al. <ns0:ref type='bibr' target='#b27'>[24]</ns0:ref> discovered that there is great relevance between users' specific Internet behaviors and their personality through studying users' behaviors on personal website. These conclusions can be explained as personality not only influences people's daily behaviors, but also plays an important role in users' Internet behaviors. With the rise of social media, more and more researchers begin to analyse uses' personality traits through social network data with the help of computer technology. Sibel et al. <ns0:ref type='bibr' target='#b24'>[22]</ns0:ref> predicted users' personality based on operational behaviors on Twitter utilizing linear regression model. Similarly, in <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, Jennifer et al. also used regression algorithm to build a personality prediction model, but they considered both of operational behaviors and linguistic behaviors. Ana et al. <ns0:ref type='bibr' target='#b17'>[15]</ns0:ref> used semi-supervised method to predict personality based on the attributes of linguistic behaviors extracted from tweets. Alvaro et al. <ns0:ref type='bibr' target='#b20'>[18]</ns0:ref> built users' personality prediction model according to their social interactions in Facebook by machine-learning methods, such as classification trees.</ns0:p><ns0:p>Although lots of researchers utilized machine learning methods to built personality prediction model and have gotten some achievements, but there are also some disadvantages need to be improved. 
First, in state of art methods, the behavior analysis method and behavior attributes extraction methods are mostly based on some experiential heuristic rules which are set artificially. The behavior attributes extracted manually by statistical methods may not be able to describe characteristics of behaviors comprehensively and objectively. Second, supervised and semi-supervised behavior feature extraction methods need a certain number of labeled data, but in the actual application, obtaining a large number of labeled data is difficult, time-consuming and high cost. So supervised and semi-supervised feature extraction methods are not suitable for a wide range of application.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.2'>Deep Learning</ns0:head><ns0:p>In recent years, there are more and more interdisciplinary research of computational science and psychology <ns0:ref type='bibr' target='#b29'>[26]</ns0:ref> <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Deep learning is a set of algorithms in machine learning <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, which owns a hierarchical structure in accordance with the biological characteristics of human brain. Deep learning algorithm is originated in artificial neural network, and it has been applied successfully in many artificial intelligence applications, such as face recognition <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>, image classification <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, Due to the multi-layer structure and mathematics algorithm designed, deep learning algorithm can extract more abstract high-level representation from low-level feature through multiple non-linear transformations, and discover the distribution characteristics of data. In this paper, based on deep learning algorithm, we could train unsupervised linguistic behavior feature learning models for five factors of personality respectively. Through the feature learning models, the LRFV corresponding to each trait of personality can be learned actively from users' contents published on Sina Micro-blog.</ns0:p><ns0:p>The LRFV could describe the users' linguistic behavior more objectively and comprehensively, and improve the accuracy of the personality prediction model.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>DATASET</ns0:head><ns0:p>In this paper, we utilize deep learning algorithm to construct unsupervised feature learning model which can extract Linguistic Representation Feature Vector (LRFV) from users' contents published on Sina Micro-blog actively and objectively. Next, five personality prediction models corresponding to five personality traits are built using linear regression algorithm based on LRFV. We conduct preliminary experiments on relatively small data as pre-study of exploring the feasibility of using deep learning algorithm to investigate the correlation between user's social network behaviors and his personality.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Data collection</ns0:head><ns0:p>Nowadays, users prefer to expressing their attitudes and feelings through social network. Therefore, the linguistic information on social network is more significant for analysing users' personality characteristics. In this paper, we pay more attention to the correlation between users' linguistic behaviors on Sina Micro-blog and their personalities. According to the latest statistics, by the end of Dec. 2014, the total number of registered users of Sina Micro-blog has exceeded 500 million. 
On the 2015 spring festival's eve, the number of daily active users is more than 1 billion firstly. It can be said that Sina Micro-blog is one of the most popular social network platforms in China currently.</ns0:p><ns0:p>Similar to Facebook and Twitter, Sina Micro-blog users can post blogs to share what they saw and heard. Through Sina Micro-blog, people express their inner thoughts and ideas, follow friends or someone they want to pay attention to, and comment or repost blogs they interested in or agreed with.</ns0:p><ns0:p>For data collection, we firstly released the experiment recruitment information on Sina Microblog. Based on the assumption that the users are often express themselves on social media platform, we try to construct personality prediction model. So, it is required that for one person, there have to be enough Sina Micro-blog data. On the other hand, some participants might provide their deprecated or deputy account of social network rather than the commonly used and actual accounts when participating our experiment. Such data are unfaithful. Considering that, we set an 'active users' selection criteria for choosing the effective and authentic samples.</ns0:p><ns0:p>Our human study has been reviewed and approved by the Institutional Review Board, and the Protocol Number is 'H09036'. In totally, 2385 volunteers were recruited to participate in our experiments. They have to accomplished the Big Five questionnaire <ns0:ref type='bibr' target='#b28'>[25]</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we obtained their micro-blog data through Sina Micro-blog API. The micro-blog data collected is consist of the users' all blogs and their basic status information, such as age, gender, province, personal description and so on. The whole process of subjects recruitment and data collection lasted nearly two months. Through the preliminary screening, we obtained 1552 valid samples finally.</ns0:p><ns0:p>When filtering invalid and noisy data, we designed some heuristic rules as follows:</ns0:p><ns0:p>&#8226; If the total number of one's micro-blogs is more than 500, this volunteer is a valid sample.</ns0:p><ns0:p>This rule can ensure that the volunteer is an active user.</ns0:p><ns0:p>&#8226; In order to ensure the authenticity of the results of questionnaire, we set several polygraph questions in the questionnaire. The samples with unqualified questionnaires were removed.</ns0:p><ns0:p>&#8226; When the volunteers filled out the questionnaire online, the time they costed on each question were recorded. If the answering time was too short, the corresponding volunteer was considered as an invalid sample. In our experiments, we set the the answering time should longer than 2 seconds.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Data for linguistic behavior feature learning</ns0:head><ns0:p>Through iteration and calculation layer by layer, deep learning algorithm can mine the internal connection and intrinsical characteristics of linguistic information on social network platform.</ns0:p><ns0:p>Assuming the text in micro-blogs could reflect users' personality characteristics, for each trait of personality, we build a linguistic behavior feature learning model based on deep learning algorithm to extract the corresponding LRFV from users' expressions in Sina Micro-blog. 
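For concreteness, the three filtering rules listed in Section 2.1 can be expressed as a single predicate over a volunteer record, sketched below; the field names are hypothetical and do not correspond to the actual data schema.

# Hypothetical volunteer record: field names are illustrative, not the real schema.
def is_valid_sample(volunteer):
    active_user = volunteer["num_blogs"] > 500                # rule 1: more than 500 micro-blogs
    honest = volunteer["passed_polygraph_questions"]          # rule 2: polygraph items answered consistently
    careful = min(volunteer["answer_times_seconds"]) > 2      # rule 3: every item answered in more than 2 seconds
    return active_user and honest and careful

volunteers = [
    {"num_blogs": 812, "passed_polygraph_questions": True, "answer_times_seconds": [3.1, 4.0, 2.5]},
    {"num_blogs": 120, "passed_polygraph_questions": True, "answer_times_seconds": [3.1, 4.0, 2.5]},
]
valid_samples = [v for v in volunteers if is_valid_sample(v)]  # keeps only the first, active user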
Linguistic Inquiry and Word Count (LIWC) is a kind of language statistical analysis software, which has been widely used by many researches to extract attributes of English contents from Twitter and Facebook <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. In order to meet the demands of simple Chinese semantic analysis, we developed a simplified Chinese psychological linguistic analysis dictionary for Sina Micro-blog (SCLIWC) <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>. This dictionary was built based on LIWC 2007 <ns0:ref type='bibr' target='#b23'>[21]</ns0:ref> and the traditional Chinese version of LIWC (CLIWC) <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>. Besides referring to the original LIWC, we added five thousand words which are most frequently used in Sina Micro-blog into this dictionary. The words in dictionary are classified into 88 categories according to emotion and meaning, such as positive word, negative word, family, money, punctuation etc. Through analysis and observation, we found that in some factors of personality, users of different scores show great differences in the number of using words belonging to positive emotion, negative emotion and some other categories in the dictionary. According to SCLIWC <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, the users' usage degree of words in blogs could be computed in 88 categories. In order to obtain the usage characteristics of social media text in the temporal domain, we divide the time by week firstly. For the i th word category of SCLIWC, the usage frequency within the j th week f i j (i=1,2,. . . ,88) is calculated by Equation 1, in which, i denotes the serial number of category, and j denotes the serial number of week. We collect all the text published in Sina Micro-blog during recent three years (Jun.2012&#732;Jun.2015), and there are 156 weeks in total.</ns0:p><ns0:p>So, corresponding to each category of SCLIWC, the vector</ns0:p><ns0:formula xml:id='formula_0'>f i = { f i 1 , f i 2 , . . . , f i 156 } is the digital</ns0:formula><ns0:p>representation of the i th category in temporal domain.</ns0:p><ns0:formula xml:id='formula_1'>f i j =</ns0:formula><ns0:p>T he number o f words belongs to the i th category o f SCLIWC in j th week T he total number o f words in blogs in j th week (1)</ns0:p></ns0:div> <ns0:div><ns0:head>4/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Then, we utilize Fast Fourier Transform(FFT) <ns0:ref type='bibr' target='#b18'>[16]</ns0:ref> to obtain the varying characteristics of social media text usage in temporal space. Fourier Transform is a special integral transform, which could convert the original temporal signal into frequency domain signal which is easily analyzed. FFT is the fast algorithm of Discrete Fourier Transform (DFT), defined by</ns0:p><ns0:formula xml:id='formula_2'>X(k) = DFT [x(n)] = N&#8722;1 &#8721; n=0 x(n)W kn N , k = 0, 1, . . . , N &#8722; 1<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>W N = e &#8722; j 2&#960; N<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In order to extract the temporal information from massive high-dimensional digital vectors, Fourier time-series analysis is considered. Concretely, we conduct FFT for each vector. 
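For concreteness, the following is a minimal sketch of the per-category frequency of Equation 1 and the Fourier transform of Equation 2, assuming NumPy; the `word_counts` and `total_counts` arrays are hypothetical stand-ins for the weekly SCLIWC word counts, not the original data pipeline.

```python
import numpy as np

# Hypothetical inputs: word_counts[i, j] is the number of words from SCLIWC category i
# posted in week j (88 categories x 156 weeks); total_counts[j] is the weekly word total.
rng = np.random.default_rng(0)
word_counts = rng.integers(0, 50, size=(88, 156)).astype(float)
total_counts = word_counts.sum(axis=0) + rng.integers(1, 100, size=156)

# Equation 1: usage frequency f_ij of the i-th category in the j-th week.
freq = word_counts / total_counts              # shape (88, 156)

# Equation 2: discrete Fourier transform of each 156-week series, one row per category.
amplitudes = np.abs(np.fft.fft(freq, axis=1))  # |X(k)|, shape (88, 156)
print(freq.shape, amplitudes.shape)
```

Selecting the eight largest amplitudes per category, as described next, then yields the 88 × 8 = 704-dimensional linguistic vector for each user.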
Through FFT, the amplitudes calculated include frequency information, and former 8 maximum amplitudes are selected to constitute a vector as the representation of each word category. Finally, linking the vectors of each category in series, we can obtained a linguistic vector of 704 length corresponding to each user ID.</ns0:p><ns0:p>In our experiment, we use 1552 users' blogs published in 3 years as data for preliminary study.</ns0:p><ns0:p>Each user's linguistic behavior is represented as vector form through FFT based on SCLIWC.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Data for personality prediction</ns0:head><ns0:p>In order to verify the deep learning algorithm is an effective method for extracting the representation of user's Sina Micro-blog linguistic behaviors, we build personality prediction model based on linguistic behavior feature vectors. The personality prediction model is constructed by linear regression algorithm. For each volunteer, five linguistic behavior feature vectors corresponding to five traits of personality are obtained by feature learning models respectively. The training process of personality prediction model is supervised, so users' five scores of five personality traits in the Big Five questionnaire are taken as their labels of the corresponding linguistic behavior feature vectors.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Unsupervised feature learning based on Stacked Autoencoders</ns0:head><ns0:p>Feature learning can be seen as a process of dimensionality reduction. In order to improve the computational efficiency, for all traits of personality, we utilize the relatively simpler form of artificial neural network, autoencoder <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Fig <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows the basic structure of an autoencoder. Basiclly, for an autoencoder, the input and output own the same dimensions, both of them can be taken as X, but through mathematical transformation, the input and output may be not completely equal. In Fig <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>,</ns0:p><ns0:p>X denotes input and X denotes output. The variable in hidden layer Y is encoded through X by Equation <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>. Manuscript to be reviewed In Equation <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>, {W, b} are parameters which can be obtained through training. s(z) is the Sigmoid activation function of hidden layers which is defined in Equation <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>. In addition, a reconstructed vector X in input vector space could be obtained by mapping the result of hidden layer Y back through a mapping function,</ns0:p><ns0:formula xml:id='formula_4'>Y = f &#952; (X) = s(W T X + b) = s( n &#8721; i=1 W i x i + b)<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>s(z) = 1 1 + exp(&#8722;z)<ns0:label>(5</ns0:label></ns0:formula><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_6'>X = g &#952; (Y ) = s (W T Y + b ) = s( n &#8721; i=1 W i y i + b )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>For an autoencoder, if we want the mapping result Y is another representation of input X, it is assumed that the input X and the reconstructed X are the same. 
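As an illustration of Equations 4–6, here is a minimal NumPy sketch of the encoding and decoding mappings of one autoencoder layer; the 704→400 layer sizes follow the configuration reported later, while the random weights and input are placeholders rather than a trained model.

```python
import numpy as np

def sigmoid(z):
    # Equation 5: s(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 704, 400                 # illustrative sizes: FFT-based vector -> hidden code
rng = np.random.default_rng(0)
W,  b  = rng.normal(scale=0.01, size=(n_in, n_hidden)), np.zeros(n_hidden)   # encoder {W, b}
W2, b2 = rng.normal(scale=0.01, size=(n_hidden, n_in)), np.zeros(n_in)       # decoder {W', b'}

x = rng.random(n_in)                      # one input vector X
y = sigmoid(x @ W + b)                    # Equation 4: hidden representation Y
x_rec = sigmoid(y @ W2 + b2)              # Equation 6: reconstruction X'
print(y.shape, x_rec.shape)               # (400,) (704,)
```

Training, described next, adjusts {W, b, W', b'} so that the reconstruction x_rec approaches the input x.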
According to this assumption, the training process of an autoencoder could be conducted and the parameters of autoencoder are adjusted according to minimize the error value L between X and X , as shown in the Equation <ns0:ref type='formula' target='#formula_7'>7</ns0:ref>.</ns0:p><ns0:p>Due to the error is directly computed based on the comparison between the original input and the reconstruction obtained, so the whole training process is unsupervised. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_7'>L(X;W, b) = X &#8722; X 2<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Computer Science performance of prediction model would be set as the optimal number of layer. Then, we take the output of the last layer as the abstract representation of the original linguistic behavior information. Finally, based on the Big Five questionnaire, for each user, we could obtained five scores (S A , S C , S E , S N , S O ) corresponding to 'A', 'C', 'E', 'N', 'O' five factors respectively. These scores are used to label corresponding linguistic behavior feature vectors for personality prediction models.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Personality prediction model based on linear regression</ns0:head><ns0:p>Personality prediction is a supervised process. The linguistic behavior feature vectors are labeled by the corresponding scores of the Big Five questionnaire. For five traits of personality, we utilized the linear regression algorithm to build five personality prediction models in totally.</ns0:p><ns0:p>Take one trait of personality as an example, the linguistic behavior feature vectors are represented by</ns0:p><ns0:formula xml:id='formula_8'>X = {X i | X i = (x i1 , x i2 , . . . , x im )} n i=1 ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>in which, n is the number of samples, n = 1552, and m denotes the number of dimensions of the input vector. The scores of the Big Five questionnaire are taken as the labels, Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_9'>Y = {y i } n i=1<ns0:label>(</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The general form of linear regression is</ns0:p><ns0:formula xml:id='formula_10'>y i = &#969; 1 x i1 + &#969; 2 x i2 + . . . + &#969; m x im + &#949; i , (i = 1, 2, . . . , n)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>We trained five personality prediction models based on linear regression algorithm using corresponding linguistic behavior feature vectors and labels.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>In Experiments, we collect 1552 users' Sina Micro-blog data in total. Users' linguistic behaviors are quantified based on SCLIWC, and the temporal characteristics are calculated through FFT. Then, we utilize deep learning algorithm to construct feature learning models, which could extract objective and comprehensive representation of linguistic behaviors from the temporal sequence. Finally, personality prediction model is trained by linear regression algorithm based on linguistic behavior feature vectors.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Evaluation measures</ns0:head><ns0:p>In this paper, we conducted preliminary study about constructing Micro-blog behavior representation for predicting social media user's personality. The five factors of personality are all tested. 
We use In Equation <ns0:ref type='formula'>12</ns0:ref>, i is the sequence number of sample and n is the total number of samples, n = 1552.</ns0:p><ns0:p>In the Big Five questionnaire used in our experiments, there are 44 questions in all. The score ranges of 'A', 'C', 'E', 'N', 'O' are <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr'>45]</ns0:ref>, <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr'>40]</ns0:ref>, <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr'>45]</ns0:ref>, <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr'>40]</ns0:ref>, <ns0:ref type='bibr' target='#b9'>[10,</ns0:ref><ns0:ref type='bibr'>50]</ns0:ref> respectively. The value of RMSE shows the average difference between our prediction results and the scores of questionnaire.</ns0:p><ns0:p>The smaller is the value of RMSE, the better is the performance of prediction model.</ns0:p><ns0:formula xml:id='formula_11'>r = Cor(Y,Y ) = Cov(Y,Y ) Var(Y )Var(Y )<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>RMSE = &#8721; n i=1 (y i &#8722; y i ) 2 n (12)</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2'>Experiment results</ns0:head><ns0:p>In comparison experiments, we utilized five different kinds of attributes to train and build the personality prediction model respectively. The five kinds of attributes including the attributes selected Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>by artificial statistical method without feature selection (denoted by Attribute 1), the attributes selected from Attribute 1 by Principal Component Analysis (PCA) <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> (denoted by Attribute 2), the attributes selected from Attribute 1 by Stepwise(denoted by Attribute 3), the attributes selected from Attribute 1 by Lasso (denoted by Attribute 4) and linguistic behavior feature vector obtained based on Stacked Antoencoders (SAE) (denoted by Attribute SAE). PCA is a kind of unsupervised feature dimension reduction method, and Stepwise is usually used as a kind of supervised feature selection method. LASSO is a regression analysis method which also perform feature selection.</ns0:p><ns0:p>For different kinds of attributes, the personality prediction models are all built by linear regression algorithm. In order to obtain the stable model and prevent occurrence of overfitting, for each factor of personality, we use 10-fold cross validation and run over 10 randomized experiments. Finally, the mean of 10 randomized experiments' results is recorded as the final prediction result. The comparison of prediction results of five personality factors using three kinds of attributes are shown in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref>. Tables <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the dimensionality of different kinds of feature vectors. The letters in subscript 'a', 'c', 'e', 'n', 'o' indicate different personality factors respectively. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>This study explore the relevance between users' personality and their social network behaviors. The feature learning models are built to unsupervisedly extract the representations of social network linguistic behaviors. 
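As a concrete reference for the evaluation protocol behind Tables 1 and 2 (linear regression scored by the Pearson r of Equation 11 and the RMSE of Equation 12 under 10-fold cross-validation), the following is a minimal sketch assuming scikit-learn and SciPy; `X_sae` and `y_scores` are hypothetical arrays standing in for the SAE features and questionnaire scores of one trait.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Hypothetical data for one trait: 1552 samples of 400-dim SAE features and their scores.
rng = np.random.default_rng(0)
X_sae = rng.random((1552, 400))
y_scores = rng.uniform(9, 45, size=1552)          # e.g. the [9, 45] score range of trait 'A'

r_vals, rmse_vals = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X_sae):
    model = LinearRegression().fit(X_sae[train_idx], y_scores[train_idx])
    y_pred = model.predict(X_sae[test_idx])
    r_vals.append(pearsonr(y_scores[test_idx], y_pred)[0])                    # Equation 11
    rmse_vals.append(np.sqrt(np.mean((y_scores[test_idx] - y_pred) ** 2)))    # Equation 12

print(np.mean(r_vals), np.mean(rmse_vals))        # averaged over the 10 folds
```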
Compared with the attributes obtained by some supervised behavior feature extraction methods, the LRFV is more objective, efficient, comprehensive and universal. In addition, based on LRFV, the accuracy of the personality prediction model could be improved.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>The performance of personality prediction model</ns0:head><ns0:p>The results in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref> show that the linguistic behavior feature vectors learned through Stacked Autoencoders perform better than other attributes in both Pearson correlation coefficient and RMSE. When using Attribute SAE, the Pearson correlation coefficients r e = 0.2583, which represent a small correlation. For'E', 'N', 'C' and 'O', r e = 0.3503, r n = 0.3245, r c = 0.4001 and r o = 0.4238, which means that the prediction results of 'E', 'N', 'C' and 'O' correlate with the results of questionnaire moderately. It is concluded that personality prediction based on the linguistic behavior in social network is feasible. Besides, the traits of conscientiousness and openness could be reflected in the network linguistic behavior more obviously.</ns0:p><ns0:p>Compared with other feature extraction methods, our proposed method performs better. When using the original feature vector (Attributes 1), the prediction results r are all less than 0.2. When using another kind of unsupervised feature dimension reduction method (Attributes 2), except for 'C', others are also less than 0.2. Attributes 3, which is obtained by using a kind of supervised feature selection method, the prediction results r are also not ideal. Similarly, considering RMSE of every personality traits, the prediction model also obtain better results based on the linguistic behavior feature vectors.</ns0:p><ns0:p>Besides, we compared the time and memory consuming of prediction when using SAE and PCA to reduce the dimensionality of features respectively in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The experiments were conducted on a DELL desktop with an Intel Core 3.30 GHz CPU and 12G memories. The average time consuming denotes the average time cost for predicting one personality factor of one sample. The average memory consuming denotes the memory usage percentage when running the prediction model.</ns0:p><ns0:p>Although PCA performed better in the time and memory consuming, but the prediction results of linguistic behavior feature vectors were outstanding. Usually, the high-powered computing server could offset the deficiency of time and memory consuming. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Parameters selection</ns0:head></ns0:div> <ns0:div><ns0:head n='5.2.1'>Activation function</ns0:head><ns0:p>There are many kinds of activation function in neural network, such as Sigmoid, Tanh, Softmax, Softplus, ReLU and Linear. Among them, Sigmoid and Tanh are used commonly. In experiment, we utilized both of them to construct the feature learning model, and the comparative results (Tables 5) Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>showed that when using Sigmoid as activation function of hidden layers, the prediction results are a bit better. 
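For illustration only, here is a minimal sketch (assuming TensorFlow/Keras, which the paper does not specify) of how the hidden-layer activation compared in Table 5 could be switched between Sigmoid and Tanh in a single-hidden-layer 704→400→704 autoencoder of the kind used for 'A', 'C' and 'N'; it is a hypothetical re-implementation, not the authors' code.

```python
import numpy as np
from tensorflow import keras

def build_autoencoder(input_dim=704, code_dim=400, activation="sigmoid"):
    # activation is "sigmoid" or "tanh", the two options compared in Table 5.
    inputs = keras.Input(shape=(input_dim,))
    code = keras.layers.Dense(code_dim, activation=activation)(inputs)
    outputs = keras.layers.Dense(input_dim, activation=activation)(code)
    autoencoder = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, code)
    autoencoder.compile(optimizer="adam", loss="mse")   # reconstruction loss in the spirit of Equation 7
    return autoencoder, encoder

# Unsupervised training on the 704-dimensional linguistic vectors (synthetic here).
X = np.random.default_rng(0).random((1552, 704))
ae, enc = build_autoencoder(activation="sigmoid")
ae.fit(X, X, epochs=5, batch_size=32, verbose=0)
features = enc.predict(X, verbose=0)                    # 400-dimensional LRFV for one trait
print(features.shape)                                   # (1552, 400)
```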
</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2.2'>the dimensionality of linguistic behavior feature vector</ns0:head><ns0:p>For each personality trait, the dimensionality of linguistic behavior feature vector is set according to the optimal result of prediction model obtained from repeated experiments, and the comparison of r and RMSE when using linguistic behavior feature vectors with different dimensionality are presented in Figs 4(a) and 4(b) respectively. Pearson correlation coefficient reflects the correlation degree between two variables. If the change tendencies of two variables are more similar, the correlation coefficient is higher. Root Mean Square Error reflects the bias between the real value and prediction value. For a dataset, the Pearson correlation coefficient and Root Mean Square Error may not be direct ratio. In practical applications, the trend of the psychological changes is more necessary. So, when adjusting the optimal parameters, we give priority to Pearson correlation coefficient. For 'A', 'C' and 'N', prediction models perform better when the dimensionality of feature vector is 400. For 'E' and 'O', we could obtain the better results when the dimensionality of feature vector is 300. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>Differences in modeling performance across personality traits</ns0:head><ns0:p>Through analysing the results of experiments, we summarize that Agreeableness correlate with users' social network linguistic behaviors relative weakly than the other personality traits. The correlation between openness and users' social network linguistic behaviors is highest of all. We could identify whether the users own higher scores in openness or not through their blogs published in social network platform. Probably because the person with high scores in openness usually prefer expressing their thoughts and feelings publicly. Similarly, conscientiousness is moderately correlate with social network linguistic behaviors. And for conscientiousness, there are significant differences of using the words belonging to the categories of family, positive emotion and so on. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5.4'>The future work</ns0:head><ns0:p>In this paper, we has carried on some preliminary study to explore the feasibility of using deep learning algorithm to construct linguistic feature learning model. More work will be conducted further. The millions of users' social media are being downloaded. In feature extraction, the massive data will be used to train the feature learning model unsupervisedly. Besides, a new round of user experiment is progressing. We would obtain a new set of labeled data to improve our personality prediction method. The study is of great significance. It could provide new quantitative and analytical methods for the social media data, and a new perspective for real-time assessing Internet users' mental health.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>In this paper, we utilized deep learning algorithm to investigate the correlations between users' personality traits and their social network linguistic behaviors. Firstly, the linguistic behavior feature vectors are unsupervisedly extracted using Stacked Autoencoders models actively. Then, the personality prediction models are built based on the linguistic behavior feature vectors by linear regression algorithm. 
Our comparison experiments are conducted on five different kinds of attributes, and the results show that the linguistic behavior feature vectors could improve the performance of personality prediction models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>2 / 13 PeerJ</ns0:head><ns0:label>213</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016) Manuscript to be reviewed Computer Science natural language processing [23] etc.. Recently, researchers are attempting to apply deep learning algorithm to other research field. Lin at el. [13] [14] used Cross-media Auto-Encoder (CAE) to extract feature vector and identified users' psychological stress based on social network data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>online and authorized us to obtain the public personal information and all blogs. Collecting volunteers' IDs of Sina Micro-blog, 3/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>) 5/ 13 PeerJ</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The basic structure of an autoencoder.</ns0:figDesc><ns0:graphic coords='8,216.00,70.86,180.00,145.03' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 ./ 13 PeerJ</ns0:head><ns0:label>213</ns0:label><ns0:figDesc>Figure 2. The training principle diagram of an autoencoder.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Fig 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig 3 shows the structure of our model. For different personality factors, the number of layers and the number of units in each layer are different. The details are presented on the left of Fig 3. For 'A', 'C' and 'N', there are one hidden layers in the SAE, and the feature learning model are 3 layers in total. For 'E' and 'O', there are two hidden layers in the SAE. In our experiments, 1552 users' content information of Sina Micro-blog are used as training dataset, and the unsupervised feature learning models corresponding different personality traits are trained respectively. That is, we could obtain five feature learning models in total. For each trait, there will be corresponding linguistic behavior feature vectors extracted from social network behavior data actively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The deep structure of our prediction model. The left table shows the details of SAE of different personality factors.</ns0:figDesc><ns0:graphic coords='9,90.01,227.88,431.99,216.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>9 ) 7 / 13 PeerJ</ns0:head><ns0:label>9713</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Pearson product-moment correlation coefficient (r) and Root Mean Square Error (RMSE) to measure the quality of different behavior feature representation methods. The computational formulas of two measurements are shown in Equation 11 and 12 respectively. 
In Equation 11, Cov(Y,Y ) denotes the covariance of Y and Y , and Var(Y ) and Var(Y ) represents the variances of the real score Y and prediction score Y respectively. when r &gt; 0, it means the results of questionnaire and prediction model are positive correlation. On the contrary, r &lt; 0 means negative correlation. The absolute value is greater, the higher is the degree of correlation. In psychology research, we use Cohen's conventions [5] to interpret Pearson product-moment correlation coefficient. r &#8712; [0.1, 0.3) represent a weak or small association and r &#8712; [0.3, 0.5) indicates a moderate correlation between two variables.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The comparison of prediction results using linguistic feature vectors with different dimensionality. (a)The comparison of r. (b)The comparison of RMSE.</ns0:figDesc><ns0:graphic coords='13,90.00,388.52,432.00,144.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>11 / 13 PeerJ</ns0:head><ns0:label>1113</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The comparison of prediction results in Pearson correlation coefficient</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell>r a</ns0:cell><ns0:cell>r c</ns0:cell><ns0:cell>r e</ns0:cell><ns0:cell>r n</ns0:cell><ns0:cell>r o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='2'>0.1012 0.1849</ns0:cell><ns0:cell cols='2'>0.1044 0.0832</ns0:cell><ns0:cell>0.181</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='5'>0.1106 0.2166 0.1049 0.1235 0.1871</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>Attributes 3 (Stepwise) 0.1223 0.2639 0.1698 0.1298 0.2246</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 4 (Lasso)</ns0:cell><ns0:cell cols='5'>0.1209 0.2068 0.0788 0.0934 0.1136</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell cols='3'>0.2583 0.4001 0.3503</ns0:cell><ns0:cell cols='2'>0.3245 0.4238</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The comparison of prediction results in RMSE</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell cols='4'>RMSE a RMSE c RMSE e RMSE n RMSE o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='4'>5.6538 6.1335 4.9197 6.5591 7.0195</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='4'>5.1628 5.6181 5.6781 5.9426 6.4579</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 3 (Stepwise)</ns0:cell><ns0:cell cols='2'>4.8421 5.3495</ns0:cell><ns0:cell>5.276</ns0:cell><ns0:cell>5.6904 6.1079</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 4 (Lasso)</ns0:cell><ns0:cell cols='4'>5.8976 6.7471 6.4940 5.4241 6.0938</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell>4.7753</ns0:cell><ns0:cell>5.339</ns0:cell><ns0:cell cols='2'>4.8043 5.6188 5.1587</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The comparison of dimensionality of different feature vector</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell>D 
a</ns0:cell><ns0:cell>D c</ns0:cell><ns0:cell>D e</ns0:cell><ns0:cell>D n</ns0:cell><ns0:cell>D o</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 1 (Original)</ns0:cell><ns0:cell cols='5'>704 704 704 704 704</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell cols='5'>250 203 250 310 250</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 3 (Stepwise)</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell cols='5'>400 400 300 400 300</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>9/13</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The comparison of time and memory consuming of different feature vector</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attributes</ns0:cell><ns0:cell cols='2'>Average time consuming Average memory consuming</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes 2 (PCA)</ns0:cell><ns0:cell>3ms</ns0:cell><ns0:cell>56%</ns0:cell></ns0:row><ns0:row><ns0:cell>Attributes SAE</ns0:cell><ns0:cell>12ms</ns0:cell><ns0:cell>81%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The Comparison of prediction results when using different activation function</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Activation Function</ns0:cell><ns0:cell>r a</ns0:cell><ns0:cell>r c</ns0:cell><ns0:cell>r e</ns0:cell><ns0:cell>r n</ns0:cell><ns0:cell>r o</ns0:cell></ns0:row><ns0:row><ns0:cell>Sigmoid</ns0:cell><ns0:cell cols='5'>0.2583 0.4001 0.3503 0.3245 0.4238</ns0:cell></ns0:row><ns0:row><ns0:cell>Tanh</ns0:cell><ns0:cell cols='5'>0.2207 0.3338 0.3216 0.2696 0.3503</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='13'>/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:03:9825:2:0:NEW 7 Jul 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"7th JUL, 2016 RE:Manuscript Number: #CS-2016:03:9825:1:0:REVIEW Manuscript Title: Deep learning for constructing microblog behavior representation to identify social media user's personality Dear Editor, Thank you for your letter regarding our manuscript titled “Deep learning for constructing microblog behavior representation to identify social media user's personality”. We also thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. And in the tracked changes manuscript, the changes are highlighted in bold. Outlined below is our response to the reviewer’s suggestion. Reviewer 3 Comments for the Author: 1. My first question is why FFT is used for feature extraction. In the reply the authors showed me more detail about the result after FFT. However the authors still did not make any explanation about the reason of using FFT. We are sorry that the previous reply is not clear. Fourier is one kind of mathematical method that changes the time domain signals to frequency domain signal. Using FFT, the continuous signal or data sequence could be decomposed into the superposition of many different frequency sine waves. If for a signal, it is hard to extract some characteristics in time domain, it is possible that some distinctive attributes could be extracted after transformation in frequency domain. FFT could be taken as a method which provide us another perspective for a problem. In our method, for each word category of SCLIWC, we calculated the word usage frequency of every week within three years (Jun.2012˜Jun.2015). So, the usage characteristics of each word category is quantized as a data sequence with length 156. In traditional psychology research, some common statistics such as mean, variance or median would be calculated to describe the characteristic of this data sequence. But, these common statistics in time domain of different signals or data sequences with different waveforms may be equal. So, we conduct FFT for each data sequence, and extract former 8 maximum amplitudes as the representation of each word category. Using the statistics in frequency domain to obtain the varying characteristics of text usage in temporal space. For example, the weekly usage frequency of a certain category within one year (52 weeks) are simulated in the following figure (a) and (c), in which, (a) and (c) could be taken as different users’ word usage data. The mean and variance of (a) and (c) are equal. After using FFT, different amplitude data could be obtained as shown in (b) and (d). Besides, one of our ongoing research is how to quantize the text information atomically. In order to figure out more optimal method of linguistic feature extraction method. 2. The second question about supervised and unsupervised methods. The authors mentioned that, 'for supervised methods, the directivity of dimension reduction results is stronger'; and the reason for using unsupervised method is that 'there are some limitations of obtaining labeled data'. In fact, after the dimension reduction, the label information is still needed for regression. So logically, the label information must be known for the proposed method. Since this, I did not find any reason here for using unsupervised deep learning method in dimension reduction instead of some supervised methods. We continue studying the semantic feature vector extraction method and psychological characteristics prediction method further. 
The feature extraction method and prediction model construction method could be taken as two sub-research work. In feature extraction, unsupervised feature learning model was trained using unlabeled data. In our current work, Internet data crawler system is optimized, and the Sina Micro-blog data of massive users will be downloaded. We attempt to collect social media data of 1 million users. In the future, the feature learning model will be trained based on a huge number of unlabeled data, the attributes learned would be more objective, effective and universal. Considering the effective attributes would improve the performance of prediction model, we want to obtain more superior attributes based on unsupervised feature learning method. Especially for psychological research, it is difficult to recruit lots of people to participate in the experiment. So there are some limitations of obtaining labeled data. In this case, unsupervised feature extraction method is more applicable. In this paper, because of data limitation, we conduct preliminary experiments on relatively small data as pre-study of exploring the feasibility of using deep learning algorithm to construct linguistic feature learning model. In order to provide the basis for subsequent research, we reported the current research progress in this paper. 3. The last question is about the principle of parameter selection. Since there are so many parameters need to be determined in the proposed method, parameter selection would be key important for the success of the method. Here, the authors determine the value of the parameters by the optimal prediction results. This principle is the most direct way for parameter selection. However it is easy to lead to the parameters overfitting to the dataset, which makes the results harder to believe. Cross validation is an alternative approach, which will leads to more computation. But at least it should be done on selection of the most important parameters. I am truly grateful for the important comments on parameter selection. Your consideration is reasonable. Our prediction model may occur overfitting problem, so for each factor of personality, we use 10-fold cross validation and run over 10 randomized experiments. Considering your advice, we plan to recruit a new group of participants, and verify and improve our personality prediction method further. "
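To make the FFT argument in the response to Question 1 concrete, here is a small illustrative sketch (assuming NumPy) of two simulated weekly usage series that share the same mean and variance yet have clearly different amplitude spectra:

```python
import numpy as np

weeks = np.arange(52)
# Two simulated weekly usage-frequency series: a slow yearly cycle vs. a fast four-week cycle.
a = 0.5 + 0.1 * np.sin(2 * np.pi * weeks / 52)
b = 0.5 + 0.1 * np.sin(2 * np.pi * weeks / 4)

print(np.isclose(a.mean(), b.mean()), np.isclose(a.var(), b.var()))   # True True
amp_a, amp_b = np.abs(np.fft.fft(a)), np.abs(np.fft.fft(b))
print(np.argmax(amp_a[1:26]) + 1, np.argmax(amp_b[1:26]) + 1)         # spectral peaks: 1 vs. 13
```

The time-domain statistics coincide while the spectra peak at different frequencies, which is the point of the simulated example described above.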
Here is a paper. Please give your review comments after reading it.
311
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The rapid development of deep neural networks (DNN) has promoted the widespread application of image recognition, natural language processing, and autonomous driving.</ns0:p><ns0:p>However, DNN is vulnerable to adversarial examples. Such as, an input sample with imperceptible perturbation can easily invalidate the DNN and even deliberately modify the classification results. So this paper proposes a preprocessing defense framework based on image compression reconstruction to achieve adversarial sample defense. Firstly, the defense framework performs pixel depth compression on the input image based on the sensitivity of the adversarial example to eliminate adversarial perturbations. Secondly, we use the super-resolution image reconstruction network to restore the image quality and then map the adversarial example to the clean image. Therefore, there is no need to modify the network structure of the classifier model, and it can be easily combined with other defense methods. Finally, we evaluate the algorithm with MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results show that our approach outperforms current techniques in the task of defending against adversarial example attacks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Deep neural networks have been widely used in computer vision, natural language processing, speech recognition, and other fields <ns0:ref type='bibr' target='#b12'>(Karen and Andrew, 2015)</ns0:ref>. However, the adversarial example proposed by Szegedy et al. <ns0:ref type='bibr' target='#b23'>(Szegedy et al., 2013)</ns0:ref>, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, can easily deceive the neural network by adding a minor perturbation to the ordinary image, i.e., the deep convolutional neural network will continuously amplify this perturbation, which is sufficient to drive the model to make high confidence incorrect predictions without being detected by the human eye. As a result, the adversarial example has a minor perturbation than the normal noise. However, it brings more significant obstacles to practical applications. Researchers usually input the pictures directly into the neural network for the computer classification test when training the classifier model to solve this problem. Such as, Kurakin et al. <ns0:ref type='bibr' target='#b13'>(Kurakin et al., 2016)</ns0:ref> find that a significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through the camera. Nowadays, the research and implementation of autonomous driving <ns0:ref type='bibr' target='#b4'>(Deng et al., 2020)</ns0:ref> and person detection <ns0:ref type='bibr' target='#b24'>(Thys et al., 2019)</ns0:ref> rely heavily on deep learning technology. In addition to making the target model random errors, an adversarial example can also conduct targeted attacks according to the attacker's wishes and generate specified results. Such as Eykholt et al. <ns0:ref type='bibr' target='#b6'>(Eykholt et al., 2018)</ns0:ref> shows that adversarial examples bring substantial security risks to the application of related technologies. Furthermore, by adding adversarial perturbation to a road sign, the intelligent system may recognize the deceleration sign as an acceleration sign, which will bring substantial hidden dangers to traffic safety. 
Currently, the reasons for the adversarial examples are still controversial. Such as Szegedy et al. <ns0:ref type='bibr' target='#b23'>(Szegedy et al., 2013)</ns0:ref> believe that it is caused by the nonlinearity of the model, while Kurakin et al. <ns0:ref type='bibr' target='#b13'>(Kurakin et al., 2016)</ns0:ref> propose that the high-dimensional space's linearity is sufficient to generate adversarial examples. If the input samples have sufficiently large dimensions for linear models, they are also attacked by adversarial examples. Adversarial attacks can be divided into single-step attacks, which perform only one step of gradient calculation, such as the FGSM <ns0:ref type='bibr' target='#b7'>(Goodfellow et al., 2015)</ns0:ref>, and iterative attacks, which perform multiple steps to obtain better adversarial examples, such as BIM <ns0:ref type='bibr' target='#b19'>(Ren et al., 2020)</ns0:ref>, CW <ns0:ref type='bibr' target='#b0'>(Carlini and Wagner, 2017)</ns0:ref>. At the same time, adversarial example attacks can be categorized into white-box, gray-box, and black-box attacks based on the attacker's knowledge. A white-box attack means that the attacker knows all the information, including models, parameters, and training data. We examples not only exist in images, but also in speech and text <ns0:ref type='bibr' target='#b30'>(Xu et al., 2020)</ns0:ref>, which makes the application of deep learning technology have huge uncertainty and diversity, and there are potential threats at the same time. Therefore, it is urgent to defend, which makes the application of deep learning technology have huge uncertainty and diversity, as well as many potential threats.</ns0:p><ns0:p>With the endless emergence of attack methods, the defense of adversarial examples has become a significant challenge. Many defense methods <ns0:ref type='bibr' target='#b5'>(Dong et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Zhang and Wang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Hameed et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b20'>Singla and Feizi, 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jin et al., 2021)</ns0:ref> have been proposed, such as adversarial training <ns0:ref type='bibr' target='#b7'>(Goodfellow et al., 2015)</ns0:ref>, which increases model robustness by adding adversarial examples to the training process. Some other defenses mainly rely on preprocessing methods to detect or transform the input image before the target network without modifying the target model. For example, Xu et al. <ns0:ref type='bibr' target='#b31'>(Xu et al., 2017)</ns0:ref> propose that the input's adversarial perturbation can be eliminated by reducing the color bit depth of each pixel and spatial smoothing, and they created a defense framework to detect adversarial examples in the input. <ns0:ref type='bibr' target='#b10'>Jia et al. (Jia et al., 2019)</ns0:ref> introduce the ComDefend defense model, which constructs two deep convolutional neural networks: the one for compressing images and retaining valid information; the other for reconstructing images. However, this method does not perform well under the attack of BIM.</ns0:p><ns0:p>On the other hand, if you only perform detection without other measures when defending against adversarial examples, it will not be able to meet actual needs. For example, in an autonomous driving application scenario, the defense system recognizes a road sign and detects that it is an adversarial example. 
At this time, the defense system refuses to input the image, which will seriously affect its normal operation. In addition, convolutional neural networks are used to extract image features and compress images. If the compression rate is too low, the uncorrupted adversarial perturbation in the reconstruction network will continue to expand, thereby significantly reducing the classifier's accuracy.</ns0:p><ns0:p>To solve the above problems, we propose a defense framework based on image compression reconstruction, which is a preprocessing method. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> clearly describes the defense framework of this paper. The defense model in the figure can be divided into two steps. The specific operation is to eliminate adversarial perturbations by compressing images to defend against adversarial example attacks.</ns0:p><ns0:p>Simultaneously, to ensure that the standard and processed samples do not suffer from performance loss on the target model, we use the deep convolutional neural network to repair the processed images. In short, this paper makes the following contributions: &#8226; To defend against various adversarial example attacks, we propose a defense framework based on image compression and reconstruction with super-resolution. This framework eliminates adversarial perturbations by compressing the input samples and then reconstructs the compressed images using super-resolution methods to alleviate the performance degradation caused by compression.</ns0:p><ns0:p>&#8226; As a preprocessing method, there is no need to modify the target model during the defense process, i. e., our method has good performance for single-step and iterative attacks and has a small calculation compared with other adversarial training methods. In addition, it can be combined with different target models to have a protective effect still.</ns0:p><ns0:p>&#8226; To verify the effectiveness, applicability, and transferability of the method, extensive experiments of defense tests are carried out on three real data sets and multiple attack methods. The results show that our approach can achieve better defense performance for different adversarial example attacks and significantly reduce image loss.</ns0:p><ns0:p>The rest of this paper is organized as follows: Section 2 briefly introduces an background of the existing attack and defense methods. Section 3 discusses the methodology and defense framework proposed in this paper in detail, followed by many experiments to demonstrate the feasibility of this method in Section 4. Finally, the conclusion is given in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>In this section, we will review related works from two aspects: the attack methods of generating adversarial examples and the defensive techniques of resisting adversarial examples.</ns0:p></ns0:div> <ns0:div><ns0:head>Attack methods</ns0:head><ns0:p>In order to verify the versatility of the proposed method, the following four different methods are mainly used to generate adversarial examples.</ns0:p><ns0:p>Fast Gradient Sign Method (FGSM) <ns0:ref type='bibr'>Goodfellow et al.</ns0:ref> propose the FGSM <ns0:ref type='bibr' target='#b7'>(Goodfellow et al., 2015)</ns0:ref>, a fast and straightforward method of generating adversarial examples. 
Given the input image, the maximum direction of gradient change of the deep learning model is found, and adversarial perturbations are added to maximize the cost subject to a L &#8734; constraint, resulting in the wrong classification result. The FGSM adds the imperceptible perturbations to the image by increasing the image classifier loss. The generated adversarial example is formulated as follows:</ns0:p><ns0:formula xml:id='formula_0'>x adv = x + &#949; &#8226; sign(&#9661; x J(&#952; , x, y true ))<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where J(&#952; , x, y) denotes the cross entropy cost function, x is the input image, y is the true label of the input image, and &#949; is the hyperparameter that determines the magnitude of the perturbations.</ns0:p></ns0:div> <ns0:div><ns0:head>3/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Basic Iterative Method (BIM)</ns0:p><ns0:p>The problem of adversarial examples is constantly being studied. Such as, Kurakin et al. <ns0:ref type='bibr' target='#b13'>(Kurakin et al., 2016)</ns0:ref> present a more direct basic iterative method (BIM) to improve the performance of FGSM. In other words, BIM is an iterative version of FGSM. It uses the basic idea of gradient descent to perform iterative training with small steps. Moreover, clip the pixel values of the intermediate results after each step to ensure that they are in an &#949;-neighborhood of the original image:</ns0:p><ns0:formula xml:id='formula_1'>x adv 0 = x, &#8226; &#8226; &#8226; , x adv N+1 = clip x,&#949; {x adv N + &#945; &#8226; sign(&#9661; x J(&#952; , x, y true ))} (2)</ns0:formula><ns0:p>Among them, x is the input image, y true is the true class label, J(&#952; , x, y) is the loss function, and &#945; is the step size, usually &#945; = 1.</ns0:p><ns0:p>This method attempts to increase the loss value of the correct classification and does not indicate which type of wrong class label the model should choose. Therefore, it is suitable for data sets with fewer and different types of applications.</ns0:p><ns0:p>Carlini &amp; Wagner (C&amp;W )</ns0:p><ns0:p>Carlin and Wagner proposed an optimization-based attack method called C&amp;W <ns0:ref type='bibr' target='#b0'>(Carlini and Wagner, 2017)</ns0:ref>. C&amp;W can be a targeted attack or an untargeted attack. The distortion caused by the attack is measured by three metrics:</ns0:p><ns0:formula xml:id='formula_2'>(L 0 , L 2 , L &#8734; ).</ns0:formula><ns0:p>There are three methods introduced by C&amp;W , which are more efficient than all previously-known methods in terms of achieving the attack success rate with the smallest amount of imperceptible perturbation. A successful C&amp;W attack usually needs to meet two conditions. </ns0:p><ns0:formula xml:id='formula_3'>min 1 2 (tanh(x n + 1) &#8722; X n ) 2 2 + c &#8226; f ( 1 2 tanh(x n ) + 1) W here f (x &#8242; ) = max(max{Z(x &#8242; ) i : i = t} &#8722; Z(x &#8242; t ), &#8722;k)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where the Z is the softmax function, the k is a constant used to control the confidence, the t is the target label of misclassification, and c is constant chosen with binary search. In the above formula, tanh(x) refers to the mapping of adversarial examples to tanh space. 
After transformation, x belongs to (&#8722;in f , +in f ),</ns0:p><ns0:p>which is more conducive to optimization.</ns0:p></ns0:div> <ns0:div><ns0:head>DeepFool</ns0:head><ns0:p>The DeepFool algorithm is proposed by <ns0:ref type='bibr' target='#b17'>Moosavi-Dezfooli et al. (Moosavi-Dezfooli et al., 2016)</ns0:ref>, which generates an adversarial perturbation of the minimum norm of the input sample through iterative calculation. In each iteration, the DeepFool algorithm interferes with the image through a small vector. It gradually pushes the images located within the classification boundary to outside the decision boundary until a misclassification occurs. In addition, DeepFool aggregates the perturbations added in each iteration to calculate the total perturbations. Its perturbations are minor than FGSM, and at the same time, the classifier has a higher rate of misjudgment.</ns0:p></ns0:div> <ns0:div><ns0:head>Defense methods</ns0:head><ns0:p>At present, the defense is mainly divided into two aspects: improving the classifier model's robustness color depth. When &#949; is small, the attack intensity is low, and reducing each pixel's color depth can have an excellent defense effect. On the contrary, as the attack intensity continues to increase, the defense effect is also declining. At the same time, the situation becomes more complicated in the face of more complex data sets (such as Cifar-10). Although a higher compression rate can improve the defensive performance to a certain extent, it will also cause the loss of ordinary image information and reduce the prediction accuracy of the classifier model. Therefore, we need to repair the damaged image after compression.</ns0:p></ns0:div> <ns0:div><ns0:head>5/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In the experiment, we found that after compressing the high-strength adversarial example, the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science To better reconstruct clean samples, we minimize the distance between the reconstructed SR img and the original image HR img. We use Mean Squared Error(MSE) to define the loss function of the CNN:</ns0:p><ns0:formula xml:id='formula_4'>L(&#952; ) = 1 2N &#8721; F(X, &#952; ) &#8722;Y 2 (4)</ns0:formula><ns0:p>where F is the image restoration network, X is the compressed image, &#952; is the network parameter, and Y is the original image.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENT</ns0:head><ns0:p>In this section, we use experiments to verify the effectiveness of the proposed algorithm. The basic process of the experiment includes generating adversarial examples on different datasets and training multiple classifier models to test the performance and transferability of the defense model. In addition, we conducted a comprehensive theoretical analysis of the experimental results.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental setup</ns0:head><ns0:p>In our experiments, we use three different image datasets: <ns0:ref type='bibr'>MNIST (LeCun et al., 1998)</ns0:ref>, Fashion-MNIST (F-MNIST) <ns0:ref type='bibr' target='#b28'>(Xiao et al., 2017)</ns0:ref> and <ns0:ref type='bibr'>CIFAR-10 (Xiao et al., 2018)</ns0:ref>. The MNIST and F-MNIST datasets both contain 60,000 training images and 10,000 test images. Each example is a 28&#215;28 grayscale image associated with one label in 10 categories. 
The difference is that MNIST is a classification of handwritten numbers 0 &#8722; 9, while F-MNIST is no longer an abstract symbol but a more concrete clothing classification.</ns0:p><ns0:p>The CIFAR-10 dataset is a 32 &#215; 32 color image associated with 10 category labels, including 50,000 Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Experiment results</ns0:head><ns0:p>In this section, the adversarial examples generated by FGSM, BIM, DeepFool, and C&amp;W on different datasets are applied to the defense framework of this paper. Simultaneously, in the training process, to make the reconstructed network have selective noise reduction and generalization capabilities, we use the FGSM with the most perturbation to generate adversarial examples and input them into the neural network. Generally, simple images need to add a significant perturbation to be effective. In this paper, for the MNIST dataset, the value of &#949; is up to 0.3; for the F-MNIST dataset, the value of &#949; is from 0 to 0.1;</ns0:p><ns0:p>for the Cifar-10 dataset, the value of &#949; is taken from 0 to 0.01. When &#949; is equal to 0.01, the adversarial example is enough to produce a higher error rate on the target classifier model for the CIFAR-10 data set.</ns0:p><ns0:p>The results of each step of the defense experiment process are shown in the figure below.</ns0:p><ns0:p>From left to right, the different subgraphs in Figure <ns0:ref type='figure' target='#fig_10'>5</ns0:ref> the operation can eliminate adversarial perturbations in the input image. This is because the defense model has certain image recovery capabilities, the MNIST image structure is relatively simple, and the information is not easily damaged. For FGSM attacks, we can see that the accuracy can be restored from 20% to 97% under high attack intensity, BIM can be restored from 5% to 98%, DeepFool can be restored from 0% to 98%, and C&amp;W can be restored from 0% to 98%. Fashion-MNIST defense model, i. e., the original image recognition accuracy rate drops by 4% after preprocessing. For FGSM attacks, it recovery from 13% to 81%; for BIM, it recovery from 1% to 82%;</ns0:p><ns0:p>for DeepFool, it recovery from 0% to 88%; and for C&amp;W , it recovery from 0% to 88%. processing the three-channel color dataset CIFAR-10, we find that it is more complicated than the first two single-channel grayscale image data sets. Mainly because it is difficult to balance the pixel compression rate and the defense rate, which makes the defense effect appear to be reduced to a certain extent. It can be seen from Figure <ns0:ref type='figure' target='#fig_13'>8</ns0:ref>, the ordinary sample has a loss close to 5% in accuracy after compressed and reconstructed. For FGSM attacks, the defense model can restore the accuracy from 23% to 71%, BIM from 2% to 70%, DeepFool from 18% to 87%, and CW from 0% to 87%.</ns0:p></ns0:div> <ns0:div><ns0:head>Defense Transferability</ns0:head><ns0:p>As a preprocessing method, we can combine different target models without modifying them. To verify the defense model's portability, we train three classifier models from weak to strong performance. 
They are: <ns0:ref type='bibr'>LeNet (LeCun et al., 1998)</ns0:ref>, GoogLeNet <ns0:ref type='bibr' target='#b22'>(Szegedy et al., 2015)</ns0:ref>, and ResNet101 <ns0:ref type='bibr' target='#b9'>(He et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Besides, we combine the defense model trained with these three classifier models to test the defense performance.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> and Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> show in detail the experimental results of the transferability of the defense model.</ns0:p><ns0:p>On the MNIST and Fashion-MNIST datasets, we take the median value of 0.15 and 0.05 for &#949;, respectively.</ns0:p><ns0:p>Due to the performance difference of the target model, the effect will be slightly reduced when the defense model is combined with different models. However, it can still defend well against adversarial example attacks. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> shows the transferability performance of our defense model combined with ResNet 50, ResNet101, and GoogLeNet on the data set Cifar-10. We take the median value of 0.005 for &#949;. The classification accuracy of the overall network model on the Cifar-10 data set has been reduced compared to the performance of the MNIST and F-MNIST data sets. This is because the Cifar-10 data set is relatively complex. In short, the classification accuracy of the network model with defense is much higher than the network model without defense when facing different attacks. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Performance comparison between similar defense models</ns0:head><ns0:p>Computer Science shown in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>, our method performs best compared with other methods under attack models such as BIM, DeepFool, and C&amp;W .</ns0:p><ns0:p>Although the ComDefend method is better at preserving the original image information, it adds Gaussian noise during training to improve the network's ability to resist noise. The defense effect of some attacks, such as BIM, is not ideal. The impact of adding an FGSM attack is only acceptable in the case of FGSM adversarial examples, and it performs poorly for adversarial examples generated by other methods.</ns0:p><ns0:p>In general, although the direct pixel depth reduction has made a certain sacrifice in image information preservation, the confrontation samples generated in the face of different attacks in the above experiments can all play a good defense effect. Therefore, to the best of our knowledge, our method can effectively defend against adversarial example attacks. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science attacks. However, due to limited knowledge and personal abilities, many issues need further research.</ns0:p><ns0:p>We will study how to better balance the compression rate of complex images and preserve adequate information and verify the method's effectiveness on more complex datasets.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The generation process of adversarial example.</ns0:figDesc><ns0:graphic coords='3,224.07,63.71,248.28,80.89' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>can use it to calculate the attack distance and generate adversarial examples. 
A gray-box attack means that the attacker knows limited target model information. A black-box attack means that an attacker uses a similar model to generate adversarial examples. The generated adversarial examples have a certain degree of transferability, which can carry out transfer attacks on the model without knowing the relevant information of the model, and it has a high success rate. Furthermore, extreme samples can even deceive multiple different models. Generally, adversarial</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The defense framework uses input samples as pictures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>First</ns0:head><ns0:label /><ns0:figDesc>, the difference between the adversarial examples and the corresponding clean samples should be as slight as possible. Second, the adversarial examples should make the model classification error rate as high as possible. The details are shown in (3).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>and preprocessing the input without changing the classifier model. Adversarial training<ns0:ref type='bibr' target='#b7'>(Goodfellow et al., 2015)</ns0:ref> is currently a more effective defense method proposed byGoodfellow et al. They use adversarial examples to expand the training set and train with the original samples to increase the model's fit to the adversarial examples, thereby improving the robustness of the model. However, this increases the calculation cost and complexity, and adversarial training has excellent limitations. When facing adversarial attacks generated by different methods, the performance varies significantly.Generally, the preprocessing process does not need to modify the target model, compared with adversarial training and other methods, which is more convenient to implement. Moreover, it has a smaller amount of calculation and can be used in combination with different models. For instance, Xie et al.<ns0:ref type='bibr' target='#b29'>(Xie et al., 2017)</ns0:ref> propose to enlarge and fill the input image randomly. The entire defense process does4/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)Manuscript to be reviewed Computer Science not need to be retrained and is easy to use. However, the results show that this method is only effective for iterative attacks such as C&amp;W and DeepFool<ns0:ref type='bibr' target='#b17'>(Moosavi-Dezfooli et al., 2016)</ns0:ref>, while for FGSM, the defensive effect of this single-step attack is inferior. They believe that this is due to the iterative attack to fitting the target model, resulting in low-level image transformation that can destroy the fixed structure of the adversarial disturbance. In addition, Liao et al.<ns0:ref type='bibr' target='#b15'>(Liao et al., 2018)</ns0:ref> regard adversarial perturbation as a kind of noise, and they designe a high-level representation guided denoiser (HGD) model to eliminate the adversarial disturbance of the input species. Das et al.<ns0:ref type='bibr' target='#b1'>(Das et al., 2017)</ns0:ref> use JPEG compression to destroy adversarial examples. Similarly, Pixel Defend (Song et al., 2017) is a new method that purifies the image by moving the maliciously perturbed image back to the training data to view the distribution. Feature squeezing (Xu et al., 2017) is both attack-agnostic and model-agnostic. 
It can reduce the image range from [0, 255] to a smaller value, merge the samples corresponding to many different feature vectors in the initial space, and reduce the search space available to the opponent. Similar methods also include label smoothing (Warde-Farley and Goodfellow, 2016), which converts one-hot labels to soft targets. Besides, Zhang et al. (Zhang et al., 2021) propose a domain adaptation method, which gradually aligns the features extracted from the adversarial example domain with the clean domain features, making DNN more robust and less susceptible to spoofing by diverse adversarial examples.OUR APPROACH MotivationThe essence of adversarial examples is to deliberately add high-frequency perturbations to clean input samples and amplify the noise through deep neural networks so that the model gives the wrong output with high confidence. For example, when we input a clean image of a cat, add an imperceptible perturbation, the classifier will misclassify it as a leopard with high confidence. Through previous research, we have also learned that the classifier is robust to ordinary noise. Simultaneously, the adversarial perturbation in the adversarial example is very unstable and can be destroyed by some simple image transformation methods. According to the currently known image characteristics, we use image processing methods to eliminate the fixed structure of the adversarial perturbation before the adversarial example is input to thetarget. At the same time, to ensure the system's normal operation and the performance of the adversarial examples after converting the original image and the target model, we combine image compression and image restoration neural networks to form the entire defense model. This model can convert adversarial examples into clean images to resist adversarial example attacks without significantly reducing the quality of ordinary images. Pixel depth reduce An array of pixels represents a standard digital image in a computer, and each pixel is usually represented as a number with a specific color. Since two common representations are used in the test data set, they are 8-bit grayscale and 24-bit color. Grayscale images provide 2 8 = 256 possible values for each pixel; we use k to represent the maximum range of pixel values. An 8-bit value represents a pixel's intensity, where 0 is black, 255 is white, and the average number represents different shades of gray. The 8-bit ratio can be expanded to display color images with separate red, green and blue channels and provides 24 bits for each pixel, representing 2 24 &#8776; 16 million different colors. The redundancy of the image itself offers many opportunities for attackers to create adversarial examples. Compressed pixel bit depth can reduce image redundancy and destroy the fixed structure of adversarial examples in the input while retaining image information without affecting the image's accuracy on the classifier model. As shown in Figure 3, the defense capability is tested on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. In the sub-pictures (a), (b), and (c), k refers to the maximum range of pixel value</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a) results on the MNIST dataset. (b) results on the F-MNIST dataset. (c) results on the CIFAR-10 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
Changes in the defense effect of pixel compression on the three data sets.</ns0:figDesc><ns0:graphic coords='7,264.39,232.11,170.34,147.86' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The model training framework.</ns0:figDesc><ns0:graphic coords='8,141.73,63.87,396.46,107.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>training images and 10,000 test images. To prevent over-fitting, both the defense model and the classifier target model in this paper are trained by the training set. The classifier model's accuracy and the defense model's performance experiment are conducted in the test set. To verify the generalization effect of the defense framework, this paper chooses FGSM, BIM, Deep-Fool and C&amp;W four methods to generate different types of adversarial examples for defense testing. We preprocess the defense model and then input it into the classifier model to get the experimental results. For FGSM and BIM, we use the L &#8734; norm to control the perturbation's intensity by changing the size of &#949;. Differently, we use the L 2 norm to implement the C&amp;W model, and adjust the degree of perturbation by controlling the maximum number of iterations. To preserve the original image information and eliminate adversarial perturbations as much as possible, we set k = 2 (k denotes the maximum range of pixel value color depths) on the MNIST dataset and k = 4 on the F-MNIST dataset. 7/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>are the adversarial examples generated by FGSM, BIM, DeepFool, and C&amp;W attack methods, respectively; from top to bottom are normal examples, adversarial examples, compression examples, and reconstructed examples.Figure 5(a) is the result of working on the MNIST data set. It can be seen that only the pixel compression operation can eliminate most of the adversarial perturbations, and the adversarial examples restore the accuracy of the classifier model. In addition, the adversarial examples generated by different methods have different perturbation levels to the image, and FGSM has the most massive perturbation level. When &#949; is 1.5, it has already had a more significant impact on the image, and the human eye can already detect it, but our method can still restore it to a clean sample. A few extreme adversarial examples become other classification results after processing, as shown in the first column of Figure 5(a). Still, after the reconstruction of the network, the recognition accuracy is also restored.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(b) and (c) show the experimental results of the relatively complex of F-MNIST and CIFAR-10 data sets. Since pixel depth reduction is a lossy compression, choosing an appropriate compression level can eliminate the adversarial perturbation of the input sample as much as possible while retaining the necessary information. Generally, a slight loss of details does not affect the classifier model's correct recognition of the image. 
The following experiment will specifically show the defense effect of different data sets after processed by our defense framework under different attack intensities.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the MNIST dataset. Our algorithm is compared with four different types of adversarial samples in defensive and non-defensive situations. After the defense model processes the data set in this paper, the accuracy of the original image has almost no change. Furthermore, in the face of different types of attacks from FGSM, BIM, DeepFool, and C&amp;W ,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the Fashion-MNIST dataset. It can be seen from Figure 7 that we have also achieved good results in the face of a slightly complicated</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the Cifar-10 dataset. When</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>experimental results on MNIST dataset. (b) experimental results on Fashion-MNIST datasets.(c) experimental results on CIFAR-10 datasets.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance of defense models against multiple adversarial example attacks on different datasets.</ns0:figDesc><ns0:graphic coords='10,220.61,476.45,255.33,162.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Performance of the defense model on the MNIST dataset.</ns0:figDesc><ns0:graphic coords='11,349.74,261.26,198.63,173.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Performance of the defense model on the F-MNIST dataset.</ns0:figDesc><ns0:graphic coords='12,143.85,261.98,198.27,173.53' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Finding a robust defense</ns0:head><ns0:label /><ns0:figDesc>method for adversarial samples is an open problem, and many researchers have carried out work in this area. This paper proposes an image compression and reconstruction defense framework to defend against adversarial example attacks based on the redundancy of images and the sensitivity of adversarial examples. We compress the pixel bit depth in the image to destroy the adversarial perturbation of the image and then use DNN to repair the image. On the premise of ensuring the image quality, the adversarial examples are converted into clean samples to achieve the purpose of defense. In addition, this method can be easily combined with other defense methods without modifying the 11/15 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Performance of the defense model on the CIFAR-10 dataset.</ns0:figDesc><ns0:graphic coords='14,349.73,258.01,198.36,173.44' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The performance of the defense model combined with LeNet and GoogLeNet on MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>18%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>99%</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>LeNet(no defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>38%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.15)</ns0:cell><ns0:cell>LeNet(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>GoogLeNet(no defense) 99%</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell>26%</ns0:cell><ns0:cell>19%</ns0:cell><ns0:cell>15%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>97%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The performance of the defense model combined with ResNet and GoogLeNet on F-MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no defense)</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>22%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>F-MNIST GoogLeNet(no defense) 90%</ns0:cell><ns0:cell>35%</ns0:cell><ns0:cell>18%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.05)</ns0:cell><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>81%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(no defense)</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(defense)</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>83%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The performance of the defense model combined with ResNet 
and GoogLeNet on Cifar-10.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no defense)</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>64%</ns0:cell><ns0:cell>23%</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>39%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>79%</ns0:cell><ns0:cell>68%</ns0:cell><ns0:cell>59%</ns0:cell><ns0:cell>72%</ns0:cell><ns0:cell>72%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>CIFAR-10 GoogLeNet(no defense) 98%</ns0:cell><ns0:cell>36%</ns0:cell><ns0:cell>34%</ns0:cell><ns0:cell>35%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.005)</ns0:cell><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>94%</ns0:cell><ns0:cell>51%</ns0:cell><ns0:cell>52%</ns0:cell><ns0:cell>61%</ns0:cell><ns0:cell>60%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(no defense)</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>64%</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell>22%</ns0:cell><ns0:cell>43%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(defense)</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>69%</ns0:cell><ns0:cell>63%</ns0:cell><ns0:cell>74%</ns0:cell><ns0:cell>74%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The result of comparisons with other defensive methods(F-MNIST).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>Methods</ns0:cell><ns0:cell>Clean</ns0:cell><ns0:cell>FGSM</ns0:cell><ns0:cell>BIM</ns0:cell><ns0:cell>DeepFool</ns0:cell><ns0:cell>CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Normal</ns0:cell><ns0:cell cols='5'>93%/93% 38%/24% 00%/00% 06%/06% 00%/00%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>Adversarial FGSM 93%/93% 85%/85% 51%/00% 63%/07% 67%/21%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Adversarial BIM</ns0:cell><ns0:cell cols='5'>92%/91% 84%/79% 76%/63% 82%/72% 81%/70%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Resnet50 Feature Squeezing 84%/84% 70%/28% 56%/25% 82%/72% 83%/83%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Pixel Defend</ns0:cell><ns0:cell cols='5'>89%/89% 87%/82% 85%/83% 83%/83% 88%/88%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ComDefend</ns0:cell><ns0:cell cols='5'>93%/93% 89%/89% 70%/60% 88%/88% 88%/89%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Our Method</ns0:cell><ns0:cell cols='5'>89%/89% 87%/86% 87%/86% 90%/89% 89%/89%</ns0:cell></ns0:row></ns0:table><ns0:note>12/15PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:1:1:NEW 28 Aug 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
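Note on the method in the manuscript above: its first defense step is pixel-depth reduction, keeping only k gray levels per channel (k = 2 for MNIST and k = 4 for F-MNIST in the reported experiments). As a reading aid, a minimal NumPy sketch of that step is given below; it assumes images already scaled to [0, 1] and treats k as the number of retained levels, which is an interpretation of the paper's wording rather than the authors' released code.

import numpy as np

def reduce_pixel_depth(images, k):
    # Quantize images in [0, 1] to k gray levels per channel.
    # The intent, as described in the paper, is to break fine-grained
    # adversarial perturbations while keeping the coarse image content.
    assert k >= 2, "need at least two levels"
    quantized = np.round(images * (k - 1)) / (k - 1)
    return np.clip(quantized, 0.0, 1.0)

# Example (hypothetical usage): binarize MNIST-style inputs before reconstruction.
# cleaned = reduce_pixel_depth(batch, k=2)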
"Dear Editors, On behalf of my co-authors, we would like to thank you for providing us with the opportunity to modify the manuscript, and thank you editors and reviewers for your positive comments and constructive suggestions on the title of 'Adversarial example defense based on image reconstruction'(ID: peerj-59969). First of all, we are very sorry for the delay in submitting the revised paper. The main reasons are as follows: 1. We have carefully considered every constructive suggestion that the editors and reviewers have put forward to our manuscript, and spent a lot of time and energy revising it repeatedly. 2. The experimental questions raised by the two reviewers have been added and modified, which is time-consuming due to the limitations of experimental equipment and the long experimental period. In fact, the code we wrote is available, and we will attach a more detailed related document containing a link to the manuscript code. Furthermore, we have made specific explanations and modifications for every constructive comment given by the reviewers. After careful modification, we believe that our manuscript will be closer to the standards published by PEERJ. Best wishes. Yours truly, Yu Zhang, Huan Xu, Chengfei Pei, Gaoming Yang Reviewer 1: Basic report: 1. The paper is easy to understand, but the English language should be improved on grammar and tense. Some examples include lines 241-242, 273, 288-289, and the inconsistency of tense used in Section experiment results. Author response: Dear Reviewer, first of all, thank you very much for taking your time to review our paper carefully. Based on your suggestions, we have repeatedly checked the language and tense of our manuscripts, and used the Grammarly tool to verify it to meet journal publication standards. Furthermore, we have changed and marked many places in the manuscript for your review. For example, we modify 'Preprocess the defense model in this paper, and then we input it into the classifier model to get the experimental results' on lines 241-241 to 'We preprocess the defense model, and then input it into the classifier model to get the experimental results'. In addition, we have also modified the other places you suggested accordingly. 2. The authors missed some background information on adversarial attacks and defense methods. In the experiment, you used DeepFool attack and compared your proposed method with Pixel Defend, Feature Squeezing, and ComDefend but you didn't introduce these methods in Section Background. Author response: Thank you for your valuable comments on the background section of our manuscript. According to your proposal, we have added a simple implementation of the ComDefend method to the “Introduction”, and added knowledge such as Pixel Defend and Feature Squeeze to the “Background” defense method. Also, we have already marked the specific changes in the manuscript, please check it out. For example, we introduced Pixel Defend and Feature Squeezing in the defense model in section 2.2 of the background introduction. As follows:'Similarly, Pixel Defend [22] is a new method that purifies the image by moving the maliciously perturbed image back to the training data to view the distribution. Feature squeezing [16] is both attackagnostic and model-agnostic'. In addition, the ComDefend method is also narrated in the introduction at the beginning. 3. The structure of this paper is good. Figures are relevant to the content of the paper. Some problems are in Table 3. 
The names of the first two columns should be 'network' and 'method'. The caption of Table 3 doesn't provide sufficient information to help understand this table. Specifically, why for each cell there are two numbers (e.g., 93%/93%)? I also don't find the explanation in the paper. Author response: Thank you very much for your suggestions and questions on our table 3. We apologize for the lack of clarity in the text of the experiment. In the revised manuscript, we changed Table 3 to Table 4 due to the addition of new experiments. Table 4 mainly shows the classification accuracy of our model compared with other defense models under the attack intensity of the four attack methods. We are very sorry that the names of the first two columns were incorrectly written due to carelessness when editing the document. The first column of the table is the network environment Resnet50. The second column shows some classic defense methods. We have changed the name of the first column from 'Dataset' to 'Network' and the name of the second column from 'Network' to 'Methods'. The two numbers in the table (for example, 93%/93%) refer to the classification accuracy of the defense under two attack strengths with different parameters. In view of your reminder, we will make a more detailed explanation in the manuscript to improve our deficiencies. 4. I thank you for providing the source code, but you need also to provide a Readme file to describe how to use the code. In addition, there are many Chinese characters in the code and filenames, which should be replaced with English words. The authors should also provide pre-trained models to ensure the experimental results are reproducible. Author response: Thank you for questioning the insufficiency of the presentation of the experimental table in our manuscript. We are very sorry for our negligence in adding Chinese characters to the uploaded program. Previously, due to the capacity limitation of the website, we failed to upload the complete program. According to your suggestions, we have supplemented and modified the written program to improve the readability of the written program. We will upload a more explicit Readme file and pre-trained model. Experimental design: 1. The paper proposed an adversarial defense method based on image compression and reconstruction. However, the detailed structure of the image reconstruction network was not introduced. In the provided source code I saw some different network architectures used on different datasets. The authors should introduce them in the paper. Author response: Thank you very much for your suggestions on reconstructing the image network structure. We have added the specific details of the reconstruction network in Figure 4 in the second paragraph of section 3 in the manuscript. In the source code, we refer to the reconstruction methods of SRResnet and EDSRNet, and improve some parameter settings to realize our reconstruction network. Based on your suggestions, we also introduced some relevant reconstruction networks in the manuscript and gave references. 2. In lines 214-216, the authors described they generated adversarial examples and used them with clear examples to train the image reconstruction network. However, in the source code (cs-59969-MNIST-002.zip/train_turn_defense.ipynb), I found only adversarial examples were used. The authors should double-check the used method and describe it clearly and accurately. 
Author response: Thank you very much for reading our manuscript carefully and for your valuable comments. Regarding the content of lines 214-216 you pointed out, we reviewed the source code and found that there is indeed an improper expression; that is, our training set only contains adversarial examples. Therefore, we made changes to the manuscript. In fact, we directly use clean samples to generate adversarial samples and then use clean samples as labels to train adversarial samples. The purpose of training is to reduce the gap between the adversarial sample and the original data after processing the defense model. Finally, we will provide a clearer and more readable source code to explain our model. 3. The authors conducted experiments on evaluating the performances of the proposed method, the transferability of the proposed methods, and the comparison of the proposed method with other defense methods. The research questions are well defined and meaningful. However, the experiment setting should be improved in some ways. First, ImageNet or tiny-imagenet dataset should be used to evaluate the proposed method because it is the most widely used and most complicated dataset for image classification. Second, for the comparison of the proposed method and other defense methods, more similar defense methods based on image reconstruction such as HGD mentioned in the paper could be considered to compare. Author response: First, we would like to thank the reviewers for summarizing and acknowledging our work. Similarly, we attach great importance to your suggestions in our manuscript and the experimental parts that need improvement. Regarding your two suggestions for our manuscript in the experiment, we responded as follows: 1. We realize imagenet dataset will definitely have more advantages for algorithm verification. However, our experimental equipment is so limited (We only have an NVIDIA RTX2060 graphics card) that it cannot support this huge data set. After that, we tried to retrain the new model on the tiny-imagenet dataset and run it on resnet101, but due to equipment limitations and our weak operating capabilities, the final result was still not obtained. Out of the importance of reviewers’ opinions, we spent a lot of time on this work (this is also one of the reasons for the postponement of repair work), and we will provide relevant source code document of our work. At the same time, considering the rigor of the paper, we will indicate in our manuscript that it is not wellcompleted on the large data set. 2. Thank you for suggesting us similar defense methods for HGD based on image reconstruction. We are very sorry for not being able to add the defense method HGD to the manuscript due to our weak understanding and practical ability. We spent a lot of time learning many defense methods similar to those in the manuscript, such as ComDefend, PixelDefend, Feature Squeezing, and the HGD you mentioned. To be honest, these methods have made great contributions to the research field of adversarial examples and achieved good results. However, our manuscript is based on Comdefend's experimental ideas and comparison methods. It only achieves innovation and outstanding in some aspects, and shows better results on certain data sets. Therefore, we have not been able to fully realize the comparison of HGD. Thank you again for your constructive comments and apologize for our failure to complete your suggestions. Validity of the findings: 1. 
For the experiment of comparing the proposed method with other defense methods, only result on F-MNIST was reported. The results on the other two datasets should be provided to prove the proposed method is better than other methods in general. Author response: First of all, thank you for your valuable comments on our experimental design. For our comparison with other defense methods only on the F-MNIST dataset, we would like to give the following two explanations: 1. The number of training sets and test sets of F-MNIST and MNIST are the same, and there are ten categories. It can be simply said that F-MNIST clones all the external features of MNIST. Differently, F-MNIST is no longer an abstract symbol 0-9, but a more concrete clothing classification. Relatively speaking, the results obtained can explain the problem better than MNIST. Therefore, we did not make comparisons on the MNIST dataset. 2. We supplemented the Table 3 comparison experiment in the manuscript on the Cifar10 data set, and tested the performance of our defense model under four different attack methods. However, due to the deadline and limitations of our GPU equipment, we are sorry that we have not completed the experiment using different defense methods to compare. For the rigor of the paper, we will indicate the limitations of our method in the manuscript. Reviewer 2: Basic reporting: The introduction section demands to be more convincing. Try to structure the introduction section with four paragraphs as follows: i) State the motivation and clearly define the problem to be solved. ii) Make a thorough discussion of the stateof-the-art. iii) Describe your proposal in fair context to other published methods highlighting advantages and disadvantages of these methods. iv) Clearly pinpoint the novelty/contribution of your proposal and briefly describe your findings. Author response: First of all, thank you very much indeed for your suggestion about how to revise the introduction. We have carefully studied the four suggestions you gave us about writing the introduction, which will bring us great benefits for revising the manuscript now and writing the paper later. In accordance with the four main points you gave for writing the introduction, we have repeatedly checked the introduction part of the manuscript and spent a lot of time revising the questions in the manuscript including background, motivation, discussion of the state-of-the-art, logical relationship between sentences, and contribution. We sincerely hope that we can meet the publication requirements of the journal after we improve. Thank you again for your unreserved help! Experimental design: The performance of CNN strongly depends on an optimum structure of a network. The training structure in Figure 4 needs to be self-contained such as number of layers, height and width of each layer. Networks with defense method show degraded performance for clean images in Table 2. How do you validate this result? Author response: First of all, thank you very much for your questions and suggestions regarding the figures and tables in the manuscript. According to the defense framework of the sample reconstruction in Figure 4, we have made a more detailed description in the manuscript and corrected some inappropriate statements. Regarding the performance degradation of the network with defense methods on clean images that you mentioned in Table 2, we think this is an unavoidable problem in our defense model. 
The general idea of our defense method is sample reconstruction, which includes lossy compression of samples that will lose some features. Therefore, the features of the samples that have gone through the defense mechanism in Table 2 have been lost, and the classification accuracy has also decreased. In contrast, the features of the clean samples are intact, and the classification accuracy is relatively high. After that, we conducted experimental verification on the data set Cifar-10, and the classification accuracy of the network with the defense method on the clean sample was slightly lower. Validity of the findings: Network models in supplement files were not possible to test. It is needed to provide comprehensive readme files to run and test source codes, models and dataset. Is there any limitation of the proposed methodology? Author response: Thank you for your review of the supplemental documents we submitted. We apologize for not double-checking the submitted supplemental documents. At that time, we were not able to submit all the code related to the entire network model due to the system file memory size limitation, and we will double-check and submit a full self-description file this time. Regarding your second question about the limitations of the approach in this paper, in all honesty, the general idea in our manuscript is to reconstruct the approach based on the input, which defends well against adversarial examples, but also brings a loss of classification accuracy. Therefore, we would like to better trade-off between loss of accuracy and defensibility. Comments for the Author The manuscript is overall well written. If there are weaknesses, as I have noted above which need be improved upon before publication. Author response: Thank you very much for your suggestions and affirmation of our work. We will carefully revise and improve according to your suggestions and do our best to meet the level of journal publication. We would like to express our great appreciation to you and reviewers for comments on our paper. I am looking forward to hearing from you. Best regards, All authors "
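Aside on the training procedure clarified in the response above: the reconstruction network is trained on FGSM adversarial examples generated from clean images, with the clean images themselves used as labels under an MSE objective. A minimal PyTorch-style sketch of one such training step follows; recon_net, attack, and optimizer are hypothetical placeholders, and applying the same pixel-depth reduction before reconstruction is an assumption based on the defense pipeline described in the manuscript, not a quote of the authors' code.

import torch
import torch.nn as nn

def train_step(recon_net, optimizer, clean_batch, attack, k=4):
    # One training step: adversarial copies of the clean batch are generated,
    # pixel-depth-reduced, and the network learns to map them back to the
    # clean images by minimizing the mean squared error.
    adv_batch = attack(clean_batch)                          # e.g. FGSM-perturbed inputs (placeholder callable)
    compressed = torch.round(adv_batch * (k - 1)) / (k - 1)  # assumed pixel depth reduction, inputs in [0, 1]
    restored = recon_net(compressed)
    loss = nn.functional.mse_loss(restored, clean_batch)     # clean images as labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()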
Here is a paper. Please give your review comments after reading it.
312
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The rapid development of deep neural networks (DNN) has promoted the widespread application of image recognition, natural language processing, and autonomous driving.</ns0:p><ns0:p>However, DNN is vulnerable to adversarial examples. Such as, an input sample with imperceptible perturbation can easily invalidate the DNN and even deliberately modify the classification results. So this paper proposes a preprocessing defense framework based on image compression reconstruction to achieve adversarial example defense. Firstly, the defense framework performs pixel depth compression on the input image based on the sensitivity of the adversarial example to eliminate adversarial perturbations. Secondly, we use the super-resolution image reconstruction network to restore the image quality and then map the adversarial example to the clean image. Therefore, there is no need to modify the network structure of the classifier model, and it can be easily combined with other defense methods. Finally, we evaluate the algorithm with MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results show that our approach outperforms current techniques in the task of defending against adversarial example attacks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Deep neural networks have been widely used in computer vision, natural language processing, speech recognition, and other fields <ns0:ref type='bibr' target='#b11'>(Karen and Andrew, 2015)</ns0:ref>. However, the adversarial example proposed by Szegedy et al. <ns0:ref type='bibr' target='#b22'>(Szegedy et al., 2013)</ns0:ref>, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, can easily deceive the neural network by adding a minor perturbation to the ordinary image, i.e., the deep convolutional neural network will continuously amplify this perturbation, which is sufficient to drive the model to make high confidence incorrect predictions without being detected by the human eye. As a result, the adversarial example has a minor perturbation than the normal noise. However, it brings more significant obstacles to practical applications. Researchers usually input the pictures directly into the neural network for the computer classification test when training the classifier model to solve this problem. Such as, Kurakin et al. <ns0:ref type='bibr' target='#b12'>(Kurakin et al., 2016)</ns0:ref> find that a significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through the camera. Nowadays, the research and implementation of autonomous driving <ns0:ref type='bibr' target='#b3'>(Deng et al., 2020)</ns0:ref> and person detection <ns0:ref type='bibr' target='#b23'>(Thys et al., 2019)</ns0:ref> rely heavily on deep learning technology. In addition to making the target model random errors, an adversarial example can also conduct targeted attacks according to the attacker's wishes and generate specified results. Such as Eykholt et al. <ns0:ref type='bibr' target='#b5'>(Eykholt et al., 2018)</ns0:ref> shows that adversarial examples bring substantial security risks to the application of related technologies. Furthermore, by adding adversarial perturbation to a road sign, the intelligent system may recognize the deceleration sign as an acceleration sign, which will bring substantial hidden dangers to traffic safety. 
Currently, the reasons for adversarial examples are still controversial. For example, Szegedy et al. <ns0:ref type='bibr' target='#b22'>(Szegedy et al., 2013)</ns0:ref> believe that they are caused by the nonlinearity of the model, while Kurakin et al. <ns0:ref type='bibr' target='#b12'>(Kurakin et al., 2016)</ns0:ref> propose that the linearity of high-dimensional space is sufficient to generate adversarial examples: if the input samples have sufficiently large dimensions, even linear models can be attacked by adversarial examples. Adversarial attacks can be divided into single-step attacks, which perform only one step of gradient calculation, such as the FGSM <ns0:ref type='bibr' target='#b6'>(Goodfellow et al., 2015)</ns0:ref>, and iterative attacks, which perform multiple steps to obtain better adversarial examples, such as BIM <ns0:ref type='bibr' target='#b18'>(Ren et al., 2020)</ns0:ref> and CW <ns0:ref type='bibr' target='#b0'>(Carlini and Wagner, 2017)</ns0:ref>. At the same time, adversarial example attacks can be categorized into white-box, gray-box, and black-box attacks based on the attacker's knowledge. A white-box attack means that the attacker knows all the information, including models, parameters, and training data, and can use it to calculate the attack distance and generate adversarial examples. A gray-box attack means that the attacker knows limited target model information. A black-box attack means that an attacker uses a similar model to generate adversarial examples; the generated adversarial examples have a certain degree of transferability, so they can attack a model without knowing its details, and extreme samples can even deceive multiple different models. Adversarial examples not only exist in images, but also in speech and text <ns0:ref type='bibr' target='#b29'>(Xu et al., 2020)</ns0:ref>, which makes the application of deep learning technology highly uncertain and exposes it to many potential threats. Therefore, defending against them is urgent.</ns0:p><ns0:p>With the endless emergence of attack methods, the defense of adversarial examples has become a significant challenge. Many defense methods <ns0:ref type='bibr' target='#b4'>(Dong et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b31'>Zhang and Wang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b7'>Hameed et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b19'>Singla and Feizi, 2020;</ns0:ref><ns0:ref type='bibr' target='#b10'>Jin et al., 2021)</ns0:ref> have been proposed, such as adversarial training <ns0:ref type='bibr' target='#b6'>(Goodfellow et al., 2015)</ns0:ref>, which increases model robustness by adding adversarial examples to the training process. Some other defenses mainly rely on preprocessing methods to detect or transform the input image before the target network without modifying the target model. For example, Xu et al. <ns0:ref type='bibr' target='#b30'>(Xu et al., 2017)</ns0:ref> propose that the input's adversarial perturbation can be eliminated by reducing the color bit depth of each pixel and spatial smoothing, and they create a defense framework to detect adversarial examples in the input. <ns0:ref type='bibr' target='#b9'>Jia et al. (Jia et al., 2019)</ns0:ref> introduce the ComDefend defense model, which constructs two deep convolutional neural networks: one for compressing images and retaining valid information, the other for reconstructing images. However, this method does not perform well under the attack of BIM.</ns0:p><ns0:p>On the other hand, a defense that only detects adversarial examples without taking further measures cannot meet practical needs. For example, in an autonomous driving application scenario, the defense system recognizes a road sign and detects that it is an adversarial example.
At this time, the defense system refuses to input the image, which will seriously affect its normal operation. In addition, convolutional neural networks are used to extract image features and compress images. If the compression rate is too low, the uncorrupted adversarial perturbation in the reconstruction network will continue to expand, thereby significantly reducing the classifier's accuracy.</ns0:p><ns0:p>To solve the above problems, we propose a defense framework based on image compression reconstruction, which is a preprocessing method. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> clearly describes the defense framework of this paper. The defense model in the figure can be divided into two steps. The specific operation is to eliminate adversarial perturbations by compressing images to defend against adversarial example attacks.</ns0:p><ns0:p>Simultaneously, to ensure that the standard and processed samples do not suffer from performance loss on the target model, we use the deep convolutional neural network to repair the processed images. In short, this paper makes the following contributions: &#8226; To defend against various adversarial example attacks, we propose a defense framework based on image compression and reconstruction with super-resolution. This framework eliminates adversarial perturbations by compressing the input samples and then reconstructs the compressed images using super-resolution methods to alleviate the performance degradation caused by compression.</ns0:p><ns0:p>&#8226; As a preprocessing method, there is no need to modify the target model during the defense process, i. e., our method has good performance for single-step and iterative attacks and has a small calculation compared with other adversarial training methods. In addition, it can be combined with different target models to have a protective effect still.</ns0:p><ns0:p>&#8226; To verify the effectiveness, applicability, and transferability of the method, extensive experiments of defense tests are carried out on three real data sets and multiple attack methods. The results show that our approach can achieve better defense performance for different adversarial example attacks and significantly reduce image loss.</ns0:p><ns0:p>The rest of this paper is organized as follows: Section 2 briefly introduces an background of the existing attack and defense methods. Section 3 discusses the methodology and defense framework proposed in this paper in detail, followed by many experiments to demonstrate the feasibility of this method in Section 4. Finally, the conclusion is given in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>In this section, we review related works from two aspects: the attack methods of generating adversarial examples and the defensive techniques of resisting adversarial examples.</ns0:p></ns0:div> <ns0:div><ns0:head>Attack methods</ns0:head><ns0:p>In order to verify the versatility of the proposed method, the following four different methods are mainly used to generate adversarial examples.</ns0:p><ns0:p>Fast Gradient Sign Method (FGSM) <ns0:ref type='bibr'>Goodfellow et al.</ns0:ref> propose the FGSM <ns0:ref type='bibr' target='#b6'>(Goodfellow et al., 2015)</ns0:ref>, a fast and straightforward method of generating adversarial examples. 
Given the input image, the maximum direction of gradient change of the deep learning model is found, and adversarial perturbations are added to maximize the cost subject to a L &#8734; constraint, resulting in the wrong classification result. The FGSM adds the imperceptible perturbations to the image by increasing the image classifier loss. The generated adversarial example is formulated as follows:</ns0:p><ns0:formula xml:id='formula_0'>x adv = x + &#949; &#8226; sign(&#9661; x J(&#952; , x, y true ))<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where J(&#952; , x, y) denotes the cross entropy cost function, x is the input image, y is the true label of the input image, and &#949; is the hyperparameter that determines the magnitude of the perturbations.</ns0:p></ns0:div> <ns0:div><ns0:head>3/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Basic Iterative Method (BIM)</ns0:p><ns0:p>The problem of adversarial examples is constantly being studied. Such as, <ns0:ref type='bibr' target='#b12'>Kurakin et al. (Kurakin et al., 2016)</ns0:ref> present a more direct basic iterative method (BIM) to improve the performance of FGSM. In other words, BIM is an iterative version of FGSM. It uses the basic idea of gradient descent to perform iterative training with small steps. Moreover, clip the pixel values of the intermediate results after each step to ensure that they are in an &#949;-neighborhood of the original image:</ns0:p><ns0:formula xml:id='formula_1'>x adv 0 = x, &#8226; &#8226; &#8226; , x adv N+1 = clip x,&#949; {x adv N + &#945; &#8226; sign(&#9661; x J(&#952; , x, y true ))} (2)</ns0:formula><ns0:p>Among them, x is the input image, y true is the true class label, J(&#952; , x, y) is the loss function, and &#945; is the step size, usually &#945; = 1.</ns0:p><ns0:p>This method attempts to increase the loss value of the correct classification and does not indicate which type of wrong class label the model should choose. Therefore, it is suitable for data sets with fewer and different types of applications.</ns0:p></ns0:div> <ns0:div><ns0:head>Carlini &amp; Wagner (C&amp;W )</ns0:head><ns0:p>Carlin and Wagner propose an optimization-based attack method called C&amp;W <ns0:ref type='bibr' target='#b0'>(Carlini and Wagner, 2017)</ns0:ref>.</ns0:p><ns0:p>C&amp;W can be a targeted attack or an untargeted attack. The distortion caused by the attack is measured by three metrics: </ns0:p><ns0:formula xml:id='formula_2'>(L 0 , L 2 , L &#8734; ).</ns0:formula><ns0:formula xml:id='formula_3'>min 1 2 (tanh(x n + 1) &#8722; X n ) 2 2 + c &#8226; f ( 1 2 tanh(x n ) + 1) W here f (x &#8242; ) = max(max{Z(x &#8242; ) i : i = t} &#8722; Z(x &#8242; t ), &#8722;k)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where the Z is the softmax function, the k is a constant used to control the confidence, the t is the target label of misclassification, and c is constant chosen with binary search. In the above formula, tanh(x) refers to the mapping of adversarial examples to tanh space. After transformation, x belongs to (&#8722;in f , +in f ),</ns0:p><ns0:p>which is more conducive to optimization.</ns0:p></ns0:div> <ns0:div><ns0:head>DeepFool</ns0:head><ns0:p>The DeepFool algorithm is proposed by <ns0:ref type='bibr' target='#b16'>Moosavi-Dezfooli et al. 
(Moosavi-Dezfooli et al., 2016)</ns0:ref>, which generates an adversarial perturbation of the minimum norm of the input sample through iterative calculation. In each iteration, the DeepFool algorithm interferes with the image through a small vector. It gradually pushes the images located within the classification boundary to outside the decision boundary until a misclassification occurs. In addition, DeepFool aggregates the perturbations added in each iteration to calculate the total perturbations. Its perturbations are minor than FGSM, and at the same time, the classifier has a higher rate of misjudgment.</ns0:p></ns0:div> <ns0:div><ns0:head>Defense methods</ns0:head><ns0:p>At present, the defense is mainly divided into two aspects: improving the classifier model's robustness and preprocessing the input without changing the classifier model. Adversarial training <ns0:ref type='bibr' target='#b6'>(Goodfellow et al., 2015)</ns0:ref> is currently a more effective defense method proposed by Goodfellow et al. color depth. When &#949; is small, the attack intensity is low, and reducing each pixel's color depth can have an excellent defense effect. On the contrary, as the attack intensity continues to increase, the defense effect is also declining. At the same time, the situation becomes more complicated in the face of more complex data sets (such as Cifar-10). Although a higher compression rate can improve the defensive performance to a certain extent, it will also cause the loss of ordinary image information and reduce the prediction accuracy of the classifier model. Therefore, we need to repair the damaged image after compression.</ns0:p></ns0:div> <ns0:div><ns0:head>5/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Finally, we get a picture SR img that eliminates the perturbation of adversarial examples.</ns0:p><ns0:p>In the experiment, we find that after compressing the high-strength adversarial example, the clas-</ns0:p></ns0:div> <ns0:div><ns0:head>6/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed To better reconstruct clean samples, we minimize the distance between the reconstructed SR img and the original image HR img. We use Mean Squared Error(MSE) to define the loss function of the CNN:</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_4'>L(&#952; ) = 1 2N &#8721; F(X, &#952; ) &#8722;Y 2 (4)</ns0:formula><ns0:p>where F is the image restoration network, X is the compressed image, &#952; is the network parameter, and Y is the original image.</ns0:p><ns0:p>After the training is completed, the reconstructed network has the ability to filter and fight noise.</ns0:p><ns0:p>We add the reconstructed network model before the classifier that needs to be protected. When a batch of samples are input, they first pass through our reconstructed network model. If these input images</ns0:p><ns0:p>include adversarial examples, their adversarial features will be destroyed, while normal samples will not be affected. In this way, we can turn the input into a clean sample to defend against adversarial attacks.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENT</ns0:head><ns0:p>In this section, we use experiments to verify the effectiveness of the proposed algorithm. 
The basic process of the experiment includes generating adversarial examples on different datasets and training multiple classifier models to test the performance and transferability of the defense model. In addition, we conduct a comprehensive theoretical analysis of the experimental results.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental setup</ns0:head><ns0:p>In our experiments, we use three different image datasets: <ns0:ref type='bibr'>MNIST (LeCun et al., 1998)</ns0:ref>, Fashion-MNIST (F-MNIST) <ns0:ref type='bibr' target='#b27'>(Xiao et al., 2017)</ns0:ref> and <ns0:ref type='bibr'>CIFAR-10 (Xiao et al., 2018)</ns0:ref>. The MNIST and F-MNIST datasets both contain 60,000 training images and 10,000 test images. Each example is a 28&#215;28 grayscale image associated with one label in 10 categories. The difference is that MNIST is a classification of handwritten numbers 0 &#8722; 9, while F-MNIST is no longer an abstract symbol but a more concrete clothing classification.</ns0:p><ns0:p>The CIFAR-10 dataset is a 32 &#215; 32 color image associated with 10 category labels, including 50,000 training images and 10,000 test images. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For FGSM and BIM, we use the L &#8734; norm to control the perturbation's intensity by changing the size of &#949;.</ns0:p><ns0:p>Differently, we use the L 2 norm to implement the C&amp;W model, and adjust the degree of perturbation by controlling the maximum number of iterations. To preserve the original image information and eliminate adversarial perturbations as much as possible, we set k = 2 (k denotes the maximum range of pixel value color depths) on the MNIST dataset and k = 4 on the F-MNIST dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiment results</ns0:head><ns0:p>In this section, the adversarial examples generated by FGSM, BIM, DeepFool, and C&amp;W on different datasets are applied to the defense framework of this paper. Simultaneously, in the training process, to make the reconstructed network have selective noise reduction and generalization capabilities, we use the FGSM with the most perturbation to generate adversarial examples and input them into the neural network. Generally, simple images need to add a significant perturbation to be effective. In this paper, for the MNIST dataset, the value of &#949; is up to 0.3; for the F-MNIST dataset, the value of &#949; is from 0 to 0.1;</ns0:p><ns0:p>for the Cifar-10 dataset, the value of &#949; is taken from 0 to 0.01. When &#949; is equal to 0.01, the adversarial example is enough to produce a higher error rate on the target classifier model for the CIFAR-10 data set.</ns0:p><ns0:p>The results of each step of the defense experiment process are shown in the figure below.</ns0:p><ns0:p>From left to right, the different subgraphs in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> the operation can eliminate adversarial perturbations in the input image. This is because the defense model has certain image recovery capabilities, the MNIST image structure is relatively simple, and the information is not easily damaged. For FGSM attacks, we can see that the accuracy can be restored from 20% to 97% under high attack intensity, BIM can be restored from 5% to 98%, DeepFool can be restored from 0% to 98%, and C&amp;W can be restored from 0% to 98%. Fashion-MNIST defense model, i. e., the original image recognition accuracy rate drops by 4% after preprocessing. 
For FGSM attacks, it recovery from 13% to 81%; for BIM, it recovery from 1% to 82%;</ns0:p><ns0:p>for DeepFool, it recovery from 0% to 88%; and for C&amp;W , it recovery from 0% to 88%. reconstructed. For FGSM attacks, the defense model can restore the accuracy from 23% to 71%, BIM from 2% to 70%, DeepFool from 18% to 87%, and CW from 0% to 87%.</ns0:p></ns0:div> <ns0:div><ns0:head>Defense Transferability</ns0:head><ns0:p>As a preprocessing method, we can combine different target models without modifying them. To verify the defense model's portability, we train three classifier models from weak to strong performance. They are: <ns0:ref type='bibr'>LeNet (LeCun et al., 1998)</ns0:ref>, GoogLeNet <ns0:ref type='bibr' target='#b21'>(Szegedy et al., 2015)</ns0:ref>, and ResNet101 <ns0:ref type='bibr' target='#b8'>(He et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Besides, we combine the defense model trained with these three classifier models to test the defense performance.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> and Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> show in detail the experimental results of the transferability of the defense model.</ns0:p><ns0:p>On the MNIST and Fashion-MNIST datasets, we take the median value of 0.15 and 0.05 for &#949;, respectively.</ns0:p><ns0:p>Due to the performance difference of the target model, the effect will be slightly reduced when the defense model is combined with different models. However, it can still defend well against adversarial example attacks. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> shows the transferability performance of our defense model combined with ResNet 50, ResNet101, and GoogLeNet on the data set Cifar-10. We take the median value of 0.005 for &#949;. The classification accuracy of the overall network model on the Cifar-10 data set has been reduced compared to the performance of the MNIST and F-MNIST data sets. This is because the Cifar-10 data set is relatively complex. In short, the classification accuracy of the network model with defense is much higher than the network model without defense when facing different attacks. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In addition, this method can be easily combined with other defense methods without modifying the target classifier model. Extensive experiments have been applied to the three real datasets of MNIST, F-MNIST, and CIFAR-10, showing the superiority of the proposed method over some classic techniques to defend against adversarial examples, i.e., the defensive framework we designed can resist different attacks. However, due to limited knowledge and personal abilities, many issues need further research.</ns0:p><ns0:p>We will study how to better balance the compression rate of complex images and preserve adequate information and verify the method's effectiveness on more complex datasets.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The generation process of adversarial example.</ns0:figDesc><ns0:graphic coords='3,224.07,63.71,248.28,80.89' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>can use it to calculate the attack distance and generate adversarial examples. A gray-box attack means that the attacker knows limited target model information. 
A black-box attack means that an attacker uses a similar model to generate adversarial examples. The generated adversarial examples have a certain degree of transferability, which can carry out transfer attacks on the model without knowing the relevant information of the model, and it has a high success rate. Furthermore, extreme samples can even deceive multiple different models. Generally, adversarial</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The defense framework uses input samples as pictures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>There are three methods introduced by C&amp;W , which are more efficient than all previously-known methods in terms of achieving the attack success rate with the smallest amount of imperceptible perturbation. A successful C&amp;W attack usually needs to meet two conditions. First, the difference between the adversarial examples and the corresponding clean samples should be as slight as possible. Second, the adversarial examples should make the model classification error rate as high as possible. The details are shown in (3).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>They use adversarial examples to expand the training set and train with the original samples to increase the model's fit to the adversarial examples, thereby improving the robustness of the model. However, this increases the calculation cost and complexity, and adversarial training has excellent limitations. When facing adversarial attacks generated by different methods, the performance varies significantly.Generally, the preprocessing process does not need to modify the target model, compared with adversarial training and other methods, which is more convenient to implement. Moreover, it has a smaller amount of calculation and can be used in combination with different models. For instance, Xie et al.<ns0:ref type='bibr' target='#b28'>(Xie et al., 2017)</ns0:ref> propose to enlarge and fill the input image randomly. The entire defense process does4/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)Manuscript to be reviewed Computer Science not need to be retrained and is easy to use. However, the results show that this method is only effective for iterative attacks such as C&amp;W and DeepFool<ns0:ref type='bibr' target='#b16'>(Moosavi-Dezfooli et al., 2016)</ns0:ref>, while for FGSM, the defensive effect of this single-step attack is inferior. They believe that this is due to the iterative attack to fitting the target model, resulting in low-level image transformation that can destroy the fixed structure of the adversarial disturbance. In addition, Liao et al.<ns0:ref type='bibr' target='#b14'>(Liao et al., 2018)</ns0:ref> regard adversarial perturbation as a kind of noise, and they designe a high-level representation guided denoiser (HGD) model to eliminate the adversarial disturbance of the input species.<ns0:ref type='bibr' target='#b1'>Das et al. (Das et al., 2017)</ns0:ref> use JPEG compression to destroy adversarial examples. Similarly, Pixel Defend (Song et al., 2017) is a new method that purifies the image by moving the maliciously perturbed image back to the training data to view the distribution. Feature squeezing (Xu et al., 2017) is both attack-agnostic and model-agnostic. 
It can reduce the image range from [0, 255] to a smaller value, merge the samples corresponding to many different feature vectors in the initial space, and reduce the search space available to the opponent. Similar methods also include label smoothing (Warde-Farley and Goodfellow, 2016), which converts one-hot labels to soft targets. Besides, Zhang et al. (Zhang et al., 2021) propose a domain adaptation method, which gradually aligns the features extracted from the adversarial example domain with the clean domain features, making DNN more robust and less susceptible to spoofing by diverse adversarial examples.OUR APPROACH MotivationThe essence of adversarial examples is to deliberately add high-frequency perturbations to clean input samples and amplify the noise through deep neural networks so that the model gives the wrong output with high confidence. For example, when we input a clean image of a cat, add an imperceptible perturbation, the classifier will misclassify it as a leopard with high confidence. Through previous research, we have also learned that the classifier is robust to ordinary noise. Simultaneously, the adversarial perturbation in the adversarial example is very unstable and can be destroyed by some simple image transformation methods. According to the currently known image characteristics, we use image processing methods to eliminate the fixed structure of the adversarial perturbation before the adversarial example is input to thetarget. At the same time, to ensure the system's normal operation and the performance of the adversarial examples after converting the original image and the target model, we combine image compression and image restoration neural networks to form the entire defense model. This model can convert adversarial examples into clean images to resist adversarial example attacks without significantly reducing the quality of ordinary images. Pixel depth reduce An array of pixels represents a standard digital image in a computer, and each pixel is usually represented as a number with a specific color. Since two common representations are used in the test data set, they are 8-bit grayscale and 24-bit color. Grayscale images provide 2 8 = 256 possible values for each pixel; we use k to represent the maximum range of pixel values. An 8-bit value represents a pixel's intensity, where 0 is black, 255 is white, and the average number represents different shades of gray. The 8-bit ratio can be expanded to display color images with separate red, green and blue channels and provides 24 bits for each pixel, representing 2 24 &#8776; 16 million different colors. The redundancy of the image itself offers many opportunities for attackers to create adversarial examples. Compressed pixel bit depth can reduce image redundancy and destroy the fixed structure of adversarial examples in the input while retaining image information without affecting the image's accuracy on the classifier model. As shown in Figure 3, the defense capability is tested on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. In the sub-pictures (a), (b), and (c), k refers to the maximum range of pixel value</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a) results on the MNIST dataset. (b) results on the F-MNIST dataset. (c) results on the CIFAR-10 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
Changes in the defense effect of pixel compression on the three data sets.</ns0:figDesc><ns0:graphic coords='7,264.39,232.11,170.34,147.86' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Implementation framework of our defense model.</ns0:figDesc><ns0:graphic coords='8,141.73,63.87,396.46,107.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>are the adversarial examples generated by FGSM, BIM, DeepFool, and C&amp;W attack methods, respectively; from top to bottom are normal examples, adversarial examples, compression examples, and reconstructed examples.Figure 5(a) is the result of working on the MNIST data set. It can be seen that only the pixel compression operation can eliminate most of the adversarial perturbations, and the adversarial examples restore the accuracy of the classifier model. In addition, the adversarial examples generated by different methods have different perturbation levels to the image, and FGSM has the most massive perturbation level. When &#949; is 1.5, it has already had a more significant impact on the image, and the human eye can already detect it, but our method can still restore it to a clean sample. A few extreme adversarial examples become other classification results after processing, as shown in the first column of Figure 5(a). Still, after the reconstruction of the network, the recognition accuracy is also restored.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(b) and (c) show the experimental results of the relatively complex of F-MNIST and CIFAR-10 data sets. Since pixel depth reduction is a lossy compression, choosing an appropriate compression level can eliminate the adversarial perturbation of the input sample as much as possible while retaining the necessary information. Generally, a slight loss of details does not affect the classifier model's correct recognition of the image. The following experiment will specifically show the defense effect of different data sets after processed by our defense framework under different attack intensities.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the MNIST dataset. Our algorithm is compared with four different types of adversarial samples in defensive and non-defensive situations. After the defense model processes the data set in this paper, the accuracy of the original image has almost no change. Furthermore, in the face of different types of attacks from FGSM, BIM, DeepFool, and C&amp;W ,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the Fashion-MNIST dataset. 
It can be seen from Figure 7 that we have also achieved good results in the face of a slightly complicated</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8(a), (b), (c) and (d) are the recognition accuracy rates of the model ResNet-50 with and without defensive measures for different attack strengths (&#949;, iteration) on the Cifar-10 dataset. Whenprocessing the three-channel color dataset CIFAR-10, we find that it is more complicated than the first two single-channel grayscale image data sets. Mainly because it is difficult to balance the pixel compression rate and the defense rate, which makes the defense effect appear to be reduced to a certain extent. It can be seen from Figure8, the ordinary sample has a loss close to 5% in accuracy after compressed and</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance of defense models against multiple adversarial example attacks on different datasets.</ns0:figDesc><ns0:graphic coords='10,220.61,476.45,255.33,162.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Performance of the defense model on the MNIST dataset.</ns0:figDesc><ns0:graphic coords='11,349.74,261.26,198.63,173.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Performance of the defense model on the F-MNIST dataset.</ns0:figDesc><ns0:graphic coords='12,143.85,261.98,198.27,173.53' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Performance of the defense model on the CIFAR-10 dataset.</ns0:figDesc><ns0:graphic coords='14,349.73,258.01,198.36,173.44' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Table 4, our method performs best compared with other methods under attack models such as BIM, DeepFool, and C&amp;W .Although the ComDefend method is better at preserving the original image information, it adds Gaussian noise during training to improve the network's ability to resist noise. The defense effect of some attacks, such as BIM, is not ideal. The impact of adding an FGSM attack is only acceptable in the case of FGSM adversarial examples, and it performs poorly for adversarial examples generated by other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>In general, although the direct pixel depth reduction has made a certain sacrifice in image information</ns0:cell></ns0:row><ns0:row><ns0:cell>preservation, the confrontation samples generated in the face of different attacks in the above experiments</ns0:cell></ns0:row><ns0:row><ns0:cell>can all play a good defense effect. Therefore, to the best of our knowledge, our method can effectively</ns0:cell></ns0:row><ns0:row><ns0:cell>defend against adversarial example attacks.</ns0:cell></ns0:row></ns0:table><ns0:note>CONCLUSIONFinding a robust defense method for adversarial examples is an open problem, and many researchers have carried out work in this area. This paper proposes an image compression and reconstruction defense 11/15 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The performance of the defense model combined with LeNet and GoogLeNet on MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>18%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>99%</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>LeNet(no defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>38%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.15)</ns0:cell><ns0:cell>LeNet(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>GoogLeNet(no defense) 99%</ns0:cell><ns0:cell>48%</ns0:cell><ns0:cell>26%</ns0:cell><ns0:cell>19%</ns0:cell><ns0:cell>15%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>99%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>97%</ns0:cell><ns0:cell>98%</ns0:cell><ns0:cell>98%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The performance of the defense model combined with ResNet and GoogLeNet on F-MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no defense)</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>22%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>F-MNIST GoogLeNet(no defense) 90%</ns0:cell><ns0:cell>35%</ns0:cell><ns0:cell>18%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.05)</ns0:cell><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>81%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(no defense)</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>0%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(defense)</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>83%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>88%</ns0:cell><ns0:cell>88%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The performance of the defense model combined with ResNet and GoogLeNet on Cifar-10.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell cols='5'>Clean FGSM BIM DeepFool CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(no 
defense)</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>64%</ns0:cell><ns0:cell>23%</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>39%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet50(defense)</ns0:cell><ns0:cell>79%</ns0:cell><ns0:cell>68%</ns0:cell><ns0:cell>59%</ns0:cell><ns0:cell>72%</ns0:cell><ns0:cell>72%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>CIFAR-10 GoogLeNet(no defense) 98%</ns0:cell><ns0:cell>36%</ns0:cell><ns0:cell>34%</ns0:cell><ns0:cell>35%</ns0:cell><ns0:cell>0%</ns0:cell></ns0:row><ns0:row><ns0:cell>(&#949;=0.005)</ns0:cell><ns0:cell>GoogLeNet(defense)</ns0:cell><ns0:cell>94%</ns0:cell><ns0:cell>51%</ns0:cell><ns0:cell>52%</ns0:cell><ns0:cell>61%</ns0:cell><ns0:cell>60%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(no defense)</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>64%</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell>22%</ns0:cell><ns0:cell>43%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ResNet101(defense)</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>69%</ns0:cell><ns0:cell>63%</ns0:cell><ns0:cell>74%</ns0:cell><ns0:cell>74%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The result of comparisons with other defensive methods(F-MNIST).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>Methods</ns0:cell><ns0:cell>Clean</ns0:cell><ns0:cell>FGSM</ns0:cell><ns0:cell>BIM</ns0:cell><ns0:cell>DeepFool</ns0:cell><ns0:cell>CW</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Normal</ns0:cell><ns0:cell cols='2'>93%/93% 38%/24%</ns0:cell><ns0:cell>00%/00%</ns0:cell><ns0:cell>06%/06%</ns0:cell><ns0:cell>00%/00%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Adversarial FGSM 93%/93% 85%/85%</ns0:cell><ns0:cell>51%/00%</ns0:cell><ns0:cell>63%/07%</ns0:cell><ns0:cell>67%/21%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Adversarial BIM</ns0:cell><ns0:cell>92%/91%</ns0:cell><ns0:cell>84%/79%</ns0:cell><ns0:cell>76%/63%</ns0:cell><ns0:cell>82%/72%</ns0:cell><ns0:cell>81%/70%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Resnet50 Feature Squeezing 84%/84%</ns0:cell><ns0:cell>70%/28%</ns0:cell><ns0:cell>56%/25%</ns0:cell><ns0:cell>82%/72%</ns0:cell><ns0:cell>83%/83%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Pixel Defend</ns0:cell><ns0:cell>89%/89%</ns0:cell><ns0:cell>87%/82%</ns0:cell><ns0:cell>85%/83%</ns0:cell><ns0:cell>83%/83%</ns0:cell><ns0:cell>88%/88%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ComDefend</ns0:cell><ns0:cell cols='3'>93%/93% 89%/89% 70%/60%</ns0:cell><ns0:cell>88%/88%</ns0:cell><ns0:cell>88%/89%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Our Method</ns0:cell><ns0:cell>89%/89%</ns0:cell><ns0:cell cols='4'>87%/86% 87%/86% 90%/89% 89%/89%</ns0:cell></ns0:row></ns0:table><ns0:note>12/15PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59969:2:0:NEW 10 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
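The pixel-depth reduction that this paper uses as its first preprocessing step can be sketched in a few lines of NumPy. The snippet below is only an illustrative sketch: whether k counts retained levels or bits is not fully specified in the text, so treating k as the number of retained pixel levels per channel is an assumption, and the function name and test image are hypothetical rather than the authors' code.

```python
import numpy as np

def reduce_pixel_depth(image: np.ndarray, k: int = 4) -> np.ndarray:
    """Quantize an 8-bit image so that each channel keeps only k levels.

    Sketch of the bit-depth (color-depth) reduction applied before reconstruction:
    fewer levels remove high-frequency detail and, with it, much of the
    adversarial perturbation, at the cost of some ordinary image information.
    """
    levels = k - 1                          # number of quantization steps
    x = image.astype(np.float32) / 255.0    # scale pixels to [0, 1]
    x = np.round(x * levels) / levels       # snap each pixel to its nearest level
    return (x * 255).astype(np.uint8)

# Example: squeeze a random MNIST-sized image to k = 2 levels (a binary image).
noisy = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)
squeezed = reduce_pixel_depth(noisy, k=2)
```

In the full defense, the squeezed image would then be passed through the reconstruction CNN trained with the MSE loss of Eq. (4) before reaching the protected classifier.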
"Dear Editors, On behalf of my co-authors, we would like to thank you for providing us with the opportunity to modify the manuscript, and thank you editors and reviewers for your positive comments and constructive suggestions on the title of 'Adversarial example defense based on image reconstruction'(ID: peerj-59969). Sincerely thank editors and reviewers for comprehensively reviewing our manuscripts and providing valuable comments to guide our revision work. We carefully studied the comments of the reviewers and tried our best to revise the manuscript based on the comments. In order to solve the problems that you and the reviewers are concerned about, we have made major revisions to the relevant content of the manuscript and provided more detailed information. Best wishes. Yours truly, Yu Zhang, Huan Xu, Chengfei Pei, Gaoming Yang Reviewer 1: Basic report: 1. The paper proposed an adversarial defense method by combining image compression and image reconstruction models. The background of adversarial attack and defense was introduced in detail. The authors conducted comprehensive experiments to evaluate the performance of the proposed method against common adversarial attacks and compared the method with existing defenses. The experiment result shows that the proposed method achieves good performance. Author response: Dear reviewers, first of all, thank you very much for your objective and comprehensive review of our manuscript. We will carefully revise and perfect the existing problems in the manuscript based on your suggestions. Furthermore, we have changed and marked many places in the manuscript for your review. 2. There are a few tense inconsistencies in the Section background, which should be revised. Author response: Thank you for reading the content of the manuscript carefully. We are very sorry for the inconsistency of tense in the background introduction. Based on your suggestions, we have repeatedly checked the language and tense of our manuscripts, and used the Grammarly tool to verify it to meet journal publication standards. For example, we change the background 'In this section, we will review related works...' to the present tense 'we review...'. Also, 'Carlin and Wagner proposed an...' to 'Carlin and Wagner propose an...'. In addition, we pay special attention to the tense problem of other sentences in the manuscript and checked it many times. We also hope to meet the publication requirements as much as possible. 3. In the Section Approach, the overview of the defense method should be described more clearly. The authors could consider showing how the defense method works from taking as the input of the original image to reporting whether the image is an adversarial example in Figure 4. Now Figure 4 just shows the process until the image reconstruction. I cannot easily know how to use the output image to detect adversarial examples from the Figure. Author response: Thank you for your suggestions on our manuscript. We are very sorry for the unclear description of the defensive method of image reconstruction in the manuscript. In response to your comments, we briefly summarized the currently more commonly used adversarial example defense methods: 1. Image-based detection method: this method can be regarded as a relatively simple classification task, and the main purpose is to detect adversarial samples in the image. 2. 
Image-based reconstruction method: this method uses the characteristics of the fine-density perturbation of the adversarial sample to process the image through a certain filtering method to retain the low-quality image with the main characteristics, and then reconstruct the image to restore it to high-quality image. 3. Defense method based on adversarial training: this method re-labels the labels of the adversarial sample classification in the image, and then re-trains to enhance the defensive ability of the classifier. The defense model in this manuscript is based on the second method. First, we compress the adversarial example image to obtain the main features required by the low-quality image, and then train the deep neural network learning ability to restore the low-quality image to the high-quality image. After training, the reconstructed network model has the ability to filter and fight noise. According to your suggestion, we have supplemented the follow-up content in the 'Image Reconstruction' section of the manuscript, describing the content after image reconstruction. Experimental design: 1. The authors conducted experiments on evaluating the performances of the proposed method, the transferability of the proposed methods, and the comparison of the proposed method with other defense methods. The experiment results show the proposed method outperforms other baseline defenses. In Table 4, the author should highlight (bold) the best experiment results, which is better to help compare the performances of these methods. Author response: Thank you very much for your constructive suggestions on the experimental work in our manuscript. We are sorry for not having a clearer representation of Table 4. According to your suggestion, we bolded the higher accuracy values in the different methods in Table 4 to make the experimental results clearer. Validity of the findings: 1. The experiment results are well evaluated. The authors provided the source code and detailed instructions for reproducing the experiment. Author response: Thank you for your valuable comments on our experimental process. We will check and supplement the experimental analysis in the manuscript, and further improve the presentation of experimental results to ensure the authenticity and validity of the experiment. We would like to express our great appreciation to editors and reviewers for comments on our paper. We are looking forward to hearing from you. Best regards, All authors "
Here is a paper. Please give your review comments after reading it.
313
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Customer satisfaction and their positive sentiments are some of the various goals for successful companies. However, analyzing customer reviews to predict accurate sentiments have been proven to be challenging and time-consuming due to high volumes of collected data from various sources. Several researchers approach this with algorithms, methods, and models. These include machine learning and deep learning (DL) methods, unigram and skip-gram based algorithms, as well as the Artificial Neural Network (ANN) and bag-of-word (BOW) regression model. Studies and research have revealed incoherence in polarity, model overfitting and performance issues, as well as high cost in data processing. This experiment was conducted to solve these revealing issues, by building a high performance yet cost-effective model for predicting accurate sentiments from large datasets containing customer reviews. This model uses the fastText library from Facebook's AI research (FAIR) Lab, as well as the traditional Linear Support Vector Machine (LSVM) to classify text and word embedding. Comparisons of this model were also done with the author's a custom multi-layer Sentiment Analysis (SA) Bi-directional Long Short-Term Memory (SA-BLSTM) model. The proposed fastText model, based on results, obtains a higher accuracy of 90.71 % as well as 20% in performance compared to LSVM and SA-BLSTM models.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Today, customer satisfaction plays a major role for a successful business providing products and/or services. An analysis of consumer reviews is crucial to understand what a customer wants in terms of sentiment, <ns0:ref type='bibr' target='#b8'>(Duyu, Bing &amp; Ting, 2015)</ns0:ref> as well as the betterment of a company or business to grow overtime. The phrase, 'what other people think' has importance to a buyer's decision when purchasing products and services according <ns0:ref type='bibr' target='#b7'>(Bo &amp; Lillian, 2008)</ns0:ref> to survey. Most companies handle customer service via call-centers with live agents, and over the last few years, the availability of viewing opinions, reviews, testimonies, etc. <ns0:ref type='bibr' target='#b23'>(Swapna et al, 2007)</ns0:ref> on the web has provided new avenues of research for automatic subjectivity, understanding texts, and sentiment classification. Researchers would use ML and DL models with the use of Natural Language Processing (NLP) techniques to process and classify datasets filled with various reviews in their study. However, the purpose of NLP is to analyze, extract, and present information for better decision-making in businesses. The level of granularity in the process of analyzing controversial texts vary from individual characters to sub-word units or words forming <ns0:ref type='bibr' target='#b1'>(Alexis et al., 2017)</ns0:ref> a sentence to sentences forming paragraphs. Early research and applied methods in text analysis discriminates at a sentence and phrase level <ns0:ref type='bibr' target='#b9'>(Hong &amp; Vasileios, 2003)</ns0:ref> between objective and subjective texts. Prime candidates for traditional solutions for a sentence level classification of a document include the bag-of-words approach, SVM's, or the Adaboost classifier. 
Accurate scores in sentiment, detection of negation and sarcasm, as well as challenges in word ambiguity and multi-polarization were taken into account when making considerations to machine learning algorithms while building a sentiment classifier. The evaluation of the models includes audio transcript, voice and text chats from various internal sources along with publicly available social media data sources. We use unigram, bigram, trigram and n-grams textual features in terms of multimodal sentiment analysis. Our model's training dataset includes customer sarcastic reviews in the context of customer sentiment. In this experiment polarity, negation and sarcasm are considered and classified as positive and negative sentiment.</ns0:p><ns0:p>Our main goal is to build a state-of-the-art learning model that utilizes proven binary and multi-class algorithms, methods, and word embedding techniques for classification. The following below are contributions made towards that goal:</ns0:p><ns0:p>&#61623; Solving issues with context-based sentiments through multi-layer SA model with Bi-LSTM, fastText and LSVM. Data pre-processing is customized to fit the model requirement for customer review and speech transcript dataset. &#61623; Solving vocabulary issues with predictions made from datasets containing mixed language texts by adding input and service layers to detect the baseline language, translates them into English, and form a transcript. &#61623; Transcript and Translate service layers are added to model input layer for train and testing the domain-base mixed-language dataset to avoid out of vocabulary (OOV) issues. &#61623; Saving cost by providing new data pipeline techniques while having increased performance to train models with large datasets. &#61623; LSVM, fastText and SA-BLSTM models hyperparameters are fine-tuned based on dataset. This paper details several contributions from various researchers, such as relating literature, published works, and research papers and are taken into review. The methods used, as well as the pre-processing steps and flow of data are presented. This paper also describes the approach taken for the ML and DL models, as well as the architecture of various algorithms used for the model. Details of the experiment are documented throughout, such as the setup and the results. In the end, the results of the concluding model are shown, as well as a proposal for the future.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works</ns0:head><ns0:p>Reviewed recent research papers and researchers' contributions towards text classification, sentiment prediction. Our review focused on supervised <ns0:ref type='bibr'>ML, DL and SA research papers. Anna et.al., (2020)</ns0:ref> published a paper related to COVID19 Cross-language SA of European Twitter messages from Italy, Spain, France, and Germany. Neural network model used with the pre-trained word or sentence embedding. Model constructed with fully-connected Rectified Linear Unit (ReLu) layer to process output from embedded vector and a regression output layer with sigmoid activation. In this experiment the following pre-trained word embeddings models used, a skip-gram version of word2vec (Trained English-language Wikipedia data), and a multilingual version of BERT (Trained on Wikipedia data and 160 million Covid-19 tweets key-words). The result showed, based on analysis over-all 4.6 million tweets in which 79,000 tweets contains one keyword of Covid-19. 
In this paper researchers took geolocation-based data and trends were varying and these researchers concluded, this study will be continued to collect tweet data from other countries and compare the result and also moving from the binary sentiment scale to a more complex model. <ns0:ref type='bibr' target='#b27'>Swarn et al., (2021)</ns0:ref> published an article related to a machine learning scraping tool for a data fusion in the analysis of sentiments about supporting business decisions with human-centric AI (HAI) explanations. The multinomial Na&#239;ve Bayes (NB, k-nearest neighbours (KNN), SVMs and multinomial Bayesian classifiers are used for sentiments analysis. This study results revealed KNN outperformed other models.</ns0:p><ns0:p>Babacar, <ns0:ref type='bibr' target='#b6'>Dezheng and Aziguli (2021)</ns0:ref> published an article related to employee sentiment using employees' reviews. This study used traditional classifiers and vector stochastic gradient descent classifier (RV-SGDC) for sentiment classification. RV-SGDC is a combination of Logistic Regression (LR), Support Vector Machines (SVM), and Stochastic Gradient Descent (SGD) model. The study result showed RV-SGDC outperforms with a 0.97% accuracy compare to other models due to its hybrid architecture. <ns0:ref type='bibr'>et.al., (2021)</ns0:ref> published an article paper related to SA of Covid-19 tweets using Deep Learning (DL) models. These researchers' motto behind this study to analyze tweets by Indian netizens during the lockdown, collected tweets between 23 March 2020 and 15 July 2020 and text has been labelled as fear, sad, anger and joy, analysed data using the new deep learning model name called Bi-Directional Encoder Representation from Transformers (BERT). The BERT model result was compared with traditional Logistic Regression (LR), Support Vector Machines (SVM), Long Short-term Memory (LSTM). The BERT model result shows more accuracy 89%, compare to other model's accuracy LR 75%, SVM 74.75% and LSTM 65%. This experiment classified sentiment into fear, sad, anger and joy based on the key-words.</ns0:p></ns0:div> <ns0:div><ns0:head>Nalini</ns0:head><ns0:p>Most recently, <ns0:ref type='bibr'>Najla et.al., (2021)</ns0:ref> published a research article related to evaluation of SA using Amazon Online Reviews dataset. 3 Researchers evaluated different deep learning approaches to accurately predict the customer sentiment, categorized as positive, negative and neutral. The variation of simple Recurrent Neural Network (RNN) such as Long Short-Term Memory Networks (LRNN), Group Long Short-Term Memory Networks (GLRNN), Gated Recurrent Unit (GRNN) and Updated Recurrent Unit (UGRNN). for Amazon Online Reviews. All evaluated RNN Algothims were combined with word embedding as feature extraction approach for SA including the following three methods Glove, Word2Vec and fastText by Skipgrams. Evaluated combination of five RNN variants with three feature extraction methods, evaluation result measured based on accuracy, recall, precision and F1 score. Found that the GLRNN with fastText feature extraction scored the highest accuracy of 93.75%. Researchers try to solve programming problem for beginners to code and find next word, used conventional LSTM model with word embedding, dropout layer with an attention mechanism. 
This model's results showed that the pointer mixture model succeeded in predicting both the next within-vocabulary word and the referenceable identifier with higher accuracy than the conventional neural language model alone, in both statically and dynamically typed languages.</ns0:p><ns0:p>Shreyas (2020) published a paper related to customer churn prediction, in which the traditional Logistic Regression (LR), Gaussian Na&#239;ve Bayes (GNB), Adaptive Boosting (AdaBoost), Extra Gradient Boosting (XGB), Stochastic Gradient Descent (SGD), Extra Trees and SVM classifiers were used. The results showed that the Extra Trees classifier outperformed the others, while the SVM and XGB classifiers performed well on the Telecom (UCI repository) dataset. Many researchers <ns0:ref type='bibr' target='#b10'>(Ikonomakis, Kotsiantis &amp; Tampakas, 2005)</ns0:ref> have shown that combining multiple classifiers improves the performance and classification accuracy of a model in the context of text categorization.</ns0:p><ns0:p>Ashok &amp; Anandan (2020) presented a research study related to sentiment and emotion in social media COVID-19 conversations. In this research, variants of RNN algorithms were used, and a multi-class neural network model based on Bi-directional Long Short-term memory (Bi-LSTM) with additional layers was evaluated to process long COVID-19 social media posts and to overcome model overfitting, accuracy, and performance problems. The authors' experimental results showed that the SAB-LSTM model outperformed the traditional LSTM and Bi-LSTM models and that its sentiment prediction was context-based. The authors planned to extend their model in future research with a domain-based dataset for customer SA problems, comparing it with other models to improve prediction accuracy and performance. <ns0:ref type='bibr'>Kamran et.al., (2019)</ns0:ref> presented a text classification survey paper. In this paper, the researchers discussed existing classification algorithms, feature extraction, dimensionality reduction, and model evaluation methods, and also addressed the critical limitations of each of these components of the text classification pipeline. The following components are discussed. Algorithms such as Rocchio, bagging and boosting, logistic regression (LR), Na&#239;ve Bayes Classifier (NBC), k-nearest Neighbor (KNN), Support Vector Machine (SVM), decision tree classifier (DTC), random forest, conditional random field (CRF), and deep learning. Feature extraction methods such as Term Frequency-Inverse Document Frequency (TF-IDF), term frequency (TF), and word-embedding methods such as Word2Vec, contextualized word representations, Global Vectors for Word Representation (GloVe), and fastText. Dimensionality reduction methods such as Principal Component Analysis (PCA), linear discriminant analysis (LDA), non-negative matrix factorization (NMF), random projection, Autoencoder, and t-distributed Stochastic Neighbor Embedding (t-SNE).
Evaluation methods such as Accuracy, F&#946;, Matthew correlation coefficient (MCC), receiver operating characteristics (ROC), and area under curve (AUC). <ns0:ref type='bibr'>Joulin et.al., (2016)</ns0:ref> at Facebook's AI research lab (FAIR) released and presented a linear text classifier fastText library a paper. It proved that fastText library can be transformed into a simpler equivalent classifier, and also proved that the necessary, sufficient dimensionality of the word vector embedding space is exactly the number of document classes. Experiment results show that combination of bag of words and linear classification methods fastText accuracy is same or slightly lower than deep learning algorithms, fastText performs well in normal environment setup, even without using high performance GPU servers. <ns0:ref type='bibr'>Kamran et.al., (2017)</ns0:ref> employed deep learning methods to multi-class documents classifications. The traditional multi-class classification works well for a limited number classes and the performance drops when increasing the number of classes and documents. To solve the performance problems, experimented combination of deep learning, recurrent and convolutional neural network models. This combined neutral networks, hierarchical DL classification model (HiDLTex) result showed more accuracy than traditional SVM and Na&#239;ve bayes models. <ns0:ref type='bibr'>et.al., (2010)</ns0:ref> presented a paper at International Conference on computational Linguistic related to Bag-of-Opinions method for review rating prediction from sparse text patterns. Customers are writing their comments with implicitly expressing their opinion polarities as positive, negative, and neutral, and also providing numeric ratings of products. The numerical review rating prediction is harder than classifying by polarity. In this paper discussed about a unigram-based regression model each unigram gets a weight indicating its polarity and strength rating, for e.g. 'This product is not very good' Vs 'This product is not so bad', in this e.g., unigram regression model consider weight to 'good' as positive and 'bad' as negative, and it assigns the strong negative weight to 'not', combining this weight, it was not predicted the true intention of opinion phrases. These models are not robust and referred unigram regression model as polarity incoherence. To overcome these two models, introduced a novel kind of Bag-ofopinion (BoO) with approach of cumulative linear offset (CLO) model representation, where an opinion, within a review consists of the following three components, a root word, a set of modifier words from the same sentence, and one or more negation words. For a phrase e.g., 'not very helpful' has opinion root word 'helpful', modifier word 'very' and a negation word 'not'. Enforced polarity coherence by the design of a learnable function that assigns a score to an opinion by ridge regression, from a large, domain-independent corpus of reviews. All Amazon reviews dataset used for BoO model training and testing regardless of domains.</ns0:p></ns0:div> <ns0:div><ns0:head>Lizhen</ns0:head><ns0:p>Xiang and Yann (2016) published a paper to determining the explicit or implicit meaning of words, phrases, sentences and paragraphs, and making inferences about these properties such as words and sentence of these texts has been traditionally difficult because of the extreme variability in language formation. 
The text understanding is another area of research to understand the text formed in natural languages such as English, Chinese, Spanish and others. To solve text understanding problem convolutional networks (ConvNet) models were used for research studies. For English text understanding, model was built using these 70 characters, including 26 English letters, 10 digits, new line and 33 other characters.</ns0:p></ns0:div> <ns0:div><ns0:head>Methodologies and Process Flow</ns0:head><ns0:p>Recently, researchers are used deep learning and neural network models for SA problems, however neural network approach cost more compare to traditional baseline methods for both supervised and unsupervised learning. The combination of right methodologies and text classification algorithms are contributed to overcome SA models accuracy and performance problems. The following process flow diagrams Fig. <ns0:ref type='figure'>1</ns0:ref>. shows the steps followed for this experiment. In general, the data pre-processing step, data obtain from publicly available customers review data is very often incomplete, inconsistent and filled with a lot of noise and It's likely contained errors not suitable for training and testing the machine learning models. The following minimal syntactical data preprocessing steps of lowercasing all words, removing new lines, punctuation, special characters and stripping recurring headers are needed for Neural networks and embedding models. To improve data quality, introduced additional steps which includes stop words removal, text standardization, spelling correction, correcting the negation words, tokenization, stemming, and Exploratory Data Analysis (EDA). Fig. <ns0:ref type='figure'>2</ns0:ref>. shows the three proposed models input and output data flow. These models are customized to fit the dataset. In this experiment, added additional layers SA-BLSTM to handle large volume of customer reviews and speech transcript data. fastText and SVM are linear models. All these pre-trained models and developed algorithms can be used for production purpose on real-time SA business applications.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Models</ns0:head><ns0:p>LSVM, fastText and SAB-LSTM models are used for this experiment, before building the models, reviewed algorithms towards solving the short and long text classification and SA problems. In these following sections, explained all these three custom model architectures.</ns0:p></ns0:div> <ns0:div><ns0:head>Support Vector Machine (SVM)</ns0:head><ns0:p>SVM is used for text classification problems, this algorithm is viewed as a kernel machine, and the kernel functions can be changes based on the problem. Definition of Support Vector Machines (SVM): It performs classification by finding the hyperplane that maximize the margin between the two classes. The support vector is the vectors that define the hyperplane. For instance, given pictures of apples and oranges, state whether the object in question is an apple or an orange. Equally well, predict whether a customer is satisfied or not satisfied given customers positive and negative sentiment data. SVM performs classification by finding the hyperplane that maximize the margin between the two classes. In other words, SVM is that the partition which segregate the classes. Fig. <ns0:ref type='figure'>3</ns0:ref>. 
shows an example and the definition of a point and a vector, with a plotted point such as A(4,2) and a general point</ns0:p><ns0:p>(1) x = (x_1, x_2), x &#8800; 0</ns0:p><ns0:p>A vector is an object that has both a magnitude and a direction. In geometrical terms, a hyperplane is a subspace whose dimension is one less than that of its ambient space: in a 3-dimensional space a hyperplane is a plane, in a 2-dimensional space it is a line, and in a 1-dimensional space it is a point. The following Fig. 4. shows the hyperplane definition.</ns0:p><ns0:p>The linear and non-linear separation of a 2-dimensional dataset (Kamran, 2019) is shown in Fig. 5. If the dataset is linearly separable, then the linear kernel works well for classification. The linear kernel is used for text classification for the following reasons: most text categorization problems are linearly separable, and the linear kernel works well with a large number of features while having fewer parameters to optimize when training the model <ns0:ref type='bibr' target='#b28'>(Thorsten, 1998)</ns0:ref>.</ns0:p><ns0:p>The following Fig. 6. shows the linear kernel <ns0:ref type='bibr' target='#b24'>(Sven et al., 2015)</ns0:ref> SVM model. Many kernel functions have been developed over the years; a kernel is a function that returns the result of a dot product performed in another space <ns0:ref type='bibr' target='#b2'>(Alexandre Kowalczyk, 2017)</ns0:ref>. The linear kernel is the simplest kernel function; it is given by the inner product plus an optional constant c,</ns0:p><ns0:p>(2) k(x, y) = &lt;x, y&gt; + c</ns0:p></ns0:div> <ns0:div><ns0:head>fastText</ns0:head><ns0:p>The fastText library <ns0:ref type='bibr' target='#b11'>(Joulin et al., 2016)</ns0:ref> is often on par with recently proposed DL methods in terms of accuracy and performance, and is faster for training and evaluation. Facebook allows the research community to build models on top of the fastText open-source code. fastText introduces a new word embedding approach, an extension of the continuous skip-gram and Continuous Bag of Words (CBOW) models used by word2vec, where each word is represented as a bag of character n-grams. The original version of fastText is trained on Wikipedia (Matthias &amp; Stephan, 2019) and is available for 294 languages. The main difference between word2vec and fastText is that fastText sees words as the sum of their character n-grams: a vector representation is associated with each character n-gram, and words are represented <ns0:ref type='bibr' target='#b22'>(Piotr et al., 2017)</ns0:ref> as the sum of these representations. This new approach has a clear advantage, as it can calculate embeddings even for out-of-vocabulary (OOV) words. The word2vec embedding approach, in contrast, treats the word as the minimal entity and tries to learn a separate embedding vector for each word; if a word does not appear in the training corpus, it fails to get a word vector representation.
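A minimal sketch of this subword idea is shown below. It is a simplified assumption of how a word can be decomposed into character n-grams and represented as the sum of hashed n-gram vectors, which is what makes a vector available even for an out-of-vocabulary word; it is not the fastText implementation itself (fastText uses an FNV hash and trained n-gram embeddings rather than the random table used here).

```python
import numpy as np

def char_ngrams(word: str, n_min: int = 3, n_max: int = 5):
    """Character n-grams of a word, with '<' and '>' boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word: str, ngram_table: np.ndarray) -> np.ndarray:
    """Represent a word as the sum of the vectors of its hashed character n-grams."""
    buckets = ngram_table.shape[0]
    rows = [hash(g) % buckets for g in char_ngrams(word)]
    return ngram_table[rows].sum(axis=0)

# Random table standing in for trained n-gram embeddings (100k buckets, 50 dims).
table = np.random.default_rng(0).normal(size=(100_000, 50)).astype(np.float32)

vec = word_vector("unrefundable", table)   # works even if the word was never seen
print(vec.shape)                           # (50,)
```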
Fig. 7. shows the CBOW and skip-gram model architectures. The CBOW model is a distributed representation of context: it predicts the word in the middle of a sentence based on the surrounding words, whereas the skip-gram model predicts the context within a sentence <ns0:ref type='bibr' target='#b30'>(Tomas, Quoc &amp; Ilya, 2012)</ns0:ref>.</ns0:p><ns0:p>The skip-gram model maximizes the average log probability for the input training words</ns0:p><ns0:p>(3) w_1, w_2, w_3, ..., w_T</ns0:p><ns0:p>Eq. (4) is used for computing this probability. The skip-gram model associates an input vector u_w and an output vector v_w with each word w, and the following probability, Eq. (5), is used to predict the word w_i from the word w_j:</ns0:p><ns0:formula xml:id='formula_0'>(5) p(w_i | w_j) = exp(u_{w_i}^T v_{w_j}) / &#8721;_{l=1}^{V} exp(u_{w_l}^T v_{w_j})</ns0:formula><ns0:p>Here, V is the total number of words in the given vocabulary.</ns0:p><ns0:p>CBOW and skip-gram models capture semantic information from the large datasets used for training. Words that are closely related have similar vector representations: for example, the words school, college, university, and education share similar contexts, and likewise orange and apple have similar context representations.</ns0:p><ns0:p>Fig. 8. shows the architecture <ns0:ref type='bibr' target='#b11'>(Joulin et al., 2016)</ns0:ref> of the simple linear fastText model with N-gram features</ns0:p><ns0:p>(6) x_1, x_2, ..., x_N</ns0:p><ns0:p>The features are embedded and averaged to form the hidden variable. This model is a simple neural network with only one layer <ns0:ref type='bibr' target='#b18'>(Maria, 2018)</ns0:ref>. The bag-of-words representation of the text is first fed into a lookup layer, where the embedding is fetched for every single word. Those word embeddings are then averaged, so as to obtain a single averaged embedding for the whole text. At the hidden layer we end up with n_words &#215; dim parameters, where dim is the size of the embedding and n_words is the vocabulary size. After the averaging, we only have a single vector, which is then fed to a linear classifier: we apply the softmax over a linear transformation of the output of the input layer. The linear transformation is a matrix of dimension dim &#215; N_output, where N_output is the number of output classes. The following Eq. (7) is the negative log-likelihood function of the fastText model:</ns0:p><ns0:formula xml:id='formula_1'>(7) &#8722;(1/N) &#8721;_{n=1}^{N} y_n log(f(B A x_n))</ns0:formula><ns0:p>Here, x_n represents the n-gram features of the n-th text, A represents the lookup matrix of the word embeddings, B represents the linear output transformation of the model, and f represents the softmax function. The softmax function calculates a probability distribution over n different events.
The softmax takes a class of values and converts them to probabilities with sum 1. So, it is effectively squashing a k-dimensional vector of arbitrary real values to k-dimensional vector of real values within the range 0 to 1. The following Eq. ( <ns0:ref type='formula'>8</ns0:ref>) is the softmax function f of fastText. ( <ns0:ref type='formula'>8</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_2'>softmax(&#119911;) = &#119838;&#119857;&#119849; (&#119963;) &#8721; &#119922; &#119948; = &#120783; &#119942;&#119961;&#119953;(&#119963;)</ns0:formula><ns0:p>The fastText has the following tuning parameters: Epoch: By default, the model is trained on each example for 5 epochs, to increase this parameter for better training specify the number of epoch argument. Learning rate (lr): The learning rate controls how 'fast' the model updates during training. This parameter controls the size of the update that is applied to the parameters of the models <ns0:ref type='bibr' target='#b18'>(Maria, 2018)</ns0:ref>. Changing learning rate implies changing the learning speed of our model is to increase (or decrease) the learning rate of the algorithm. This corresponds to how much the model changes after processing each example. A learning rate of 0 would means that the model does not change at all, and thus, does not learn anything. Note that this calculation of the best model is going to be quite expensive. There is no magic formula to find the hyperparameters for the best model. Just taking one hyperparameter, the learning rate, would make the calculation impractical. This is a continuous variable and it would need to feed in each specific value, compute the model, and check the performance. Loss function: In this we are using softmax as loss function. The most popular methods for learning parameters of a model are using gradient descent. Gradient descent is basically an optimization algorithm that is meant for minimizing a function, based on which way the negative gradient points toward. In machine learning, the input function that gradient descent acts on is a loss function that is decided for the model. The idea is that if we move towards minimizing the loss function, the actual model will 'learn' the ideal parameters and will ideally generalize to out-of-sample or new data to a large extent as well. In practice, it has been seen this is generally the case and stochastic gradient, which is a variant of gradient descent, has a fast-training time as well. Since it needs to obtain the posterior distribution of words, the problem statement is more of a multinomial distribution instead of a binary.</ns0:p><ns0:p>The following Fig. <ns0:ref type='figure'>9</ns0:ref>. shows the text documents process flow using fastText linear model. In this model text classification pipe-line, raw text documents are processed using data preprocessing steps described in Fig. <ns0:ref type='figure'>1</ns0:ref>. and processed text data tested using fastText model, the model output classified the output into two classes (Satisfied and Not-Satisfied). fastText is a classification algorithm and C++ used to compile fastText mode. It provides high accuracy as well as good performance <ns0:ref type='bibr'>(Vladimir &amp; David, 2017)</ns0:ref> during training and testing the model.</ns0:p></ns0:div> <ns0:div><ns0:head>SA-BLSTM Model</ns0:head><ns0:p>Authors SA-BLSTM is a sequence processing model <ns0:ref type='bibr' target='#b5'>(Ashok &amp; Anandan, 2020)</ns0:ref>. The Bidirectional LSTM is used in this model. 
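Before the BLSTM architecture is detailed below, it is worth noting that the hyperparameters listed above (epoch, learning rate, softmax loss, and word n-grams) map directly onto the fastText Python API. The sketch below is illustrative only: the file names and the `__label__` prefix format are assumptions, not the study's actual training files.

```python
# Illustrative sketch of supervised fastText training with the hyperparameters
# discussed above. File paths and label formatting are assumptions.
import fasttext

# train.txt is assumed to hold one labeled example per line, e.g.:
# __label__Satisfied even after three days the phone works great
# __label__Not-Satisfied phone service was not good
model = fasttext.train_supervised(
    input="train.txt",
    epoch=10,          # passes over the training data
    lr=0.01,           # learning rate
    loss="softmax",    # softmax loss for multinomial classification
    wordNgrams=3,      # use word n-grams up to trigrams
)

print(model.predict("phone service was not good"))  # (labels, probabilities)
print(model.test("test.txt"))                       # (N, precision@1, recall@1)
```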
The extended LSTM architecture is shown in Fig. <ns0:ref type='figure'>10</ns0:ref>. with Input, Output, multiplicative Forget gates and all the gates are using <ns0:ref type='bibr' target='#b29'>(Tomas et al., 2013)</ns0:ref> activation function f sigmoid.</ns0:p><ns0:p>The Constant Error Carousels (CEC) is the central feature of LSTM (Ashok &amp; Anandan, 2020) and it solves the vanishing problem. CECs back flow is constant when there are no input or error signals to the cell. Input and output gates protect CECs error flow from forward and backward activation. If the gates are closed or the gates activation is around 0 then the irrelevant input will not enter the cell. The Forward pass LSTM computes the output of the network given the input data, the Backward pass LSTM computes the output error with respect to the expected output and then go backward into the network and update the weights using gradient descent. To compute <ns0:ref type='bibr' target='#b29'>(Tomas et al., 2013)</ns0:ref> the network weight for a single input to output network, the back propagation (BP) uses the loss function to compute the gradient. The following are the equations for gates and activation functions. LSTM Gates are the activation of sigmoid function, between 0 and 1 is output value of the sigmoid. When the gates are blocked the value is 0 and when the value is 1 then gates allow the input to pass through. This model (Ashok &amp; Anandan, 2020) consists of Input, Language detection and translation, embedded, Bi-Directional LSTM neutral network layer, dropout, dense and output layers. The input layers process the multilingual mixed customer reviews and speech transcript dataset and vectorizing the data using word embedding technique, each post has one or more sentences, and each sentence is composed with number of words sequence. Here representing input during &#119899; &#119909; the language detection, language translation process.</ns0:p><ns0:p>(16) &#119909; = &#119909; 1 , &#119909; 2, &#119909; 3, &#8230;..&#119909; &#119879; In the detection layer, input text processed to detect the non-English text, here represents &#119889; input to detection layer.</ns0:p><ns0:p>(17) &#119889; = &#119889; 1 , &#119889; 2, &#119889; 3, &#8230;..&#119889; &#119879; If the input text identified as non-English text, then the language translation layer converts the text to English, here t represents the input of translation layer.</ns0:p><ns0:formula xml:id='formula_3'>(18) &#119905; = &#119905; 1 , &#119905; 2, &#119905; 3, &#8230;..&#119905; &#119879;</ns0:formula><ns0:p>The output of the translation layer processed by embedding layer, each input word converted to vector, here S represents vector value.</ns0:p><ns0:formula xml:id='formula_4'>(19) &#119878; = &#119908; 1 , &#119908; 2, &#119908; 3, &#8230;..&#119908; &#119899;</ns0:formula><ns0:p>The Bi-Directional LSTM (BLSTM) is a sequence processing model, it consists of two LSTM units, <ns0:ref type='bibr' target='#b12'>(Karthik &amp; Fathi, 2020)</ns0:ref> one unit taking the input in a forward direction and other unit taking the input in a backward direction. It effectively processes the input and context available to the network. Fig. <ns0:ref type='figure'>11</ns0:ref>. shows the mixed-language data processing flow, the language detection and translation layers convert the non-English to English and then it's embedding the words.</ns0:p><ns0:p>Input layer fed the embedded dataset to Bi-LSTM model and it processes vector output of the embedded layer. 
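A minimal Keras sketch of the layer stack described above (embedding, bidirectional LSTM, dropout, dense, output) is given below. It assumes the non-English text has already been detected and translated and that reviews have been tokenized and padded; the vocabulary size, sequence length, and unit counts are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative sketch of an embedding + Bi-LSTM classifier for binary sentiment.
# Assumes inputs are already translated to English, tokenized, and padded.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len, embed_dim = 20000, 100, 128     # assumed values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),                # padded word-index sequences
    layers.Embedding(vocab_size, embed_dim),         # word vectors S = w1..wn
    layers.Bidirectional(layers.LSTM(64)),           # forward + backward LSTM over the sequence
    layers.Dropout(0.5),                             # dropout layer
    layers.Dense(32, activation="relu"),             # dense layer
    layers.Dense(1, activation="sigmoid"),           # output: Satisfied vs. Not-Satisfied
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))  # with prepared data
```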
The following Fig. <ns0:ref type='figure'>12</ns0:ref>. shows the SA-BLSTM model architecture. This model used for both binary and multiclass classifications and SA applications.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiment and Results</ns0:head><ns0:p>For models training and testing, used Windows 64-bit Operating System with Intel core i7 processor, 16 GB Memory, and on-board GPU NVIDIA MX150 server environment. Developed models using Python, Jupyter, Anaconda IDE and used Python libraries, Pandas for data load processes, Numpy for mathematical operations, Seaborn and Matplotlib for plotting graph, NLTK Tool kit, Wordcloud and Sklearn used for build, train and test the models. This novel method uses a memory caching technique with automated custom Python scripts to speed-up data preprocessing tasks during model training on large volume dataset.</ns0:p><ns0:p>To compare various traditional models results for Twitter dataset, collected related (Abhilash &amp; Sanjay, 2019) experimental results. The following Table <ns0:ref type='table'>1</ns0:ref>. shows the various models accuracy.</ns0:p><ns0:p>The following Fig. <ns0:ref type='figure'>13</ns0:ref>. shows experiment result of fastText model with bigram.</ns0:p><ns0:p>Dataset collected from publicly available Twitter, IMDB movie review, Amazon product review and Yelp sentiment analysis data source from kaggle.com, Total 778631 dataset, 70% (545041) of data used for Training and 30% (233590) of data used to test the models which includes Kaggle.com sentiment datasets, chat conversations from chat application and transcript of sample audio files. These data sources details and URLs are listed in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>Designed and developed fastText and Linear SVM (LSVM) models, used the linear kernel setting for LSVM model and for fastText model used the epoch=10, lr=0.01 and loss=softmax parameters. Both the models trained with unigram(n=1), bigram(n=2) and trigram(n=3) and ngrams parameters. The unigram (n=1) model result shows the polarity incoherence <ns0:ref type='bibr' target='#b16'>(Lizhen, Georgiana &amp; Gerhard, 2010)</ns0:ref> due to unigram model gets a weight indicating its polarity and strength, for e.g. not so good Vs. not so bad, the fundamental problem arises in unigram model when assign the weight to not. Analyzed the training result, 3-gram showed the better performance of LSVM and fastText model for this dataset. Tested with authors Pre-trained SBA-LSTM model using the same dataset. We also combined the datasets of Amazon, Yelp, Twitter, IMDB, Chat and audio transcripts for the LSVM and SA-BLSTM models.</ns0:p><ns0:p>During the model testing, captured the test results and tested n-gram features on both the models. The result shows for example, customer wrote the following review about phone purchase experience. e.g., 'Even after three working days of phone purchase. 
Noticed that phone Service was not good.'</ns0:p><ns0:p>Based on the unigram method the above text considers only one word for instance, in this case the above sentence 'phone Service was not good', it can be written a Probability(P) P('&#119901;&#8462;&#119900;&#119899;&#119890; &#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890; &#119908;&#119886;&#119904; &#119899;&#119900;&#119905; &#119892;&#119900;&#119900;&#119889;') = P('&#119901;&#8462;&#119900;&#119899;&#119890;') * P('&#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890;') * P('&#119908;&#119886;&#119904; (20) ') * P('&#119899;&#119900;&#119905;') * P('&#119892;&#119900;&#119900;&#119889;')</ns0:p><ns0:p>From this equation here unigram n=1 matches pattern word by word in this case 'good' gets more weight considering it is an individual word so the SVM unigram predicts 32% as positive and fastText unigram predicts 19.37% as positive. So, to avoid this problem, used Bigram n=2, where the algorithm considers the 'not good' as a single word while learning the pattern, the probability of whole sentence can be written as follows;</ns0:p><ns0:p>Probability(P) P('&#119901;&#8462;&#119900;&#119899;&#119890; &#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890; &#119908;&#119886;&#119904; &#119899;&#119900;&#119905; &#119892;&#119900;&#119900;&#119889;') = P('&#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890; '|start of sentence) * P('&#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890; | &#119901;&#8462;&#119900;&#119899;&#119890;') * P('&#119908;&#119886;&#119904; | &#119878;&#119890;&#119903;&#119907;&#119894;&#119888;&#119890;') * P('not|is') * P('&#119892;&#119900;&#119900;&#119889; | (21) &#119899;&#119900;&#119905;')</ns0:p><ns0:p>As per maximum likelihood estimation, the condition probability of something like P('&#119892;&#119900;&#119900;&#119889; can be given as the ratio of count of the observed occurrence of 'not good' together by | &#119899;&#119900;&#119905;') the count of the observed occurrence of 'not'. These models can predict new sentences. The following Table <ns0:ref type='table'>3</ns0:ref> shows the model training performance for 3-gram method for both fastText and LSVM. This result shows both linear models were performed well during the model training, fastText performed slightly better than LSVM.</ns0:p><ns0:p>The following Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> shows n-grams performance measures of model's accuracy, Recall, Precision and F1 Score. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The results from training the model revealed that fastText performed exceedingly compared to the LSVM and SA-BLSTM models, and that fastText is much more suitable with large datasets within a server that has minimal configuration. The results revealed that fastText provided a more accurate response within a small duration of time compared with the other two models, obtaining a 90.71 % rate in comparison to LSVM and SA-BLSTM models. The authors concluded that the n-gram method had better compatibility with both fastText and LSVM for the type of dataset used for the experiment, and noticed that training domain-specific datasets improves the accuracy of the sentiment score when tested with a particular domain. fastText shows much better performance during model training compare to LSVM and SA-BLSTM models. 
The fastText works well with large dataset within a minimal configuration of server infrastructure setting. Experiment result shows fastText model training time duration is less and it gives more accuracy and response time is faster than LSVM and SA-BLSTM models. The ngram method works better for both fastText and LSVM, especially trigram n=3 for this dataset. Noticed that a domain specific dataset training improves the accuracy of the sentiment score when it tested with particular domain. There is research that is highly essential in the future to explore a framework to build generic models that would be beneficial for industries such as healthcare, retail, and insurance. The SA-BLSTM model the authors have built has the ability to integrate fastText for representation of words to provide increased performance, and has the ability to be pre-trained for such industries that could benefit from this. However, improvements should be made for the quality of audio text files, as well as the use of automation scripts to correct text errors in conversations. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>(2) &#119896;(&#119909;,&#119910;) = &#119909; &#119879; + &#119888; PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66395:1:2:NEW 12 Nov 2021) Manuscript to be reviewed Computer Science Linear kernel is used in this experiment for text classification. The Polynomial kernel, RBF kernel and String kernel functions can be used for other classifications problems. fastText Facebook AI research (FAIR) lab release an open-source free library called fastText for text representation and classification. It's a lightweight method and work on standard generic hardware with multicore CPU. fastText approach evaluated for tag prediction and sentiment analysis by FAIR. fastText experiments</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>log &#119901;(&#119908; &#119905; + &#119895; |&#119908; &#119905; ) ] Here k represents the size of the training window and function for word in the middle. &#119908; &#119905; -k to k is the representation inner summation and it computes the log probability of the word prediction for word in the middle . The outer summation words are based on the corpus &#119908; &#119905; + &#119895; &#119908; &#119905; used for model training.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Here are the equations for all three gates. Input Gate (10) &#119894; &#119905; = &#120590;(&#120596; &#119894; [&#8462; &#119905; -1 , &#119909; &#119905; ] + &#119887; &#119894; ) PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66395:1:2:NEW 12 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Similarly, trigram n=3 to learn the probability of occurrence of pattern so that model becomes more accurate. The trigram n=3 results showed more accurate compare to other unigram and bigram testing parameters for both the models LSVM and fastText. The following metrics are used to evaluate the models training performance matrix based on True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (TN). + tn + fn * 100</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>The LSVM and fastText are showing similar model accuracy results. SAB-LSTM shows less accuracy. The following Table5. shows the % conversations express the positive and negative score of customer sentiment.PeerJ Comput. 
</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>LSTM gate and cell-state equations. Output gate: o_t = \sigma(\omega_o [h_{t-1}, x_t] + b_o) (11). Forget gate: f_t = \sigma(\omega_f [h_{t-1}, x_t] + b_f) (12). Here \sigma is the sigmoid function, h_{t-1} is the LSTM block output of the previous state at timestamp t-1, x_t is the input at the current timestamp, \omega_i, \omega_f and \omega_o are the weights of the input, forget and output gates, and b_i, b_o and b_f are the biases of the input, output and forget gates. The following are the equations for the cell states of the gates (Ashok &amp; Anandan, 2020). Cell state of input gate: \tilde{c}_t = tanh(\omega_c [h_{t-1}, x_t] + b_c) (13). Cell state of output gate: c_t = f_t * c_{t-1} + i_t * \tilde{c}_t (14). Cell state of forget gate: h_t = o_t * tanh(c_t) (15). Here \tilde{c}_t is the input gate cell state at timestamp t, c_t is the memory cell state at timestamp t, and h_t is the final output cell state at timestamp t.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 : Models test and performance measures results with various parameters.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>LSVM</ns0:cell><ns0:cell>Unigram</ns0:cell><ns0:cell>87.74%</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.88</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bigram</ns0:cell><ns0:cell>89.96%</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.895</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Trigram (kernel=linear)</ns0:cell><ns0:cell>90.11%</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.896</ns0:cell></ns0:row><ns0:row><ns0:cell>fastText</ns0:cell><ns0:cell>Unigram</ns0:cell><ns0:cell>88.23%</ns0:cell><ns0:cell>0.876</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell>0.868</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bigram</ns0:cell><ns0:cell>90.55%</ns0:cell><ns0:cell>0.896</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell>0.901</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Trigram (epoch=10, lr=0.01, loss=softmax)</ns0:cell><ns0:cell>90.71%</ns0:cell><ns0:cell>0.896</ns0:cell><ns0:cell>0.910</ns0:cell><ns0:cell>0.902</ns0:cell></ns0:row><ns0:row><ns0:cell>SA-BLSTM</ns0:cell><ns0:cell>epoch=10, lr=0.01, loss=softmax</ns0:cell><ns0:cell>77.00%</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.76</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
" November 3rd, 2021 Anandan Chinnalagu Department of Computer Science, Govt Arts College (Affiliated to Bharathidasan University, Tiruchirappalli), Kulithalai, Karur, TN, India Dear Editors, We would like to thank the reviewers for their valuable comments on the manuscript and have edited the manuscript based on their input. We hope that the manuscript is now ready to be published in the PeerJ Computer Science Journal. Regards Anandan Chinnalagu. On behalf of all authors. Below is our response to reviewers’ comments. Reviewer 1 Basic reporting Clear and unambiguous Literature review is fair Figures and Tables are clear Experimental design Novelty is lacking only application of existing algorithm with GPU based infrastructure Validity of the findings Good choice of algorithm and datasets Additional comments 1. Paper does not reveal whether multiple modalities have been used for sentiment Analysis. It can be mentioned. 2. Sarcasm is not part analysis and can be added Reviewer 1 Experimental design Novelty is lacking only application of existing algorithm with GPU based infrastructure Added Line # 424 – 426 in manuscript (Experiment and Result Section): We develop a novel method using memory caching technique with automated custom Python scripts to speed-up data preprocessing tasks during model training on large volume dataset. Additional comments 1. Paper does not reveal whether multiple modalities have been used for sentiment Analysis. It can be mentioned. 2. Sarcasm is not part analysis and can be added Added Line # 55 – 60 to address Additional comments in manuscript (Introduction Section) 1. Our model’s evaluation includes multiple modalities sentiment analysis such as audio transcript, voice and text chats from various internal sources along with publicly available social media data sources. We use unigram, bigram, trigram and n-grams textual features in terms of multimodal sentiment analysis. 2. Our model’s training dataset includes customer sarcastic review posting in the context of customer sentiment. In this experiment polarity, negation and sarcastic are consider and classify as positive and negative sentiment. Reviewer 2: Asha Thimmegowda Basic reporting 1. The article is clear in understanding and standard structure has been used. 2. Sufficient literature has been surveyed but lacks standard journal references like IEEE transactions, Elsevier springer etc. 3. The data set used is standard datasets, Experimental design 1.Sentiment analysis using machine learning is one of the upcoming area and is within aim and scope. 2. Extensive investigation is performed with machine learning and deep learning models. 3. I could see only fasttext comparison with amazon review and all other datasets mentioned, but LSVM and SA-BLSTM of existing literature is not shown for amazon, yelp dataset. Validity of the findings 1. findings provided in the table are valid. 2. Conclusion should highlight the accuracy measure of the proposed model Additional comments Highlight what is the % of performance evaluation in abstract also Experimental design 3. I could see only fasttext comparison with amazon review and all other datasets mentioned, but LSVM and SA-BLSTM of existing literature is not shown for amazon, yelp dataset. Added Line # 448 – 449 to address Experimental design comment (Experimental and Result Section) 3. We also combined the datasets of Amazon, Yelp, Twitter, IMDB, Chat and audio transcripts for the LSVM and SA-BLSTM models. 
Validity of the findings Conclusion should highlight the accuracy measure of the proposed model Added Line # 496 to mention the proposed model accuracy in conclusion section Our proposed fastText model obtains higher accuracy of 90.71 % compare to LSVM and SA-BLSTM models. Additional comments Highlight what is the % of performance evaluation in abstract also Added Line # 29 – 31 to mention the model performance % in abstract Our proposed fastText model obtains higher accuracy of 90.71 % as well as 20% more performance compare to LSVM and SA-BLSTM models. "
Here is a paper. Please give your review comments after reading it.
314
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In recent years, the advent of cloud computing has transformed the field of computing and information technology. It has been enabling customers to rent virtual resources and take advantage of various on-demand services with the lowest costs. Despite the advantages of cloud computing, it faces several threats; an example is a Distributed Denial of Service (DDoS) attack, which is considered among the most serious ones. This paper presents realtime monitoring and detection of DDoS attacks on the cloud using a machine learning approach. Na&#239;ve Bayes, K-Nearest Neighbor, Decision Tree, and Random Forest machine learning classifiers have been selected to build a predictive model named 'Real-Time DDoS flood Attack Monitoring and Detection RT-AMD.' The DDoS-2020 dataset was constructed with 70,020 records to evaluate RT-AMD's accuracy. The DDoS-2020 contains three protocols for network/transport-level, which are TCP, DNS, and ICMP. This paper evaluates the proposed model by comparing its accuracy with related works. Our model has shown improvement in the results and reached real-time attack detection by using incremental learning. The model achieved 99.38% accuracy for the random forest in realtime on the cloud environment and 99.39% on local testing. The RT-AMD was evaluated on the NSL-KDD dataset as well, in which it achieved 99.30% accuracy in real-time in a cloud environment.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The emergence of cloud computing has gained much attention due to its various features such as costeffectiveness and on-demand service provision. Cloud computing is a shared environment (multi-tenancy) between more than one user, using the same physical resources. Despite its advantages, the shared environment concept may threaten the security and availability of provided services. A Cloud Services Provider (CSP) must have the ability to ensure the security and availability of resources to maintain the commitment to customers, called the Service Level Agreement (SLA). Cloud computing is becoming popular by the day as more people and companies are attracted to employing it in their businesses. Its utilization is of high benefit; however, security remains a serious problem, especially in the public cloud environment.</ns0:p><ns0:p>This study will investigate current work in DDoS attacks targeting cloud services and propose an efficient model to detect DDOS flooding attacks at the network/transport-level. This model is called the Real-Time DDoS flood Attack Monitoring and Detection (RT-AMD) Model, which aims to enhance cloud services Figure <ns0:ref type='figure' target='#fig_10'>1</ns0:ref>: Services delivery models Many security challenges are faced by CSP, of which the main one is the trust that must be in place between CSP and the cloud customers. Trust is how the provider can protect the customer data from any breach <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. One of the popular features of the cloud environment is multi-tenancy and virtualization. Many customers share physical resources, which constitutes a considerable challenge in making such an environment secure <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>.</ns0:p><ns0:p>Shared data in the cloud can create risks of customers' data being lost or used by an unauthorized third party. 
There are many other types of cyberattacks on cloud security allowed by system and application vulnerabilities, such as account hijacking, malicious insiders, data loss, denial, and distributed denial of service. These attacks have a substantial negative impact on confidentiality, integrity, and availability of data. The availability of cloud services is one of the most critical CSP goals. Unavailability adversely affects CSP and cloud customers. DoS and DDoS attacks are the main threats leading to a cloud service's unavailability. DoS is a cyberattack where an attacker aims to make the systems and servers unavailable, preventing customers from accessing the servers and resources <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. DoS attacks launched in a distributed manner to speed up the consumption of the resources for one or many targets are called Distributed Denial of Service attacks (DDoS) <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. DDoS Attack types are explained in the following subsection.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>DDoS Attack Types</ns0:head><ns0:p>DDoS attacks are classified based on targeted protocols such as the network/transport or application level. &#61623; Network/Transport level DDoS attacks: These attacks occur mostly when using network and transport layer protocols such as TCP, UDP, and ICMP. These attacks are further categorized into three types:</ns0:p><ns0:p>1. Volume attacks: The attacker aims to consume all the resources of the target servers and make them unavailable by sending many packets (bandwidth/flooding attack) such as TCP flood, ICMP flood, etc.</ns0:p><ns0:p>2. Protocol attacks: In this type, the attack consumes all resources and intermediate connection media as a firewall by exploiting protocol vulnerabilities and bugs as TCP SYN flood, TCP SYN-ACK flood, etc. 3. Reflection and amplification attacks: Attempts to consume the victim's resources by sending fake request messages (such as ping requests) through spoofing the victim's IP address to the reflectors. The reflectors send a high volume of response messages to the victim's IP address such as Smurf attacks. &#61623; Application-level DDoS attacks: These attacks aim to consume services' resources or cause starvation of resources to disrupt customers through establishing requests, overloading the application servers.</ns0:p><ns0:p>The most popular type of attack at this level is HTTP flooding attacks. Many studies have classified DDOS attack at the application level based on the following categories <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>:</ns0:p><ns0:p>1. Session flooding attack: Servers' resources are disabled from being launched when session request rates are high. These requests usually are higher than those generated by valid users. 2. Request flooding attack: Sends sessions that contain more requests than the valid users. 3. Asymmetric attack: Wastes resources such as CPU and memory of the server by sending sessions with high-workload requests. 4. Slow request/response attack: Uses all server resources by sending incomplete requests slowly to keep the servers in the waiting state to receive data.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Intrusion Detection System</ns0:head><ns0:p>An intrusion detection system (IDS) is a device or software tool that identifies unusual events by monitoring the network traffic to distinguish the normal from abnormal behaviors <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. 
IDS is classified into three main categories based on the analysis method. The choice between methods depends on several factors, such as the anomaly type, applied environment, security level required, and the cost <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>The IDS methods classifications are Signature-based, Anomaly-based, and Hybrid detection <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. Signature-based detection, also called Knowledge-based or Rule-based detections, is suitable for detecting known attacks by comparing captured behavior. Anomaly-based detection, also known as Behavior-based, is useful to detect unknown attacks. These techniques compare the observed behavior with normal behavior to detect abnormal events. Hybrid-based detection works by combining the detection techniques mentioned above. The performance of this detection depends on the types of techniques chosen. Table <ns0:ref type='table'>1</ns0:ref> shows the advantage and limitations of these detection methods <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>: Summary of IDS techniques 3 Literature Review This section presents and critically analyzes existing research studies to detect the attacks in the three categories of IDS methods mentioned above.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Signature-based detection</ns0:head><ns0:p>Tanr&#305;verdi et al. <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, Bakshi et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, and Modi et al. <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> proposed a signature-based detection method. Tanr&#305;verdi et al. <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> presented the detection of web attacks using a blockchain-based attack detection model. The signatures listed in this study are automatically updated by blockchain technology. An additional advantage of this proposed method is that it can be used against zero-day attacks. Bakshi et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> presented a method to distinguish between the normal and abnormal traffic in VMs. It uses Snort to analyze the collected traffic to determine the attack. The virtual server then drops packets coming from the specified IP address. Modi et al. <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> presented a method to detect known attacks and derivatives of known attacks. It also uses a Snort tool to detect the known attacks from network traffic. The detected attack is input to a signature DB to predict derivatives of the attack by using signature as a priority. Manuscript to be reviewed Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Anomaly-based detection</ns0:head><ns0:p>Hong et al. <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> and Kemp et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> presented an anomaly-based solution to detect slow HTTP attacks, a type of DDoS attack. Hong et al. <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> developed a software-defined networking (SDN) controller; however, Kemp et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> deployed the proposed model using machine learning techniques. It selected eight classification algorithms for predictive models: Random Forest, decision trees, K-Nearest Neighbor, Multilayer Perceptron, RIPPER (JRip), Support Vector Machines, and Na&#239;ve Bayes. The authors used the Weka machine learning toolkit to build these models. 
ANOVA was used to compare the values of slow attack detection among the eight models. They evaluated the models by Area Under the receiver operating characteristic curve (AUC), Receiver Operating Characteristic (ROC) curve graphs, True Positive Rate (TPR), and False Positive Rate (FPR).</ns0:p><ns0:p>Singh et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>, Filho et al. <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>, Wang et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, and Sreeram et al. <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> presented anomaly-based solutions to detect HTTP attacks. Singh et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> used a Multilayer Perceptron with a Genetic Algorithm (MLP-GA)-based method for detecting DDoS attacks on incoming traffic. Authors in <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> identified four features to detect application-layer attacks; first is the number of HTTP counts, referring to the count number of requests per IP address. It is assumed that any single IP address that sends more than 15-20 HTTP GET/POST requests is an attack. Second is the number of IP addresses, referring to the number of IP addresses in small windows time. It is assumed the attacks have more than 20 IPs in windows time. The third was the constant mapping function; the attacker's ports are different from legitimate users' as the one used by the attacker is varied and remains open. The fourth is fixed frame length; codes with fixed frame length are considered as an attack.</ns0:p><ns0:p>In comparison, Filho et al. <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> proposed an online smart detection system for DoS/DDoS attack detection.</ns0:p><ns0:p>The detection approach used the Random Forest Tree algorithm to classify various types of DoS/DDoS attacks such as flood TCP, flood UDP, flood HTTP, and slow HTTP. However, Wang et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> proposed a detection scheme for HTTP-flooding (HTTP-Soldier) based on web browsing clicks. HTTP-soldier used the large-deviation principle of webpage popularity to be able to distinguish between normal and abnormal traffic. The large-deviation probability-based detection may affect some normal users. The authors mentioned that their proposed scheme could not detect a single Uniform Resource Locator (URL) attack.</ns0:p><ns0:p>The false positive of a Multi-URL attack with the most popular webpages is 12.2%, but the false positive of Multi-URL attack with the least popular webpages is at 17.1%. This solution can achieve high performance in Multi-URL attacks with the most popular web pages. Sreeram et al. <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> used bio-inspired machine learning metrics to detect HTTP flood attacks to achieve fast and early detection. Authors in <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> adopted the Bat algorithm, which has low process complexity, as a bio-inspired approach. Choi et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, Aborujilah and Musa <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>, and Sahi et al. <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> presented a cloud-based flood attack detection method. Choi et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> proposed a method to integrate the detection of DDoS flood attacks and MapReduce processing in a cloud computing environment. 
The proposed framework consists of three parts: first is the packet and log collection module (PLCM), which analyses packet transmission and web server logs in the first part. Second is the pattern analysis module (PAM), which produces the pattern for DDoS attack detection. Finally is the detection module (DM), which detects DDoS attacks by comparing them with a normal behavior model. However, Aborujilah and Musa <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> presented the detection based on the covariance matrix approach. The proposed detection was divided into training and testing phases. A training phase aimed to construct a normal network traffic profile. The testing phase was to detect any abnormal traffic by the deviation between the normal and any other network traffic. The normal traffic is captured from end-users browsing the Internet in their cloud, whereas the flooding attack traffic is generated using the PageRebooter tool. It was evaluated by using the confusion matrix and present results for an internal and external cloud environment, while Sahi et al. <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> proposed a detection model for TCP flood DDoS attacks. This model selected different classifiers, the least squares support vector machine (LS-SVM), Na&#239;ve Bayes, K-nearest, and Multilayer Perceptron.</ns0:p><ns0:p>Lin et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>, Li et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>, and Nawir et al. <ns0:ref type='bibr' target='#b21'>[22]</ns0:ref> presented an anomaly-based detection method for detecting DDoS attacks. The proposal of Lin et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> and Li et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Lin et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> used Long Short-Term Memory (LSTM) to build the neural network model, which is a specific recurrent neural network structure (RNN). Li et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> used LSTM and Gated recurrent units (GRU) recurrent neural networks. It used BGP and NSL-KDD datasets. The best accuracy achieved was using the BGP dataset in the range of 90-95%. However, the proposed of Nawir et al. <ns0:ref type='bibr' target='#b21'>[22]</ns0:ref> used machine learning algorithms. The authors in [Reff9+] selected five machine learning algorithms to include Na&#239;ve Bayes (NB), Averaged One Dependence Estimator (AODE), Radial Basis Function Network (RBFN), Multi-Layer Perceptron (MLP), and J48 trees. A comparison was drawn between these algorithms based on accuracy and processing time. A UNSW-NB15 dataset was selected in this experiment.</ns0:p><ns0:p>Haider et al. <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref> proposed a deep Convolutional Neural Network (CNN) framework for efficient DDoS attack detection in SDN. This proposed framework has been evaluated using hybrid state-of-the-art algorithms on CICIDS2017 dataset. Hwang et al. <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> proposed an unsupervised deep learning model for early network traffic anomaly detection, namely D-PACK based on CNN. The experimental results show low false-positive rate and high accuracy. Novaes et al. <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref> proposed using short-term memory and fuzzy logic for DDoS attack detection and mitigation in SDN. 
The proposed system consists of three phases: Characterization, anomaly detection, and mitigation. The evaluation of this system has been conducted using CICDDoS2019 dataset with archived accuracy of 96.22%.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Hybrid-based detection</ns0:head><ns0:p>Hatef et al. <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref>, Zekri et al. <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref>, and Saleh et al. <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref> proposed a hybrid-based detection system. While Hatef et al. <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> and Zekri et al. <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> deployed cloud environment approaches. Hatef et al. <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> is a hybrid intrusion detection approach in cloud computing (HIDCC). The applied detection was a combination of signature-based and anomaly-based detection techniques. The Snort tool is used for known attacks (signature-based detection) by employing the Apriori algorithm to generate a pattern from derived attacks. Both clustering and classification algorithms are applied for the undetectable attack through Snort. The clustering module receives and determines the input packet based on the sample vector. Then the classifier module determines the final class of the packet through algorithm C4.5 as a decision tree classifier according to the found cluster. However, Zekri et al. <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> proposed using machine learning for anomaly detection and Snort technique for signature detection. Three algorithms were selected: Decision Tree, Na&#239;ve Bayes, and K-Means algorithms. The decision tree achieved the best accuracy. The proposed Saleh et al. <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> was applied in real time and deployed in three stages. First, the na&#239;ve Base feature selection (NBFS) technique was employed to reduce the dimensionality of sample data. Second, optimized Support Vector Machines (OSVM) were used to reject the noisy input sample as it might have caused misclassification. Finally, the attacks were detected by Prioritized K-Nearest Neighbors (PKNN) classifier. This proposed scheme takes time in the first and second stages at feature selection and outlier rejection before attack detection.</ns0:p><ns0:p>This study will explore the use of data mining techniques (classification) in real-time detection. We will focus on the network/transport level as it is the core layer of network architecture. There are few existing studies on volume-based network/transport-level DDoS attack detection in the cloud environment. Moreover, there are very few studies that have proposed online detection with a high detection rate. This study will employ different classification algorithms (machine learning) that suit our needs to build detection models. We will then evaluate these models by comparing them against two main factors: efficiency of detection and detection rate.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Proposed Framework</ns0:head><ns0:p>This section explains the RT-AMD model framework by presenting its components and the employed data mining classification methods.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Main components</ns0:head><ns0:p>The proposed RT-AMD model consists of two main components: monitoring and detection. 
</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>Computer Science 1-The monitoring component is responsible for monitoring the network traffic for requests coming to the webserver on the cloud and extracting the traffic from the network log. If the traffic from the log is found in the Blacklist, an alert with corresponding information is sent to the cloud admin. If it is not in the Blacklist, it will move into the next component, detection.</ns0:p></ns0:div> <ns0:div><ns0:head>2-The detection component uses trained classifiers to detect if the incoming traffic behavior is normal</ns0:head><ns0:p>or abnormal. When abnormal behavior occurs, it will alert the system, send information to the admin, then update the Blacklist with the new traffic information. Otherwise, it can access the cloud and benefit from its services. An overview of the proposed environment is shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. At the beginning, the admin needs to log in/register to benefit from the RT-AMD tool. Once the registration is done, RT-AMD will send a verification code for the entered email to ensure that the email address is correct. The tool will then start monitoring the network traffic to detect any malicious behavior. Once an attack occurs, the RT-AMD will detect it, then send all information assigned to this traffic to the admin email. This makes it easier for the admin to act on any malicious behavior. A flowchart of the proposed detection framework is shown in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Data Mining Classification Methods</ns0:head><ns0:p>Data mining analyzes large volumes of data to identify and predict any threats; this helps to solve problems and reduce risks. Data mining can answer questions that have typically been time-consuming to address manually by using several statistical techniques to analyze data in various ways.</ns0:p><ns0:p>Recently, high arrival rates of online data streams have imposed high resource requirements on data mining processing systems. DataStream Mining (also known as stream learning) is a technique of extracting knowledge structures from an unbounded and ordered sequence of data that exists over time (stream data) <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref>. Some differences between stream data mining and traditional data mining are shown in the following <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref>:</ns0:p><ns0:p>1-Machine learning of streaming scenarios cannot retrieve all of the data of the dataset in advance. Data chunks are available in a stream, one by one, or bundle by block. 2-Data arriving over time in streams may be unlimited in their number, resulting in difficulty storing all arrival data in the memory. 3-Data from streams need to be analyzed quickly to provide real-time response and prevent data waiting. 4-Some incoming stream data may lack accurate class labels because of the label query's high cost for each data stream. Building knowledge from stream data mining is called incremental learning <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref>. Incremental learning has received much attention from both academia and industry. 
It is a machine learning approach where knowledge is applied as new instances arrive, and what has been learned is updated according to the new instances <ns0:ref type='bibr' target='#b31'>[32]</ns0:ref>.</ns0:p><ns0:p>There are several techniques in data mining, such as regression, clustering association, and classification. Different techniques serve different purposes. However, most data mining techniques usually applied for this area are classification techniques. Classification is a type of supervised learning that predicts the class label to which data belongs. The classifier works by obtaining a training dataset containing several attributes and class labels. The classifier then tests the dataset to evaluate the model.</ns0:p><ns0:p>The proposed framework will employ the classifiers in the detection component of the framework with two class labels. The attributes assigned to normal behavior are labelled 'normal' and those to abnormal behavior 'anomaly.' Some classifiers were found to have better detection results than others based on the recommendations in the related works <ns0:ref type='bibr' target='#b11'>[12,</ns0:ref><ns0:ref type='bibr' target='#b13'>14,</ns0:ref><ns0:ref type='bibr' target='#b27'>28]</ns0:ref>. We selected the following: Na&#239;ve Bayes, Decision Trees, K-Nearest Neighbor, and Random Forest. These classifiers have been selected to build the predictive models.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Dataset Collection</ns0:head><ns0:p>A DDoS-2020 dataset is a network/transport-level dataset that authors have assembled and that contains of two types of traffic: attack and normal, with 70,020 records of traffic. Attack traffic consists of 27,169 instances, and the normal traffic consists of 42,851 instances. We have collected attack traffic from the CAIDA DDoS Attack 2007 dataset and collected normal traffic by capturing packets using Wireshark. The Center for Applied Internet Data Analysis Dataset 'DDoS Attack 2007' <ns0:ref type='bibr' target='#b32'>[33]</ns0:ref> contains an approximately one-hour collection of anonymized (abnormal) traffic from a DDoS attack on August 4, 2007 <ns0:ref type='bibr' target='#b32'>[33]</ns0:ref>. Wireshark is one of the most popular open-source network analyzer tools under the GNU General Public License (GPL) <ns0:ref type='bibr' target='#b33'>[34]</ns0:ref>. Wireshark captures packets using the 'pcap' library and different network media types, including Ethernet, Wi-Fi, Bluetooth, and others <ns0:ref type='bibr' target='#b34'>[35]</ns0:ref>. The DDoS-2020 dataset contains information corresponding to each packet: source IP address, destination IP address, protocol type (ICMP, TCP, and DNS), packet length, packet timestamp, and label to determine whether the traffic is normal ('0') or attack ('1'). We have 29,554 instances of TCP, 20,727 instances of ICMP, and 19,739 instances of DNS with 0% missing value. Figure <ns0:ref type='figure'>4</ns0:ref> shows the distribution of TCP, ICMP, and DNS protocols in the dataset.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref>: The distribution of protocols in the dataset The dataset contains two types of traffic: attack traffic consists of 27,169 instances and normal traffic consists of 42,851 instances. The label for normal traffic is = 0 and the label for attack traffic is = 1. 
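As a rough illustration of how records with these fields (source/destination IP, protocol, length, timestamp, and a 0/1 label) could be loaded and prepared for the selected classifiers, the sketch below reads a CSV export and encodes the protocol; the file name, column names, and the choice to drop the IP columns are assumptions, not the exact preprocessing used for DDoS-2020. The distribution figures that follow summarize these same fields.

```python
# Rough sketch of loading and preparing DDoS-2020-style records (assumed file/column names).
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("ddos2020.csv")  # columns assumed: src_ip, dst_ip, protocol, length, timestamp, label

# Encode the protocol (TCP / DNS / ICMP) as an integer feature.
df["protocol"] = LabelEncoder().fit_transform(df["protocol"])

# Keep the numeric features used for classification; IP columns are dropped here for simplicity.
X = df[["protocol", "length", "timestamp"]].to_numpy()
y = df["label"].to_numpy()          # 0 = normal traffic, 1 = attack traffic

print(X.shape, y.shape)
print(df["label"].value_counts())   # normal vs. attack counts, cf. the distributions below
```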
The protocol's distribution ratio on normal traffic and the dataset's attack traffic are shown below in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>: Protocol's distribution ratio on normal and attack traffic The timestamp range in the dataset is between 89.981-11045.6567 seconds and 0% missing value. Each timestamp has a range of instances for each type of traffic (attack, normal). The timestamp distribution ratio on normal traffic and attack traffic in the dataset is shown below in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>: Distribution of timestamp over labels The range of length is between 46-1800 with 1,190 distinct and 0% missing value. Each length ranges across a group of instances for each type of traffic (attack, normal). The length distribution ratio on normal traffic and the attack traffic in the dataset is shown below in Figure <ns0:ref type='figure'>7</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>: Distribution of length over labels During the collection and cleaning of the DDoS-2020 dataset, we were determined to distribute the elements' ratio in a balanced manner. The balance is considered in the distribution of traffic between instances (normal and attack) and the distribution of individual protocol that has range of instances for attack and normal traffic. Figure <ns0:ref type='figure' target='#fig_5'>8</ns0:ref> shows the distribution ratio of attacks and normal traffic for each TCP, ICMP, and DNS protocol. Manuscript to be reviewed Figure <ns0:ref type='figure' target='#fig_6'>9</ns0:ref> shows the distribution ratio of attacks and normal traffic for each timestamp. Figure <ns0:ref type='figure' target='#fig_11'>10</ns0:ref> shows the distribution ratio of attacks and normal traffic for each length range.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>10</ns0:ref>: Traffic distribution of length. The timestamp and the length of traffic is also notable; the distribution ratio of the timestamp and length for each protocol should be semi-balanced. Figures <ns0:ref type='figure' target='#fig_11'>11 and 12</ns0:ref> show the distribution ratio of timestamp and length over protocols sequentially. </ns0:p></ns0:div> <ns0:div><ns0:head n='6'>Evaluation Results</ns0:head><ns0:p>RT-AMD model is implemented using the Python programing language, SQLite, and GCP. Python is a powerful language; it contains many libraries for machine learning. One of the libraries that is well-known for real-time learning purposes is Scikit multi-flow. The evaluation of this model was in the GCP environment.</ns0:p><ns0:p>As mentioned above, the RT-AMD tool was evaluated by the selected machine learning algorithms. Na&#239;ve Bayes, Decision tree, K-neighbors, and Random forest were selected for this evaluation. We experimented with the RT-AMD tool in three situations: offline localhost, online localhost, and online remote virtual host created by GCP. Configuration of localhost was with Microsoft Windows 10 Pro operating system, 24.0 GB RAM, and Intel(R) Core(TM) i7-7500 processor, and the remote virtual host was configured with e2medium machine type, 2 vCPUs, and 4 GB memory.</ns0:p><ns0:p>The evaluation measured accuracy and performance. The random forest achieved the best accuracy in incremental learning either on localhost or remote virtual host at around 99.38%. 
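One way the incremental (online) evaluation described above could be set up with scikit-multiflow is sketched below; before returning to the offline results, note that the stream construction, parameters, and exact class names here are assumptions (class names also vary slightly across scikit-multiflow versions), not the authors' code. Prequential evaluation tests each arriving instance first and then uses it to train the models incrementally, which is what allows detection to run in real time without pre-training.

```python
# Illustrative prequential (test-then-train) evaluation with scikit-multiflow.
# X, y are assumed to be NumPy arrays of features and 0/1 labels (e.g. from DDoS-2020).
from skmultiflow.data import DataStream
from skmultiflow.bayes import NaiveBayes
from skmultiflow.trees import HoeffdingTreeClassifier
from skmultiflow.lazy import KNNClassifier
from skmultiflow.meta import AdaptiveRandomForestClassifier
from skmultiflow.evaluation import EvaluatePrequential

stream = DataStream(X, y)                       # wrap the dataset as a data stream

models = [NaiveBayes(),
          HoeffdingTreeClassifier(),            # incremental decision tree
          KNNClassifier(n_neighbors=5),
          AdaptiveRandomForestClassifier()]     # streaming random forest

evaluator = EvaluatePrequential(max_samples=70020,
                                metrics=["accuracy"],
                                show_plot=False)

# Each arriving instance is first used for testing, then for incremental training.
evaluator.evaluate(stream=stream, model=models,
                   model_names=["NB", "DT", "KNN", "RF"])
```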
However, K-neighbors achieved the best accuracy in offline learning. Table <ns0:ref type='table'>2</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_12'>13</ns0:ref> show the details for each experiment. The Na&#239;ve Bayes achieved the efficient execution-time for online learning: 12.08s for cloud testing and 22.32s for local testing. Table <ns0:ref type='table'>3</ns0:ref> shows the details of execution time for local online testing and online cloud testing.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>: RT-AMD accuracy for each experiment Figure <ns0:ref type='figure' target='#fig_12'>13</ns0:ref>: RT-AMD accuracy for each experiment Table <ns0:ref type='table'>3</ns0:ref>: Execution time details The proposed tool is tested with different datasets; our DDoS-2020 dataset and NSL-KDD dataset <ns0:ref type='bibr'>[36]</ns0:ref>. The NSL-KDD dataset contains 125,964 samples, among which 67,343 are normal and 58,621 attacks. The NSL-KDD is a useful dataset and popularly used in previous studies <ns0:ref type='bibr' target='#b22'>[23,</ns0:ref><ns0:ref type='bibr' target='#b27'>28]</ns0:ref>. It is a new version of the KDD'99 dataset. Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_11'>14</ns0:ref> show the details of accuracy for each dataset in cloud testing. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>14</ns0:ref>: Accuracy and execution time for each dataset in cloud testing</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>Discussion</ns0:head><ns0:p>As we mentioned above, the experimental RT-AMD tool was used in three different situations; offline localhost, online localhost, and online remote virtual host created by GCP. We achieved 99.38% accuracy with real-time detection in a cloud environment for the random forest. The accuracy for online detection is much higher than offline detection; this is because of Scikit-multi-flow library incremental learning characteristics. The execution time of random forest on the cloud was worst for many reasons, such as the virtual machine's abilities and the way random forest algorithms work.</ns0:p><ns0:p>Our model achieved the best accuracy with a real-time response on the cloud environment in comparison with related work. This is due to the model work and the machine learning algorithm's efficiency using the Scikit-multi-flow library. We have seen above the same model results offline using a Scikit-learn and online using the Scikit-multi-flow library. The latter library features incremental learning that gradually improves the algorithms' performance with runtime, thus improving results. Table <ns0:ref type='table' target='#tab_2'>5</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_11'>15</ns0:ref> show the details of this comparison. This study discusses the issues surrounding DDoS attacks on cloud environments, presenting the main types of DDoS attacks and the challenges and risks faced. Further, it reviews some of the previous techniques detecting DDoS attacks. Machine learning is one of the most common techniques used to detect DDoS attacks. Furthermore, incremental learning is one of the best strategies to learn and classify in realtime. This study's main contributions are to evaluate machine learning algorithms for the dataset collected and investigate the results with related works. 
Furthermore, we improve outcomes and reach real-time attack detection by using incremental learning.</ns0:p><ns0:p>The RT-AMD model is proposed to detect DDoS attacks on the cloud environment using machine learning techniques. Four machine learning algorithms were selected to evaluate this model: Na&#239;ve Bayes, Decision Tree, K-neighbors, and Random Forest. The RT-AMD model was developed by python, SQLite databases, and GCP to detect and alert of the DDoS attacks and test on the cloud environment platform. It was evaluated by using two datasets, the DDoS-2020 and NSL-KDD dataset. The DDoS-2020 dataset has been collected with two ranges of traffic (attack and normal), with three distinct network/transport As a result, the RT-AMD model achieved high accuracy in DDoS-2020 dataset testing and NSL-KDD dataset testing. The random forest algorithm obtained the best accuracy, reaching 99.38% with the DDoS-2020 dataset and 99.30% with the NSL-KDD dataset. This model achieved real-time detection without the negative effect on accuracy by using an incremental learning strategy, and without needing pre-training machine learning.</ns0:p><ns0:p>There are various ways to extend the study presented in this research. These include extending dataset samples to include different types of DDoS, and evaluating and testing this model on other cloud computing-related environments such as Mobile Cloud Computing (MCC), a combination between cloud computing and mobile computing. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64454:1:0:NEW 17 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64454:1:0:NEW 17 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Overview of RT-AMD framework.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Flowchart of RT-AMD framework</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64454:1:0:NEW 17 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Traffic distribution of protocols.Figure9shows the distribution ratio of attacks and normal traffic for each timestamp.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Traffic distribution of timestamp.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 11 : 12 :</ns0:head><ns0:label>1112</ns0:label><ns0:figDesc>Figure10shows the distribution ratio of attacks and normal traffic for each length range.Figure10: Traffic distribution of length. The timestamp and the length of traffic is also notable; the distribution ratio of the timestamp and length for each protocol should be semi-balanced. Figures11 and 12show the distribution ratio of timestamp and length over protocols sequentially.Figure 11: Distribution of timestamp over protocols Figure 12: Distribution of length over protocols</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64454:1:0:NEW 17 Nov 2021) Manuscript to be reviewed Computer Science protocols: TCP, ICMP, and DNS. The attack traffic were obtained from CAIDA DDoS Attack 2007 and the normal traffic were obtained by using Wireshark.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Funding Statement:</ns0:head><ns0:label /><ns0:figDesc>The authors extend their appreciation to the Deputyship for Research &amp; Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number (IFRPC-114-612-2020) at King Abdulaziz University, DSR, jeddah, Saudi arabia.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Services delivery models</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Flowchart of RT-AMD framework</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Accuracy and execution time for each dataset in cloud testing</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64454:1:0:NEW 17 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Comparison of the result with related work Figure 15: Comparison of the result with related work</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>8 Conclusion and future work</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> "
"Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular, we have updated the references list with three research papers dated 2020 and updated the manuscript accordingly. We believe that the manuscript is now suitable for publication in PeerJ. Prof. Omaimah Omar Bamasag Professor of Cybersecurity On behalf of all authors Reviewer 1 (Anonymous) Basic reporting The analysis of the findings is very interesting. Many challenges of this proposed method are covered. Literature reference is relevant, but does not cover all the topics, such as AI, and is not so up to date. Author response: The focus of the related works section was on reviewing research in the area of Intrusion Detection ( IDS) mechanism be it signature , anomaly or hybrid. Authors took care that surveyed studies were within the last 5 years. However, there are some older studies that are very relevant to our topic. Also, we came across many interesting up-to- date studies, but we faced difficulty in obtaining the full version of the papers despite trying contacted the authors. Experimental design As a concept, it is well designed. I found the topic interesting and definitely an area of growth. Author response: Thank you very much Validity of the findings Despite the research effort and the achieved high rate of accuracy, there are a number of challenges that need to be addressed, such as execution-time for online learning of random forest algorithm, the lack of up-to-date real-world datasets for training, what is the rate of false-positive and false-negative, there are scalability issues or performance? Author response: The execution-time for online of random forest depends on the VM's capabilities on cloud environment. The machine has only 4 GB of RAM memory and 2 vCPU plus, As we know, random forest algorithm needs higher specification than this machine as the algorithm builds many trees as forest mechanism. A TP, FP, TN, FN is a matrix to evaluate the classification models. in line 216 -217, it is mentioned that our model was evaluated based on Accuracy. Accuracy alone does not show the precise information when working with a class-imbalanced dataset. However, in our DDoS-2020 dataset, we addressed this issue by distributing the data on two classes: normal and attack. We also identified the type of traffic as: protocol type, length, or time. Reviewer 2 (Anonymous) Basic reporting The paper is well structured and easy to read. Also, the use of English is quite good. The introduction provides a great, generalized background of the topic and the motivations for this study are clear. A minor comment concerns the References part, where it would have been better to have used more recent literature with more up-to-date knowledge. The experimental part is well written and is appropriate for the study. To conclude, this research work apparently fulfills the purpose for which it was carried out. It would be interesting to see in the future, the use of more machine learning classifiers in building the Real-Time DDoS flood Attack Monitoring and Detection RT-AMD predicting model or even use more datasets for executing the same experiments. For the above reasons I strongly recommend the acceptance of this paper. 
Author response: The following references, published in 2020, have been added to the references list and the manuscript have been updated with the following paragraph to review the added citations – added as the last paragraph in subsection (3.2 Anomaly-based detection). “Haider et al.[23] proposed a deep Convolutional Neural Network (CNN) framework for efficient DDoS attack detection in SDN. This proposed framework has been evaluated using hybrid state-of-the-art algorithms on CICIDS2017 dataset. Hwang et al.[24] proposed an unsupervised deep learning model for early network traffic anomaly detection, namely D-PACK based on CNN. The experimental results show low false-positive rate and high accuracy. Novaes et al.[25] proposed using short-term memory and fuzzy logic for DDoS attack detection and mitigation in SDN. The proposed system consists of three phases: Characterization, anomaly detection, and mitigation. The evaluation of this system has been conducted using CICDDoS2019 dataset with archived accuracy of 96.22%”. [23] Haider, S., Akhunzada, A., Mustafa, I., Patel, T. B., Fernandez, A., Choo, K. K. R., & Iqbal, J. (2020). A deep CNN ensemble framework for efficient DDoS attack detection in software defined networks. Ieee Access, 8, 53972-53983. [24] Hwang, R. H., Peng, M. C., Huang, C. W., Lin, P. C., & Nguyen, V. L. (2020). An unsupervised deep learning model for early network traffic anomaly detection. IEEE Access, 8, 30387-30399. [25] Novaes, M. P., Carvalho, L. F., Lloret, J., & Proença, M. L. (2020). Long short-term memory and fuzzy logic for anomaly detection and mitigation in software-defined network environment. IEEE Access, 8, 83765-83781. Experimental design The research question of this research is well defined and undoubtedly adds knowledge to an otherwise quite explored scientific area, that of attack monitoring and detection in cloud computing. The methods used, although not innovative, have certainly been used in the right way and the overall investigation is performed to a high technical standard. Authors response: Thank you very much. Validity of the findings Throughout the research there is no replication of pre-existing knowledge. Conclusions are both well stated and linked to original research question. Authors response: Thank you very much. "
Here is a paper. Please give your review comments after reading it.
315
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Fine-grained sentiment analysis is used to interpret consumers' sentiments, from their written comments, towards specific entities on specific aspects. Previous researchers have introduced three main tasks in this field (ABSA, TABSA, MEABSA), covering all kinds of social media data (e.g. review specific, questions and answers, and community-based). In this paper, we identify and address two common challenges encountered in these three tasks, including the low-resource problem and the sentiment polarity bias.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods.</ns0:head><ns0:p>We propose a unified model called PEA by integrating data augmentation methodology with the pre-trained language model, which is suitable for all the ABSA, TABSA and MEABSA tasks. Two data augmentation methods, which are entity replacement and dual noise injection, are introduced to solve both challenges at the same time. An ensemble method is also introduced to incorporate the results of the basic RNNs-based and BERT-based models.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>. PEA shows significant improvements on all three fine-grained sentiment analysis tasks when compared with state-of-the-art models. It also achieves comparable results with what the baseline models obtain while using only 20% of their training data, which demonstrates its extraordinary performance under extreme low-resource conditions.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Consumers worldwide have posted trillions of text comments on online shopping sites and social platforms to express their opinions. The efficiency of how modern merchandisers drive insights from those opinions would be the key to their success in the data-driven era. Sentiment analysis is such a solution for businesses to understand consumers' opinions effectively. Traditional coarse-grained sentiment analysis aims to identify the sentiment polarity of the given sentence. Different from that, fine-grained sentiment analysis is managed to match sentiments with corresponding entities and aspects in the given sentence. For example, given the comment 'I've used MacBookPro, it's convenient.' Coarse-grained sentiment analysis describes the whole sentence a positive sentiment. Fine-grained sentiment analysis describes a positive sentiment towards MacBookPro (entity) on its convenience level (aspect), which is a provided (sentence, aspect, entity) pair. Previous researchers have introduced three tasks on fine-grained sentiment analysis towards entities and aspects (definitions and 2 examples are illustrated in Table <ns0:ref type='table'>1</ns0:ref>) : 1. Aspect-Based Sentiment Analysis (ABSA), 2. Targeted Aspect-Based Sentiment Analysis (TABSA), 3. Multi-Entity Aspect-Based Sentiment Analysis (MEABSA). ABSA was primarily based on the review-specific data acquired from E-commerce or life service websites (e.g. Amazon, Yelp) where there is only one or even no entity mentioned in the data. Although performing well on consumer reviews, models designed for ABSA have limited performance on posts coming from social platforms (e.g. Twitter, Reddit) where there are multiple entities and aspects mentioned. For example, a software engineer on Twitter wrote 'I've used MacBookPro, it's convenient. But now I switched to ThinkPad because it's just as convenient and has a better price.' 
There are two entities introduced: MacBookPro and To alleviate the low-resource problem in various NLP tasks, data augmentations have been applied in previous works. The optional strategies mainly include word replacement, noise injection, text generation and so on. For example, it is useful to generate additional training examples that contain rare words in synthetically created contexts for machine translation <ns0:ref type='bibr'>(Fadaee, Bisazza &amp; Monz, 2017)</ns0:ref>. Another similar idea injected low-resource words into highresource sentences to improve the low-resource translation task <ns0:ref type='bibr' target='#b48'>(Xia et al., 2019)</ns0:ref>. Additionally, data augmentations such as synonym replacement and delexicalization have been applied to the NER task <ns0:ref type='bibr'>(Dai &amp; Adele&#65292; 2020)</ns0:ref> and dialogue language understanding <ns0:ref type='bibr' target='#b17'>(Hou et al., 2018)</ns0:ref> respectively. Kim and associates <ns0:ref type='bibr' target='#b21'>(Kim, Roh &amp; Kim, 2019)</ns0:ref> proposed a method for spoken language understanding by introducing noise in all slots without classifying types of slots to improve the performance of low-resource dataset with 'open-vocabulary' slots.</ns0:p><ns0:p>Research on Bias problems in NLP Bias, such as racial bias and gender bias <ns0:ref type='bibr' target='#b22'>(Kiritchenko &amp; Mohammad, 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Thelwall, 2018)</ns0:ref>, is also a trending topic of concern in different NLP researches. For example, Zhao and associates <ns0:ref type='bibr' target='#b53'>(Zhao et al., 2018)</ns0:ref> tried to mitigate gender bias by creating an augmented dataset identical to the original one by replacing the entities such as 'he' or 'she'. Another work formally proposed the Counterfactual data augmentation (CDA) for gender bias mitigation in the coreference resolution task, by replacing every occurrence of a gendered word in the original corpus with its flipped one <ns0:ref type='bibr'>(Lu et al., 2020)</ns0:ref>. Recently, there are some related works to deal with the low-resource and polarity bias problems in coarse-grained sentiment analysis, which aims to predict the sentiments of the given posts. An early work introduced a bias-aware thresholding method motivated by cost-sensitive learning <ns0:ref type='bibr' target='#b19'>(Iqbal, Karim &amp; Kamiran, 2015)</ns0:ref>. Recent works include designing a sentiment bias processing strategy for the lexicon-based sentiment analysis <ns0:ref type='bibr' target='#b14'>(Han et al., 2018)</ns0:ref>, and using the generationbased data augmentation method to deal with the low-resource problem in coarse-grained sentiment analysis <ns0:ref type='bibr'>(Gupta, 2019)</ns0:ref>. To the best of our knowledge, there is no recent work discussing solutions to low-resource or polarity bias problems in fine-grained sentiment analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods</ns0:head><ns0:p>ABSA, TABSA and MEABSA are three widely discussed tasks for fine-grained sentiment analysis, whose common objective is to predict the sentiment towards each aspect of each target entity. The detailed comparisons and examples can be found in Table <ns0:ref type='table'>1</ns0:ref> in the introduction section. This section introduces the methodologies, which we used to unify the ABSA, TABSA, and MEABSA tasks together with the same architecture. The proposed all-in-one solution to Predict sentiment towards Entities and Aspects is named PEA. Fig. 
<ns0:ref type='figure' target='#fig_1'>1</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>or</ns0:head><ns0:p>. The prediction target becomes towards all the aspects for the</ns0:p><ns0:formula xml:id='formula_0'>|&#119864; &#119898; | = 1 |&#119864; &#119898; | = 2 &#119910; &#119886;&#119904;&#119901;&#119890;&#119888;&#119905; &#119895; &#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894; target entity in . &#119875;&#119900;&#119904;&#119905; &#119898;</ns0:formula><ns0:p>For MEABSA, the most challenging task, there are multiple entities and aspects in , where &#119875;&#119900;&#119904;&#119905; &#119898;</ns0:p></ns0:div> <ns0:div><ns0:head>and</ns0:head><ns0:p>. It aims to predict towards the mentioned aspects for every entity</ns0:p><ns0:formula xml:id='formula_1'>|&#119864; &#119898; | &#8805; 1 |&#119860; &#119898; | &#8805; 1 &#119910; &#119886;&#119904;&#119901;&#119890;&#119888;&#119905; &#119895; &#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894; in . &#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894; &#119875;&#119900;&#119904;&#119905; &#119898;</ns0:formula><ns0:p>The general training workflow of PEA includes:</ns0:p><ns0:p>(1) given an original training set , generate a new training set based on entity replacement.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119863; &#119863;'</ns0:head><ns0:p>For the ABSA task, there is no entity involved, so the entity replacement step is skipped and &#119863; '</ns0:p><ns0:p>. For TABSA and MEABSA, entity replacement is conducted to get an entity-replaced = &#119863; dataset , and . The entity replacement used in PEA is introduced in the first part &#119875;&#119863; &#119863; ' = &#119863; &#8746; &#119875;&#119863; of subsection 'Data Augmentation'.</ns0:p><ns0:p>(2) An RNN-based model is trained on the new training set as one of the basic models. The &#119863; '</ns0:p><ns0:p>dual noise injection is conducted on the input posts, entities and aspects to get the noise-injected vectors. The dual injection used in PEA is introduced in the second part of subsection 'Data Augmentation'. Then, we take an attentional recurrent neural network-based model, CEA <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018)</ns0:ref>, as an example, to be the basic model, whose output is the predicted sentiment polarity distribution of the given inputs. It is introduced in the first part of subsection 'Basic Models'.</ns0:p><ns0:p>(3) A pre-trained language model is trained on the new training set as the other basic model.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119863; '</ns0:head><ns0:p>Auxiliary question sentences are constructed for training the BERT-based model, which can predict fine-grained sentiment polarity distribution with the given inputs. The detailed design is described in the second part of subsection 'Basic Models'.</ns0:p><ns0:p>(4) Finally, the ensemble method is applied to fuse the predicted sentiment polarity distribution of the RNN-based and BERT-based model as the outputs of PEA, which is the final predicted sentiment polarity. The fusion strategy is introduced in the third part of subsection 'Fusion Strategy'.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Augmentation</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Data augmentation is widely used to improve learning performance, prevent overfitting, and increase robustness under low-resource conditions. This section illustrates two innovative, taskspecific data augmentation methods that are deployed in the model. Entity Replacement. The low-resource problem in fine-grained sentiment analysis mainly comes from entities in the posts. This problem can be alleviated by increasing the low-resource entities. Among the data augmentation methods used in recent works for alleviating the lowresource problem in other NLP tasks, replacing words in context with similar ones is a viable data augmentation method <ns0:ref type='bibr'>(Fadaee, Bisazza &amp; Monz, 2017;</ns0:ref><ns0:ref type='bibr'>Xian et al., 2019;</ns0:ref><ns0:ref type='bibr'>Dai &amp; Adel, 2020)</ns0:ref>. Usually, similar words can be extracted from word similarity calculation <ns0:ref type='bibr' target='#b46'>(Wang &amp;Yang, 2015)</ns0:ref>, and can also be extracted from a handcraft ontology such as WordNet.</ns0:p><ns0:p>In previous works, any word in a sentence can be replaced. This kind of replacement is extremely risky in fine-grained sentiment analysis tasks. For example, if a sentiment word, such as 'happy', was replaced, it would unintentionally change the sentiment polarity at the same time. To avoid this kind of situation, we proposed the entity replacement method which successfully addresses this problem. to train models. = &#119863; &#8746; &#119875;&#119863; In step 2, target entities are selected dynamically based on the scarcity of entities in the original training set so that every entity will have sufficient training instances eventually. In other words, the fewer times an entity presents in the original training set, the more likely it will be selected as the target entity. The detailed probability that an entity is selected is calculated as follows:</ns0:p><ns0:p>(1) inverse proportional function, where . &#119909; -1 = 1 &#119909; Table <ns0:ref type='table'>2</ns0:ref> shows an example of such a replacement. Besides increasing the number of training instances, we think data augmentation also helps solve the polarity bias problem. For example, if an entity is always labeled positive in the training set, it will be more likely to be predicted positive no matter what the post is about. The proposed data augmentation methods help the polarity balance for entities because the entity may be replaced into a positive or neutral or negative expression randomly. To conclude, the low-resource entity replacement is designed to increase the number of training instances, especially for the low-resource entities, and help solve the polarity bias problem in sentiment prediction towards multiple entity settings.</ns0:p><ns0:formula xml:id='formula_2'>P(&#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894; ) = |&#119898;&#119890;&#119899;&#119905;&#119894;&#119900;&#119899;(&#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894; )| -1 &#8721; |&#119864;| &#119895; = 1 |&#119898;&#119890;&#119899;&#119905;&#119894;&#119900;&#119899;(&#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119895; )| -1 ,&#8704; &#119894; &#8712; [1,</ns0:formula><ns0:p>Dual Noise Injection. To improve the generalization ability of PEA, we also involve the noise injection method. 
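As a concrete illustration of the entity replacement just described, the sketch below implements the inverse-frequency sampling of Equation (1) together with the duplicate-and-replace step. The instance format (a dict with 'text' and 'entity' fields) and the per-instance sampling of the replacement target are simplifying assumptions made only for illustration.

```python
import random
from collections import Counter

def entity_selection_probs(entities):
    """Equation (1): the fewer times an entity appears in the training set,
    the higher its probability of being chosen as the replacement target."""
    counts = Counter(entities)
    inverse = {e: 1.0 / c for e, c in counts.items()}
    norm = sum(inverse.values())
    return {e: w / norm for e, w in inverse.items()}

def entity_replacement(dataset, rng=random):
    """Duplicate the training set D, replace each duplicated instance's entity
    with a sampled target entity, and return D' = D ∪ PD."""
    probs = entity_selection_probs(inst["entity"] for inst in dataset)
    targets, weights = zip(*probs.items())
    pseudo = []
    for inst in dataset:
        target = rng.choices(targets, weights=weights, k=1)[0]
        pseudo.append({**inst,
                       "text": inst["text"].replace(inst["entity"], target),
                       "entity": target})
    return dataset + pseudo
```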
In previous NLP tasks, such as machine translation <ns0:ref type='bibr' target='#b4'>(Cheng et al., 2018)</ns0:ref> and spoken language understanding <ns0:ref type='bibr' target='#b21'>(Kim, Roh &amp; Kim, 2019)</ns0:ref>, it has shown the effectiveness of improving the model's generalization ability by injecting noises. In these works, noise is usually injected into the context representation for the post directly. For fine-grained sentiment analysis, the inputs include context texts, entities, entity terms, aspects and aspect terms. It is not applicable to only inject noises on context representations like previous works. Therefore, we propose the idea of dual noise injection: a noise is injected into the representation of entity and entity terms in the context at the same time. A similar practice is performed on the aspect and aspect terms. In this task, the dual noise injection is used to simulate new entities and new aspects, enabling the model to make better predictions when it comes across low-resource entities or aspects. Following the common choice of previous works <ns0:ref type='bibr' target='#b4'>(Cheng et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Kim, Roh &amp; Kim, 2019)</ns0:ref>, we also use the Gaussian noise to inject noises into the embedding space of posts, entities and aspects. Fig. <ns0:ref type='figure' target='#fig_6'>2</ns0:ref> is an example to illustrate the detailed processes of dual noise injection. The dual noise injection consists of 3 steps: &#61623; We first express the post, entity, and aspect in vectors space , ,</ns0:p><ns0:formula xml:id='formula_3'>&#119907; &#119908; &#8712; &#8477; &#119879; &#215; &#119896; , &#119907; &#119890; &#8712; &#8477; &#119896; &#119907; &#119886; &#8712; &#8477; &#119896;</ns0:formula><ns0:p>where , represents the number of words in the post, and is the</ns0:p><ns0:formula xml:id='formula_4'>&#119907; &#119908; = {&#119907; &#119908; 1 , &#8230;,&#119907; &#119908; &#119879; } &#119879; &#119896; dimension of representations.</ns0:formula><ns0:p>The embedding vectors can be initialized by GloVe <ns0:ref type='bibr' target='#b31'>(Pennington, Socher &amp; Manning, 2014)</ns0:ref>. &#61623; Then we sample noise vectors and for entity and aspect respectively from</ns0:p><ns0:formula xml:id='formula_5'>&#119899; &#119890; &#8712; &#8477; &#119896; &#119899; &#119886; &#8712; &#8477; &#119896;</ns0:formula><ns0:p>the Gaussian distribution. &#61623; At last, we extract indicator vector for entity terms indicating the location of</ns0:p><ns0:formula xml:id='formula_6'>&#119894; &#119890; = {&#119894; 0 &#119890; ,&#8230;,&#119894; &#119879; &#119890; }</ns0:formula><ns0:p>entity terms in the post. Each element in is binary. is set to 1 when the word in the</ns0:p><ns0:formula xml:id='formula_7'>&#119894; &#119890; &#119894; &#119905; &#119890; &#119905; &#119905;&#8462;</ns0:formula><ns0:p>post is an entity term, otherwise, it is set to 0. Note that an entity term may consist of one or more words. In the same manner, we can get an indicator vector for aspect term. Then, we &#119894; &#119886; inject the noise to the entity, the aspect, and the post:</ns0:p><ns0:p>(2) &#119907; ' &#119890; = &#119907; &#119890; + &#119899; e .</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_8'>&#119907; ' &#119886; = &#119907; &#119886; + &#119899; a . 
(4) &#119907; ' &#119908; &#119894; = &#119907; &#119908; &#119894; + &#119894; &#119890; &#215; &#119899; e + &#119894; &#119886; &#215; &#119899; a .</ns0:formula><ns0:p>In step 2, the same noise vector (e.g. ) needs to be applied to the entity and entity term. This is &#119899; e to ensure the new-generated entity and entity term remain the same relative location in the embedding space. We also apply the same noise vector (e.g. ) to the aspect and the aspect term &#119899; a in the same manner. The noise injected into the entity and aspect does not have to be equal. Also, if the noise level is not large enough, it won't substantially change the effect of injections.</ns0:p><ns0:p>In order to test what is the best noise level in this case, we conduct experiments to determine the settings, which is introduced in section 'Experimental Settings'.</ns0:p></ns0:div> <ns0:div><ns0:head>Basic Models</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Recently, both RNN-based models and BERT-based models have shown effectiveness in the fine-grained sentiment analysis <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b51'>Yang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Sun, Huang &amp; Qiu, 2019;</ns0:ref><ns0:ref type='bibr'>Xu et al., 2020;</ns0:ref><ns0:ref type='bibr'>)</ns0:ref>. Due to the different structures of RNN and BERT, both kinds of models have advantages and weaknesses respectively. PEA incorporates both models to help make the final prediction more accurate. RNN-based Model for Fine-grained Sentiment Analysis. The CEA model is designed for MEABSA task, and can also be used for ABSA and TABSA tasks. It takes the word vectors of the post, the entity vectors and aspect vectors as inputs, and predicts the fine-grained sentiments towards the given aspect of the entity. To incorporate noise injection with CEA, we feed the noise-injected vectors to CEA, the general structure of noise-injected CEA is as Fig. <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> shows. Firstly, we feed every noise-injected word vector in the post to CEA. An LSTM layer is &#119907; ' &#119908; &#119894; applied to extract the semantics of the post after a few data processing layers. After that, a deep memory network is applied to update entity and aspect representations with the given noiseinjected entity vector and aspect vector . The updated representations are fed into a dense &#119907; '</ns0:p></ns0:div> <ns0:div><ns0:head>&#119890; &#119907; '</ns0:head><ns0:p>&#119886; layer to predict the final sentiment. For detailed explanation of CEA, refer to the original paper <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Because CEA requires entities and aspects as inputs, it is naturally suitable for the TABSA and MEABSA tasks. For the ABSA task, if there is no entity mentioned in the post, we can set the entity vector to a zero vector as the input. This makes the CEA-based basic model be able to deal with all the ABSA, TABSA and MEABSA tasks. Pre-trained Language Model for Fine-grained Sentiment Analysis. The pre-trained language model is useful for enabling low-resource tasks to benefit from a huge amount of unlabeled data by pre-training. 
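Before turning to BERT, the dual noise injection of Equations (2)-(4) can be summarized in a short NumPy sketch. The dimension conventions follow the notation above; the noise level sigma is only a placeholder, since the actual value is tuned experimentally as described in 'Experimental Settings'.

```python
import numpy as np

def dual_noise_injection(v_w, v_e, v_a, i_e, i_a, sigma=0.05, rng=None):
    """v_w: (T, k) post word embeddings; v_e, v_a: (k,) entity/aspect embeddings;
    i_e, i_a: (T,) binary indicators of entity/aspect term positions.
    The same noise vector is added to the entity and to its in-context terms
    (likewise for the aspect), so both shift together in embedding space."""
    rng = rng or np.random.default_rng()
    n_e = rng.normal(0.0, sigma, size=v_e.shape)              # entity noise
    n_a = rng.normal(0.0, sigma, size=v_a.shape)              # aspect noise
    v_e_new = v_e + n_e                                       # Eq. (2)
    v_a_new = v_a + n_a                                       # Eq. (3)
    v_w_new = v_w + np.outer(i_e, n_e) + np.outer(i_a, n_a)   # Eq. (4)
    return v_w_new, v_e_new, v_a_new
```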
Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b9'>(Devlin et al., 2018)</ns0:ref> is one of the key innovations in language representation learning <ns0:ref type='bibr' target='#b18'>(Howard &amp; Ruder, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Peters et al., 2018)</ns0:ref>. It has achieved good results in many natural language processing tasks <ns0:ref type='bibr'>(Acheampong et al., 2021;</ns0:ref><ns0:ref type='bibr'>van Aken et al., 2019)</ns0:ref>. BERT uses bidirectional pre-training for language representations, and it is pre-trained on two tasks: masked language model for understanding the relationship between words, and next sentence prediction for understanding the relationship between sentences for downstream tasks.</ns0:p><ns0:p>The design of pre-training makes use of a huge amount of unlabeled data, making it suitable for low-resource situations. Thus, we incorporate BERT to further enhance performance. Sun <ns0:ref type='bibr' target='#b38'>(Sun, Huang &amp; Qiu, 2019)</ns0:ref> argued that constructing an auxiliary question sentence for the BERT model is useful in the TABSA task. We follow the conclusion and make the auxiliary question sentence for the entity and aspect with the template of 'What is the sentiment towards the [aspect] of [entity]?'. Then the sentiment classification task is turned into a sentence pair classification task. The label set of this setting includes {Positive, Neutral, Negative}. The BERT model takes two paragraphs as input with the token [CLS] at the beginning and [SEP] at the end of each paragraph. We set the post as the first paragraph and the auxiliary question sentence as the second. Here is an example.</ns0:p></ns0:div> <ns0:div><ns0:head>Input:</ns0:head><ns0:p>[CLS] I've used MacBookPro, it's convenient.</ns0:p><ns0:p>[SEP] What is the sentiment towards the convenience of MacBookPro? [SEP] Output: Positive By constructing auxiliary question sentences along with the posts, we can generate inputs suitable for training BERT-based models, whose outputs are the predictions of sentiments towards targeted aspects of entities. The construction of inputs can be applied to the TABSA and MEABSA directly. For the ABSA task, there is no entity mentioned in the post, the underlined part in the constructed question template, which is 'What is the sentiment towards the [aspect] of [entity]?', will be omitted. This makes the BERT-based basic model be able to deal with all the ABSA, TABSA and MEABSA tasks.</ns0:p></ns0:div> <ns0:div><ns0:head>Fusion Strategy</ns0:head><ns0:p>Ensemble methods can improve the predictive performance of a single model by training multiple models and combining their predictions. The weighting method is one of the effective strategies to fuse outputs, which assign weights to each basic model to combine the final decision <ns0:ref type='bibr' target='#b35'>(Sagi &amp; Rokach, 2018)</ns0:ref>, including simple averaging and weighted averaging <ns0:ref type='bibr'>(Zhou, 2021)</ns0:ref>. We follow the strategy of simple averaging and combine the data augmented CEA with BERT to be the final model. We train the two models separately, and ensemble their predictions by taking the sentiment polarity with the largest averaged predicted probability as the final output. 
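As an illustration of the auxiliary-question input and of the simple-averaging fusion (formalized as Equation (5) in the next paragraph), the sketch below uses the Hugging Face transformers interface. The checkpoint name is a placeholder, and the sequence-classification head would need to be fine-tuned on the task before its outputs are meaningful.

```python
import numpy as np
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def bert_sentiment_probs(post, entity, aspect):
    """Sentence-pair input: [CLS] post [SEP] auxiliary question [SEP]."""
    question = f"What is the sentiment towards the {aspect} of {entity}?"
    encoded = tokenizer(post, question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = bert(**encoded).logits
    return torch.softmax(logits, dim=-1).squeeze(0).numpy()

def fuse(p_rnn, p_bert, labels=("positive", "neutral", "negative")):
    """Average the two basic models' predicted distributions and return
    the polarity with the largest averaged probability."""
    avg = (np.asarray(p_rnn) + np.asarray(p_bert)) / 2.0
    return labels[int(np.argmax(avg))]

# e.g. fuse([0.7, 0.2, 0.1],
#           bert_sentiment_probs("I've used MacBookPro, it's convenient.",
#                                "MacBookPro", "convenience"))
```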
For a given post , the fine-grained sentiment prediction towards of , denoted as</ns0:p><ns0:formula xml:id='formula_9'>&#119875;&#119900;&#119904;&#119905; &#119898; &#119886;&#119904;&#119901;&#119890;&#119888;&#119905; &#119895; &#119890;&#119899;&#119905;&#119894;&#119905;&#119910; &#119894;</ns0:formula><ns0:p>, is calculated as Equation ( <ns0:ref type='formula'>5</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Time Complexity Analysis</ns0:head><ns0:p>Compared with existing deep learning-based models, our proposed PEA model involves entity replacement, dual noise injection and prediction fusion as additional modules. The analysis of time complexity for these three parts is described as follows.</ns0:p><ns0:p>For entity replacement, we calculated the selected probability for every entity, whose time complexity is</ns0:p><ns0:p>, where is the total number of entities in the dataset. We then traversed &#119874;(&#119864;) &#119864; every instance and conduct entity replacement, whose time complexity is , where is the &#119874;(&#119873;) &#119873; number of instances in the data set. The total time complexity of entity replacement is &#119874;(&#119864;)</ns0:p><ns0:p>.</ns0:p></ns0:div> <ns0:div><ns0:head>+ &#119874;(&#119873;)</ns0:head><ns0:p>For dual noise injection, we traversed every token in each instance to find the tokens referring to entity and aspect, whose time complexity is , where is the length of each instance. We &#119874;(&#119879;) &#119879; added dual noises on all instances, whose time complexity is also . The total time &#119874;(&#119873;) complexity of dual noise injection is .</ns0:p></ns0:div> <ns0:div><ns0:head>&#119874;(&#119879;) &#215; &#119874;(&#119873;)</ns0:head><ns0:p>For prediction fusion, we fused the prediction with the weighted summation operation on every category for each instance, whose time complexity is , where is the number of &#119874;(&#119888;) &#215; &#119874;(&#119873;) &#119888; categories of sentiments.</ns0:p><ns0:p>The total time complexity of extra operations in our proposed PEA model is (&#119874;(&#119864;) + &#119874;(&#119873;))</ns0:p><ns0:p>. + (&#119874;(&#119879;) &#215; &#119874;(&#119873;)) + (&#119874;(&#119888;) &#215; &#119874;(&#119873;))</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments and Analysis</ns0:head><ns0:p>In this section, we introduce the experimental settings and results to validate the effectiveness of our PEA model.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Settings</ns0:head><ns0:p>We evaluate four benchmark datasets of three tasks, including datasets in two languages: English and Chinese. Statistics of the used datasets are displayed in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>&#8226; Restaurant and Laptop are two datasets from SemEval 2014 <ns0:ref type='bibr'>(Pontiki et al., 2014)</ns0:ref> for ABSA. Both datasets are reviews in English and each review contains aspects and corresponding sentiment polarities, including positive, negative and neutral.</ns0:p><ns0:p>&#8226; SentiHood is a widely used dataset for TABSA <ns0:ref type='bibr' target='#b34'>(Saeidi et al., 2016)</ns0:ref>. It consists of 5,215 sentences in English, and 3,862 of which contain a single aspect, the rest contains multiple aspects. Each sentence is annotated with a list of tuples, which are aspect, given entity and corresponding sentiment polarity, including positive and negative. 
The whole dataset is split into train, validation and test set.</ns0:p><ns0:p>&#8226; BabyCare is a large public dataset for MEABSA <ns0:ref type='bibr' target='#b50'>(Yang et al. 2018)</ns0:ref>. It consists of babycare reviews in Chinese and each review is in the format of a list of tuples, which are context, aspects, corresponding entities and sentiment polarities, including positive, negative and neutral. The whole dataset is split into train, validation and test set. Common settings. For the BERT and CEA models, we use default parameters. For all English datasets, we use BERT-Base English models 1 and 6B300d GloVe <ns0:ref type='bibr' target='#b31'>(Pennington, Socher &amp; Manning, 2014)</ns0:ref> word embeddings 2 . For the Chinese dataset, we use BERT-Base Chinese and the same word vectors provided by <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018)</ns0:ref>. For multi-word entity terms and aspect terms, we follow the preprocessing in previous works <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Song et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b51'>Yang et al., 2019)</ns0:ref>. We use the average vectors of all the words in the entity/aspect term as the entity/aspect term vectors. Task specific settings. For ABSA task, the Restaurant and Laptop datasets are used for experiments. Because there is no entity in these datasets, so entity replacement in data augmentation is removed when implementing PEA. For TABSA task, the SentiHood dataset is used for experiments. Because aspect location is not given in this dataset, aspect noise injection is removed in this task. For MEABSA task, the BabyCare dataset is used for experiments. When implementing PEA, both entity replacement and noise injection are remained in this task. Data augmentation settings. We perform entity replacement on the training data for the whole dataset and merge the pseudo instances with original instances. According to the proposed entity replacement method, those entities, which are low-resource in the original training set, have a higher probability to be chosen for replacement. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> lists the top 10 low-resource entities in the BabyCare dataset, and displays the number of instances that belong to every category for both the original training set and the entity-replaced dataset. We can observe that, for those lowresource entities, such as 'Kabrita', the number of negative and neutral instances has significantly increased by using entity replacement. This can help relieve both the low-resource and polarity bias problems. For noise injection, and are two parameters to be determined. We follow the common setting &#120583; &#120590; in previous works <ns0:ref type='bibr' target='#b21'>(Kim, Roh &amp; Kim, 2019)</ns0:ref> (2) compared with two BERT-based baselines, our proposed PEA achieves further improvement in most evaluation metrics. This may be because the prediction of PEA comes from both data augmented CEA and BERT, which helps ensemble the predictions of two basic models.</ns0:p><ns0:p>(3) different from the performance in ABSA and MEABSA, the improvement of PEA in the TABSA task seems slightly in accuracy and AUC score, this may be because aspect location is not given in this dataset (but given in other tasks), therefore, aspect noise injection is removed for this experiment. 
So we have conducted a statistical analysis test in the following section to show the performance difference between the two models is statistically significant.</ns0:p><ns0:p>Results on the MEABSA Task. We evaluate the Chinese benchmark dataset BabyCare for the MEABSA task. We compare with all the published state-of-the-art baselines, including CEA <ns0:ref type='bibr' target='#b50'>(Yang et al., 2018)</ns0:ref>, DT-CEA <ns0:ref type='bibr' target='#b51'>(Yang et al., 2019)</ns0:ref>, Cold-start Aware Deep Memory Network (CADMN) <ns0:ref type='bibr' target='#b36'>(Song et al., 2019)</ns0:ref>. These methods are exactly designed for this task. We also compare with MemNet <ns0:ref type='bibr'>(Tang, Qin &amp; Liu, 2016)</ns0:ref>, ATAE-LSTM <ns0:ref type='bibr' target='#b47'>(Wang et al., 2016)</ns0:ref>, IAN <ns0:ref type='bibr' target='#b28'>(Ma et al., 2017)</ns0:ref>, and their modified versions MemNet+, ATAE-LSTM+ and IAN+, which are used as baselines in a recent MEABSA work <ns0:ref type='bibr' target='#b36'>(Song et al., 2019)</ns0:ref>. We follow the designs introduced in <ns0:ref type='bibr' target='#b36'>(Song et al., 2019)</ns0:ref>: these three modified plus versions remain the basic model structure of MemNet, ATAE-LSTM and IAN respectively. The additional entities in the MEABSA task are treated as the aspects, and are added to the models in the same manner of aspects. These methods are originally designed for the ABSA task, and they are often regarded as baselines in former MEABSA research. Following the former research, Accuracy and Marco-F1 are evaluation metrics for this dataset, Marco-Precision and Macro-Recall are also reported. Table <ns0:ref type='table' target='#tab_10'>7</ns0:ref> displays the comparisons between our model and baselines. We can have the following observations:</ns0:p><ns0:p>( (4) compared with all the baselines, our proposed method PEA achieves significant improvement under all evaluation metrics. Compared with the previous state-of-the-art CADMN model, the improvements of PEA reach about 4% in accuracy and 5% in F1. The MEABSA is the most challenging fine-grained sentiment analysis task, this experimental result shows PEA has a significant advantage in the MEABSA task. Statistical Analysis Test. Refer to the previous works <ns0:ref type='bibr'>(Li et al., 2020)</ns0:ref>, we conduct McNemars test as the statistical analysis test to further show the statistical difference between two models. &#119901; -value is the significance level, which means the performance difference between the two models.</ns0:p><ns0:p>If the estimated -value is lower than 0.05, the performance difference between the two models &#119901; is statistically significant. Table <ns0:ref type='table'>8</ns0:ref> displays the -values between PEA and other models on three &#119901; sentiment analysis tasks respectively. We can observe that the performance differences between PEA and other baselines are statistically significant in all tasks, which show the effectiveness of the proposed PEA model from the perspective of statistical analysis. For example, in the TABSA task, the improvement of PEA compared with BERT-pair-NLI-M is not very high in accuracy, which is 94.3% vs 93.8% in Table <ns0:ref type='table' target='#tab_9'>6</ns0:ref>. In the statistical analysis test, the estimated -value between PEA and BERT-pair-&#119901; NLI-M is 0.0174. 
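Such a p-value can be reproduced from the two models' per-instance predictions; a sketch using statsmodels' implementation of McNemar's test, assuming the prediction vectors of both models on the same test set are available, is shown below.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(y_true, pred_a, pred_b):
    """McNemar's test on the 2x2 correct/incorrect contingency table
    of two models evaluated on the same test instances."""
    a_ok = np.asarray(pred_a) == np.asarray(y_true)
    b_ok = np.asarray(pred_b) == np.asarray(y_true)
    table = [[int(np.sum(a_ok & b_ok)),  int(np.sum(a_ok & ~b_ok))],
             [int(np.sum(~a_ok & b_ok)), int(np.sum(~a_ok & ~b_ok))]]
    return mcnemar(table, exact=False, correction=True).pvalue
```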
According to the definition of -value, it shows that the performance &#119901; difference between BERT-pair-NLI-M and PEA is statistically significant. Additionally, by</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>observing Table <ns0:ref type='table' target='#tab_10'>7</ns0:ref> and Table <ns0:ref type='table'>8</ns0:ref> together, we can find PEA has significant advantages in the most challenging MEABSA task.</ns0:p></ns0:div> <ns0:div><ns0:head>Ablation Study</ns0:head><ns0:p>Experimental results so far show that the PEA approach is superior to the baselines on all the ABSA, TABSA and MEABSA on selected datasets. Because PEA consists of data augmented CEA and BERT, we would like to further investigate the effectiveness of each part in the model. A case study is also introduced in this section.</ns0:p></ns0:div> <ns0:div><ns0:head>Effectiveness of Components in PEA</ns0:head><ns0:p>Ablation study is used to show how each part of the model affects the performance by removing them. We conduct experiments on all four datasets of three tasks for comparisons. Experimental results are as Table <ns0:ref type='table'>9</ns0:ref> shows. The proposed PEA model integrates data augmented CEA and BERT. Because entity replacement and noise injection are applied to data augmented CEA, we use CEA, CEA+EntityReplacement (CEA+ER for short) and CEA+EntityReplacement+NoiseInjection (CEA+ER+NI for short) respectively for ablation study to show the effectiveness of applying two data augmentation techniques. The BERT-based model is also used for comparisons in ablation studies. We can have the following observations from Table <ns0:ref type='table'>9</ns0:ref>:</ns0:p><ns0:p>(1) comparing CEA and CEA+ER, we can find involving entity replacement can have improvement on MEABSA and TABSA tasks. We also counted the number of instances for every entity based on the original training set and the entity-replaced dataset. The statistics are demonstrated with the box plot in Fig. <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. It shows that using the proposed entity-replacement method can significantly increase the number of instances for low-resource entities, and all entities have at least 252 instances for training. For ABSA, there is no entity provided in the dataset, so the entity replacement procedure is removed.</ns0:p><ns0:p>(2) by adding noise injection, the CEA+ER+NI model achieves about 1.3% improvement on the Restaurant dataset over the CEA+ER model, and achieves slight improvement on other datasets. These observations show that using entity replacement and noise injection can bring positive impacts on fine-grained sentiment analysis. This may be because using data augmentation can increase the number of training instances, especially for low-resource entities and aspects, and help overcome polarity bias.</ns0:p><ns0:p>(3) by comparing the performance of PEA with the BERT-based model and data augmented CEA model, PEA achieves the best performance in most cases. The strength of BERT-based model is that it makes use of a huge amount of unlabeled data by pre-training, but it also has weaknesses. The BERT model depends on the Transformer <ns0:ref type='bibr' target='#b41'>(Vaswani et al., 2017)</ns0:ref>, which further mainly relies on its self-attention mechanism. 
It has been suggested that self-attention has limitations that it cannot process input sequentially <ns0:ref type='bibr' target='#b7'>(Dehghani et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Hao et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Shen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Hahn, 2020)</ns0:ref>. Such a weakness is just the strength of recurrent neural networks, which is one of the core components in CEA. Our model PEA combines the advantages of both and performs the best in most cases. To better understand the strengths and weaknesses of data augmented CEA and BERT, we carry out a case study in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Case Study</ns0:head><ns0:p>We give empirical validation on the strengths and weaknesses of two basic models, including the BERT-based model and data augmented CEA, by a further case study on misclassifications of both models. We test on the most challenging task MEABSA and use the corresponding We also give the third example in Table <ns0:ref type='table'>10</ns0:ref>, where all the CEA, BERT-based model and PEA made the wrong prediction. The gold output should be negative, but all models predicted it as neutral. The possible reason is that there are no aspect terms directly towards the target entity 'Kao', which cause the model to give the prediction as neutral.</ns0:p></ns0:div> <ns0:div><ns0:head>Results on Challenging Conditions</ns0:head><ns0:p>There are two challenges in sentiment prediction towards entities and aspects: the low-resource problem and the polarity bias problem. In this section, we evaluate the negative effect of challenges and the ability of models to solve them.</ns0:p></ns0:div> <ns0:div><ns0:head>Results on Extreme Low-Resource Conditions</ns0:head><ns0:p>To further test the model's performance under extreme low-resource conditions, we randomly selected 5%, 10%, 20%, and 50%, each time, from the original dataset as our training dataset. All tests are performed under the most challenging Babycare dataset. Experimental results are as Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> shows. The x-axis refers to the percentage of data used for training, the y-axis refers to the Macro-F1 of different models. ER and NI are the abbreviations of entity replacement and noise injection. We can have the following observations from Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>.</ns0:p><ns0:p>(1) for all models, as the percentage of the training set used decreases, the models' performance drops significantly, which further illustrates the significance of the low-resource problem on sentiment prediction.</ns0:p><ns0:p>(2) CEA+ER outperforms the CEA model under all the low-resource conditions, which shows the effectiveness of using entity replacement. By using noise injection, the CEA+ER+NI achieves further improvements over CEA and CEA+ER.</ns0:p><ns0:p>( <ns0:ref type='table' target='#tab_11'>11</ns0:ref> shows, the last column displays the decline between the performance on the Original test set and EPB test set.</ns0:p><ns0:p>After comparing the sentiment prediction results from using the evident polarity biased data with the results from using the origin data, we have the following observations:</ns0:p><ns0:p>(1) the performance of all models has varying degrees of decline on the polarity biased EPB dataset. 
This shows the polarity bias problem is one of the challenges in fine-grained sentiment analysis.</ns0:p><ns0:p>(2) comparing CEA and CEA+DA, the performance on the EPB test dataset is close to the performance on the original test set. This is because data augmentations can relieve the polarity bias problem by providing plenty, omni-polar sentiment training data, and reduced the variance of test results to offer more stable performance. This shows applying data augmentations can address the polarity bias problem in fine-grained sentiment analysis and make the model more generality.</ns0:p><ns0:p>(3) comparing CEA and the BERT-based model, the performance on the original test set of the BERT-based model has a significant improvement than that of CEA. (4) PEA achieves the best performance on the original test set, and relieves the polarity problem on the EPB test at the same time, which also shows the necessity and effectiveness of using the ensemble methods to fuse the predictions of CEA and BERT based models with data augmentations.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, we developed the PEA model, which unified the ABSA, TABSA, and MEABSA tasks together for the first time and provided an all-in-one solution to interpret consumers' opinions on all kinds of social media platforms. For the first time, we analysed the effect of the sentiment polarity bias problem in these tasks. Most importantly, we created two innovative, task-specific methods to alleviate the low-resource problem and the polarity bias problem, not only getting promising experimental results, but also providing inspiration for successors to make more contributions in this area. For future work, there are two possible extensions worth considering. The first one is to look for new ways to combine pre-trained language models with RNN-based models, to integrate both advantages. The second one is to further investigate more types of fine-grained sentiment analysis, and propose unified models handling various finegrained sentiment-related tasks, for example, emotion cause analysis. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Entity replacement is used to generate pseudo instances for training. The entire process involves 3 steps: &#61623; Creating a duplicate of the original training set . &#119863; &#61623; Replacing each entity in the duplicated dataset with the target entity to get an entity-replaced dataset . 
&#119875;&#119863; &#61623; Combining the original dataset with the entity-replaced dataset as the new training dataset &#119863; '</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>( 1 )</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>by observing the accuracy and F1 performance, two Capsule Network-based models TransCap and IACapsNet are much better than other previous baselines. This is because the key components of TransCap and IACapsNet are recurrent neural works and attention mechanisms. It shows that the RNN-based model has advantages in predicting fine-grained sentiments over conventional methods.(2) by observing the precision and recall on both datasets, the recall scores of most models include TD-LSTM, ATAE-LSTM, IAN, RAM and ASGCN are much worse, while PEA can have better performance.(3) compared with all the baselines, our proposed model PEA achieves significant improvements on both datasets. The experimental results show the PEA model is superior to other baselines in the ABSA task under all evaluation metrics. Results on the TABSA Task. We evaluate the English benchmark dataset SentiHood for the TABSA task. It consists of 5,215 sentences, 3,862 of them contain a single target, and the remainder multiple targets. We compare with all the published state-of-the-art baselines, including Logistic Regression (LR)<ns0:ref type='bibr' target='#b34'>(Saeidi et al., 2016)</ns0:ref>, LSTM+TA+SA<ns0:ref type='bibr' target='#b29'>(Ma, Peng &amp; Cambria, 2018)</ns0:ref>, SenticLSTM<ns0:ref type='bibr' target='#b29'>(Ma, Peng &amp; Cambria, 2018)</ns0:ref>, Dmu-Entnet<ns0:ref type='bibr' target='#b26'>(Liu, Cohn &amp; Baldwin, 2018)</ns0:ref>, RE+Delayed-memory<ns0:ref type='bibr' target='#b24'>(Liang et al., 2019)</ns0:ref>, BERT-pair-QA-B and BERT-pair-QA-M<ns0:ref type='bibr' target='#b38'>(Sun, Huang &amp; Qiu, 2019)</ns0:ref>. Following the former research in the TABSA task, Accuracy and AUC are usually reported and used as evaluation metrics, in the paper, Marco-Precision, Macro-Recall and Marco-F1 are also reported. Results on TABSA are presented in Table6. We can have the following observations:(1) BERT-pair-QA-M and BERT-pair-QA-B are the previous state-of-the-art models. Compared with other none-BERT based baselines, BERT-pair-QA-M and BERT-pair-QA-B outperform the LR, LSTM+TA+SA, SenticLSTM, Dmu-Entnet and RE+Delayed-memory models in both accuracy and AUC score. This result shows the effectiveness of the pre-trained language model for fine-grained sentiment analysis.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>1) MemNet, ATAE-LSTM, and IAN in the first three lines only model aspects while ignoring entity modeling. Their performances are worse than the plus versions MemNet+, ATAE-LSTM+, and IAN+, which model the entity in the same manner as aspect, illustrating the effectiveness of entity modeling in the MEABSA task. (2) The CEA model combines the advantages of both attention-based LSTM and deep memory networks, the former is the key component of ATAE-LSTM+ and the latter is the key component of MemNet+. The performance of CEA is much better than ATAE-LSTM+ and MemNet+, which reaches about 15% in accuracy. This shows that the CEA model has advantages in the MEABSA task, and is more suitable to be chosen as an RNN-based basic model for PEA. (3) DT-CEA and CADMN are two extension models based on CEA. DT-CEA incorporated dependency information to improve CEA. 
CADMN used a frequency-guided attention mechanism to improve CEA. The performance of CADMN and DT-CEA are comparable to each other and are little better than CEA.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Babycare dataset for the case study. To show the stability of the models rather than the occasionality, we have trained the BERT-based model, the Data augmented CEA model and the PEA model five times. The predictions of two representative examples are as Table 10 shows. For example 1, the BERT-based model makes the same misclassification on the inputs five times and the data augmented CEA model achieves the correct predictions. Example 2 is just the opposite. Such stable misclassifications reveal the defects of both models. The first example has a special pattern: the coreference structure of '...the former..., ...the latter...'. The second example consists of two simple sentences. Correctly predicting the first example need the ability of global sequence or structure understanding which is the advantage of recurrent neural networks. The recurrent neural network is one of the core components of CEA. Correctly predicting the second example need the ability of local attention which is the advantage of self-attention, which is the core component of the BERT-based model. PEA fuses the prediction with both BERT-based model and data augmented CEA model based on ensemble methods, which make the correct prediction on both examples. This case study further helps illustrate the value and necessity of ensembling two basic models.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>) for the BERT-based model, when the resource is extremely low, the BERT-based model deteriorates sharply. For example, when 5% of data is used for training, the Macro-F1 of BERTbased model and PEA is 57.16% vs 64.37%. This shows that the combination of the data augmented CEA and BERT-based model for PEA can boost the stability of the model. (4) the dotted line in red refers to the baseline results with 100% data for training, we can observe from Fig. 6 that when only 20% data are used for training, the proposed PEA can achieve a similar performance of the CEA model with full-resource data for training. With the size of training data becoming larger, the improvement of PEA becomes more obvious. This shows the PEA model, which combines data augmented CEA with BERT-based model, has advantages under lowresource conditions. Results on Evident Polarity Biased Conditions Polarity bias occurs when sentiment polarity distribution towards an entity is not uniform. Polarity bias reduces the performance when sentiments towards an entity diverge in the training set and in the test set (e.g. 70% of sentiment towards entity A are positive in the training set while 60% of which are negative in the test set). We create a new test set named EPB test set, which consists of all the instances with entities polarity biased from the original test set. Using the BabyCare test set, we find entities in 30% of data (1070 out of 3677) have the evident polarity bias problem. Experimental results are as Table</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
The graphical abstract of the PEA model.</ns0:figDesc><ns0:graphic coords='45,42.52,178.87,525.00,249.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. An example of dual noise injection.</ns0:figDesc><ns0:graphic coords='46,42.52,178.87,525.00,252.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. General structure of CEA with noise-injected vectors.</ns0:figDesc><ns0:graphic coords='47,42.52,178.87,525.00,325.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Macro-F1 performance on four datasets with different values of σ in noise injection.</ns0:figDesc><ns0:graphic coords='48,42.52,199.12,525.00,393.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Box plot of the number of instances for every entity, based on the original training set and the entity-replaced dataset respectively.</ns0:figDesc><ns0:graphic coords='49,42.52,199.12,522.75,349.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Performance on extreme low-resource conditions.</ns0:figDesc><ns0:graphic coords='50,42.52,178.87,525.00,334.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Problem Setting</ns0:head><ns0:label /><ns0:figDesc>Figure 1 demonstrates the graphical abstract of the PEA model. Firstly, the unified problem setting of fine-grained sentiment analysis covering ABSA, TABSA and MEABSA is as follows. Given a post Post_m = [w_1, w_2, ..., w_T], with an entity set (if available) E_m = {entity_1, entity_2, ..., entity_|E_m|} and an aspect set A_m = {aspect_1, aspect_2, ..., aspect_|A_m|}, the words in Post_m that correspond to the entities or aspects in E_m or A_m are called entity terms and aspect terms. Fine-grained sentiment analysis aims to predict the sentiment y_{entity_i, aspect_j} of a certain entity_i towards the given aspect_j in Post_m. For the ABSA task, the entity set E_m = ∅ and the prediction target is simplified to y_{aspect_j}. For the TABSA task, in each post Post_m, there is only one or two entities in the entity set.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>There is no entity in the dataset, so entity replacement in data augmentation is removed. Results on the two ABSA datasets are shown in Table 5. We can make the following observations:</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The setting for μ is μ = 0. For σ, we conduct experiments on all four datasets with σ ranging from 0.01 to 0.4 to quantify the noise level. Experimental results are in Fig. 4. The x-axis refers to different values of σ, and the y-axis refers to the Macro-F1 performance. Four lines with different kinds of marks refer to the results of the four datasets. Experimental results show that when μ = 0 and σ = 0.05, noise injection achieves the utmost performance on all tasks. We use this setting in the following experiments (a minimal code sketch of this injection step is given below). The compared ABSA baselines include Target-Dependent Long Short-Term Memory (TD-LSTM) (Tang et al., 2016), MemNet (Tang, Qin &amp; Liu, 2016), Attention-based LSTM with Aspect Embedding (ATAE-LSTM) (Wang et al., 2016), Interactive Attention Network (IAN) (Ma et al., 2017), Recurrent Attention on Memory (RAM) (Chen et al., 2017), Transfer Capsule Network (TransCap) (Chen &amp; Qian, 2019), Aspect-specific Graph Convolutional Network (ASGCN) (Zhang, Li &amp; Song, 2019), and Capsule Network with Interactive Attention (IACapsNet) (Du et al., 2019). Following the former research, Accuracy and Macro-F1 are evaluated for both datasets; Macro-Precision and Macro-Recall are also reported.</ns0:cell></ns0:row></ns0:table><ns0:note>Model implementation settings. We implement our proposed model with TensorFlow 2.1, Python 3.7. The device we used consists of a CPU (E5 2630 v4), GPUs (1080ti * 4) and RAM (256G). 
We compare our model with the state-of-the-art baselines on 3 tasks predicting sentiment towards entities and aspects.ResultsAccuracy and Marco-F1 score are two main-stream metrics in most sentiment analysis research, where Marco-F1 is the F1 score averaged over all the classes. In the following experiments, Marco-Precision, Macro-Recall and AUC score are also used according to different tasks. Results on the ABSA Task. We evaluate the English benchmark datasets 3 Restaurant and Laptop for the ABSA task. We compare with the published state-of-the-art baselines,</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Definition of the confusion matrix.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Top 10 low-resource entities in the BabyCare dataset, with the number of instances that belong to 2 every polarity category for both the original training set and entity-replaced dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>the original training set</ns0:cell><ns0:cell /><ns0:cell cols='2'>the entity-replaced dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>Entity</ns0:cell><ns0:cell cols='2'>Negative Neutral</ns0:cell><ns0:cell>Positive</ns0:cell><ns0:cell cols='2'>Negative Neutral</ns0:cell><ns0:cell>Positive</ns0:cell></ns0:row><ns0:row><ns0:cell>&#20339;&#36125;&#33406;&#29305;(Kabrita)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>69</ns0:cell><ns0:cell>243</ns0:cell></ns0:row><ns0:row><ns0:cell>&#21487;&#29790;&#24247;(Karicare)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>382</ns0:cell></ns0:row><ns0:row><ns0:cell>&#21531;&#20048;&#23453;(JunLeBao)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>409</ns0:cell></ns0:row><ns0:row><ns0:cell>&#21652;&#21703;&#29066;(Cowala)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>144</ns0:cell><ns0:cell>446</ns0:cell></ns0:row><ns0:row><ns0:cell>&#22810;&#32654;&#28363;(Dumex)</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>84</ns0:cell></ns0:row><ns0:row><ns0:cell>&#22826;&#23376;&#20048;(Happy Prince)</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>306</ns0:cell></ns0:row><ns0:row><ns0:cell>&#22902;&#31881;(milk 
powder)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>304</ns0:cell></ns0:row><ns0:row><ns0:cell>&#27431;&#36125;&#22025;(OuBecca)</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>86</ns0:cell><ns0:cell>305</ns0:cell></ns0:row><ns0:row><ns0:cell>&#30334;&#31435;&#20048;(Natrapure)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>73</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>71</ns0:cell></ns0:row><ns0:row><ns0:cell>&#35834;&#20248;&#33021;(Nutrilon)</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>146</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance (%) on two datasets for the ABSA task, Accuracy, Marco-Precision, Macro-Recall and Marco-F1are reported.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance (%) on two datasets for the ABSA task, Accuracy, Marco-Precision, Macro-Recall and Marco-F1are reported.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell cols='3'>Restaurant Accuracy Precision Recall</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell cols='3'>Laptop Accuracy Precision Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>TD-LSTM 75.18</ns0:cell><ns0:cell>70.60</ns0:cell><ns0:cell cols='3'>56.57 58.51 64.26</ns0:cell><ns0:cell>57.67</ns0:cell><ns0:cell>56.67 54.10</ns0:cell></ns0:row><ns0:row><ns0:cell>MemNet</ns0:cell><ns0:cell>77.32</ns0:cell><ns0:cell>69.87</ns0:cell><ns0:cell cols='3'>64.38 64.61 68.65</ns0:cell><ns0:cell>63.58</ns0:cell><ns0:cell>63.62 62.69</ns0:cell></ns0:row><ns0:row><ns0:cell>ATAE-LSTM</ns0:cell><ns0:cell>74.38</ns0:cell><ns0:cell>67.43</ns0:cell><ns0:cell cols='3'>57.28 58.32 66.14</ns0:cell><ns0:cell>61.22</ns0:cell><ns0:cell>58.97 56.91</ns0:cell></ns0:row><ns0:row><ns0:cell>IAN</ns0:cell><ns0:cell>76.16</ns0:cell><ns0:cell>67.43</ns0:cell><ns0:cell cols='3'>59.31 60.56 65.20</ns0:cell><ns0:cell>61.64</ns0:cell><ns0:cell>58.54 54.08</ns0:cell></ns0:row><ns0:row><ns0:cell>RAM</ns0:cell><ns0:cell>76.07</ns0:cell><ns0:cell>72.07</ns0:cell><ns0:cell cols='3'>58.65 59.59 68.03</ns0:cell><ns0:cell>64.03</ns0:cell><ns0:cell>63.86 60.82</ns0:cell></ns0:row><ns0:row><ns0:cell>TransCap</ns0:cell><ns0:cell>79.20</ns0:cell><ns0:cell>70.76</ns0:cell><ns0:cell cols='3'>70.81 70.78 74.76</ns0:cell><ns0:cell>71.77</ns0:cell><ns0:cell>71.99 70.08</ns0:cell></ns0:row><ns0:row><ns0:cell>ASGCN</ns0:cell><ns0:cell>74.29</ns0:cell><ns0:cell>71.95</ns0:cell><ns0:cell cols='3'>56.74 56.45 69.75</ns0:cell><ns0:cell>66.21</ns0:cell><ns0:cell>63.75 62.29</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>IACapsNet 81.79</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='2'>73.40 76.80</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>73.29</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>PEA(Our) 84.82</ns0:cell><ns0:cell>80.41</ns0:cell><ns0:cell cols='3'>76.31 78.14 
78.68</ns0:cell><ns0:cell>74.43</ns0:cell><ns0:cell>76.60 75.07</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Performance (%) on the SentiHood dataset for the TABSA task, Accuracy, Marco-Precision, Macro-Recall, Marco-F1 and AUC are reported.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell cols='3'>Accuracy Precision Recall</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell>LR</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>90.5</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM+TA+SA</ns0:cell><ns0:cell>86.8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>SenticLSTM</ns0:cell><ns0:cell>89.3</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Dmu-Entnet</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>74.8</ns0:cell><ns0:cell>76.3</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>94.8</ns0:cell></ns0:row><ns0:row><ns0:cell>RE+Delayed-memory</ns0:cell><ns0:cell>92.8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>96.2</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT-pair-QA-B</ns0:cell><ns0:cell>93.3</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>97.0</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT-pair-QA-M</ns0:cell><ns0:cell>93.8</ns0:cell><ns0:cell>83.4</ns0:cell><ns0:cell>85.7</ns0:cell><ns0:cell>84.5</ns0:cell><ns0:cell>97.1</ns0:cell></ns0:row><ns0:row><ns0:cell>PEA(Our)</ns0:cell><ns0:cell>94.3</ns0:cell><ns0:cell>86.0</ns0:cell><ns0:cell>84.5</ns0:cell><ns0:cell>85.2</ns0:cell><ns0:cell>97.4</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Performance (%) on the BabyCare dataset for the MEABSA task, Accuracy, Marco-Precision, Macro-Recall and Marco-F1 are 
reported.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>MemNet</ns0:cell><ns0:cell>62.74</ns0:cell><ns0:cell>59.81</ns0:cell><ns0:cell>48.84</ns0:cell><ns0:cell>46.13</ns0:cell></ns0:row><ns0:row><ns0:cell>ATAE-LSTM</ns0:cell><ns0:cell>66.09</ns0:cell><ns0:cell>58.47</ns0:cell><ns0:cell>49.68</ns0:cell><ns0:cell>47.75</ns0:cell></ns0:row><ns0:row><ns0:cell>IAN</ns0:cell><ns0:cell>61.93</ns0:cell><ns0:cell>41.71</ns0:cell><ns0:cell>47.04</ns0:cell><ns0:cell>43.73</ns0:cell></ns0:row><ns0:row><ns0:cell>MemNet+</ns0:cell><ns0:cell>65.32</ns0:cell><ns0:cell>59.93</ns0:cell><ns0:cell>50.55</ns0:cell><ns0:cell>47.93</ns0:cell></ns0:row><ns0:row><ns0:cell>ATAE-LSTM+</ns0:cell><ns0:cell>66.25</ns0:cell><ns0:cell>56.01</ns0:cell><ns0:cell>51.93</ns0:cell><ns0:cell>51.87</ns0:cell></ns0:row><ns0:row><ns0:cell>IAN+</ns0:cell><ns0:cell>65.81</ns0:cell><ns0:cell>44.42</ns0:cell><ns0:cell>50.06</ns0:cell><ns0:cell>46.50</ns0:cell></ns0:row><ns0:row><ns0:cell>CEA</ns0:cell><ns0:cell>80.20</ns0:cell><ns0:cell>77.68</ns0:cell><ns0:cell>75.23</ns0:cell><ns0:cell>76.29</ns0:cell></ns0:row><ns0:row><ns0:cell>DT-CEA</ns0:cell><ns0:cell>81.74</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>78.23</ns0:cell></ns0:row><ns0:row><ns0:cell>CADMN</ns0:cell><ns0:cell>81.45</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>78.37</ns0:cell></ns0:row><ns0:row><ns0:cell>PEA(Our)</ns0:cell><ns0:cell>85.72</ns0:cell><ns0:cell>83.97</ns0:cell><ns0:cell>82.60</ns0:cell><ns0:cell>83.25</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Macro-F1 and standard deviation of Macro-F1 (in the brackets) on evident polarity biased (EPB) test set and original test set. 'DA' is short for Data Augmentation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell>EPB test set</ns0:cell><ns0:cell>Original test set</ns0:cell><ns0:cell>Decline on EPB</ns0:cell></ns0:row><ns0:row><ns0:cell>CEA</ns0:cell><ns0:cell>0.7542(0.0123)</ns0:cell><ns0:cell>0.7714(0.0040)</ns0:cell><ns0:cell>1.72%</ns0:cell></ns0:row><ns0:row><ns0:cell>CEA+DA</ns0:cell><ns0:cell>0.7753(0.0069)</ns0:cell><ns0:cell>0.7768(0.0036)</ns0:cell><ns0:cell>0.15%</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT-based</ns0:cell><ns0:cell>0.8068(0.0070)</ns0:cell><ns0:cell>0.8162(0.0040)</ns0:cell><ns0:cell>0.94%</ns0:cell></ns0:row><ns0:row><ns0:cell>model</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PEA</ns0:cell><ns0:cell>0.8153(0.0069)</ns0:cell><ns0:cell>0.8234(0.0061)</ns0:cell><ns0:cell>0.81%</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='1'>https://github.com/google-research/bert. 2 https://nlp.stanford.edu/projects/glove/</ns0:note> <ns0:note place='foot' n='3'>http://alt.qcri.org/semeval2014/task4/</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66426:1:0:NEW 14 Nov 2021)Manuscript to be reviewed</ns0:note> </ns0:body> "
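As a concrete illustration of the two data augmentation techniques used in the paper above, the following minimal Python sketches show one possible implementation of (i) entity replacement (duplicate the training set D, replace each entity in the duplicate with the target entity to obtain PD, then combine both into the new training set D') and (ii) dual noise injection (zero-mean Gaussian noise with σ = 0.05 added to the entity and aspect representations, cf. Figures 2-4). The record fields, tensor shapes and function names are illustrative assumptions, not the authors' code.

from copy import deepcopy

def entity_replacement(train_set, target_entity):
    # Step 1: duplicate the original training set D.
    duplicated = deepcopy(train_set)
    # Step 2: replace each entity in the duplicate with the target entity (pseudo dataset PD).
    for record in duplicated:
        record["text"] = record["text"].replace(record["entity"], target_entity)
        record["entity"] = target_entity
    # Step 3: combine the original and the entity-replaced data into the new training set D'.
    return train_set + duplicated

# Toy usage (hypothetical record fields): one original instance yields one pseudo instance.
toy_train = [{"text": "Karicare is easy to digest.", "entity": "Karicare",
              "aspect": "digestion", "label": "positive"}]
print(len(entity_replacement(toy_train, target_entity="Kabrita")))  # 2

And a sketch of the dual noise injection step, assuming the model exposes the entity and aspect vectors as TensorFlow tensors (the paper reports using TensorFlow 2.1):

import tensorflow as tf

def dual_noise_injection(entity_vec, aspect_vec, sigma=0.05, training=True):
    # Add zero-mean Gaussian noise (mu = 0, sigma = 0.05, the best setting in Fig. 4)
    # to both the entity and the aspect representations, during training only.
    if not training:
        return entity_vec, aspect_vec
    noisy_entity = entity_vec + tf.random.normal(tf.shape(entity_vec), mean=0.0, stddev=sigma)
    noisy_aspect = aspect_vec + tf.random.normal(tf.shape(aspect_vec), mean=0.0, stddev=sigma)
    return noisy_entity, noisy_aspect

# Toy usage: a batch of 2 instances with embedding size 8.
entity_vec = tf.random.uniform((2, 8))
aspect_vec = tf.random.uniform((2, 8))
noisy_entity, noisy_aspect = dual_noise_injection(entity_vec, aspect_vec)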
"School of Artificial Intelligence and Computer Science Jiangnan University 1800 Lihu Avenue Wuxi, Jiangsu, China Nov.14th, 2021 Dear Editors and Reviewers, We sincerely thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We accept all the suggestions and have responded all the questions. We have carefully revised this manuscript and followings are point-by-point responses to all the comments. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Dr. Heng-yang Lu Department of Computer Science and Technology On behalf of all authors. Reviewer 1 Basic reporting 1- The writing of the paper can be improved across the entire paper. I am providing suggestions and line numbers for the authors to make some of the corrections. Agreed. All your suggestions have indeed greatly improved our paper, we have revised our paper with all your suggestions. Thank you so much. 2- The main issue of the paper which made it very hard for me to read is that it is very wordy and the tables and the figures are only appearing at the end. So it's very hard to read the text on results without looking at numbers and figures. I strongly recommend that authors address this. Thank you for your comments. The tables and figures only appeared at the end is under the guidance and requirement of the Journal. We have been reminded not to attach tables and figures in the manuscript. When it is published, the tables and figures will be organized just next to the texts. To make it easy to review, we will upload a well-organized version as a supplementary file for your review, where figures and tables are close to the corresponding texts. 3- Authors can also break text into paragraph in places where a paragraph is too long and provide a title for the paragraph to separate different aspects or the parts of the discussion. This is specifically needed when reporting results. Thank you for your comments. We have broken some long text into paragraph and added titles. The current format is under the guidance and requirement of the Journal, so the paragraphs and titles might not be obvious. To make it easy to review, we will upload a well-organized version as a supplementary file for your review. 4- I find the case study section very difficult to read and it also focuses on one or two examples. It does not really provide much insight. Also, you have only included examples where the PEA model does well. It is much more helpful to know which examples it can not get right. First of all, if you bring the relevant table to be next to the section, it can make it easier to understand the text. But in general, if you want to gain extra space, I would remove this section. Agreed. Thank you for your question. (1) We have added new examples, where PEA fail to predict the sentiment polarity correctly and explained the possible reason in case study. (2) We uploaded the manuscript and tables separately, this is under the guidance and requirement of the Journal, when it is published, the tables will be organized just next to the section. To make it easy to review, we will upload a well-organized version as a supplementary file for your review. 5- When reading through the paper, it's not clear from the method section that PEA will be a unified approach for all the tasks. Bringing Figure 1 to the beginning of the paper will help clarify this. Agreed. Thank you for your comments. We have brought Figure 1 to the beginning of the methods section. 
What’s more, because the Journal requires not to attach tables and figures in the manuscript, figure 1 will appear at the end of the manuscript for review. To make it easy for reading, we will upload a well-organized version as a supplementary file, where figure 1 is at the beginning of the method section. 6- There are some exceptions for different tasks and datasets but these are only mentioned when reporting the results. These should be in the experimental set up at least. Agreed. Thank you. We have added the new paragraph named “task specific settings” in the experimental set up to introduce some exceptions for all the ABSA, TABSA and MEABSA tasks according to your suggestions. Detailed Comments 1- in all the paper ‘pre-trained language representation model’ can be replaced with ‘pre-trained language model’ Agreed. Thank you for your suggestion. We have replaced all the mentioned phases to “pretrained language model”. 2- Line 115: it’s strange “which makes both advantages of RNN-based models and BERT-based models with ensemble methods” Agreed. Thank you. We have changed it to “combines both advantages of RNN-based models and BERT-based models …” 3- Lines: 128-131 can be rephrased: graph neural networks (ref1, ref2) have been applied to the problem. - Also, stacked LSTM can be mentioned closer to the LSTM related work. Agreed. Thank you. We have rephrased the whole related work section under your suggestion. 4- Related work: the term ‘associates’ has been used a lot, it’s best to rephrase some sentences to avoid this. Agreed. Thank you. We have rephrased the whole related work section under your suggestion, and avoided using too much “associates” terms. 5- If the formatting allows, it’s a good idea to have section or paragraph titles in the related work to separate different aspects. Agreed. Thank you. We have separated the related work with three subsections, including ‘Research on fine-grained sentiment analysis’, ‘Research on data augmentation in NLP’ and ‘Research on Bias problems in NLP’ 6- In the methods section, PEA is the name of the entire solution including data augmentation. It’s better to do a separation: Data Augmentation, Baseline Models, etc. Agreed. Thank you. We have separated this part into “Data augmentation”, “Basic models” and “Fusion strategy” according to your suggestion. 7- In line 215: the fusion strategy should not be part of the basic models and part of the proposed model. Agreed. Thank you. We have put this part into a new subsection named “Fusion strategy” according to your suggestion. 8- line 231: we invented is a strange word, maybe use propose. Agreed. Thank you. We have changed ‘invented’ to ‘proposed’. 9- Equation (1), |entity_i| is a misleading notation. Maybe we can say: |mention(entity_i)| Agreed. Thank you. We have changed |entity_i| to |mention(entity_i)| in Equation (1). 10- line 247: Table 2 shows an example … Agreed. Thank you for your suggestion, we have changed “Table 2 is …” to “Table 2 shows …” 11- Lines 260-262, the sentence starting with ‘However’, I don’t understand what we mean by it. Can you please rephrase Agreed. Thank you. We have rewritten this sentence to make it clearer, as follows: In these works, noise is usually injected into the context representation for the post directly. For fine-grained sentiment analysis, the inputs include context texts, entities, entity terms, aspects and aspect terms. It is not applicable to only inject noises on context representations like previous works. 
Therefore, we propose the idea of dual noise injection: a noise is injected into the representation of entity and entity terms in the context at the same time…… 12- line 272: ‘T is the length of the post, which is the number of words in the post’, the two are redundant, use one. Agreed. Thank you. We have rewritten this sentence as “T represents the number of words in the post …” 13- line 290: we conduct experimental attempts to determine the settings Thank you. We have changed “experimental attempts” to “experiments” according to your suggestion. 14- line 296: basic models: basic can be dropped Agreed. Thank you. ‘basic’ was deleted. 15- line 307: ‘Detailed explanations of CEA can refer to the original paper’: for detailed explanation of CEA, refer to the original paper Agreed. We have revised according to your suggestion. Thank you. 16- line 311: Here, you mention that you deal with all the three tasks using the same architecture. You need to make it more clear early on. Agreed. Thank you for your suggestion. We have added “we deal with three tasks with the same architecture in the beginning of the “Methods” section. We have also mentioned “CEA is suitable for three tasks at the beginning of introducing CEA. 17- line 317: It achieved good results in .. -> I has achieved good results (you should add references on tasks where BERT has achieved good results) Agreed. We have changed “It achieved good results in many nlp tasks” to “It has achieved …” and we have also added corresponding references. Thank you. 18- line 347: sentimental should be just sentiment Agreed. Thank you. We have revised this typo. 19- line 390: ‘experimental attempts’ -> experiments Agreed. Thank you. We have changed to ‘experiments. 20- line 401: ‘researches’ -> research (replace in all the paper) Agreed. We have replaced ‘researches’ to ‘research’ in the all paper. Thank you. 21- line 431: ‘while PEA can have stable performance’ what do we mean by stable performance? Thank you for your question. Here we want to express PEA has better performance. We have changed “stable” to “better”, to make it easier to understand. 22- line 417: you need to have a new section called Results. All the paragraph names should have ‘Results’ instead of ‘Evaluation’ Agreed. Thank you. We have changed the original “Evaluation” section into the new “Results” section, and changed all the paragraph titles from “Evaluation on … task” to “Results on … task”. 23- line 400-416: In my opinion, the definition of the metrics (equations) can be removed. But do explain what macro F1 is (F1 averaged over all the classes). Agreed. We have deleted the definition of the metrics and simply introduced we use accuracy and macro-F1 (which is F1 averaged over all the classes ) as metrics in the experiments according to your suggestions. Thank you. 24- line 447: This result shows that the effectiveness -> This result shows the effectiveness Agreed. Thank you. We have deleted ‘that’ in the sentence. 25- line 450: ‘This may be due to the prediction of PEA comes from’-> This may be because the prediction ….. Agreed. Thank you. We have changed ‘due to’ to ‘because’. 26- line 452: what is the point of (3), it’s not clear. Do you mean that the increase in performance is not very high? Agreed. Thank you for your question. Yes, the increase in performance of PEA is not very high on the TABSA task. This is different from the performance in ABSA and MEABSA tasks. So, the point of (3) aims to analyse and discuss the possible reasons on the TABSA task. 
We have also rewritten this part to make it clearer. 27- 454: ‘aspect noise injection is removed for this experiment’ : this is the first time this is mentioned. It should have been mentioned in the experimental set up. Agreed. Thank you. We have added the new paragraph named “task specific settings” in the experimental set up and mentioned the setting of “removing aspect noised injection for TABSA task” in this part. 28- line 488: the definition of p-value can be improved. What does ‘The -value is set as 0.05’ means, I think that can be removed. Agreed. Thank you for your suggestion. Refer to the previous works (Li et al., 2020), we conduct McNemars test as the statistical analysis test to further show the statistical difference between two models. p-value is the significance level, which means the performance difference between the two models. We have improved the introduction of p-value and removed “The -value is set as 0.05” according to your suggestion. 29- Lines 495-499: very confusing as to which ones are significant and which one is not significant. Which models does this sentence refer to ‘The estimated -value between these two models is 0.0174, which is lower than 0.05’? Agreed. Thank you for your question. The original lines 495-499 belong to the “Statistical analysis test”, which uses p-value to estimate the statistical difference between two models. The original lines 495-499 take the results of PEA and BERT-pair-NLI-M in the TABSA task as an example. (1) from the perspective of accuracy, the improvement of PEA compared with BERTpair-NLI-M in the TABSA task is not very high. (2) from the perspective of statistical analysis test, the estimated p-value between PEA and BERT-pair-NLI-M is 0.0174. According to the definition of p-value, it shows that the performance difference between BERT-pair-NLI-M and PEA is statistically significant. We have rewritten this part in the revisions. 30- line 502-503: ‘Experimental results in section “Experiments and Analysis” illustrated the PEA model’s outstanding performance in ABSA, TABSA and MEABSA than the baselines.’ -> experimental results so far show that the PEA approach is superior to the baselines on all the ABSA, TABSA and MEABSA on selected datasets. Agreed. Thank you. We have changed this sentence according to your suggestion. 31- line 559: ‘It is introduced that there are two challenges’ -> ‘There are two challenges’ Agreed. Thank you. We have changed this sentence to “There are two challenges …” 32- line 575: instead of ‘seriously’, use another word, maybe sharply or massively. Agreed. Thank you. We have changed ‘seriously’ to ‘sharply’. 33- line 576: ‘This shows that combine the’ -> This shows that the combination of .. Agreed. Thank you. We have changed ‘This shows that combine the’ to ‘This shows that the combination of ..’ 34- line 611: ‘Moreover, we first-time discovered and defined’ -> For the first time, we analysed the effect of the sentiment polarity bias’ Agreed. Thank you. We have changed “Moreover, we first-time discovered and defined” to “For the first time, we analysed the effect of the sentiment polarity bias …” 35- Table 4 is not necessary, you can remove it. Agreed. We have deleted Table 4, the confusion matrix. Thank you. Experimental design 1- Experimental design seems fine. One comment is that how did the authors came up with the weights 0.5 for the ensemble model? Is it possible to optimize this hyper-parameter? Thank you for your question. 
(1) The fusion strategy we used in this paper belongs to simple averaging strategy, which is the most commonly used combination strategy in ensemble learning. Because there are two basic models used in PEA, the weights are averaged set as 0.5 based on the idea of simple averaging. (2) Another way to combine the outputs of both models is weighted averaging strategy, in this way, the weights of different models can be different. Thank you for your comments, we have added more explanations in the revisions. Validity of the findings The findings seem valid and they will be easier to understand if the reporting improves. Thank you. We have carefully revised our manuscript with all your comments. Reviewer 2 Basic reporting no comment Experimental design 1- Entity replacement is good for low-resource problem, it's also good for the polarity bias problem. It would be better to give some examples of entity replacement results on polarity biased entities. Agreed. Thank you for your suggestion. We have added a table (Table 4) in the experiment section. It lists the top 10 low-resource entities in the BabyCare dataset, and displays the number of instances that belong to every category for both the original training set and the entity-replaced dataset. We can observe that, for those low-resource entities, such as “Kabrita”, the number of negative and neutral instances has significantly increased by using entity replacement. This shows entity replacement can help relieve both the low-resource and polarity bias problems. Validity of the findings 1- I'm interested in the application of this model. If we want to apply the model to real scenarios, what should the data be like to input to the model. Thank you for your question. We have added two examples to show the input format when applied to real scenarios. We will add tags to tell the model which part is the context, which part is the entity and which part is aspect. Here is one example: <context> I’ve used MacBookPro, it’s convenient. </context> <entity from=“10” to=“20”>MacBookPro </entity> <aspect> from=“27” to=“37”>convenience level</aspect> Additional comments 1- The low-resource and polarity bias problems are indeed two challenges in the research field of fine-grained sentiment analysis. The experiments show good results. 2- The design of 'dual noise injection' is interesting, it models even non-existed entities and aspects. This improves the generalization ability of the model and the experiments verified it. 3- The case study setting is good for understanding the strength and weakness of RNN-based model and BERT-based model, and explains why the two models should be combined. Thank you. Reviewer 3 Basic reporting 1- Expansion of abbreviations like CEA, RNN, TD-LSTM, ATAE-LSTM should be given the first time the abbreviation is used Agreed. Thank you for your suggestion. We have given the full names of abbreviations in the revision, including CEA, RNN, TD-LSTM, ATAE-LSTM, IAN, RAM, TransCap, ASGCN, IACapsNet, LR, CADMN etc. 2- There are several works on Aspect-Based Sentiment Analysis, but very few on Targeted AspectBased Sentiment Analysis and Multi-Entity Aspect-Based Sentiment Analysis. So, literature review on TABSA and MEABSA could be elaborated in detail. Even ABSA literature work could be detailed Agreed. Thank you. We have added more related works and rewritten the related work section to make it more detailed according to your comments. 3- Racial and gender bias is discussed in literature review, is it implemented in the proposed work? 
Thank you for your question. In this paper, we mainly consider the polarity bias problem in the fine-grained sentiment analysis. The literature on racial and gender bias is referenced to introduce the bias problem in NLP. To make it more clear, we have add a title for this subsection named “Research on Bias problems in NLP” in the Literature Reviews section. Experimental design 1- The various attributes of the datasets can be explained, so as to keep the reader engaged. Agreed. Thank you for your suggestion. We have added explanations of all the four datasets in detail at the beginning of the experiment section. Validity of the findings 1- According to this work, what is the scope of fine – grained sentiment analysis? How do you differentiate it with coarsely grained sentiment analysis? Explain it beforehand Thank you for your question and suggestion. Traditional coarse-grained sentiment analysis aims to identify the sentiment polarity of the given sentence. Different from that, fine-grained sentiment analysis is managed to match sentiments with corresponding entities and aspects in the given sentence. For example, given the comment “I’ve used MacBookPro, it’s convenient.” Coarsegrained sentiment analysis describes the whole sentence a positive sentiment. Fine-grained sentiment analysis describes a positive sentiment towards MacBookPro (entity) on its convenience level (aspect), which is a provided (sentence, aspect, entity) pair. We have revised and added this explanation in the beginning of introduction. 2- Figure is cited in text, which makes few things unexplained. For example, Figure 1 gives a prediction as output, What is predicted here? Agreed. Thank you. The ‘prediction’ of output in figure 1 means the predicted sentiment polarity of PEA. We have changed the ‘sentiment polarity prediction’ in the original figure 1 to ‘sentiment polarity distribution’, changed the ‘output: prediction’ to ‘output: predicted sentiment polarity’. We have also revised the illustration of figure 1 in the manuscript to make it clearer. Additional comments 1- Few more examples can be added to explain ABSA, TABSA and MEABSA for better clarity. Agreed. Thank you for your suggestion. We have added more examples to explain ABSA, TABSA and MEABSA in Table 1 for better clarity. "
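To make the fusion strategy discussed in the response above more concrete, here is a minimal, hedged sketch of the simple averaging of the two basic models' class probabilities with both weights set to 0.5, followed by the Macro-F1 metric (F1 averaged over all classes) used in the experiments. The array names and toy numbers are illustrative only and do not come from the paper.

import numpy as np
from sklearn.metrics import f1_score

def fuse_predictions(probs_cea, probs_bert, w_cea=0.5, w_bert=0.5):
    # Simple (equal-weight) averaging of the class probabilities produced by the
    # data augmented CEA model and the BERT-based model.
    fused = w_cea * probs_cea + w_bert * probs_bert
    return fused.argmax(axis=1)  # predicted class index per instance

# Toy example: 3 instances, 3 sentiment classes (negative, neutral, positive).
probs_cea  = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
probs_bert = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6], [0.2, 0.2, 0.6]])
y_true = np.array([0, 2, 2])
y_pred = fuse_predictions(probs_cea, probs_bert)
print(f1_score(y_true, y_pred, average="macro"))  # Macro-F1 = F1 averaged over the classes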
Here is a paper. Please give your review comments after reading it.
316
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The reality gap is the discrepancy between simulation and reality-the same behavioural algorithm results in different robot swarm behaviours in simulation and in reality (with real robots). In this paper, we study the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. We compare the psychophysiological reactions of 28 participants interacting with a simulated robot swarm and with a real (non simulated) robot swarm. Our results show that a real robot swarm provokes stronger reactions in our participants than a simulated robot swarm. We also investigate how to mitigate the effect of the reality gap (i.e., how to diminish the difference in the psychophysiological reactions between reality and simulation) by comparing psychophysiological reactions in simulation displayed on a computer screen and psychophysiological reactions in simulation displayed in virtual reality. Our results show that our participants tend to have stronger psychophysiological reactions in simulation displayed in virtual reality (suggesting a potential way of diminishing the effect of the reality gap).</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The reality gap is the discrepancy between simulation and reality-the same behavioural algorithm results in different robot swarm behaviours in simulation and in reality (with real robots). In this paper, we study the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. We compare the psychophysiological reactions of 28 participants interacting with a simulated robot swarm and with a real (non simulated) robot swarm. Our results show that a real robot swarm provokes stronger reactions in our participants than a simulated robot swarm. We also investigate how to mitigate the effect of the reality gap (i.e., how to diminish the difference in the psychophysiological reactions between reality and simulation) by comparing psychophysiological reactions in simulation displayed on a computer screen and psychophysiological reactions in simulation displayed in virtual reality. Our results show that our participants tend to have stronger psychophysiological reactions in simulation displayed in virtual reality (suggesting a potential way of diminishing the effect of the reality gap).</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In a near future, swarms of autonomous robots are likely to be part of our daily life. Whether swarms of robots will be used for high-risk tasks (e.g., search and rescue, demining) or for laborious tasks (e.g., harvesting, environment cleaning, grass mowing) <ns0:ref type='bibr' target='#b7'>(Dorigo et al., 2014</ns0:ref><ns0:ref type='bibr' target='#b8'>(Dorigo et al., , 2013))</ns0:ref>, it will be vital for humans to interact with these robot swarms (e.g., supervise, issue commands or receive feedback).</ns0:p><ns0:p>Recently, human-swarm interaction has become an active field of research. More and more, researchers in human-swarm interaction validate their work by performing user studies (i.e., group of human participants performing an experiment of human-swarm interaction). 
However, a large majority of the existing user studies are performed exclusively in simulation, with human operators interacting with simulated robots on a computer screen, e.g., <ns0:ref type='bibr' target='#b1'>Bashyal and Venayagamoorthy (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>Nunnally et al. (2012)</ns0:ref>; De la Croix and Egerstedt (2012); <ns0:ref type='bibr' target='#b38'>Walker et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b15'>Kolling et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>Walker et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Pendleton and Goodrich (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Nagavalli et al. (2015)</ns0:ref>.</ns0:p><ns0:p>Simulation is a convenient choice for swarm roboticists, as it allows experimental conditions to be replicated perfectly in different experimental runs. Even more importantly, gathering enough real robots to make a meaningful swarm is often prohibitively expensive in terms of both money and time. However, conducting user studies in simulation suffers from a potentially fundamental problem-the inherent discrepancy between simulation and the reality (henceforth referred to as the reality gap).</ns0:p><ns0:p>In this paper, we study the effect of the reality gap on human psychology. Understanding the psychological impact of any interactive system (be it human-computer interaction, human-robot interaction or human-swarm interaction) on its human operator is clearly essential to the development of an effective interactive system <ns0:ref type='bibr' target='#b5'>(Carroll, 1997)</ns0:ref>. To date, it is not yet clear what the effect of the reality gap is on human psychology in human-swarm interaction studies. Our goal is to study this effect.</ns0:p><ns0:p>We present an experiment in which humans interact with a simulated robot swarm displayed on a computer screen, with a simulated robot swarm displayed in virtual reality (within a virtual reality headset) and with a real (i.e., non simulated) robot swarm (see Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). In our experimental setup, our goal was to produce results that were as objective as possible. To this end, we firstly recorded psychological impact using psychophysiological measures (e.g., heart-rate, skin conductance), which are considered more objective than purely questionnaire-based methods <ns0:ref type='bibr' target='#b2'>(Bethel et al., 2007)</ns0:ref>. Secondly, we made purely passive the interaction of our human operators with the robot swarm. In this purely passive interaction, our participants do not issue any commands to, nor receive any feedback from the robot swarm. Finally, we decided that our participants would interact with a robot swarm executing a simple random walk behaviour (compared to a more complex foraging behaviour, for instance). These two choices allow us to isolate the reality gap effect. The passive interaction reduces the risk that psychophysiological reactions to the interaction interface (e.g., joystick, keyboard, voice commands) would be the strongest measurable reaction, drowning out the difference in reaction to the reality gap. The choice of a simple random walk behaviour reduces the risk that any psychophysiological reactions are caused by reactions to artefacts of a complex swarm robotics behaviour. 
Our results show that our participants have stronger psychophysiological reactions when they interact with a real robot swarm than when they interact with a simulated robot swarm (either displayed on a computer screen or in a virtual reality headset). Our results also show that our participants reported a stronger level of psychological arousal when they interacted with a robot swarm simulated in a virtual reality headset than when they interacted with a robot swarm simulated on a computer screen (suggesting that virtual reality is a technology that could potentially mitigate the effect of the reality gap in human-swarm interaction user studies). We believe the results we present here should have a significant impact on best practices for future human-swarm interaction design and test methodologies.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED LITERATURE</ns0:head><ns0:p>Human-swarm interaction, the field of research that studies how human beings can interact with swarms of autonomous robots, is getting more and more attention. Some research focuses on more technical aspects, such as control methods (direct or indirect control of the robots) <ns0:ref type='bibr' target='#b15'>(Kolling et al., 2013)</ns0:ref>, the effect of neglect benevolence (determining the best moment to issue a command to a swarm) <ns0:ref type='bibr' target='#b38'>(Walker et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nagavalli et al., 2015)</ns0:ref>, interaction based on gestures <ns0:ref type='bibr' target='#b30'>(Podevijn et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b23'>Nagi et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nagi et al., 2015)</ns0:ref> or the effect of bandwidth limitation during the interaction <ns0:ref type='bibr' target='#b25'>(Nunnally et al., 2012)</ns0:ref>. These examples do not constitute an exhaustive review of the literature. For a more comprehensive survey, we refer the reader to <ns0:ref type='bibr' target='#b16'>Kolling et al. (2016)</ns0:ref>.</ns0:p><ns0:p>To date, however, very little research in the human-swarm interaction literature has focused on the psychology of humans interacting with robot swarms. De la Croix and Egerstedt (2012) studied the effect of communication network topologies (made by the robots) on humans. The authors found that when humans control a swarm of robots, certain types of topologies increased the workload. <ns0:ref type='bibr' target='#b37'>Walker et al. (2013)</ns0:ref> and <ns0:ref type='bibr' target='#b0'>Amraii et al. (2014)</ns0:ref> investigated the effect of two command propagation methods (i.e., methods to propagate a command issued by a human being to all the robots of a swarm) when a human operator guides a leader robot (i.e., a single robot). In their work, a human operator guides the leader robot by changing the leader robot's velocity and heading through a graphical interface. They compared the flooding propagation method to the consensus propagation method. In the flooding propagation method, the robots of the swarm controlled by a human operator all set their velocity and heading to the leader robot's velocity and heading. In the consensus propagation method, the robots of the swarm all set their velocity and heading to the average velocity and heading of their neighbors. 
The authors showed that the humans' workload is lower in the flooding propagation method than in the consensus propagation method. <ns0:ref type='bibr' target='#b34'>Setter et al. (2015)</ns0:ref> studied the humans' workload level when a human being guides a robot swarm with an haptic control device (i.e., a device allowing a human to guide the robots and to receive haptic feedback from the robots). <ns0:ref type='bibr' target='#b27'>Pendleton and Goodrich (2013)</ns0:ref> studied the effect of the robot swarm size (i.e., the number of robots in a swarm) on the human workload level. They conducted an experiment in which participants had to guide swarms of 20, 50 and 100 simulated robots. They found that human workload is not dependent on the number of robots when interacting with a robot swarm. <ns0:ref type='bibr' target='#b29'>Podevijn et al. (2016)</ns0:ref> studied the effect of the robot swarm size on the human psychophysiological state. They found that higher robot swarm sizes provoke stronger psychophysiological responses.</ns0:p><ns0:p>With the exception of <ns0:ref type='bibr' target='#b34'>Setter et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b29'>Podevijn et al. (2016)</ns0:ref>, all the works that study the psychology of humans interacting with a robot swarm are performed in simulation only. Due to the inherent existence of the reality gap, though, it is not clear if human-swarm interaction studies performed in simulation only would provoke the same psychological reactions as the same human-swarm interaction studies performed with a robot swarm made up of real robots.</ns0:p><ns0:p>The question of the psychological reaction differences when humans interact with a real robot or with a simulated robot has been already addressed in the research field of social robotics. In social robotics, the goal of the robot designers is for the robot to socially interact with humans <ns0:ref type='bibr' target='#b12'>(Hegel et al., 2009)</ns0:ref>. Most of the works that address the question of the humans' psychological reaction differences between the interaction with real robots and simulated robots in social robotics tend to show that humans prefer to interact with a real robot than with a simulated robot. In the following research, all authors used a measure of 'enjoyment'. The enjoyment is assessed either by a self-developed questionnaire, or by following the game flow model (a model developed to evaluate player enjoyment in games <ns0:ref type='bibr' target='#b35'>(Sweetser and Wyeth, 2005)</ns0:ref>). When a robot provides humans with help and instructions on a given task, <ns0:ref type='bibr' target='#b14'>Kidd and Breazeal (2004)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>Wainer et al. (2007)</ns0:ref> and <ns0:ref type='bibr' target='#b9'>Fasola and Matari&#263; (2013)</ns0:ref> all reported that humans had a more enjoyable experience (assessed by a self-developed questionnaire) with a real robot compared to a simulated robot. <ns0:ref type='bibr' target='#b28'>Pereira et al. (2008)</ns0:ref> and <ns0:ref type='bibr' target='#b18'>Leite et al. (2008)</ns0:ref> also show that humans had a more enjoyable experience with a real robot than with a simulated robot when their participants were playing chess against the robot (both assessed by the game flow model). In <ns0:ref type='bibr' target='#b31'>Powers et al. (2007)</ns0:ref>, the participants of the authors' study conversed with a real robot and with a simulated robots about health habits. 
The results of the study revealed that their participants reported to have a more enjoyable conversation with the real robot than with the simulated robot (assessed by a self-developed questionnaire). <ns0:ref type='bibr' target='#b39'>Wrobel et al. (2013)</ns0:ref> performed an experiment in which elder participants play a card game against a computer, a real robot and a simulated robot. In their results, their participants reported more joy playing against the computer than against the real robot or the simulated robot. However, their participants had a more enjoyable experience playing against the real robot than against the simulated robot (assessed by the game flow model). For a more comprehensive survey about the psychological differences when humans interact with real robots and simulated robots, we refer the reader to <ns0:ref type='bibr' target='#b19'>Li (2015)</ns0:ref>.</ns0:p><ns0:p>Our work is different from the existing body of research in human-robot interaction because the interaction between humans and robot swarms is inherently different from the interaction between humans and a single robot. This difference is firstly due to the relative simplicity of the robots used in swarm robotics. Robots used in swarm robotics are not equipped with dedicated communication hardware (such as speech-based or text-based communication). And even if they were equipped with dedicated communication hardware, it would be overwhelming-due to the large number of robots-for a human operator to send data (e.g., commands) to and receive data (e.g., feedback) from each individual robot. A second reason for the difference is that there is no social interaction between human beings and robot swarms.</ns0:p><ns0:p>In this paper, we study the differences in psychological reactions when a human being passively interacts with a real robot swarm, with a simulated robot swarm displayed in a virtual reality environment, and with a simulated robot swarm displayed on a computer screen. Moreover, while all of the aforementioned social robotic works only use dedicated psychological questionnaires to study the participants' psychological reactions, we use a combination of psychological questionnaire and physiological measures in order to study the psychophysiological reactions of participants interacting with a robot swarm.</ns0:p></ns0:div> <ns0:div><ns0:head>3/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2016:04:10113:1:0:NEW 28 Jul 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>METHODOLOGY Hypotheses</ns0:head><ns0:p>A review of the human-swarm interaction literature reveals that the majority of the user experiments are performed in simulation. We believe that conducting a human-swarm interaction experiment in simulation can lead to different results than if the same experiment was conducted with real robots. A reason for the results to be different in simulation and in reality is the inherent presence of the reality gap. It is not always possible, however, to perform a human-swarm interaction with real robots (e.g., because an experiment requires a large number of robots). It is our vision that the effects of the reality gap in simulation should be mitigated as much as possible. In order to mitigate the effects of the reality gap, we propose to use virtual reality for simulating the robot swarm. 
We based the experiment of this paper on these two hypotheses:</ns0:p><ns0:p>&#8226; The psychophysiological reactions of humans are stronger when they interact with a real robot swarm than when they interact with a simulated robot swarm.</ns0:p><ns0:p>&#8226; The psychophysiological reactions of humans are stronger when they interact with a simulated robot swarm displayed in virtual reality than when they interact with a simulated robot swarm displayed on a computer screen.</ns0:p><ns0:p>Confirming the first hypothesis would imply that human-swarm interaction experiments should be done with real robots instead of simulation. Confirming the second hypothesis would imply that in order to mitigate the effect of the reality gap (if it is not possible to use real robots), it is better for a researcher to simulate a robot swarm in virtual reality because it provokes in humans more realistic psychophysiological reactions compared to simulated robots displayed on a computer screen.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Scenario</ns0:head><ns0:p>We designed an experimental scenario that allowed us to study the effect of the reality gap on humans in the context of human-swarm interaction. To study the effect of the reality gap, we divided our experimental scenario into three sessions. The order of the three sessions was randomly assigned to our participants. In each session, a participant has to supervise (i.e., watch attentively) a swarm made up of 20 robots. In the so-called Real Robots session, the participant supervises a real (i.e., non simulated) swarm of 20 robots (see Fig. <ns0:ref type='figure' target='#fig_1'>2 (A)</ns0:ref>). In the Screen Simulation session, the participant supervises a simulated swarm of 20 robots displayed on a computer screen. In this session, the robot swarm is visible to the participant from the top view (see Fig. <ns0:ref type='figure' target='#fig_1'>2 (B)</ns0:ref>). In the Virtual Reality session, the participant supervises a simulated swarm of 20 robots displayed in a virtual reality environment. The participant wears a virtual reality headset (i.e., a smartphone put in a Google virtual reality cardboard (https://www.google.com/get/cardboard) and is immersed in a 3D virtual world in which 20 simulated robots are present (see Fig. <ns0:ref type='figure' target='#fig_1'>2 (C</ns0:ref>)). During the three sessions (i.e., Real Robots, Screen Simulation, Virtual Reality), the participant has to supervise the robots for a period of 60 s. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Measures</ns0:head><ns0:p>We used two types of measures: self-reported measures and psychophysiological measures. We use self-reported measures (i.e., data gathered from our participants using a dedicated psychological questionnaire) to determine whether our participants are subjectively conscious of their psychophysiological reaction changes and whether these reaction changes are positive (i.e., our participants report to have a positive experience) or negative (i.e., our participants report to have a negative experience). We use psychophysiological measures, on the other hand, to determine objectively the psychological state of our participants based on physiological responses. These psychophysiological measures are considered objective because it is difficult for humans to intentionally manipulate their physiological responses (for instance to intentionally decrease heart rate). 
In the following two sections, we first present the self-reported measures used in this study. Then, we present the psychophysiological measures.</ns0:p></ns0:div> <ns0:div><ns0:head>Self-reported Measure</ns0:head><ns0:p>In this study, we collect our participants' self-reported affective state. We measure our participants' affective state with two scales: valence and arousal. Valence is the cognitive judgement (i.e., pleasure or displeasure) of an evaluation such as the interaction with robots considered in this study. Higher valence values correspond to greater pleasure, while lower valence values correspond to a less pleasurable experience. The arousal scale assesses the mental alertness and the level of physical activity or level of excitation <ns0:ref type='bibr' target='#b20'>(Mehrabian, 1996)</ns0:ref> felt during an evaluation.</ns0:p><ns0:p>We developed an open source electronic version of the Self-Assessment Manikin (SAM) questionnaire <ns0:ref type='bibr' target='#b17'>(Lang, 1980)</ns0:ref>. This electronic version of the SAM questionnaire runs on a tablet device.The SAM questionnaire represents the values of the arousal scale and of the valence scale with a picture. In this version of the SAM questionnaire, each scale is composed of 9 values represented by 5 pictures and 4 inter-points between each of the 5 pictures (i.e., a value of the scale that is not represented by a picture).</ns0:p><ns0:p>The tablet application displays the scales in a vertical arrangement where the top-most picture represents the lowest level of the scale (e.g., lowest level of arousal), and the bottom-most picture represents the highest level of the scale (e.g., highest level of arousal). Each picture and each inter-point are associated with a numerical score. Numerical scores vary from 1 to 9. In the valence scale, 1 corresponds to the lowest level of valence (i.e., pleasure is minimal) and 9 corresponds to the highest level of valence (i.e., pleasure is maximal). In the arousal scale, 1 corresponds to the lowest level of arousal (i.e., excitement is minimal) and 9 corresponds to the highest level of arousal (i.e., excitement is maximal). Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows a screen-shot of the SAM questionnaire running on a tablet device.</ns0:p></ns0:div> <ns0:div><ns0:head>Psychophysiological Measure</ns0:head><ns0:p>Physiological responses can be used to study the human psychophysiological state (e.g., emotional state or cognitive state). Physiological responses are activated by the autonomic nervous system. The autonomic nervous system is divided into the sympathetic nervous system and the parasympathetic nervous system. The sympathetic nervous system is considered to be responsible for the activation of the fight-or-flight physiological responses (i.e., physiological responses in case of stress). The parasympathetic nervous system, on the other hand, is considered to be responsible for maintaining physiological responses to a normal activity (i.e., the physiological responses at rest).</ns0:p><ns0:p>The electrodermal activity (i.e., the skin's electrical activity) and the cardiovascular activity are two common physiological activities used in the literature to study the autonomic nervous system. 
In this research, we study our participants' electrodermal activity by monitoring their skin conductance level (SCL) and we study our participants' cardiovascular activity by monitoring their heart rate.</ns0:p><ns0:p>The SCL is a slow variation of the skin conductance over time and is measured in microsiemens (&#181;S). An increase of the SCL is only due to an increase of the sympathetic nervous system activity. It is, therefore, a measure of choice to study the human fight-or-flight response. SCL has also been correlated with the affective state's arousal <ns0:ref type='bibr' target='#b3'>(Boucsein, 2012)</ns0:ref>. The heart rate is the number of heart beats per unit of time. It is usually measured in beats per minute (BPM). Unlike the SCL, though, a variation of the heart rate cannot be unequivocally associated with a variation of the sympathetic nervous system only. Heart rate can vary due to either a variation of the sympathetic nervous system, a variation of the parasympathetic nervous system, or a combination of both <ns0:ref type='bibr' target='#b4'>(Cacioppo et al., 2007)</ns0:ref>. Heart rate activity is, therefore, more difficult to analyse and interpret than the SCL. In order to compare the physiological responses between our participants, we first recorded our participants' physiological responses at rest (i.e., the baseline); then we recorded our participants' physiological responses during the experiment. In our statistical analyses, we use the difference between our participants' physiological responses at rest and during the experiment.</ns0:p></ns0:div> <ns0:div><ns0:head>Equipment and Experimental Setup</ns0:head></ns0:div> <ns0:div><ns0:head>Physiological response acquisition</ns0:head><ns0:p>We monitored our participants' physiological responses with a PowerLab 26T (ADInstruments) data acquisition system augmented with a GSR Amp device. The PowerLab 26T was connected via USB to a laptop computer running Mac OSX Yosemite. We used the software LabChart 8 to record the physiological responses acquired by the PowerLab 26T data acquisition system. We used an infrared photoelectric sensor (i.e., a photoplethysmograph) to measure the blood volume pulse (BVP) of our participants (i.e., changes in the pulsatile blood flow). The blood volume pulse can be retrieved with the photoplethysmograph from the peripheral parts of the human body, such as the fingers. We can compute the heart rate from the blood volume pulse. Firstly, we calculate the inter-beat interval (i.e., the time in seconds between two peaks in the blood volume pulse). Then, we calculate the heart rate by dividing 60 by the inter-beat interval. For instance, if the inter-beat interval of an individual is 1 s, this individual's heart rate is 60 BPM. Fig. <ns0:ref type='figure' target='#fig_3'>4 (A)</ns0:ref> shows the blood volume pulse of a participant during a time window of 10 s. The photoplethysmograph was attached to the index finger of a participant's dominant hand. The photoplethysmograph was directly connected to the PowerLab 26T.</ns0:p>
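As an illustration of the heart-rate computation just described, the following Python sketch estimates beats per minute from a blood volume pulse trace by detecting peaks, taking the inter-beat intervals, and applying HR = 60 / IBI. It is our own sketch, not the authors' LabChart workflow; the sampling rate fs, the bvp array and the peak-detection settings are assumptions.

import numpy as np
from scipy.signal import find_peaks

def mean_heart_rate_bpm(bvp: np.ndarray, fs: float) -> float:
    """Mean heart rate (BPM) from a blood volume pulse trace sampled at fs Hz."""
    # Locate systolic peaks; the minimum peak distance of 0.33 s assumes a
    # heart rate below roughly 180 BPM.
    peaks, _ = find_peaks(bvp, distance=int(0.33 * fs))
    peak_times = peaks / fs            # peak times in seconds
    ibi = np.diff(peak_times)          # inter-beat intervals in seconds
    return float(np.mean(60.0 / ibi))  # HR = 60 / IBI, averaged over the window

def baseline_corrected(session_value: float, baseline_value: float) -> float:
    # As described in the text, the statistical analyses use the difference
    # between the response during a session and the response at rest.
    return session_value - baseline_value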
<ns0:p>To monitor the electrodermal activity of our participants, we used brightly polished stainless steel bipolar electrodes connected to the GSR Amp device. These bipolar electrodes were attached to the medial phalanges of the index and middle fingers of a participant's non-dominant hand. In order to monitor the skin conductance, the GSR Amp device applies a direct constant voltage between the bipolar electrodes. The constant voltage is small enough (i.e., 22 mV) to prevent the participants from feeling it.</ns0:p><ns0:p>As the voltage is known and constant (22 mV), the GSR Amp device can measure the current between the bipolar electrodes. When the current is known, the GSR Amp device can calculate the conductance of the skin by applying Ohm's law (the conductance is the current measured between the electrodes divided by the constant voltage applied by the GSR Amp device between the electrodes). Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> (B) shows the skin conductance of a participant during a time window of 10 s.</ns0:p></ns0:div> <ns0:div><ns0:head>Environment and Robot Behaviour</ns0:head><ns0:p>In all of the three sessions of our experimental scenario (i.e., Real Robots, Virtual Reality, Screen Simulation), we used a square environment of dimension 2 m &#215; 2 m. Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows the environment of each of the three sessions. At the beginning of each session, 20 robots are randomly placed in the environment.</ns0:p><ns0:p>When an experiment starts, the 20 robots perform a random walk with obstacle avoidance behaviour for a period of 60 s. Each robot executes the two following steps: i) it drives straight with a constant velocity of 10 cm/s, and ii) it changes its direction when it encounters either a robot or an obstacle in the direction of movement (i.e., it turns in place until the obstacle is no longer detected in the front part of its chassis).</ns0:p></ns0:div> <ns0:div><ns0:head>Robot Platform</ns0:head><ns0:p>The platform used in this study is the wheeled e-puck robot (see Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>) equipped with an extension board.</ns0:p><ns0:p>The e-puck robot is designed for educational and research purposes <ns0:ref type='bibr' target='#b21'>(Mondada et al., 2009)</ns0:ref>. The extended version of the e-puck robot is 13 cm high and has a diameter of 7 cm. In this study, we used only a limited subset of the sensors and actuators available on the e-puck robot: the proximity sensors and the wheel actuators. See <ns0:ref type='bibr' target='#b21'>Mondada et al. (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b11'>Guti&#233;rrez et al. (2008)</ns0:ref> for further details and for a complete list of the sensors and actuators available on the e-puck platform. We programmed the e-puck robots using the software infrastructure described in <ns0:ref type='bibr' target='#b10'>Garattoni et al. (2015)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Participants</ns0:head><ns0:p>We recruited 28 participants from the campus population of the Universit&#233; Libre de Bruxelles. All participants were between 18 and 29 years old, with an average age of 22.75 years (SD = 3.28). We considered current or prior cardiovascular problems that could act on the central nervous system as exclusion criteria (i.e., we excluded potential participants with cardiovascular problems).
Our participants received an informed consent form explaining that they were filmed during the experiment and that their physiological responses were being collected for research purpose only. At the end of the experiment, we offered a 7 C financial incentive for participation.</ns0:p></ns0:div> <ns0:div><ns0:head>Ethics Statement</ns0:head><ns0:p>Our participants gave their written informed consent. The experiment was approved by the Ethics Committee of the Faculty of Psychology, Universit&#233; Libre de Bruxelles (approval number: 061/2015).</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Procedure</ns0:head><ns0:p>We conducted our experiments in the robotic experiment room of the artificial intelligence laboratory at the Universit&#233; Libre de Bruxelles. Upon arrival, we explained to the participant that she was going to supervise, i.e., watch attentively, a swarm of robots with three different types of visualization interfaces (i.e., on a computer screen, in a virtual reality headset and in reality with real robots). We then showed to the participant the swarm of robots displayed in the three visualization interfaces. The participant was allowed to look at a computer screen displaying a top view of a swarm of robots, to wear the virtual reality headset and to look at the real robots. Once the participant was familiar with the three visualization interfaces, we presented and explained how to answer the electronic version of the SAM questionnaire.</ns0:p><ns0:p>Then, we invited the participant to read and sign the consent form. We then asked the participant to wash their hands in clear water (i.e., with no soap) and to remain seated on a chair placed in a corner of the environment used for the Real Robots session (see Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). We attached the participant to two physiological sensors (i.e., a pulse transducer for measuring the participant's cardiovascular activity and two finger electrodes for measuring the participant's electrodermal activity). We proceeded with a 5 minute rest period in order to collect the participant's physiological baseline (i.e., physiological responses at rest). After the 5 minute rest period, we started the first session. After each session, we asked the participant to answer the SAM questionnaire. Before starting the next session of the experiment, we collected the participant's baseline during an additional 3 minute rest period. This 3 minute rest period allowed the participant to get back to a normal physiological activity. During the whole duration of the experiment, the participant remained seated on the same chair. During the Real Robots session, the participant was immersed in the environment in which the robots were randomly moving. Prior to the Virtual Reality session, we attached the virtual reality headset to the participant. Prior to the Screen Simulation session, we placed a computer screen in front of the participant.</ns0:p><ns0:p>After the experiment ended, we detached the sensors from the participant and conducted a brief Manuscript to be reviewed Computer Science interview with her. During the interview, we explained to the participant the goal of the study. Then, we answered our participant's questions. We finished the experiment by thanking the participant and by giving the participant the 7 C incentive. 
The entire experiment's duration was 30 minutes per participant.</ns0:p></ns0:div> <ns0:div><ns0:head>DATA ANALYSIS AND RESULTS</ns0:head><ns0:p>Out of the 28 participants who took part in the experiment, we had to remove the physiological data (i.e., heart rate and skin conductance) of 5 participants due to sensor misplacement. We, however, kept the self-reported data (i.e., valence and arousal values reported by the SAM questionnaire) of these 5 participants. In the remainder of this section, therefore, we analyse the psychophysiological data of 23 participants (15 female and 8 male) and the self-reported data of 28 participants (17 female and 11 male). We analysed our data with the R software (R Core Team, 2015) by performing a repeated measures design analysis.</ns0:p><ns0:p>Because the data was not normally distributed, we did not use the repeated measures ANOVA test (as the test assumes a normal distribution). Rather, we used a non-parametric Friedman test to analyse both the psychophysiological data and the self-reported data (i.e., the SAM questionnaire). The Friedman test is a rank-based test that does not make any assumption on the distribution of the data. In our case, the Friedman test's null hypothesis states that the three sessions (Real Robots, Virtual Reality and Screen Simulation) have the same median. The alternative hypothesis states that at least two sessions do not have the same median. When the Friedman test is significant, we can reject the null hypothesis in favour of the alternative hypothesis. The alternative hypothesis, however, does not allow us to determine which sessions differ in their median.</ns0:p><ns0:p>In order to determine which sessions differ in their median, we proceeded with a pairwise comparison of the three sessions with a Wilcoxon rank-signed test. The Wilcoxon rank-signed test's null hypothesis states that the median difference between the paired values from two sessions (i.e., a value from one session paired to a value from another session) is equal to zero. The alternative hypothesis states that the median difference of the paired values is not equal to zero. When the Wilcoxon rank-signed test is significant, we can reject the null hypothesis in favour of the alternative hypothesis, and conclude that there is a significant difference between the two sessions.</ns0:p><ns0:p>Performing multiple pairwise comparisons (there are three pairwise comparisons between our three sessions) introduces the risk of increasing the Type I error, i.e., of declaring a test significant when it is not. In order to control the Type I error, we applied a Bonferroni-Holm correction to the p-values obtained by the Wilcoxon rank-signed test.</ns0:p><ns0:p>In addition to determining the effect of the reality gap on our participants, we also determined whether psychophysiological data and self-reported data were correlated (e.g., whether skin conductance is correlated with arousal, or whether arousal and valence are correlated). In order to determine this correlation, we performed a Spearman's rank-order correlation test.</ns0:p><ns0:p>In Table <ns0:ref type='table'>1</ns0:ref>, we summarise the results of the psychophysiological and self-reported data (i.e., median and Friedman's mean rank of heart rate, SCL, arousal and valence) in each session (i.e., Real Robots, Virtual Reality, Screen Simulation), as well as the inference statistics of the Friedman tests (i.e., p-values and &#967; 2 ).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>.
Descriptive statistics of the psychophysiological data and of the self-reported data. We report the median and the Friedman's mean rank (in parentheses) of the three sessions (Real Robots, Virtual Reality, Screen Simulation). We also report the inference statistics of the Friedman test (i.e., &#967; 2 and p).</ns0:p><ns0:p>The results of the Friedman test on the psychophysiological data do not show any main effect of the reality gap on our participants' heart rate (&#967; 2 (2) = 0.78, p = .67). The results show, however, a main effect of the reality gap on our participants' skin conductance level (&#967; 2 (2) = 15.2, p &lt; .001). A Wilcoxon rank-signed test was then used for the pairwise comparisons between sessions; the pairs of sessions showing a statistically significant difference are indicated in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>.</ns0:p><ns0:p>In addition to studying the effect of the reality gap on our participants, we investigated whether or not some of the dependent variables (i.e., heart rate, skin conductance, arousal and valence) were pair-wise correlated. In order to calculate a correlation between psychophysiological data and self-reported data (e.g., a correlation between skin conductance and arousal), we only took into account the self-reported data of the participants whose psychophysiological data had not been rejected (due to sensor misplacement).</ns0:p><ns0:p>For the correlation test between arousal and valence, we used the data points of all 28 participants. We did not find any correlation within each of the three sessions (i.e., there was no correlation for any pair-wise dependent variable within the Real Robots session, the Virtual Reality session or the Screen Simulation session).</ns0:p><ns0:p>We, therefore, investigated whether there were correlations when the data of each condition were pooled together (e.g., we aggregated skin conductance values from the three sessions). Regarding the correlation between psychophysiological data and self-reported data, we found a correlation between skin conductance and valence (&#961; = .42, p &lt; .001) and a weak correlation between skin conductance and arousal (&#961; = .253, p = .03). There was no correlation between heart rate and valence, nor between heart rate and arousal. Concerning the self-reported data, we found a correlation between arousal and valence (&#961; = .32, p = .002). We did not find any correlation between heart rate and skin conductance.</ns0:p><ns0:p>Finally, we also studied the gender effect (i.e., whether females and males differ in their results) and the session order effect (i.e., whether the participants become habituated to the experiment). We analysed the gender effect by splitting into two groups the males' and females' results of each dependent variable (i.e., heart rate, skin conductance, arousal and valence) for each condition (i.e., Screen Simulation, Virtual Reality, Real Robots). We compared these two groups with a Wilcoxon rank-sum test-the equivalent of the Wilcoxon rank-signed test for independent groups. We did not find any statistically significant difference between males and females in any condition, for any dependent variable. We studied the session order effect as follows. For each condition and for each dependent variable, we separated into three groups the results of the participants who encountered the session first, second or third, respectively.</ns0:p><ns0:p>We compared the three groups with a Kruskal-Wallis test-a non-parametric test similar to a Friedman test but for independent groups.
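Purely as an illustration of the analysis pipeline described above (Friedman omnibus test, pairwise Wilcoxon tests with a Bonferroni-Holm correction, and Spearman rank-order correlations), a minimal Python equivalent could look as follows. The study itself was analysed in R; this sketch is ours, it additionally assumes statsmodels is available, and real, vr and screen are hypothetical per-participant arrays of one baseline-corrected dependent variable.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, spearmanr
from statsmodels.stats.multitest import multipletests

def repeated_measures_analysis(real, vr, screen):
    # Omnibus non-parametric test across the three repeated-measures sessions.
    chi2, p_friedman = friedmanchisquare(real, vr, screen)

    # Pairwise follow-up comparisons, corrected with Bonferroni-Holm.
    pairs = {
        'real-vr': (real, vr),
        'real-screen': (real, screen),
        'vr-screen': (vr, screen),
    }
    raw_p = [wilcoxon(a, b).pvalue for a, b in pairs.values()]
    _, holm_p, _, _ = multipletests(raw_p, method='holm')
    return chi2, p_friedman, dict(zip(pairs, holm_p))

def pooled_correlation(x, y):
    # Spearman rank-order correlation between two pooled dependent variables,
    # e.g. skin conductance level and self-reported arousal.
    rho, p = spearmanr(x, y)
    return rho, p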
We did not find any statistically significant difference among the three groups in any session, for any dependent variable, suggesting that the session order had no significant effect on our participants.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION AND CONCLUSION</ns0:head><ns0:p>In this paper, we presented a study on the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. We had two hypotheses. The first hypothesis stated that humans interacting with a real (i.e., non simulated) robot swarm have stronger psychophysiological reactions than if they were interacting with a simulated robot swarm. The second hypothesis stated that humans interacting with a simulated robot swarm displayed in a virtual reality environment have stronger psychophysiological reactions than if they were interacting with a simulated robot swarm displayed on a computer screen.</ns0:p><ns0:p>Both the self-reported data (i.e., arousal and valence) and the psychophysiological data (i.e., skin conductance) show that the reality gap has an effect on the human psychophysiological reactions. Our participants had stronger psychophysiological reactions when they were confronted to a real swarm of robots than when they were confronted to a simulated robot swarm (in virtual reality and on a computer screen). These results confirm our first hypothesis.</ns0:p><ns0:p>Of course, it is not always possible for researchers to conduct a human-swarm interaction study with real robots, essentially because real robots are still very expensive for a research lab and real robot experiments are time consuming. It is, therefore, not realistic to expect human-swarm interaction researchers to conduct human-swarm interaction experiments with dozens or hundreds of real robots. For this reason, we decided to investigate the possibility of using virtual reality in order to mitigate the effect of the reality gap. To the best of our knowledge, virtual reality has yet never been used in the research field of human-swarm interaction and is little studied in social robotics <ns0:ref type='bibr' target='#b19'>(Li, 2015)</ns0:ref>. Only the self-reported arousal show that our participants had stronger reactions during simulation in virtual reality than during simulation on a computer screen. With these results, we can not strongly confirm our second hypothesis.</ns0:p><ns0:p>However, the results of the skin conductance and the self-reported valence, combined with the significant results of the arousal, both show a trend of our participants to have stronger psychophysiological responses in virtual reality than in front of a computer screen.</ns0:p></ns0:div> <ns0:div><ns0:head>11/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10113:1:0:NEW 28 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In this paper, we designed our experiment based on a purely passive interaction scenario. In a passive interaction scenario, human operators do not issue commands to a robot swarm. We motivated our choice of a passive interaction by the fact that an active interaction could influence the human psychophysiological state (making it difficult to separate the effect of the active interaction and the effect of the reality gap on our participants' psychophysiological state). 
However, now that we have shown the effect of the reality gap in a purely passive interaction scenario, future work should focus on this effect in an active interaction scenario in which human operators do issue commands to a robot swarm. For instance, we could use the results presented in this paper as a baseline and compare them with those of an active interaction scenario in which human operators have to guide a swarm in an environment.</ns0:p><ns0:p>In human-swarm interaction, as for any interactive system, it is fundamental to understand the psychological impact of the system on a human operator. To date, in human-swarm interaction research, such understanding is very limited, and worse is often based purely on the study of simulated systems.</ns0:p><ns0:p>In this study, we showed that performing a human-swarm interaction study with real robots, compared to simulated robots, significantly changes how humans psychophysiologically react. We, therefore, recommend to use as much as possible real robots for human-swarm interaction research. We also showed that in simulation, a swarm displayed in virtual reality tends to provoke stronger responses than a swarm displayed on a computer screen. These results, therefore, tend to show that if it is not possible for a researcher to use real robots, virtual reality is a better choice than simulation on a computer screen.</ns0:p><ns0:p>Even though more research should focus on this statement, we encourage researchers in human-swarm interaction to consider using virtual reality when it is not possible to use a swarm of real robots.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Example of an experiment. (A) A participant interacts with a swarm made up of 20 real robots. (B) A participant is attached to a virtual reality head set and interacts with a simulated swarm of 20 robots. (C) A participant interacts with a simulated swarm of 20 robots displayed on a computer screen. The participant shown in this figure is the first author of this paper and did not take part in the experiment. The pictures shown in this figure were taken for illustration purpose.</ns0:figDesc><ns0:graphic coords='3,141.73,230.31,413.57,89.63' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Robots and environments for each of the three sessions. (A) View of the real robots and environment. The view is displayed from the participant's perspective. (B) Top view of the robots and of the environment simulated on a computer and displayed on a screen. (C) View of the robots and of the environment simulated in virtual reality. The view is displayed from the participant's perspective.</ns0:figDesc><ns0:graphic coords='5,141.73,557.97,413.55,99.65' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Electronic version of the Self-Assessment Manikin questionnaire. On the left of the screen: the valence scale. The top-most picture corresponds to the lowest level of valence. The bottom-most picture corresponds to the highest level of valence. On the right of the screen: the arousal scale. The top-most picture corresponds to the lowest level of arousal. The bottom-most picture corresponds to the highest level of arousal. 
The pictures used in the application are taken from and available at http://www.pxlab.de (last access: April 2016).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Physiological measures. (A) The graph of a participant's blood volume pulse during 10 seconds. The BVP does not have a standard unit. The x-axis is the time in minutes since the beginning of the recording. The time between two peaks (depicted with two dots connected with a line on the picture) is called the inter-beat interval (IBI). The participant's heart rate (the number of beats per minute) is computed by dividing 60 by the inter-beat interval. In this example, the mean heart rate of the participant during these 10 s is of 87 BPM. (B) The graph of a participant's skin conductance during 10 s.The skin conductance's unit is the microsiemens (y-axis). The x-axis is the time in minutes since the beginning of the recording. The skin conductance is computed by measuring the current flowing between two electrodes and by dividing this current by a constant voltage applied between the electrodes. The skin conductance level of this participant during these 10 s is of 5.17 &#181;S.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. An e-puck robot used in our experiments. The proximity sensors are used to detect and avoid nearby robots. The wheel actuators are set to a speed of 10 cm/s.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:04:10113:1:0:NEW 28 Jul 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Boxplots showing the heart rate (A) values, skin conductance level (B), arousal values (C)and valence values (D) of all three sessions (Real Robots, Virtual Reality, Screen Simulation). The median value of each session is shown using the bold horizontal line in the box. Outliers are represented using dots. We also report on the results of the pairwise Wilcoxon rank-signed test by connecting the boxplots of the sessions showing pairwise statistical significance.</ns0:figDesc></ns0:figure> </ns0:body> "
"INVESTIGATING THE EFFECT OF THE REALITY GAP ON THE HUMAN PSYCHOPHYSIOLOGICAL STATE IN THE CONTEXT OF HUMAN - SWARM INTERACTION submitted by: PeerJ First round review Note: Modifications and new text in the revised paper are highlighted using red font. EDITOR Editor’s comment #1: Although all 4 reviewers liked the paper, and agreed that it was well written, reviewers 3 and 4 in particular made suggestions for revision. Please respond to their suggestions and revise accordingly. Authors’ answer comment #1: We kindly thank the editor for his time and advices. We have responded to each reviewers’ comment, and addressed them below and in the paper. When the paper has been edited based on a reviewer’s comment, we specified the section where modifications have been made. We now believe that the paper has reached the quality-level necessary for publication. Kind regards, Gaëtan Podevijn, on behalf of the authors. REVIEWER 1 COMMENTS : Reviewer comment #1: The paper shows an interesting topic related to human-being psychophysiological reactions toward activities which have a collaboration between human and machine devices. In reality, there is a big difference of results achieved from simulation environment and reality, especially in matters related to human. Because human has a heterogeneity reaction mechanism even in the same environment, the reactions are dependent on task quantity, under pressure or not, the order of sequence of tasks and so on. Authors indicate that virtual reality devices can be a solution contributes to reducing feeling differences in a comparison between simulation environments and reality. It plays an important role for other study or applicable fields regarding human psychophysiology, but it also would demand another measurement of the impact of virtual reality environment to the awareness of participants. I am interested in the research topic as well as its application, I suggest to accept the paper. Authors’ answer to comment #1: We thank reviewer 1 for reading and reviewing the paper. REVIEWER 2 COMMENTS : Reviewer comment #1: The paper presents an interesting method on reducing (eliminating) the reality gap on human interactions with robots. I think that it is very important for researchers on virtual (or augmented) reality communities. page 1 of 6 Also, the questions has been well designed, and the hypothetical testing has been well done. Authors’ answer to comment #1: We thank reviewer 2 for the time dedicated to the reading of the paper. REVIEWER 3 COMMENTS : Reviewer comment #1: This paper describes a set of experiments in which human subjects observe either a “flat screen” simulated robotic swarm, a robotic swarm rendered in a virtual environment, and a real robotic swarm. The aim of the study is to measure the reaction of the subjects to the three different types of swarm using their psychophysiological states, as well as questionnaires. From my perspective, the paper is extremely well written. The subject is introduced in a very clear way, with reference to relevant literature. My knowledge of this field (human-robot interaction) is limited, but by reading the literature review from this paper, I got a sufficiently clear idea of the state of the art. I can not say whether the authors missed to cite recent important papers. The paper is structured very well, with clearly stated hypotheses, very good description of methods, results, and conclusions. 
I really appreciated the justifications for the statistical tests used which significantly help the reader to digest the results. The figures are fine. Authors’ answer to comment #1: We thank reviewer 3 for the time dedicated to reading the paper and to providing valuable comments. We address reviewer’s 3 comments below and we refer to the sections in the paper where modifications have been made. Reviewer comment #2: I am not used to work with human subjects, so I have no experience with these type of methods. It seems to me that the experimental design is rigorous and solid enough to provide answers to the initial hypothesis. The factor gender is not balanced. Not sure if this matters, probably not, but adding few words on this would be better. As I have already said, methods, analysis, and results are very well explained. Authors’ answer to comment #2: It is true that the number of male participants and female participants is unbalanced. We did not, however, focus the research on a “gender effect” during humanswarm interaction (we admit, though, that there exists papers in human-robot interaction that focus on the gender effect). However, as reviewer 4 also suggested, we have analysed the gender effect in order to see if there was significant differences between male and female. We did not find any statistical differences though. We have added a statement about this at the end of the “Data analysis and results” section. Reviewer comment #3: It would be nice to say few words on why the interaction with a swarm should be different from the interaction with any other single robot system. If no difference should be made, then your results can be extended to any robotic system, and you could link your study to a broad literature. Authors’ answer to comment #3: In the reviewed version of the paper, we discussed in the “Related literature” section the main inherent differences that exist between an interaction with a single robot and with a robot swarm. In the revised version of the paper, we have made this point clearer in the “Related literature” section. Reviewer comment #4: One part that I found a bit week is about the motivations of the work. The paper, from the title, promise human-swarm interaction. For any expert in swarm robotics, the interest is focused on what the swarm does (or does not). In this work, the actions of the swarm comes after (in term of significance) the response of the subject, which ultimately does not “interact” with the swarm (the subject observes the swarm). I think this secondary role of the behaviour of the swarm, and the non interaction between human observer and swarm has to be made more explicit in the introduction. Otherwise some reader may interpret the “human-swarm interaction” in the title in a wrong way. page 2 of 6 Authors’ answer to comment #4: We deliberately kept the swarm behaviour passive, and as simple as possible, so as to avoid either aspects of the behaviour or aspects of the interaction affecting the human psychophysiological state—we needed to isolate changes in the psychophysiological state that were purely due to the difference in medium. This was already discussed in the “Introduction” section of the reviewed version of the paper. However we have now made this discussion more explicit. Reviewer comment #5: Point 2 [previous comment] has to be further discussed in the conclusions, when it is time to clarify the relevance of this study for human-swarm interaction. 
To what extent, an analysis of an observation-based scenario relates to more complex scenarios where the human subjects interact with the swarm? Please, expand this point in the conclusion. Authors’ answer to comment #5: Reviewer 4 has made a similar comment, and we agree with both reviewer 3 and 4 that it should be clear what can the differences between a “passive” and an “active” interaction be. In the conclusions of the revised version of the paper, we discuss the potential differences and future work on an “active interaction” scenario. Again, we thank reviewer 3 for his valuable comments. REVIEWER 4 COMMENTS : Reviewer comment #1: This paper describes an investigation that is timely and interesting. Indeed, the question of whether a simulation can be regarded as equivalent to reality in robotics research has been an object of debate for many years to date. And still one can find nowadays papers accepted in prestigious journals that validate their results in simulation only. This is especially relevant in situations in which the human-robot interaction plays a major role, and this so-called reality gap can make a substantial difference in the psychological effects on the interacting human. The paper addresses this issue in the context of a recent research field, namely human-swarm interaction. A field in which research in general, and most previous similar studies in particular, are conducted primarily in simulation. For all these reasons, the user study, as described in the paper, is welcome and its hypothesis are relevant and original. The paper is extremely well-written including all kind of details that make the experiments fully replicable. A strong point of the methodology is the use of both subjective self-reported measures based on questionnaires along with more objective psychophysiological measures (based on skin conductance and heart rate). The experimental procedure is in general correct as well as the statistical analysis of the data. The conclusions are, in principle, reasonably well supported by the experimental results. Authors’ answer to comment #1: We thank reviewer 4 for his time and for his valuable comments that helped us to improve the paper. We address reviewer 4’s comments below and we refer reviewer 4 to the sections of the paper where modifications have been made. Reviewer comment #2: The first one [concern] refers to the nature of the interaction itself in the experiments. The subjects were instructed to 'supervise' the swarm of robots, i.e. “watch attentively” for 60 s. Their behavior was a simple random walk with obstacle avoidance. This watching activity is far from any reciprocal action or influence and, consequently, it is questionable whether this can be considered as an interaction at all. In consequence the validity of their conclusions as pertaining to robotswarm _interaction_ is limited. Moreover, given the low-level of involvement of the subjects in the robot actions, the expected response of the sympathetic nervous system should be low since this situation will generate little stress in the human. The authors should have made a greater effort to come up with a proper interaction scenario with a greater implication of the participant. Authors’ answer to comment #2: Reviewer 3 has made a similar comment. We do agree that the swarm behaviour used in our study is not particularly interesting. 
However, our goal was to keep the swarm behaviour simple and the interaction passive in order to be able to isolate, as much as page 3 of 6 possible, the effect of the reality-gap on the human psychophysiological state (more complex swarm behaviours and advanced interaction interfaces could, by their own, have an effect on the human psychophysiological state, reducing our confidence on the effect of the reality-gap). We have now made this argument more explicit in the introduction of the revised version of the paper. Using this simple, passive swarm behaviour, we were already able to observe the effect of the realitygap (as our results show)—a significant psychophysiological response change between simulation and reality. In an active interaction scenario, it is possible that this psychophysiological response could be even stronger. However, adding complexity to the experimental setup would make drawing conclusions harder—careful controls would need to be in place to make sure that any psychophysiological responses were not due to artefacts of swarm behaviour, or aspects of the interaction, rather than to the reality-gap which is the core focus of the research. We have added a discussion in the conclusion about this possible future work. Reviewer comment #3: My second concern is a consequence of the first one and questions the relevance of the study to swarm robotics itself. Since the subjects are essentially watching, the fact that it is robots what they are watching could be psychologically irrelevant, to the extent that they could have been watching non-robotic swarms like a school of fish in a tank, a set of billiard balls moving randomly on a table, or even a complex mechanism with several moving parts with no resemblance to a swarm at all. Would in these cases the results have been the same due to the reality gap? The answer for me is unclear and additional experiments should have been conducted with this kind of conditions as control cases. Authors’ answer to comment #3: We set out to show that in the context of human-swarm interaction, the reality-gap has an impact on the psychophysiological response of a human participant. Our experiments and analysis support that conclusion. The reviewer may be correct in saying that it might be possible to observe similar results with other complex organic or inorganic systems. However, such results would have no bearing on the hypothesis we set out to confirm. In other words, it is possible that our experiments only addressed a subset of reality-gap induced psychophysiological reactions. However this is the subset we are interested in. Reviewer comment #4: Addressing these two questions would require additional experimental work, but I believe it would greatly enhance the merit and impact of research. In this case, the questionnaires should be improved. Asking directly the subjects to select directly their level of valence or arousal is not good practice and there exist much better questionnaires that provide these levels by asking a set of more natural questions about their interaction experience. Probably these questions would not make much sense in the first place since as an interaction experience it was rather poor. Authors’ answer to comment #4: Our study is the first of its kind, and we believe therefore already represents an important contribution. We agree that further experiments and different perspectives on the interaction experience could be valuable in the future—we have added a sentence to that effect in the conclusion. 
Reviewer comment #5: Still the paper could be accepted without additional experimental work if the following recommendations are incorporated. Authors’ answer to comment #5: We understand the reviewer’s point that additional experiments could improve the paper. However, we would like to note that our set of experiments is already rather comprehensive. Experiments with real robots are extremely time-consuming, even more when we have to use dedicated material for monitoring participants’ physiological activity. However, we made sure to address the following suggestions. Reviewer comment #6: The authors write that there is no social interaction between humans and page 4 of 6 robot swarms. This may be true but they should clarify the notion of human-swarm interaction, describe the nature of the interactions addressed in previous related studies, and justify why their watching action can still be valid as an interaction in this context.The authors should discuss the second point above and justify why, under the given circumstances, the study is relevant to swarm robotics and the fact that robots or even swarms were used is not accessory. Authors’ answer to comment #6: We understand reviewer 4’s concerns. However, as explained in our answer to comment #2, our supervision task was made purely passive in order to isolate as much as we could the effect of the reality-gap on our participants. Any other tasks requiring our participants to interact more “actively” with the swarm might have affected their psychophysiological state. Now that we have studied the effect of the reality-gap in the context of a purely passive interaction scenario, we believe that future work should focus on the effect of the reality-gap in the context of an active interaction scenario, as suggested by reviewer 4. In this future work, the results presented in this paper can be used as a baseline. This discussion is now included in our conclusions. Regarding the notion of human-swarm interaction, we already gave several examples and references of human-swarm interaction from different studies in the first paragraph of the related literature. Reviewer comment #7: Apparently the order of the three conditions was random for each participant (please clarify this more explicitly). It would be interesting to know whether there are significant differences among the same condition in different order; given the rather passive role of the subjects, the third time they watch the same situation a weaker reaction may result in any case due to habituation. Authors’ answer to comment #7: The order of the three sessions were indeed randomised among the participants. We now make this point clearer in the “Experimental scenario” section of the revised paper. We thank reviewer 4 for the idea of testing potential differences within the order of a specific condition. Based on the comments of the reviewer, we have conducted a new analysis on the order effect as follows. For each condition (i.e., screen simulation, virtual reality, real robots), we separated into three groups the results of our participants who encountered the condition first, second or third respectively. For instance, for the virtual reality condition, we put in a first group the results of the participants who encountered the virtual reality condition first, in a second group the results of the participants who encountered the virtual reality condition in second, and in a third group the results of the participants who encountered the virtual reality condition last. 
We followed this procedure for the four dependent variables (i.e., heart rate, SCL, arousal and valence). A significant difference among the three groups would imply that the order of the conditions had an effect on the results of our participants (and would potentially result, as stated by reviewer 4, in habituation). We compared these three groups with the Kruskall-Wallis test by ranks (the test compares three independent groups). The results of the Kruskall-Wallis test do not show any significant difference among the three groups, suggesting that the order of the conditions had no significant effect on the participants’ results. We have added in the “Data analysis and results” section a dedicated paragraph explaining the potential order effect and we report the absence of order effect suggested by the Kruskall-Wallis test. Reviewer comment #8: In experiments in social HRI results tend to be different depending of the gender of the participants. Having results separated and analyzed by gender would add value to the study. Authors’ answer to comment #8: We also thank reviewer 4 for this suggestion. Following the recommendation of the reviewer, we have performed an analysis on the effect of the gender by splitting the results of each condition into two groups “male” and “female”. We performed a Wilcoxon rank-sum test between the groups but we did not find any significant difference in any condition. We now shortly report these results in the “Data analysis and results” section. page 5 of 6 Reviewer comment #9: When reporting the literature about the reality gap in social HRI the term “more enjoyable” is repeated again and again. The authors should be more precise in their reporting and specify what psychological effect was measured in each case. Authors’ answer to comment #9: We clarified the measures used by the authors of the studies reported in the “Related literature” section. Reviewer comment #10: Since the reader is referred to Garattoni et al. (2015) for details about the software infrastructure and it is a non-archival publication, a web link should be provided. Authors’ answer to comment #10: We have added a web link to this reference. Once again, we would like to thank reviewer 4 for his valuable comments and advice. page 6 of 6 "
Here is a paper. Please give your review comments after reading it.
317
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The expeditious growth of the World Wide Web and the rampant flow of network traffic have resulted in a continuous increase of network security threats. Cyber attackers seek to exploit vulnerabilities in network architecture to steal valuable information or disrupt computer resources. Network Intrusion Detection System (NIDS) is used to effectively detect various attacks, thus providing timely protection to network resources from these attacks. To implement NIDS, stream of supervised and unsupervised machine learning approaches is applied to detect irregularities in network traffic and to address network security issues. Such NIDSs are trained using various datasets that include attack traces.</ns0:p><ns0:p>However, due to the advancement in modern-day attacks, these systems are unable to detect the emerging threats. Therefore, NIDS needs to be trained and developed with a modern comprehensive dataset which contains contemporary common and attack activities. This paper presents a framework in which different machine learning classification schemes are employed to detect various types of network attack categories.</ns0:p><ns0:p>Five machine learning algorithms: Random Forest, Decision Tree, Logistic Regression, K-Nearest Neighbors and Artificial Neural Networks, are used for attack detection. This study uses a dataset published by the University of New South Wales (UNSW-NB15), a relatively new dataset that contains a large amount of network traffic data with nine categories of network attacks. The results show that the classification models achieved the highest accuracy of 89.29% by applying the Random Forest algorithm. Further improvement in the accuracy of classification models is observed when Synthetic Minority Oversampling Technique (SMOTE) is applied to address the class imbalance problem. After applying the SMOTE, the Random Forest classifier showed an accuracy of 95.1% with 24 selected features from the Principal Component Analysis method.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In today's developed and interconnected world, the number of networks and data security breaches is increasing immensely. The reasons include the growth of network traffic and advances in technology that have led to the creation of newer types of attacks. As a result, the level of attack eventually increases <ns0:ref type='bibr' target='#b36'>(Mikhail, Fossaceca and Iammartino, 2019)</ns0:ref>.</ns0:p><ns0:p>There exist numerous network security attacks in today's era and to timely detect these attacks, several NIDSs are being developed and deployed. These NIDSs are widely used to protect digital resources against attacks and intrusions on networks <ns0:ref type='bibr' target='#b51'>(Zong, Chow and Susilo, 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Vidal and VidalMonge, 2019)</ns0:ref>. Intrusion detection systems use two different methods, that is, anomaly-based detection and signature-based detection <ns0:ref type='bibr' target='#b37'>(Moustafa, Creech and Slay, 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Li et al., 2019)</ns0:ref>. In an anomaly detection system, the network traffic is monitored and critical network characteristics are continuously tracked and analyzed <ns0:ref type='bibr' target='#b17'>(Habeeb et al., 2018)</ns0:ref>. It generates alerts if unusual or anomalous activity is detected. 
Whereas, in signature detection system, well-known patterns of attacks (known as signatures) are stored. The network packets are searched for those patterns <ns0:ref type='bibr' target='#b14'>(Faker and Dogdu, 2019)</ns0:ref>. If a pattern is accurately matched, the system generates an alert regarding that malicious activity <ns0:ref type='bibr' target='#b7'>(Azeez et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Although various attack detection mechanisms are available, they are still not effective enough to detect and analyze intrusions or malicious activities <ns0:ref type='bibr' target='#b2'>(Ahmed, Mahmood and Hu, 2016)</ns0:ref>. Typically, anomaly-based detection systems are developed using different machine learning techniques for predicting intrusions in networks. Research has been conducted in this regard using datasets such as KDDCUP99 <ns0:ref type='bibr' target='#b11'>(Choudhary and Kesswani, 2020)</ns0:ref>, KDD98 (W. <ns0:ref type='bibr' target='#b49'>Haider et al., 2017)</ns0:ref> and NSL-KDD7 <ns0:ref type='bibr' target='#b41'>(Rathore and Park, 2018)</ns0:ref>. However, due to the evolution of computer networks, these datasets are negatively affecting the results of NIDS <ns0:ref type='bibr' target='#b23'>(Khraisat et al., 2019)</ns0:ref>. One of the factors influencing the results is the availability of modern-day attack data, as these datasets were created almost two decades ago. Consequently, due to revolution of network traffic, the traffic data available in those datasets is different from the existing modern-day traffic <ns0:ref type='bibr' target='#b39'>(Moustafa and Slay, 2016)</ns0:ref>.</ns0:p><ns0:p>Improving the performance of existing NIDS requires modern and state-of-the-art datasets that are up-to-date. Therefore, a more efficient and more accurate evaluation of NIDS requires relatively new state-of-the-art datasets, including modern-day network's normal and attack activities. In this research, a framework has been developed for attack detection in a network using the UNSW-NB15 dataset <ns0:ref type='bibr' target='#b44'>(Tama and Rhee, 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Aissa, Guerroumi and Derhab, 2020)</ns0:ref>. This dataset is more recent and includes new attacks. Previously, KDDCUP99, KDD98, and NSL-KDD7 were widely used for NIDS benchmarked datasets. However, these older datasets are not as useful for today's network traffic <ns0:ref type='bibr' target='#b48'>(Viet et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Kumar et al., 2020)</ns0:ref>. However, few researchers have used the new data set, the UNSW-NB15 dataset, to detect an attack but their work has been limited <ns0:ref type='bibr' target='#b9'>(Bagui et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The model proposed in this study uses the UNSW-NB15 1 dataset and not only achieves better accuracy than previous research <ns0:ref type='bibr' target='#b20'>(Kasongo and Sun, 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kumar, Das and Sinha, 2021)</ns0:ref> but also effectively detects all categories of attack. The work has been done using Python. Initially, preprocessing techniques were used on 80,000 randomly selected instances from the UNSW-NB15 dataset to normalize data values. Later, feature engineering has been performed to select the relevant features. To improve the performance of the classifiers, the research solved the problem of class imbalance using SMOTE. 
Subsequently, Random Forest (RF), Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbors (KNN) and Artificial Neural Network (ANN) have been used for classification. Lastly, evaluation metrics were used to compare the performance of all classifiers.</ns0:p><ns0:p>Following are the major contributions of this research: &#61623; The dataset includes 45 features from which we identified 24 features that were most significant in identifying the attack. &#61623; The various pre-processing techniques have been collectively applied to the UNSW-NB15 dataset to make the data meaningful and informative for model training. &#61623; The class imbalance problem is addressed using when Synthetic Minority Oversampling Technique (SMOTE), thereby improving the detection rate of rare attacks. &#61623; We have provided a comparison of five machine learning algorithms for detecting network attack categories.</ns0:p><ns0:p>The rest of the paper is organized as follows: In Section 2, related work has been presented; Section 3 describes the methodology of the framework developed; Section 4 and 5 elaborates the discussion of the experimental results and the last Section 6 hence concludes the paper.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>As technology advances with modern techniques, computer networks are using the latest technologies to put it into practice, which has dramatically changed the level of attacks. Therefore, to target the present-day attack categories, UNSW-NB15 dataset has been created <ns0:ref type='bibr' target='#b38'>(Moustafa and Slay, 2015;</ns0:ref><ns0:ref type='bibr' target='#b48'>Viet et al., 2018)</ns0:ref> The research conducted using the UNSW-NB15 dataset is still not sufficient. However, some of the research work done using datasets is discussed below. Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> presents the summary and comparison of the discussed related work. <ns0:ref type='bibr' target='#b38'>Moustafa and Slay (Moustafa and Slay, 2015)</ns0:ref> developed a model that focused on the classification of attack families available in the UNSW-NB15 dataset. The study used the Association Rule Mining technique for feature selection. For classification, Expectation-Maximization (EM) algorithm and NB have been used. However, the accuracy of both algorithms for detecting rare attacks was not significantly higher as the Na&#239;ve Bayes had an accuracy of 78.06% and the accuracy of EM was 58.88%. <ns0:ref type='bibr' target='#b39'>Moustafa and Slay (Moustafa and Slay, 2016)</ns0:ref> further extended their work in 2016 and used correlation coefficient and gain ratio for feature selection in their work. Thereafter, five classification algorithms of NB, DT, ANN, LR, and EM were used on the UNSW-NB15. Results showed that 85% accuracy was achieved using DT with 15.75 False Alarm Rate (FAR). This research utilized a subset of UNSW-NB15; however, detection accuracy was not satisfactory.</ns0:p><ns0:p>For detecting botnets and their tracks, Koroniotis et al. <ns0:ref type='bibr' target='#b24'>(Koroniotis et al., 2017)</ns0:ref> presented a framework using machine learning techniques on a subset of the UNSW-NB15 dataset using network flow identifiers. Four classification algorithms were used i.e., Association Rule Mining (ARM), ANN, NB and DT. The results showed that the DT obtained the highest accuracy of 93.23% with a False Positive Rate (FPR) of 6.77%.</ns0:p><ns0:p>In 2019, Meftah et al. 
<ns0:ref type='bibr' target='#b35'>(Meftah, Rachidi and Assem, 2019)</ns0:ref> applied a two-stage anomaly-based NIDS approach to detect network attacks. The proposed method used LR, Gradient Boost Machine (GBM) and Support Vector Machine (SVM) with the Recursive Feature Elimination (RFE) and RF feature selection techniques on a complete UNSW-NB15 dataset. The results showed that the accuracy of multi-classifiers using DT was approximately 86.04%, respectively. <ns0:ref type='bibr' target='#b25'>Kumar et al. (Kumar et al., 2020)</ns0:ref> proposed an integrated calcification-based NIDS using DT models with a combination of clusters created using the k-mean algorithm and IG's feature selection technique. The research utilized only 22 features and five types of network attacks of UNSW-NB15 dataset, and the RTNITP18 dataset, which served as a test dataset to test the performance of the proposed model. The result showed an accuracy of 84.83% using the proposed model and 90.74% using the C5 model of DT. <ns0:ref type='bibr' target='#b20'>Kasongo and Sun (Kasongo and Sun, 2020)</ns0:ref> presented the NIDS approach using five classification algorithms of LR, KNN, ANN, DT and SVM in conjunction with the feature selection technique of the XGBoost algorithm. The research used the UNSW-NB15 dataset to apply binary and multiclass classification methods. Although binary classification performed well with an accuracy of 96.76% using the KNN classifier, multiclass classification didn't perform well as it achieved the highest accuracy of 82.66%. <ns0:ref type='bibr' target='#b27'>Kumar et al. (Kumar, Das and Sinha, 2021)</ns0:ref> proposed Unified Intrusion Detection System (UIDS) to detect normal traffic and four types of network attack categories by utilizing UNSW-NB15 dataset. Proposed UIDS model was designed with the set of rules (R) derived from various DT models including k-means clustering and IG's feature selection technique. In addition, various PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:09:65380:1:0:NEW 5 Nov 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science algorithms such as C5, Neural Network and SVM were also used to train the model. As a result, the proposed model improved with an accuracy of 88.92% over other approaches. However, other algorithms such as C5, Neural Network and SVM achieved an accuracy of 89.76%, 86.7% and 78.77%, respectively.</ns0:p><ns0:p>From a brief review of related literature as shown in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, it is evident that more work needs to be done to identify the features for the families of network attacks. There is a need to determine a generic model that provides better accuracy for all the attacks presented in the dataset.</ns0:p><ns0:p>This research provides a model that determines a common subset of features. Subsequently, by using that feature subset we would be able to identify all attacks, belonging to any category with consistent accuracy. It focuses on the implementation of a generic model that provides improved classification accuracy. Moreover, there is limited research that has used the class imbalance technique to balance instances of rare attacks present in the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Methodology</ns0:head><ns0:p>The framework utilizes a subset of the UNSW-NB15 dataset. It consists of two main steps. The first step involves data pre-processing, in which standardization and normalization of data are performed. 
Due to the high dimensional nature of the dataset, some features that are irrelevant or redundant may lead to reduce the accuracy of attack detection. To solve this problem, feature selection is used, in which only the relevant subset of features is selected to eliminate useless and noisy features from multidimensional datasets. Afterward, we have then addressed the class imbalance problem. In the next step, different classifiers are trained with relevant features to detect all categories of attack to get maximum accuracy. Finally, accuracy, precision, recall and F1-score performance measures are used to evaluate the model. The proposed methodology that represents the overall framework is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. Dataset UNSW-NB15 dataset has been created by researchers in 2015 focusing on advanced network intrusion techniques. It contains 2.5 million records with 49 features <ns0:ref type='bibr' target='#b13'>(Dahiya and Srivastava, 2018)</ns0:ref>. There are nine different classes of attack families with two label values i.e., normal or attack <ns0:ref type='bibr' target='#b22'>(Khan et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Khammassi and Krichen, 2020)</ns0:ref> in the UNSW-NB15 dataset <ns0:ref type='bibr' target='#b10'>(Benmessahel, Xie and Chellal, 2018)</ns0:ref>. These classes are described in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Pre-Processing</ns0:head><ns0:p>This phase involves the following steps: data standardization and data normalization.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; Data Standardization</ns0:head><ns0:p>As there were features with different ranges of values in the dataset. Initially, we performed data standardization to convert the data from normal distribution into standard normal distribution. Therefore, after rescaling, a mean value of an attribute is equal to 0 and the resulting distribution is equal to the standard deviation. The formula to calculate a standard score (z-score) is:</ns0:p><ns0:formula xml:id='formula_0'>&#119911; = ( &#119909; -&#956;) &#963;</ns0:formula><ns0:p>Where x is the data sample, &#956; is the mean and &#963; is the standard deviation <ns0:ref type='bibr' target='#b50'>(Xiao et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; Data Normalization</ns0:head><ns0:p>In data normalization, the value of each continuous attribute is scaled between 0 and 1 such that the result of attributes does not dominate each other <ns0:ref type='bibr' target='#b16'>(Gupta et al., 2016)</ns0:ref>. In this research, the normalizer class of Python has been used. This class enables the normalization of a particular dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>Feature selection is a technique that is used to select features that mostly correlate and contribute to the target variable of the dataset <ns0:ref type='bibr' target='#b5'>(Aljawarneh, Aldwairi and Yassein, 2017)</ns0:ref>. In this research, feature selection is done using Correlation Attribute Evaluation (CA), Information Gain (IG) and Principal Component Analysis (PCA). CA measures the relationship between each feature with the target variable and select only those relative features that have moderately higher positive or negative values, i.e., closer to 1 or -1 <ns0:ref type='bibr' target='#b43'>(Sugianela and Ahmad, 2020)</ns0:ref>. 
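As a concrete illustration of the standardization, normalization and correlation-based filtering steps described above, the following is a minimal sketch assuming pandas and scikit-learn; the file name, the target column name and the 0.3 correlation cut-off are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (assumed libraries: pandas, scikit-learn) of the pre-processing
# and Correlation Attribute Evaluation steps; column names and the 0.3 threshold
# are illustrative, not taken from the paper.
import pandas as pd
from sklearn.preprocessing import StandardScaler, Normalizer

df = pd.read_csv("UNSW_NB15_training-set.csv")   # hypothetical file name
numeric_cols = df.select_dtypes(include="number").columns.drop("label", errors="ignore")

# z-score standardization: z = (x - mu) / sigma for every numeric attribute
df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])

# scaling with scikit-learn's Normalizer class, mirroring the normalization step above
df[numeric_cols] = Normalizer().fit_transform(df[numeric_cols])

# Correlation Attribute Evaluation: keep features whose absolute correlation
# with the target is comparatively close to 1 (here, above an assumed 0.3 cut-off)
corr = df[numeric_cols].corrwith(df["label"]).abs()
selected = corr[corr > 0.3].sort_values(ascending=False).index.tolist()
print(len(selected), "features retained:", selected)
```

The IG- and PCA-based techniques discussed next would take the place of this last correlation filter in the same pipeline.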
While IG feature selection technique is used to determine relevant features and minimizing noise caused by unrelated features. These relevant features are calculated from the entropy matrix which measures the uncertainty of the dataset <ns0:ref type='bibr' target='#b28'>(Kurniabudi et al., 2020)</ns0:ref>. Through Principal Component Analysis, the size of large datasets is reduced by retaining the relevant features that depend on the target class <ns0:ref type='bibr' target='#b25'>(Kumar, Glisson and Benton, 2020)</ns0:ref>.</ns0:p><ns0:p>The above-mentioned feature selection techniques help to train the model correctly with only the relevant features that accurately predict the target class.</ns0:p></ns0:div> <ns0:div><ns0:head>Class Imbalance</ns0:head><ns0:p>The UNSW-NB15 dataset is highly imbalanced not only because the number of normal traffic instances is much higher than different attack categories, but also because the different categories of attack instances are not equal in distribution. This problem is known as 'Class Imbalance'. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>classes in the dataset <ns0:ref type='bibr' target='#b30'>(Laureano, Sison and Medina, 2019)</ns0:ref>. Table <ns0:ref type='table'>4</ns0:ref> shows the instance percentages in each class after applying SMOTE.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Algorithms</ns0:head><ns0:p>Five classification algorithms, that is, RF, DT, LR, KNN and ANN were employed to train the model.</ns0:p><ns0:p>&#61623; Random Forest Random Forest is an ensemble classifier that is used for improving classification results. It comprises multiple Decision Trees. In comparison with other classifiers, RF provides lower classification errors. Randomization is applied for the selection of the best nodes for splitting when creating separate trees in RF <ns0:ref type='bibr' target='#b19'>(Jiang et al., 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; Decision Tree</ns0:head><ns0:p>In the Decision Tree algorithm, the attributes are tested on internal nodes, the outcomes of the tests are represented by branches, and leaf nodes hold labels of the classes <ns0:ref type='bibr' target='#b0'>(Afraei, Shahriar and Madani, 2019)</ns0:ref>. Attribute selection methods are used for identifying nodes. Those selected attributes minimize the information that is required for tuple classification in the resulted partition. Hence, reflecting the minimum uncertainty or impurity in those partitions. Therefore, minimizing the projected number of tests required for tuple classification. In this research, ID3 algorithms utilize entropy class to determine which attributes should be queried on, at every node of those decision trees.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; Logistic Regression</ns0:head><ns0:p>Logistic Regression is a probabilistic classification model. It casts the problem into a generalized linear regression form. It has a sigmoid curve. The equation of the sigmoid function or logistic function is:</ns0:p><ns0:formula xml:id='formula_1'>&#119878;(&#119909;) = &#119890; &#119909; 1 + &#119890; &#119909;</ns0:formula><ns0:p>This function is used for mapping values to probabilities. 
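As a quick numerical illustration of this squashing behaviour (plain NumPy, with arbitrary example inputs):

```python
# The logistic (sigmoid) function S(x) = e^x / (1 + e^x), evaluated at a few
# arbitrary points to show how real values are mapped into the interval (0, 1).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # an equivalent form of e^x / (1 + e^x)

for xi in (-6.0, -1.0, 0.0, 1.0, 6.0):
    print(f"S({xi:+.1f}) = {sigmoid(xi):.4f}")
# S(-6.0) = 0.0025, S(-1.0) = 0.2689, S(+0.0) = 0.5000, S(+1.0) = 0.7311, S(+6.0) = 0.9975
```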
It works by mapping real values to other values between 0 and 1 <ns0:ref type='bibr' target='#b29'>(Kyurkchiev and Markov, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; K-Nearest Neighbors</ns0:head><ns0:p>In K-Nearest Neighbors, a new data point is attached with the data points in the training set and based on that attachment, a value is assigned to that new data point. This uses feature similarity for prediction. In KNN Euclidean, Manhattan or Hamming distance are used for calculating the distance between a test data and each record of training data <ns0:ref type='bibr' target='#b18'>(Jain, Jain and Vishwakarma, 2020)</ns0:ref>. Afterward, according to the value of distance, the rows are sorted. From those rows, K rows from the top are selected. Based on the most frequent classes of these rows classes to the test points are assigned.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623; Artificial Neural Network</ns0:head><ns0:p>In the Artificial Neural Network algorithm, there are three layers that consist of computational units called neurons. These layers are input, output and hidden layers. The number of neurons in these layers depends on the features of the dataset and classes which have to be detected and chosen with different techniques. Different types of activation functions are used in the ANN algorithm for calculating the weighted sum of the connections between neurons. This algorithm has biases in the hidden layer and an output layer which are adjusted to reduce errors and improve accuracy in training and testing the model <ns0:ref type='bibr' target='#b6'>(Andropov et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>A confusion matrix is used for the comparison of the performance of machine learning algorithms. This matrix is used for the creation of different metrics by the combination of the values of True Negative (TN), True Positive (TP), False Negative (FN) and False Positive (FP) <ns0:ref type='bibr' target='#b45'>(Tripathy, Agrawal and Rath, 2016)</ns0:ref>. Below are some of the performance measures to evaluate models by the use of the confusion matrix.</ns0:p><ns0:p>Accuracy shows the correctness or closeness of the approximated value to the actual or true value of the model which means a portion of the total samples that are classified correctly <ns0:ref type='bibr' target='#b33'>(Lin, Ye and Xu, 2019)</ns0:ref>. The following formula is used to calculate the accuracy of the model:</ns0:p><ns0:formula xml:id='formula_2'>&#119860;&#119888;&#119888;&#119906;&#119903;&#119886;&#119888;&#119910; = &#119879;&#119875; + &#119879;&#119873; &#119879;&#119875; + &#119879;&#119873; + &#119865;&#119873; + &#119865;&#119875;</ns0:formula><ns0:p>Precision shows which portion of relevant instances is actually positive among the selected instances <ns0:ref type='bibr' target='#b42'>(Roy and Cheung, 2018)</ns0:ref>. The following formula is used to calculate precision:</ns0:p><ns0:formula xml:id='formula_3'>&#119875;&#119903;&#119890;&#119888;&#119894;&#119904;&#119894;&#119900;&#119899; = &#119879;&#119875; &#119879;&#119875; + &#119865;&#119875;</ns0:formula><ns0:p>Recall or True Positive Rate (TPR) calculates the fraction of actual positives that are correctly identified <ns0:ref type='bibr' target='#b34'>(Ludwig, 2017)</ns0:ref>. 
The formula used to find recall is:</ns0:p><ns0:formula xml:id='formula_4'>&#119877;&#119890;&#119888;&#119886;&#119897;&#119897; = &#119879;&#119875; &#119879;&#119875; + &#119865;&#119873;</ns0:formula><ns0:p>F1-score is interpreted as the harmonic mean of precision and recall means it combines the weighted average of precision and recall <ns0:ref type='bibr' target='#b40'>(Niyaz et al., 2016)</ns0:ref>. The following formula is used to calculate F1-score: </ns0:p></ns0:div> <ns0:div><ns0:head>Experiment and Result Analysis</ns0:head><ns0:p>Following the methodology depicted in Fig. <ns0:ref type='figure'>1</ns0:ref>, experimental setup is established. In this research, a sample of 80000 instances is randomly selected from the UNSW-NB15 dataset. Initially, data standardization and normalization have been performed to rescale data values of the dataset and then three feature selection techniques are applied to select the most relevant features. Afterward, the class imbalance problem is resolved using SMOTE. Lastly, five classification algorithms i.e., RF, DT, LR, KNN and ANN are used to classify between the attack categories and normal traffic. The following 35 features are selected by applying <ns0:ref type='bibr'>IG: sttl, ct_state_ttl, ct_flw_http_mthd, sbytes, id, smean, sload, dur, sinpkt, rate, proto, ct_dst_src_ltm, service, dbytes, sjit, ct_srv_dst, dload, dinpkt, dmean, ct_srv_src, synack, tcprtt, ct_dst_sport_ltm, djit, ct_src_dport_ltm, dtcpb, stcpb, spkts , dloss , ct_dst_ltm, ackdat, label, dpkts, ct_src_ltm, sloss.</ns0:ref> After applying CA method, 24 features have been achieved from the set of 49 features: id, ct_dst_sport_ltm, ct_dst_src_ltm, ct_src_dport_ltm, sttl, ct_srv_dst, ct_srv_src, ct_dst_ltm, ct_src_ltm, ct_state_ttl, state, swin, dwin, proto, service, rate, dttl, stcpb, dtcpb, dmean, dload, tcprtt, ackdat, synack.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Analysis of Classification Models without Feature Selection</ns0:head><ns0:p>By applying PCA, 151 subsets from the set of 49 features were resulted, out of which 10 subsets with 80% results were selected. After tremendous analysis and evaluation of these 10 subsets, 24 features have been extracted. Afterward, five classifiers were trained by using these features: id, dur, dwin, proto, djit, swin, smean, state, service, ct_src_dport_ltm, dbytes, ct_dst_ltm, PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65380:1:0:NEW 5 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science ct_dst_sport_ltm, ct_src_ltm, dloss, ct_flw_http_mthd, ct_srv_dst, dpkts, sttl, dmean, spkts, sbytes, sloss and sinpkt.</ns0:p><ns0:p>After training the classifiers with the above features, the results showing in Table <ns0:ref type='table'>6</ns0:ref> were obtained.</ns0:p><ns0:p>It is observed using IG technique, that the RF classifier achieved the highest accuracy of 89.5% approx. with precision rate (76.8%), recall (72.3%) and F1-score (73.7%). In contrast to other classifiers, LR and KNN didn't perform well with IG as their recall and F1-score have below 50% scores. There is no much difference between the accuracy of RF and DT classifiers as both give almost the same accuracy, recall and F1-score measures using IG technique. The only difference is in the precision rate as RF achieved 76.8% and DT scored 69.6% precision value.</ns0:p><ns0:p>It is observed that the accuracy of all the classifiers decreased when the model is trained using the CA technique. 
RF classifier achieved the highest accuracy of 86.3% but with low precision, recall and F1-score measures. The accuracy of DT and ANN classifiers are approximately the same as the RF classifier with a minor difference of 2% to 5%. However, ANN classifier has very lowperformance measures as compared to RF and DT. Also, LR and KNN have the lowest accuracy measures with poor performance metrics.</ns0:p><ns0:p>It is observed using PCA feature selection technique, that RF classifier obtained the highest accuracy of 89.3% with precision (77.3%), recall (70.8%) and F1-score (73.1%) rates. All the classifiers achieved the accuracy in between 80% to 89% but with low performance measures as compared to IG feature selection technique. LR recorded the lowest recall rate with 40.6% and F1score with 40%.</ns0:p><ns0:p>After evaluation of the performance of three feature selection methods, it was observed that the feature selection technique of IG and PCA performed well as compared to CA. RF and DT classifiers approximately achieved the same accuracy between 88% to 89% when trained with IG and PCA. However, for precision, recall and F1-score measures, these classifiers showed average scores. Therefore, it is concluded that, no major changes have been observed in the results after applying feature selection techniques as classifiers achieved almost same accuracy before feature selection.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Analysis of Classification Models by handling Imbalanced Data</ns0:head><ns0:p>To handle imbalanced data, SMOTE technique has been applied in this research to adjust the class distribution of dataset and increase the instances of minority classes of those network attacks that has lower instances. After handling imbalance data, the results showing in Table <ns0:ref type='table'>7</ns0:ref> were obtained.</ns0:p><ns0:p>By using IG feature selection technique after applying SMOTE, it is observed that the RF classifier achieved highest accuracy of 95.1% with highest precision rate (94.8%), recall (95.7%) and F1-score (95.1%). Also, the accuracy of DT is 94.7%, almost nearest from the RF. The accuracy of both algorithms increased after handling imbalanced classes i.e., from 89.5% to 95.0% in RF and 88.5% to 94.5% in DT. Whereas, after applying SMOTE, LR and ANN didn't perform well as their accuracies were decreased from 82.2% to 69.4% in LR and 85.7% to 77.3% in ANN using IG method. The accuracy of KNN is almost the same using all three feature selection techniques but with good precision, recall and F1-score measures.</ns0:p><ns0:p>By using CA feature selection technique after applying SMOTE, it is noticed that RF and DT classifiers achieved highest accuracy in between 92.6% to 93.5% with above 90% precision, recall and F1-score measures. The accuracy of both the algorithms increased after applying SMOTE. Also, a minor change occurred in KNN as their accuracy is improved from 76.8% to 78.4% after handing imbalanced classes. However, when comes to LR and ANN, both algorithms did not perform well with class balance as their accuracies have been decreased i.e., in LR, from 74.5% to 62.0% and in ANN, from 80.3% to 71.7%.</ns0:p><ns0:p>By using PCA after applying SMOTE, it is noticed that the RF classifier achieved the highest accuracy of 95.1% with a precision rate of 94.8%, recall of 95.7% and F1-score of 95.1%. DT classifier achieved the accuracy of 94.7%, which is almost nearest to the accuracy of RF. 
After applying SMOTE, there is no change in the results of KNN using PCA method. However, the accuracy of LR and ANN decreased from 80.4% to 68.2% in LR and 85.2% to 77.6% in ANN but with increased precision, recall and F1-score measures.</ns0:p></ns0:div> <ns0:div><ns0:head>Overall Performance Evaluation of Classification Models after handling Class Imbalance</ns0:head><ns0:p>After handling class balancing by using SMOTE, it is concluded that RF classifier performed well with good results up to 95.1% by using PCA feature selection technique. Also, it is noticed that class balancing did not impact on LR and ANN classifiers as their accuracy decreased after handling minority classes.</ns0:p><ns0:p>&#61623; Confusion Metrics of best performed Classifier: Random Forest After analysis of the five classification models, it is observed that RF scheme provided the highest accuracy. On the basis of which, the confusion matrix of RF classification model is analyzed to observe the attack prediction accuracy of the nine categories of attacks separately.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>2</ns0:ref>, it is depicted that all the normal traffic instances were identified correctly by RF (i.e., it had 100% accuracy). In attack categories, all the instances of Backdoor, Shellcode and Worms were also identified correctly showing 100 prediction accuracy. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The research proposed a framework that predicts a variety of network attack categories using supervised machine learning algorithms. The dataset used in this study is the UNSW-NB15 dataset, a relatively new and containing a large amount of network traffic data, with nine types of network attack categories. </ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper presents a framework for network intrusion detection. The performance of the proposed framework has been analyzed and evaluated on the UNSW-NB15 dataset. The proposed framework uses different pre-processing techniques that includes data standardization and normalization, feature selection techniques and class balancing methods. The usability of the selected features along with using data standardization and normalization techniques is analyzed by applying them on five different classification models. The results showed that the features selected by PCA contributed much to improve accuracy than other methods. For improving the accuracy of the classification models, the class imbalance problem is also addressed which increased the framework performance with high margin. In can be concluded on the basis of evaluation results that both RF and DT classifiers performed well over the UNSW-NB15 dataset in terms of accuracy, precision, recall, and F1-score metrics. It can also be concluded that major issue in UNSW-NB15 dataset is not only the presence of highly correlated features but also the class imbalance problem of the dataset. Therefore, we used a novel combination of different pre-processing techniques in order to resolve all the underlying issues of the dataset and developed a fast and efficient network security intrusion detection system. </ns0:p></ns0:div><ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>depicts the distribution of nine categories of attack and normal instances in the training dataset. 
The attack categories such as Analysis, Backdoor, Shellcode and Worms have very few</ns0:figDesc><ns0:table /><ns0:note>instances. This highly imbalanced nature of the dataset causes problems in training machine learning algorithms for accurately predicting cyber-attacks. To address the class imbalance issue, this research uses SMOTE. SMOTE synthesizes instances of minority classes to balance all the classes in the dataset.</ns0:note></ns0:figure>
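The following is a minimal, self-contained sketch of the balancing, classification and evaluation steps described in this section, assuming scikit-learn and the imbalanced-learn implementation of SMOTE. The synthetic data, the class proportions, the number of trees and the weighted metric averaging are illustrative assumptions and do not reproduce the paper's setup or results.

```python
# Sketch of the balance-then-classify pipeline: SMOTE on the training split,
# Random Forest classification, and accuracy/precision/recall/F1 evaluation.
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic, heavily imbalanced multi-class data standing in for the
# preprocessed UNSW-NB15 sample (assumed proportions, not the real ones).
X, y = make_classification(n_samples=8000, n_features=24, n_informative=12,
                           n_classes=4, weights=[0.80, 0.12, 0.06, 0.02],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
print("before SMOTE:", Counter(y_train))

# SMOTE synthesizes new minority-class samples on the training split only
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
print("after SMOTE: ", Counter(y_bal))

clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_bal, y_bal)
y_pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="weighted"))
print("recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1-score :", f1_score(y_test, y_pred, average="weighted"))
```

Resampling only the training split keeps synthetic samples out of the held-out evaluation data; weighted averaging is one possible choice for the multi-class metrics, since the paper does not state which averaging it used.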
We addressed the class imbalance problem by applying SMOTE that improved the performance of the classifiers and achieved good results.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 : Summary of Existing Studies related to Network Attack Categories</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65380:1:0:NEW 5 Nov 2021)</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Computer Science</ns0:cell><ns0:cell cols='2'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Algorithms</ns0:cell><ns0:cell>Accuracy/</ns0:cell><ns0:cell>Limitations</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(complete</ns0:cell><ns0:cell /><ns0:cell>FAR/FPR</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>/Partial)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Moustafa</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Na&#239;ve Bayes and</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>This research is</ns0:cell></ns0:row><ns0:row><ns0:cell>and Slay,</ns0:cell><ns0:cell>(Partial)</ns0:cell><ns0:cell>EM Algorithm</ns0:cell><ns0:cell>Na&#239;ve Bayes</ns0:cell><ns0:cell>determining only five</ns0:cell></ns0:row><ns0:row><ns0:cell>2015</ns0:cell><ns0:cell>KDD99</ns0:cell><ns0:cell /><ns0:cell>-37.5%</ns0:cell><ns0:cell>network attack</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell>EM</ns0:cell><ns0:cell>categories. The problem</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Algorithm:</ns0:cell><ns0:cell>of class imbalance has</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>75.80%</ns0:cell><ns0:cell>not been resolved. 
As a</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>FPR: 22.08</ns0:cell><ns0:cell>result, the algorithms are</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>not performing well.</ns0:cell></ns0:row><ns0:row><ns0:cell>Moustafa</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Na&#239;ve Bayes,</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>In this study, data pre-</ns0:cell></ns0:row><ns0:row><ns0:cell>and Slay,</ns0:cell><ns0:cell>(Partial)</ns0:cell><ns0:cell>Decision Tree,</ns0:cell><ns0:cell>between</ns0:cell><ns0:cell>processing techniques</ns0:cell></ns0:row><ns0:row><ns0:cell>2016</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Artificial Neural</ns0:cell><ns0:cell>78.47% to</ns0:cell><ns0:cell>have not been</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Network, Logistic</ns0:cell><ns0:cell>85.56%</ns0:cell><ns0:cell>implemented and the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Regression, and</ns0:cell><ns0:cell>FAR:</ns0:cell><ns0:cell>issue of class imbalance</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Expectation-</ns0:cell><ns0:cell>between</ns0:cell><ns0:cell>has not been resolved.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Maximisation</ns0:cell><ns0:cell>15.75% to</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>23.79%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Koroniotis</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Naive Bayes,</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>This work didn't solve</ns0:cell></ns0:row><ns0:row><ns0:cell>et al., 2017</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Decision Tree,</ns0:cell><ns0:cell>between</ns0:cell><ns0:cell>the problem of class</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Association Rule</ns0:cell><ns0:cell>63.97% to</ns0:cell><ns0:cell>imbalance. 
Hence, the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Mining (ARM) and</ns0:cell><ns0:cell>93.23%</ns0:cell><ns0:cell>algorithms did not</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Artificial Neural</ns0:cell><ns0:cell>FPR:</ns0:cell><ns0:cell>perform well to detect</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Network</ns0:cell><ns0:cell>between</ns0:cell><ns0:cell>some network attacks.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>6.77% to</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>36.03%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Meftah,</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Logistic</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>This research didn't</ns0:cell></ns0:row><ns0:row><ns0:cell>Rachidi</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Regression,</ns0:cell><ns0:cell>achieving a</ns0:cell><ns0:cell>address the class</ns0:cell></ns0:row><ns0:row><ns0:cell>and Assem,</ns0:cell><ns0:cell /><ns0:cell>Gradient Boost</ns0:cell><ns0:cell>multi-</ns0:cell><ns0:cell>imbalance problem.</ns0:cell></ns0:row><ns0:row><ns0:cell>2019</ns0:cell><ns0:cell /><ns0:cell>Machine, and</ns0:cell><ns0:cell>classification</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Support Vector</ns0:cell><ns0:cell>accuracy of</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Machine</ns0:cell><ns0:cell>86.04%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kumar et</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Decision Tree</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>This work has predicted</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2020</ns0:cell><ns0:cell>(Partial) and</ns0:cell><ns0:cell>Models (C5,</ns0:cell><ns0:cell>84.83%</ns0:cell><ns0:cell>only 4 out of 9 categories</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RTNITP18</ns0:cell><ns0:cell>CHAID, CART,</ns0:cell><ns0:cell>using</ns0:cell><ns0:cell>of the UNSW-NB15</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell>QUEST)</ns0:cell><ns0:cell>proposed</ns0:cell><ns0:cell>dataset. 
Also, the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model and</ns0:cell><ns0:cell>problem of class</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>90.74%</ns0:cell><ns0:cell>imbalance has not been</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>using C5</ns0:cell><ns0:cell>solved in this study.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kasongo</ns0:cell><ns0:cell>UNSW-NB15</ns0:cell><ns0:cell>Logistic regression,</ns0:cell><ns0:cell>Accuracy:</ns0:cell><ns0:cell>The problem of class</ns0:cell></ns0:row><ns0:row><ns0:cell>and Sun,</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>K-Nearest</ns0:cell><ns0:cell>between</ns0:cell><ns0:cell>imbalance has not been</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 : Class Distribution in Dataset Class Distribution Instance count (%)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>32.10</ns0:cell></ns0:row><ns0:row><ns0:cell>Generic</ns0:cell><ns0:cell>22.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Exploits</ns0:cell><ns0:cell>19.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Fuzzers</ns0:cell><ns0:cell>10.32</ns0:cell></ns0:row><ns0:row><ns0:cell>DoS</ns0:cell><ns0:cell>7.08</ns0:cell></ns0:row><ns0:row><ns0:cell>Reconnaissance</ns0:cell><ns0:cell>5.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Analysis</ns0:cell><ns0:cell>1.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Backdoor</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell>Shellcode</ns0:cell><ns0:cell>0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Worms</ns0:cell><ns0:cell>0.06</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='1'>https://research.unsw.edu.au/projects/unsw-nb15-dataset PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65380:1:0:NEW 5 Nov 2021)</ns0:note> </ns0:body> "
" Jinnah University for Women 5C, Nazimabad, Karachi – 74600, Pakistan Tel: (92-21) 36620857-59 https://www.juw.edu.pk/ hafizaanisaahmed@gmail.com November 6, 2021 Dear Editors, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We would like to thank the editor and the reviewers for their time and invaluable comments. The reviewers’ insightful comments were helpful in improving the quality of paper. We have carefully addressed all the comments and incorporated the changes in the manuscript. In particular all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories. We believe that the manuscript is now suitable for publication in PeerJ. Best regards, Hafiza Anisa Ahmed Department of Computer Science and Software Engineering On behalf of all authors Reviewer 1 (Anonymous) Basic reporting 1. The author claims in lines 70-71 that the proposed model performs better than previous related research, but there is no specific explanation and comparison in the paper, which is hard to convince. I suggest that the experiment is not just to compare the performance differences of existing Methodologies, but to compare the superiority of the detection framework over the previous research when facing the same dataset. The model proposed in this study uses the UNSW-NB15 dataset and not only achieves better accuracy than previous research but also effectively detects all categories of attack. The comparison of related work is given in Table 1 that highlights the limitations of existing work. In addition, the ‘Discussion Section’ has also been included in the manuscript comparing our work with existing studies. 2. Lines 72-73 indicate that the data preprocessing technology mentioned in this paper is novel and has not been employed. However, there are abundant researches on data standardization and normalization technology and SMOTE technology for dealing with imbalances. So, this statement is debatable. Readers are more concerned about the innovation of this work. Therefore, the author should not only declare the novelty of the work but also discuss and prove it in detail. This will help readers grasp the value of the work. There are a few studies that have applied SMOTE with UNSW-B15 dataset, however the approach and results presented is this paper are different from other existing studies. Brief comparison of results is discussed in the manuscript (tracked changes) from line 485 to 496 and in the revised manuscript (clean) from line 442 to 453. Experimental design 3. There are some errors in the presentation of data, tables and conclusions: 1) Since the IG method is used, the description from 89.5% to 95.1% in RF in line 358 should be modified from 89.5% to 95% in RF. Agreed. We have corrected this in line 429 of the manuscript (tracked changes) and in line 391 of the revised manuscript (clean). 2) The decrease in ANN accuracy is incorrectly stated in lines 370 to 371 because 76.8% is the accuracy of KNN. The ANN accuracy decreases from 80.3% to 71.7%. Agreed. We have corrected this in line 443 of the manuscript (tracked changes) and in line 403 of the revised manuscript (clean). 3) In lines 377 to 378 the conclusion is wrong, the accuracy of KNN increases from 84.0% to 84.7% after using PCA, which does not reflect the decrease in accuracy with PCA, and we note that the accuracy of ANN drops exactly 85.2% to 77.6% again after using PCA. Agreed. 
We have corrected this in line 450 of the manuscript (tracked changes) and in line 409 of the revised manuscript (clean). 4) In line 385 the conclusion is wrong, class balancing does not affect Logistic Regression and Artificial Neural Networks, not Decision Tree and Artificial Neural Networks as described in the paper. Agreed. We have corrected this in line 457 of the manuscript (tracked changes) and in line 416 of the revised manuscript (clean). Validity of the findings No comment Additional comments No comment Reviewer 2 (Nebrase Elmrabit) Basic reporting 4. In general, the abstract should be about 240/260 words in length and up to the 500 words limit. It can contain short statements summarising the project scope, research significance/motivation/problem statement, aim, objectives, the method and techniques used to work towards these objectives, and the results achieved and conclusions made. It should give a reader sufficient information to decide whether or not to read the rest of your paper. Make sure to revise your abstract to address most of the above requirements, especially your motivation and aims. The abstract has been revised and all suggestions are incorporated. 5. The abstract keywords are not presented. Keywords have been added to the ‘Keywords section’ of the article submission process. 6. The uncommon abbreviations should be spelt out at first use only. EM algorithm in line 107 is not abbreviated, and SMOTE is abbreviated twice in lines 87 & 199. Therefore, the English language should be improved, I suggest you have a colleague who is fluent in English and familiar with the subject matter review your manuscript. Agreed. We have corrected the abbreviation problems throughout the manuscript. All grammatical mistakes are corrected and language of paper is improved. 7. Make sure all the equations are in the middle of the line. We have corrected and now all the equations are in the middle of the line. 8. In line 318 the table name error. Agreed. We have corrected this in line 386 of the manuscript (tracked changes) and in line 353 of the revised manuscript (clean). 9. In line 155 it’s F1-score, and not f1-score. Agreed. We have corrected this throughout the manuscript. 10. Line 101 – 103, the reference style in table 1 is not as the main text, also, there is no reference to what you cited in this table. Also, you should add more research from 2020 and 2021 in this table. We have updated the references in Table 1 and revised the Table 1 by including the research of 2020 and 2021. Experimental design 11. In lines 66-67, you need to clarify why you used only the UNSW-NB15 dataset and update this section as in 2021 more than 24 open access papers mention the UNSW-NB15 dataset. https://paperswithcode.com/dataset/unsw-nb15 UNSW-NB15 dataset is one of the most recent datasets having traces of nine attacks. Therefore, it is being used by many researchers recently. The older datasets such as KDDCUP99, KDD98 and NSL-KDD7 have been used by hundreds of researchers earlier. It is common in research community to work on popular datasets and apply various techniques to improve research. 12. In line 71 you mentioned the previous research, and you claim that your result achieves better accuracy than the previous research. You should cite them here. Citations are added in line 93 of the manuscript (tracked changes) and in line 81 of the revised manuscript (clean). 13. In lines 79-79 you should provide detailed information about your experimental setup. 
Experimental details are added in the manuscript (tracked changes) from line 94 to 101 and in the revised manuscript (clean) from line 82 to 89. 14. A methodology diagram in the methodology section will benefit this section. Figure 1 depicting methodology diagram is added in the revised manuscript and referred in the ‘Proposed Methodology’ and ‘Experiment and Result Analysis’ sections. 15. In line 316, the label has been extracted as a feature, can you clarify why? There was a typo in the manuscript. Label is not extracted as a feature. 16. In lines 184-188 More details and discussion are required in this section. Details of the feature selection techniques are added in the manuscript (tracked changes) from line 242 to 252 and in the revised manuscript (clean) from line 215 to 225. 17. The percentage in table 3 is not accurate if you consider all datasets and not just the training datasets. You can refer to Table VIII in the reference given below for more details. N. Moustafa and J. Slay, 'UNSW-NB15: comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set),' 2015 Military Communications and Information Systems Conference (MilCIS), 2015, pp. 1-6, doi: 10.1109/MilCIS.2015.7348942. The percentage in table 3 is accurate as the research has used only the training dataset for model training and class imbalances. After this, we have used a test dataset to test and evaluate the model. We updated the manuscript t(tracked changes) in line number 259 and the revised manuscript (clean) in line number 232 by mentioning the training dataset in the text under the heading ‘Class Imbalance’. Validity of the findings 18. In this paper, the authors used the SMOTE technique and feature selection techniques to address the class imbalance issue for some attack types that have very few instances on the UNSW-NB15 Dataset. They managed to achieve a better result when applying both techniques. However, the authors haven’t validated their results with the current research results. A Validation section should be included in this paper. The manuscript includes a 'discussion section' in which we discussed our findings and compare them with the results of existing studies. Additional comments 19. Only one dataset has been tested with the proposal framework, I recommend adding more than one dataset to evaluate their framework. There are number of research work that focuses on only one dataset [1-4]. The scope of this work is also limited to UNSW-NB15 dataset. However, we will consider using different datasets in future. [1]. Stiawan, D., Idris, M. Y. B., Bamhdi, A. M., & Budiarto, R. (2020). CICIDS-2017 dataset feature analysis with information gain for anomaly detection. IEEE Access, 8, 132911-132921. [2]. Kasongo, S. M., & Sun, Y. (2020). Performance analysis of intrusion detection systems using a feature selection method on the UNSW-NB15 dataset. Journal of Big Data, 7(1), 1-20. [3]. Kumar, V., Das, A. K., & Sinha, D. (2020). Statistical analysis of the UNSW-NB15 dataset for intrusion detection. In Computational Intelligence in Pattern Recognition (pp. 279-294). Springer, Singapore. [4]. Zoghi, Z., & Serpen, G. (2021). Unsw-nb15 computer security dataset: Analysis through visualization. arXiv preprint arXiv:2101.05067. 20. 
I noticed that the raw data and the code are not submitted All the codes are currently available in this public repository: https://github.com/hafizaanisa29/Network_Attack_Research_CodeFiles, and I've added a link to the declaration section of the article submission process. 21. However, I do think the authors could further improve the papers by addressing the above comments, and adding a discussion section at the end of this paper will make it even more readable and comprehensive. 'Discussion Section' has been added in the manuscript which summarizes our research and compares our work with existing studies. "
Here is a paper. Please give your review comments after reading it.
318
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In the field of deep learning,the processing of large network models on billions or even tens of billions of nodes and numerous edge types is still flawed,and the accuracy of recommendations is greatly compromised when large network embeddings are applied to recommendation systems.To solve the problem of inaccurate recommendations caused by processing deficiencies in large networks, this paper combines the attributed multiplex heterogeneous network with the attention mechanism that introduces the softsign and sigmoid function characteristics and derives a new framework SSN GATNE-T(S represents the softsign function, SN represents the attention mechanism introduced by the Softsign function, and GATNE-T represents the transductive embeddings learning for attribute multiple heterogeneous networks.). The attributed multiplex heterogeneous network can help obtain more user-item information with more attributes. No matter how many nodes and types are included in the model, our model can handle it well, and the improved attention mechanism can help annotations to obtain more useful information via a combination of the two. This can help to mine more potential information to improve the recommendation effect; in addition, the application of the softsign function in the fully connected layer of the model can better reduce the loss of potential user information, which can be used for accurate recommendation by the model. Using the Adam optimizer to optimize the model can not only make our model converge faster, but it is also very helpful for model tuning.The proposed framework SSN GATNE-T was tested for two different types of datasets , Amazon and YouTube , using three evaluation indices , ROC-AUC(Receiver Operating Characteristic-Area under Curve) , PR-AUC(Precision Recall-Area under Curve) and F1(F1-score) , and found that SSN GATNE-T improved on all three evaluation indices compared to the mainstream recommendation models currently in existence.This not only demonstrates that the framework can deal well with the shortcomings of obtaining accurate interaction information due to the presence of a large number of nodes and edge types of the embedding of large network models , but also demonstrates the effectiveness of addressing the shortcomings of large networks to improve recommendation performance.In addition , the model is also a good solution to the cold start problem . 42 been developed to analyse user-item interaction in the recommendation process, different models have 43 different pros and cons regarding the user-item interaction, so the recommended effects are also different.</ns0:p></ns0:div> <ns0:div><ns0:head>44</ns0:head><ns0:p>The nonnegligible part of the recommendation algorithm is the embedding method. Different network 45 embedding[4] methods have different recommendation effects on the recommendation algorithm and 46 different acquisitions of node and edge-type correlation information. 
For user item attributes, the 47</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In the field of deep learning,the processing of large network models on billions or even tens of billions of nodes and numerous edge types is still flawed,and the accuracy of recommendations is greatly compromised when large network embeddings are applied to recommendation systems.To solve the problem of inaccurate recommendations caused by processing deficiencies in large networks, this paper combines the attributed multiplex heterogeneous network with the attention mechanism that introduces the softsign and sigmoid function characteristics and derives a new framework SSN_GATNE-T(S represents the softsign function, SN represents the attention mechanism introduced by the Softsign function, and GATNE-T represents the transductive embeddings learning for attribute multiple heterogeneous networks.). The attributed multiplex heterogeneous network can help obtain more user-item information with more attributes. No matter how many nodes and types are included in the model, our model can handle it well, and the improved attention mechanism can help annotations to obtain more useful information via a combination of the two. This can help to mine more potential information to improve the recommendation effect; in addition, the application of the softsign function in the fully connected layer of the model can better reduce the loss of potential user information, which can be used for accurate recommendation by the model. Using the Adam optimizer to optimize the model can not only make our model converge faster, but it is also very helpful for model tuning. The proposed framework SSN_GATNE-T was tested for two different types of datasets , Amazon and YouTube , using three evaluation indices , ROC-AUC(Receiver Operating Characteristic-Area under Curve) , PR-AUC(Precision Recall-Area under Curve) and F1(F1-score) , and found that SSN_GATNE-T improved on all three evaluation indices compared to the mainstream recommendation models currently in existence.This not only demonstrates that the framework can deal well with the shortcomings of obtaining accurate interaction information due to the presence of a large number of nodes and edge types of the embedding of large network models , but also demonstrates the effectiveness of addressing the shortcomings of large networks to 1 <ns0:ref type='bibr'>INTRODUCTION 36</ns0:ref> In the era of information explosion, recommendation systems <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> are essential to keep users engaged 37 and satisfy personalized recommendations <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. Users expect to obtain personalized content on modern 38 e-commerce, entertainment and social media platforms, but the effectiveness of recommendations is 39 limited by existing user-item interactions and model capacity. Therefore, some models often ignore the 40 user-item interaction part, as it is easy to ignore the relevant information existing in the node and edge 41 types, and the acquisition of attribute information <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> is even more flawed. Although many models have acquisition of information is also different, and the processing of small networks or large networks with different types of nodes and edges is also different <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>. 
The heterogeneous information network (HIN) <ns0:ref type='bibr' target='#b8'>[6]</ns0:ref>, composed of multiple types of nodes and links, has emerged as a promising general information modelling method <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>. HINs are often used in recommendation systems to model many forms of rich auxiliary data, owing to their ability to characterize heterogeneous data very flexibly. In particular, metapaths, relational sequences connecting pairs of objects in an HIN <ns0:ref type='bibr' target='#b10'>[8]</ns0:ref>, have been widely used to extract structural features that capture the semantics relevant to recommendation <ns0:ref type='bibr' target='#b11'>[9]</ns0:ref>. Simply put, existing HIN-based recommendation methods can be divided into two types. The first type uses path-based semantic relevance directly as a feature of recommendation relevance; the second type applies transformations to path-based similarity to learn effective transformed features, which are then used to enhance the original user or item representations of the recommendation method <ns0:ref type='bibr' target='#b13'>[10]</ns0:ref>.</ns0:p><ns0:p>Both approaches are designed to improve the characterization of two-way user-item interactions by extracting metapath-based features.</ns0:p><ns0:p>To better capture user attributes and node information, current recommendation algorithms rely on different embedding methods, such as network embedding, heterogeneous network embedding <ns0:ref type='bibr' target='#b15'>[11]</ns0:ref>, multiplex heterogeneous network embedding and attribute network embedding <ns0:ref type='bibr' target='#b17'>[12]</ns0:ref>, each of which has developed considerably. Network embedding mainly comprises graph embedding (GE) <ns0:ref type='bibr' target='#b19'>[13]</ns0:ref> and graph neural networks (GNN) <ns0:ref type='bibr' target='#b21'>[14]</ns0:ref>. Although heterogeneous networks can represent various types of nodes and edges <ns0:ref type='bibr' target='#b23'>[15]</ns0:ref>, they are difficult to mine because of the complex combination of heterogeneous content and structure. In addition, research on embedding dynamic and heterogeneous data <ns0:ref type='bibr' target='#b25'>[16]</ns0:ref> is limited. Since there is not a single type of proximity between the various types of nodes but several, such data produce networks with multiple views, which has led to the emergence of multiplex heterogeneous network embedding. Attribute network embedding <ns0:ref type='bibr' target='#b27'>[17]</ns0:ref> finds low-dimensional vector representations for the nodes of a network while preserving the original network topology and the proximity of node attributes. Although each embedding method has its advantages, these methods are used mainly in networks with a single type of node and edge. They still fall short when processing large-scale networks composed of billions or even tens of billions of nodes and numerous edge types. In such large-scale networks, not only are the nodes and edge types numerous, but each node is associated with more than one attribute.
Moreover, these attributes are often correlated, which current network embedding methods cannot handle well.</ns0:p><ns0:p>In deep learning, node feature information is commonly acquired with GCNs. Applied to low-dimensional node feature learning, MobileGCN <ns0:ref type='bibr' target='#b29'>[18]</ns0:ref> uses a novel affinity code as the update in the GCN, so it not only updates node features but also aggregates the features of neighbouring nodes, obtaining richer node feature information and improving recommendation performance. Self-attention mechanisms are commonly used in image super-resolution to select image information and generate higher-quality images, as in the self-attention negative feedback network (SRAFBN) model <ns0:ref type='bibr' target='#b30'>[19]</ns0:ref>; we apply this idea to recommender systems to improve recommendation performance.</ns0:p><ns0:p>In the field of deep learning <ns0:ref type='bibr' target='#b31'>[20]</ns0:ref>, large network models contain billions of nodes and edges, with more than one type of node and edge, and each node carries many different attributes. Today's network embedding methods focus mainly on homogeneous networks, which are characterised by a single type of node and edge. This is insufficient even for small networks with a wealth of different node and edge types, and even more so for large networks with hundreds of millions of different nodes and edge types. To solve the problem of embedding large networks with multiple node and edge types, this paper derives the new SSN GATNE-T framework by combining attributed multiplex heterogeneous networks <ns0:ref type='bibr' target='#b33'>[21]</ns0:ref> with an attention mechanism that introduces softsign and sigmoid function properties.</ns0:p><ns0:p>In the SSN GATNE-T model, in order to handle large networks of billions or even tens of billions of different nodes and edge types and to better exploit the hidden information among them that can provide recommendations to users, we focus on combining the attributed multiplex heterogeneous network with an improved attention mechanism, and especially on the improvement of the attention mechanism. We apply the characteristics of the softsign and sigmoid functions to the attention mechanism so as to better obtain and label the associated information between the different node types and the many edge types, and we use this interaction information to uncover more of the user's potential interests and make accurate recommendations. Applying the softsign function to the fully connected layer of the model reduces the loss of the potential user information mined, through the attributed multiplex heterogeneous network and the improved attention mechanism, from the interactions of the different node attributes and edge types, information that can then be used for accurate recommendation; using the Adam optimizer makes our model converge faster and also helps greatly with model tuning. Take the YouTube dataset as an example.
It includes connections between users: sharing friends, sharing subscriptions, sharing subscribers, and sharing favourite videos. The user interests obtained from these different interactions in the YouTube dataset differ and should be treated differently. Our model ignores neither the user-item interactions of these node attributes nor the potential connections between nodes across these five edge types. The attributed multiplex heterogeneous network not only helps us better obtain the information of each attribute but also captures more comprehensively the hidden information between the nodes and the five edge types in the YouTube dataset, and the improved attention mechanism can label the acquired information, mine potential information, and make accurate recommendations for users. The same holds for the Amazon dataset; the only difference between the two datasets is the number of node and edge types. Since the two datasets have different network sizes in terms of nodes and edge types, their recommendation effects also differ.</ns0:p><ns0:p>The main contributions of this paper are as follows.</ns0:p><ns0:p>(1) The SSN GATNE-T model addresses the shortcomings of network embedding for large network models with hundreds of millions of nodes and edge types in obtaining information about the interactions between nodes and edge types, and it handles large network models better.</ns0:p><ns0:p>(2) The defects in acquiring node-edge-type interactions in large networks are resolved, allowing the recommendation system to acquire more potential node-edge-type interaction information through the attributed multiplex heterogeneous network and the improved attention mechanism's annotation, which in turn improves recommendation performance.</ns0:p><ns0:p>(3) The loss of potential user information mined by the model that can be used for accurate recommendations is reduced.</ns0:p><ns0:p>(4) The introduction of the Adam optimiser allows faster convergence of the model and also helps greatly in tuning it. Finally, through multiple experiments with the SSN GATNE-T model on the YouTube and Amazon datasets, the improvement in its three evaluation indices shows the effectiveness of our model for accurate recommendation. In addition, because our model does not ignore the user-item interaction information of the various node attributes, it also handles the cold start problem very well.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELEVANT KNOWLEDGE</ns0:head><ns0:p>Network embedding methods. GE (graph embedding) is an important part of network embedding, as is the GNN (graph neural network) <ns0:ref type='bibr' target='#b35'>[22]</ns0:ref>. DeepWalk trains a representation model from a randomly generated corpus of walks. LINE (large-scale information network embedding) <ns0:ref type='bibr' target='#b36'>[23]</ns0:ref> learns node representations on large-scale networks while preserving first- and second-order proximities. Node2vec (scalable feature learning for networks) <ns0:ref type='bibr' target='#b38'>[24]</ns0:ref> designs a biased random walk procedure to explore different neighbourhoods effectively. NetMF (network embedding as matrix factorization) <ns0:ref type='bibr' target='#b39'>[25]</ns0:ref> improves on the implicit matrix factorization framework underlying DeepWalk and LINE.
For GNNs, GCN (graph convolutional networks) <ns0:ref type='bibr' target='#b40'>[26]</ns0:ref> uses a convolution operation to merge the feature representations of neighbours into the feature representation of a node. GraphSAGE <ns0:ref type='bibr' target='#b42'>[27]</ns0:ref> provides an inductive method that integrates node features and structural information, learning a function that generates the representation of each node instead of embedding each node directly, which helps generalize to nodes unseen during training.</ns0:p><ns0:p>Heterogeneous network embedding methods. HNE (heterogeneous network embedding) <ns0:ref type='bibr' target='#b44'>[28]</ns0:ref> jointly considers the content and topology of the network and represents the different objects of the heterogeneous network in a unified vector representation. PTE (predictive text embedding) <ns0:ref type='bibr' target='#b46'>[29]</ns0:ref> learns a low-dimensional embedding space by constructing large heterogeneous text networks. HERec (heterogeneous information network embedding for recommendation) <ns0:ref type='bibr'>[30]</ns0:ref> first transforms heterogeneous network embeddings with a fusion function and then integrates them into an extended matrix factorization (MF) model.</ns0:p><ns0:p>Attribute network embedding methods. TADW (text-associated DeepWalk) <ns0:ref type='bibr' target='#b48'>[31]</ns0:ref> combines text features with the network representation <ns0:ref type='bibr' target='#b49'>[32]</ns0:ref>. LANE (label-informed attributed network embedding) <ns0:ref type='bibr' target='#b51'>[33]</ns0:ref> integrates label information into attribute network embedding. ANRL (attributed network representation learning) <ns0:ref type='bibr' target='#b52'>[34]</ns0:ref> is an attribute network embedding method that handles attribute information well.</ns0:p><ns0:p>Multiplex heterogeneous network embedding methods. Large network models usually contain different types of correlation information on different nodes, which gives rise to multiplex network embedding; commonly used methods are PMNE (principled multilayer network embedding), MVE (multi-view network embedding), MNE (multiplex network embedding) and Mvn2vec. PMNE proposes three methods for projecting multiplex networks into a continuous vector space <ns0:ref type='bibr' target='#b53'>[35]</ns0:ref>. MVE <ns0:ref type='bibr' target='#b54'>[36]</ns0:ref> embeds the multiple views given by different edge types into a single collaborative embedding using an attention mechanism. MNE <ns0:ref type='bibr' target='#b55'>[37]</ns0:ref>, on the other hand, uses one common embedding per node plus several additional edge-type-specific embeddings, learned jointly through a unified network embedding model. Mvn2vec <ns0:ref type='bibr' target='#b56'>[38]</ns0:ref> investigates the feasibility of representing the semantics of different edge types individually in different views and then modelling them with preservation and collaboration to achieve the embedding. GATNE-T <ns0:ref type='bibr' target='#b33'>[21]</ns0:ref> and GATNE-I <ns0:ref type='bibr' target='#b33'>[21]</ns0:ref>. The GATNE-T model aggregates interaction information from the neighbouring nodes of the studied node under each edge type and then generates a different vector representation of the node for each edge type, allowing it to handle large networks containing hundreds of millions of nodes.
The GATNE-I model is proposed to compensate for the inability of the GATNE-T model to perform inductive learning, and it can also handle unseen node types. FAME and FAMEm <ns0:ref type='bibr' target='#b57'>[39]</ns0:ref>. The FAME and FAMEm models are fast attributed multiplex heterogeneous network embedding models for very large network data. They automatically preserve node attributes and the interaction information of the different edge types at embedding time by efficiently mapping the different kinds of units into the same latent space.</ns0:p><ns0:p>The method of this article. The method proposed in this article is SSN GATNE-T, which uses base embedding and edge embedding to capture the different types of interactions; it better obtains the associated information between each node and the different edge types and better labels the associated information that can be used for accurate recommendation, regardless of whether the node and edge types are single or numerous. For the YouTube and Amazon datasets, the attribute characteristics of the different nodes and the associated information between each node and the different edge types are obtained through the attributed multiplex heterogeneous network, and the improved self-attention mechanism labels and extracts this information. The attribute relationships between nodes and edge types required for accurate recommendation are then used to recommend by similarity, and if a node has no features, features are generated automatically. There are many recommendation methods currently available; in the experiments we compare our SSN GATNE-T model with representative methods such as DeepWalk, metapath2vec, PMNE(c), MVE, GATNE-T, GATNE-I, FAME and FAMEm.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>MODEL</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Model architecture</ns0:head><ns0:p>A network G = (V, ξ, A) with ξ = ∪_{r∈R} ξ_r is an attributed multiplex heterogeneous network, in which the edges of each edge type r ∈ R, with |R| ≥ 1, form ξ_r; for each edge type r ∈ R we can separate out the subnetwork G_r = (V, ξ_r, A). The attributed multiplex heterogeneous network (AMHEN) describes the model architecture of users and items. Model structures differ with the types of nodes and the number of edge types: for networks with more edge types and nodes, the model processes the data better, and as the number of edge-type categories and nodes increases, the recommendation effect of the model improves. Of course, the more node and edge types there are, the more complex the AMHEN model becomes, but the more hidden information is available, the wider the range of recommendations for users and the more accurate they become. In the two datasets we use, more than 15% of the node pairs have more than one type of edge, and a user may have several kinds of interaction with an item. Our model embeds nodes and edge types through base embedding and edge embedding, where the base embedding of a node is shared across the different edge types and the edge embedding is computed by aggregating information from neighbouring nodes through an attention mechanism.
To better describe the effectiveness of the SSN GATNE-T model for large network embeddings and for recommendation performance, the architecture for users and videos of the YouTube dataset is described in Figure1, which contains 2 node types and 5 edge types. The two node types are users and items with different attributes; user inputs include information such as gender, age and address, and item inputs include information such as movie name and movie genre. The aim of combining the attributed multiplex heterogeneous network model with the improved attention mechanism is to make better use of the five edge types, i.e., the user-item interactions. The purpose of introducing the improved attention mechanism is to label the useful information more explicitly, to grasp the connections between users more fully, and to make recommendations between closely connected users: the more similarities users have in common, the more similar their interests will be, and thus the better the recommendations. Our model SSN GATNE-T mainly uses the interactive connections between the attributes of different nodes to determine the potential hobbies of a user and then make recommendations. To explain the model more clearly, we use some descriptive symbols; Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> describes the meaning of each symbol.</ns0:p><ns0:p>In the YouTube dataset, our model collects the connections between nodes and edge types to determine whether there is a link between users, whether they share friends, whether they share subscriptions, whether they are subscribers of the same video, and whether they share favourite videos, i.e., five types of interaction information, and our improved attention mechanism helps us label these five interactive relationships better. If all five relationships exist in the labelled information, the users can be recommended each other's videos that they have not yet watched; the same applies, in order, to users sharing four, three, two or one of the relationships. Most importantly, if even two of these five interaction relationships exist between users, their hobbies will already be very similar. The potential interaction relationships are obtained through the node and edge types, the interaction correlation between two users is judged over the five edge types, and then five relevant videos are selected for recommendation. The same holds for the Amazon dataset; the difference lies in the variables the dataset contains.</ns0:p><ns0:p>To obtain a more accurate recommendation effect, the best model combination can only be found through continuous experiments, so that our model identifies the user-item interaction relationships more accurately. Our model obtains the information of each attribute node in the user-item interactions through the attributed multiplex heterogeneous network, and the improved attention mechanism helps the annotation obtain more useful information.
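As a concrete illustration of the data structure described above, the following minimal Python sketch (not the authors' implementation; the node names, attribute fields and edge-type labels are invented for the example) builds an attributed multiplex heterogeneous network as one subnetwork per edge type and exposes the per-type neighbour sets N_{i,r} that the edge embeddings later aggregate over.

```python
# A minimal sketch (not the authors' code) of how an attributed multiplex
# heterogeneous network such as the YouTube graph of Figure 1 can be split
# into one subnetwork per edge type, giving the neighbour sets N_{i,r} that
# the edge embeddings later aggregate over. Names and values are illustrative.
from collections import defaultdict

# node attributes (users and items with different attribute sets)
node_attrs = {
    "user_1": {"type": "user", "gender": "f", "age": 25},
    "user_2": {"type": "user", "gender": "m", "age": 31},
    "item_9": {"type": "item", "name": "some video", "genre": "sports"},
}

# typed edges (source, target, edge_type); 5 edge types in the YouTube example
edges = [
    ("user_1", "user_2", "shared_friends"),
    ("user_1", "user_2", "shared_subscription"),
    ("user_1", "item_9", "favourite"),
    ("user_2", "item_9", "favourite"),
]

# split G into G_r = (V, xi_r, A): one adjacency structure per edge type r
adjacency = defaultdict(lambda: defaultdict(set))
for u, v, r in edges:
    adjacency[r][u].add(v)
    adjacency[r][v].add(u)

def neighbours(node, edge_type):
    """Return N_{i,r}: the neighbours of `node` under edge type `edge_type`."""
    return adjacency[edge_type].get(node, set())

print(neighbours("user_1", "shared_friends"))   # {'user_2'}
print(neighbours("item_9", "favourite"))        # {'user_1', 'user_2'}
```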
To improve the attention mechanism, in this part we apply the characteristics of the softsign and sigmoid functions to the attention mechanism so as to obtain more of the potential hobbies of users that can be used for accurate recommendation. The formula is described as follows:</ns0:p><ns0:formula xml:id='formula_0'>a_{i,r} = w_r^T sigmoid(W_r U_i)^T / ( 1 + | w_r^T sigmoid(W_r U_i)^T | )<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>a_{i,r} = ( w_r^T W_r U_i / (1 + e^{-(W_r U_i)^T}) ) / ( 1 + | w_r^T W_r U_i / (1 + e^{-(W_r U_i)^T}) | )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In addition, in the fully connected layer of the model we apply the characteristics of the softsign function, which better reduces the loss of the potential user information that is discovered, through the attributed multiplex heterogeneous network and the improved attention mechanism, from the interaction information of the different node attributes and that can be used for accurate recommendation; this achieves good results in capturing potential information. Because the attributed multiplex heterogeneous network obtains the attribute node information of the user-item interactions more completely, and our improved attention mechanism helps us label more useful information, we can mine more of the users' potential interests and potential interaction relationships and then provide users with the best matching recommendations based on the mined potential information.</ns0:p><ns0:p>Taking the YouTube dataset as an example, we made a structure diagram of the recommendation description process, shown in Figure1. The left side of Figure <ns0:ref type='figure' target='#fig_8'>1</ns0:ref> describes the user and video information and outlines the five interaction relationships between them. The right part depicts the AMHEN graph construction. Through this network, the information of each attribute node in the user-item interactions can be better obtained; that is, the potential information carried by the two node types and five edge types can be mined, and the improved attention mechanism helps the tagging obtain more useful information. The more potential information we obtain from users, the more information we have to provide them with accurate recommendations. The lower part of Figure1 depicts the recommendation performance comparison of the SSN GATNE-T model with the GATNE-T and GATNE-I models on the YouTube dataset.</ns0:p><ns0:p>Taking the Amazon dataset as an example, we made a structure diagram of the recommendation description process, shown in Figure2. The left side of Figure2 describes the user and item information and outlines the two interaction relationships between them. The right side depicts the construction of the attributed multiplex heterogeneous network (AMHEN) graph. Through this network, the information of each attribute node in the user-item interactions can be better obtained, that is, the potential information between the two node types and the 2 edge types can be explored. The two edge types, collaborative purchasing and collaborative viewing, help dig out more potential interests of users, and the improved attention mechanism helps the labels obtain more useful node and edge-type information.
In Figure1, the edge types indicate whether there is a link between users, whether they share friends, whether they share subscriptions, whether they subscribe to the same video and whether they share favourite videos. Figure2 also contains 2 node types and 2 edge types; its edge types are built from the product attributes and the links between products, namely collaborative viewing and collaborative purchase. The two node types in Figure1 and Figure2 are in both cases users and items with different attributes.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>SSN GATNE-T network structure</ns0:head><ns0:p>The SSN GATNE-T model is shown in Figure3. The model is applied to an attributed multiplex heterogeneous network and completes the entire embedding with two parts, 'base embedding' and 'edge embedding', as shown in Figure3. The purpose of the base embedding is to capture the topological features of the network, while the edge embedding is the embedding representation of a specific node on the different edge types. Take the orange node as an example: to calculate its edge embeddings, we look at the types of edges connected to it. From the graph we find two types of edges connected to it, green and blue, so we separate the green and the blue edges for this node, creating two new network structures.</ns0:p><ns0:p>In general, the SSN GATNE-T model does not integrate node attributes into the embedding representation of each node. Instead, for each edge type connected to a node, the model aggregates the neighbouring nodes to generate an edge embedding representation, and the improved attention mechanism then calculates the respective attention coefficients (that is, their different importance) to fuse them, thereby obtaining the overall embedding representation of the node.</ns0:p><ns0:p>Starting from transductive embedding learning for attributed multiplex heterogeneous networks, our model SSN GATNE-T is given as shown in Figure3. More specifically, in SSN GATNE-T the embedding of node v_i on edge type r consists of two parts, the edge embedding and the base embedding, and the base embedding of node v_i is shared across the different edge types.
For node v_i on edge type r, its k-th level edge embedding u^(k)_{i,r} ∈ R^s (1 ≤ k ≤ K) is aggregated from the edge embeddings of its neighbours, as follows:</ns0:p><ns0:formula xml:id='formula_3'>u^(k)_{i,r} = aggregator( { u^(k-1)_{j,r}, ∀ v_j ∈ N_{i,r} } )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where N_{i,r} is the set of neighbours of node v_i on edge type r. In our transductive model, the initial edge embeddings u^(0)_{i,r} of the nodes and edge types are randomly initialized. The mean aggregator can be used as the aggregation function:</ns0:p><ns0:formula xml:id='formula_4'>u^(k)_{i,r} = σ( Ŵ^(k) · mean( { u^(k-1)_{j,r}, ∀ v_j ∈ N_{i,r} } ) )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>or the max-pooling aggregator:</ns0:p><ns0:formula xml:id='formula_5'>u^(k)_{i,r} = max( { σ( Ŵ^(k)_pool u^(k-1)_{j,r} + b^(k)_pool ), ∀ v_j ∈ N_{i,r} } )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Here the edge embedding u_{i,r} of node v_i is taken to be the K-th level edge embedding u^(K)_{i,r}, σ denotes the activation function, and U_i is formed by concatenating the edge embeddings of node v_i over all m edge types:</ns0:p><ns0:formula xml:id='formula_6'>U_i = (u_{i,1}, u_{i,2}, . . . , u_{i,m})<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>The self-attention mechanism <ns0:ref type='bibr' target='#b33'>[21]</ns0:ref> is used to calculate the coefficients a_{i,r} ∈ R^m of the linear combination of the vectors in U_i for edge type r, and the softsign function is applied to it as follows:</ns0:p><ns0:formula xml:id='formula_7'>a_{i,r} = softsign( w_r^T sigmoid(W_r U_i)^T )<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where w_r and W_r are trainable parameters for edge type r with sizes d_a and d_a × s, respectively, and superscript T denotes matrix transposition. The overall embedding of node v_i on edge type r can then be expressed as:</ns0:p><ns0:formula xml:id='formula_8'>v_{i,r} = b_i + α_r M_r^T U_i a_{i,r}<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where b_i is the base embedding of node v_i, α_r is a hyperparameter representing the importance of the edge embedding in the overall embedding, and M_r ∈ R^{s×d} is a trainable transformation matrix. (These steps correspond to lines 5-7 of Algorithm 1: for each training sample (v_i, v_j, r), equation (9) is used to calculate the overall embedding v_{i,r}, and the model parameters are then updated.)</ns0:p><ns0:p>When using the attributed multiplex heterogeneous network, we also adopt a self-attention mechanism, namely a combination of the softsign and sigmoid functions, that is, a_{i,r} = softsign( w_r^T sigmoid(W_r U_i)^T ). The combination of the improved attention mechanism and the attributed multiplex heterogeneous network has a very good effect on the three evaluation indices ROC-AUC, PR-AUC and F1, with the largest improvement on the F1 index.
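To make equations (5), (7), (8) and (9) concrete, the following NumPy sketch walks through one node: mean aggregation of the neighbours' edge embeddings, the softsign/sigmoid attention coefficients, and the overall embedding. It is an illustrative sketch rather than the authors' code; the weight matrices are random stand-ins, the activation σ in equation (5) is assumed here to be tanh, and the dimensions follow the notation in the text (d base dimension, s edge dimension, m edge types, d_a attention dimension).

```python
# A minimal NumPy sketch (not the authors' implementation) of equations (5),
# (7), (8) and (9): mean aggregation of neighbour edge embeddings, the
# softsign/sigmoid attention coefficients, and the overall node embedding.
import numpy as np

rng = np.random.default_rng(0)
d, s, m, d_a = 200, 10, 5, 20          # sizes used for illustration

softsign = lambda x: x / (1.0 + np.abs(x))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def aggregate_mean(neigh_embs, W_k):
    """Eq. (5): sigma(W^(k) . mean of neighbour edge embeddings); sigma assumed tanh."""
    return np.tanh(W_k @ neigh_embs.mean(axis=0))

# edge embeddings of node v_i for each of the m edge types (eq. 7: U_i is s x m)
neighbour_embs = [rng.normal(size=(4, s)) for _ in range(m)]   # 4 neighbours per type
W_k = rng.normal(size=(s, s))
U_i = np.stack([aggregate_mean(n, W_k) for n in neighbour_embs], axis=1)  # (s, m)

# eq. (8): attention coefficients over the m edge types
W_r = rng.normal(size=(d_a, s))
w_r = rng.normal(size=d_a)
a_ir = softsign(w_r @ sigmoid(W_r @ U_i))           # shape (m,)

# eq. (9): overall embedding of v_i on edge type r
b_i = rng.normal(size=d)                            # base embedding (shared across edge types)
M_r = rng.normal(size=(s, d))
alpha_r = 1.0
v_ir = b_i + alpha_r * (M_r.T @ (U_i @ a_ir))       # shape (d,)
print(v_ir.shape)                                    # (200,)
```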
In addition, we chose the softsign function as the activation function of the fully connected layer of the model, because we found that using the softsign function here, combined with the attention mechanism we adopted, gives unexpectedly good results and greatly improves the F1 evaluation index, showing the effectiveness of the recommendation performance of our model. To describe our SSN GATNE-T model better, we summarize the SSN GATNE-T procedure in Algorithm 1.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Model optimisation</ns0:head><ns0:p>The SSN GATNE-T model combines the attributed multiplex heterogeneous network with an attention mechanism that introduces softsign and sigmoid function properties. The attributed multiplex heterogeneous network helps obtain more attributes of the user-item information, and our model handles the data well regardless of the number of nodes and types; the improved attention mechanism helps the labelling obtain more useful information, and the combination of the two helps explore more potential information to improve the recommendation effect. In addition, the application of the softsign function in the fully connected layer of the model better reduces the loss of potential user information that can be used for accurate recommendations, and we use the Adam optimizer to optimize our model in order to avoid the impact on recommendation accuracy of the large amount of node and edge-type information obtained. Optimising the model can qualitatively improve the recommendation effect of the recommender system. Candidate optimization methods include Adagrad, Momentum, RMSprop, GradientDescent and Adam. Through extensive experimental validation we use the Adam optimiser, which allows faster convergence and also helps with tuning the model. The Adam update is given by the following equations.</ns0:p><ns0:formula xml:id='formula_9'>m_t = μ · m_{t-1} + (1 - μ) · g_t<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>n_t = ν · n_{t-1} + (1 - ν) · g_t^2<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>m̂_t = m_t / (1 - μ^t)<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>n̂_t = n_t / (1 - ν^t)<ns0:label>(13)</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>Δθ_t = - m̂_t / (√(n̂_t) + ε) · η<ns0:label>(14)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='4'>EXPERIMENT</ns0:head><ns0:p>In this part, we first introduce the two evaluation datasets and the parameter settings. Second, we compare our recommendation algorithm with other state-of-the-art methods to verify the effectiveness of our recommendation model, and with bar and line charts we show clearly the improvement achieved by the proposed method on the three evaluation indices ROC-AUC, PR-AUC and F1. Finally, we analyse each step of the improvement of our model through ablation experiments to verify its effectiveness.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Dataset and parameter settings</ns0:head><ns0:p>In this article, we use the YouTube and Amazon datasets for the experiments. The Amazon product dataset includes product metadata and links between products; the YouTube dataset includes various types of interactions.
Since some baselines cannot be extended to the entire graph and the total number of nodes and edges in the original datasets is too large, we selected part of the data in each original dataset as sampled data, and we evaluate the recommendation effect and performance of the SSN GATNE-T model on these sampled datasets. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> describes the two sampled datasets, including the total number of nodes, the total number of edges, the number of node types, and the number of edge types; n-type and e-type in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> denote the node types and edge types, respectively.</ns0:p><ns0:p>Amazon. In the experiment, two interactions built from the product attributes contained in the data are used: the collaborative-viewing and the collaborative-purchase links between products, as can be seen in Figure2.</ns0:p><ns0:p>YouTube. The YouTube dataset contains five types of interactions, including shared friends and shared subscriptions, as can be seen in Figure1.</ns0:p><ns0:p>To find the most suitable parameter settings for the SSN GATNE-T model, we experimented with the settings and determined the parameters one by one. Taking the overall embedding size and the edge embedding size as an example, only these two parameters are changed at a time; the settings of the other parameters remain unchanged while these two are determined, and the two are determined by first fixing one and then tuning the other. After many tuning experiments, we finally set the overall embedding size and the edge embedding size to 200 and 10, respectively; the walk length and the number of walks per node to 10 and 20, respectively; the window size to 5; and the number of negative samples L per positive training sample in the range of 5-50. For each edge type r, the coefficients α_r and β_r are set to 1. In the experiments, if the ROC-AUC on the validation set reaches its optimal value before all iterations are completed, training is terminated early.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Evaluation index</ns0:head><ns0:p>To better understand the recommendation performance of the model, we use three evaluation indicators: ROC-AUC, PR-AUC and F1 (F1-score).</ns0:p><ns0:p>ROC-AUC is an evaluation index that considers the ROC and the AUC together; its underlying sensitivity and specificity are not affected by imbalanced data. F1 is an evaluation index that considers both the precision and the recall values:</ns0:p><ns0:formula xml:id='formula_15'>F-score = 2 / (1/P + 1/R) = 2PR / (P + R)<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>where P stands for precision and R stands for recall:</ns0:p><ns0:formula xml:id='formula_17'>precision = tp / (tp + fp)   (16)    recall = tp / (tp + fn)<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>To overcome the single-point limitation of the P, R and F-measures, we also use PR-AUC/AP (average precision), which reflects the overall performance.
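As a small, hedged illustration of how the three indices can be computed in practice, the sketch below uses scikit-learn (which the paper does not state it uses); the labels and scores are toy values standing in for link-prediction results.

```python
# A brief sketch (using scikit-learn, not necessarily the authors' tooling)
# of how ROC-AUC, PR-AUC/average precision and F1 can be computed from
# link-prediction scores; y_true and y_score are toy values.
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # true edges (1) vs negatives (0)
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]   # predicted similarity scores

roc_auc = roc_auc_score(y_true, y_score)             # area under the ROC curve
pr_auc = average_precision_score(y_true, y_score)    # PR-AUC / average precision
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # threshold for the F1 score
f1 = f1_score(y_true, y_pred)                        # eq. (15)

print(f"ROC-AUC={roc_auc:.4f}  PR-AUC={pr_auc:.4f}  F1={f1:.4f}")
```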
The larger the PR-AUC value is, the better the performance of the model.</ns0:p><ns0:formula xml:id='formula_18'>PR &#8722; AUC/AP = 1 0 p(r)d(r)<ns0:label>(18)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='4.3'>Comparison of model accuracy</ns0:head><ns0:p>In this section , we compare our model SSN GATNE-T with ANRL , PMNE (n) [30] , PMNE (r ) , PMNE (c ) , MVE (Multi-View Network ) <ns0:ref type='bibr' target='#b48'>[31]</ns0:ref> , MNE <ns0:ref type='bibr' target='#b49'>[32]</ns0:ref> , GATNE-T, GATNE-I , FAME , FANEm and other models for the evaluation of the Youtube and Amazon datasets , and express them in line graph to analyse in detail the impact on our model on these two datasets and where its advantages lie.For this model comparison , we demonstrate the effectiveness of our model for large network data by comparing the SSN GATNE-T model with other models on two large network datasets containing different nodes and different edge types , Amazon and Youtube , and then demonstrate that the potential information on the different edge types of user nodes and project nodes obtained by improving the performance of the model for large networks can be used to improve the effectiveness of the recommendation performance.</ns0:p><ns0:p>In the experiment, we divided the dataset into a training set, a validation set and a test set and used the area under the ROC curve (ROC-AUC), the area under the PR curve (PR-AUC) and the F1 score as the evaluation indicators for model evaluation. The accuracy of our model and the pros and cons of its recommendation effect are judged through the impact of these three evaluation indexes on the two YouTube and Amazon datasets via different models, as shown in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. When evaluating, we Manuscript to be reviewed</ns0:p><ns0:p>Computer Science models, our model can better obtain the interaction relationship between nodes and edge types, thereby digging out more potential user interests. In addition, our improved attention mechanism can also help us obtain and label node and edge types. The potential correlation information between the node and edge types is precisely because our model can obtain more important information that can be used for recommendations compared with other models, which makes our model have a great improvement in recommendation performance. Compared with the DeepWalk, LINE, node2vec, NetMF, and GraphSAGE network embedding models, our SSN GATNE-T model can obtain not only node information but also the potential interaction relationship between nodes and edge types and the relationship between nodes and edges. The more types and quantities there are, the more options that can be used to provide users with accurate recommendations. Compared with HEN, PTE, metapath2vec and HERec, which are heterogeneous network embedding methods, our model mines the potential information in the network more easily than the heterogeneous network, which is full of people, to obtain the combination of content and structure. It also makes a great contribution to the scalability of the network. Our model can not only be used for the processing of small networks but can also perfectly mine its potential information for large networks containing billions of nodes and edge types. 
Compared with PMNE, MVE, MNE, and Mvn2vec, which are eager for multichannel heterogeneous network embedding methods, compared to the limitation of capturing a single view of the network and the effectiveness of obtaining node and edge type embedding information, our model is superior; the edge type correlation information mining and labelling are more comprehensive. Compared with TADW, LANE, and ANRL, which are these types of attribute network embedding methods, attribute network embedding is a low-dimensional vector representation of nodes looking for and retaining the network topology and the proximity of node attributes, and our model can not only do this, it is also possible to mine more potential attribute information for recommendation through the correlation between node and edge types. Compared with the existing recommendation algorithms, our SSN GATNE-T model can either obtain the correlation information between labelled nodes and edge types, obtain the attribute information of nodes and edge types, or process billions of nodes and edges. In the processing of large-scale networks of nodes and edges, our model can use more or more relevant information for recommendations, and the recommendation effect is greatly improved.</ns0:p><ns0:p>Our SSN GATNE-T model has the most obvious improvement effect on the YouTube dataset, and it has a corresponding improvement for Amazon. Although both of these datasets contain two nodes, users and projects, because the Amazon dataset contains only two edge-type interactions, while the YouTube data set contains five edge-type interactions, the nodes contained in the two data sets are related to the number of edges being different, which is also the reason for the different recommendation effects of the two. Since there are five types of edges in the YouTube dataset and there are only two types of edges in the Amazon dataset, our SSN GATNE-T model can mine more user potential information for recommendations in the correlation between nodes and edge types. Yes, compared to other advanced recommendation models, our model shows a greater improvement on the YouTube dataset than on the Amazon dataset. However, the YouTube dataset contains a total of 2000 nodes and 1310617 edges, and the Amazon dataset contains 10116 nodes and 148865 edges. In contrast, the Amazon dataset contains more nodes but fewer edges. The network constructed by the Amazon dataset is simpler than the network constructed by the YouTube dataset in the information it obtains and annotates. Compared with the YouTube dataset, there is less irrelevant information to be excluded. Although the improvement in the Amazon dataset is less than the improvement in the YouTube dataset compared to other models, the final recommendation effect is the best. Yes, we can conclude from its evaluation indexes ROC-AUC, PR-AUC, and F 1 that our model reached 0.9754, 0.9717, and 0.9386, respectively, while only reaching 0.8583, 0.8437, and 0.7812 on the YouTube dataset. 
In addition, our model improves all three evaluation indices, ROC-AUC, PR-AUC and F1, with the improvement in F1 being the most obvious. To this end, we have drawn separate line graphs of the evaluation metrics of our model and the other models for the Amazon and YouTube datasets, shown in Figure <ns0:ref type='figure'>4</ns0:ref> and Figure <ns0:ref type='figure'>5</ns0:ref>, to give a clearer picture.</ns0:p><ns0:p>The YouTube dataset contains user nodes and item nodes, with the user nodes having attributes such as gender, age and address and the item inputs having information such as movie name and movie genre, and it also contains five edge types of interaction between the different nodes. The five interaction relationships that exist in the YouTube dataset benefit our model: the recognition rate of user-item interactions increases, the attributed multiplex heterogeneous network collects more information between users, and the self-attention mechanism better labels and obtains the information we need to make better and more suitable recommendations for users, so the evaluation indices improve greatly.</ns0:p><ns0:p>To get a clearer picture of the strengths and weaknesses of each model, we have drawn line graphs of the evaluation results of each model for the Amazon and YouTube datasets, shown in Figure <ns0:ref type='figure'>4</ns0:ref> and Figure <ns0:ref type='figure'>5</ns0:ref>. The rising trend of the line graphs shows the effectiveness of the SSN GATNE-T model in processing the large-network data of the Amazon dataset, which further shows that SSN GATNE-T acquires the interaction information between different nodes and different edge types more efficiently and more comprehensively, which in turn improves recommendation performance when this information is used for recommendation. The evaluation indices of our SSN GATNE-T model are at the highest points of the line graphs, which shows that our model has indeed improved: it is more effective than other mainstream recommendation models in processing and analysing the YouTube and Amazon datasets and in making recommendations, a testament to the effectiveness of our model.</ns0:p><ns0:p>Since the Adam optimisation approach allows the model to converge faster, and therefore to achieve accurate recommendations faster and better, and since it is also very helpful for model tuning, we adopted the Adam optimization approach for the SSN GATNE-T model; a brief sketch of the Adam update of equations (10)-(14) is given below.
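The following is a short, self-contained sketch of the Adam update of equations (10)-(14); it is for illustration only, and the hyperparameter values μ, ν, η and ε shown are common defaults rather than the values tuned for SSN GATNE-T.

```python
# A self-contained sketch of the Adam update in equations (10)-(14); the
# hyperparameter values (mu, nu, eta, eps) are illustrative defaults, not
# the values tuned in the paper.
import numpy as np

def adam_step(theta, grad, state, mu=0.9, nu=0.999, eta=1e-3, eps=1e-8):
    state["t"] += 1
    t = state["t"]
    state["m"] = mu * state["m"] + (1 - mu) * grad             # eq. (10)
    state["n"] = nu * state["n"] + (1 - nu) * grad ** 2        # eq. (11)
    m_hat = state["m"] / (1 - mu ** t)                         # eq. (12)
    n_hat = state["n"] / (1 - nu ** t)                         # eq. (13)
    return theta - eta * m_hat / (np.sqrt(n_hat) + eps), state # eq. (14)

theta = np.zeros(3)
state = {"m": np.zeros(3), "n": np.zeros(3), "t": 0}
grad = np.array([0.1, -0.2, 0.3])          # stand-in gradient
theta, state = adam_step(theta, grad, state)
print(theta)
```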
In optimizing the model, we used not only the Adam optimisation method but also Adagrad, Momentum, RMSprop and GradientDescent, and the evaluation indices of the recommendation system after each optimization are shown in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>. Although each optimizer has its own advantages and disadvantages on different datasets, a gap remains compared with the Adam method, and the network and all model parameters are kept the same when comparing the optimization methods, so the Adam optimization method is chosen for our SSN GATNE-T model.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Friedman test</ns0:head><ns0:p>To better describe the effectiveness of the model, we use the Friedman test to analyse how effectively our SSN GATNE-T model obtains relevance information from small and large networks containing multiple types of nodes and edges through the attributed multiplex heterogeneous network and the improved attention mechanism, to verify how effectively the model recovers the missing interaction information between different nodes and different edge types in large networks of hundreds of millions of nodes and obtains potential information, and then to verify how effectively the obtained information improves the accuracy of recommendations when used in the recommendation system.</ns0:p><ns0:p>For both the YouTube and Amazon datasets, we defined four variables, edge-type, node1, node2 and label, to perform nonparametric tests on the results we obtained and check the implied relevance; node1 and node2 represent the user nodes and item nodes, respectively. During the analysis, the five edge-type interactions in the YouTube dataset, including shared friends and shared subscriptions, are represented by 1, 2, 3, 4 and 5, respectively. For the Amazon dataset, the two edge-type interactions built from the product attributes and the links between products, collaborative viewing and collaborative purchase, are represented by 1 and 2, respectively, and the label is represented by 1 and 0 to indicate yes or no. For the correlation information between the nodes of these two datasets and the different edge types, we use edge-type and label to perform nonparametric tests on the related samples node1 and node2. To this end, we summarize the hypothesis tests and the related-samples Wilcoxon signed-rank tests: Tables <ns0:ref type='table' target='#tab_9'>5 and 6</ns0:ref> describe the tests for the YouTube dataset, and Tables <ns0:ref type='table' target='#tab_11'>7 and 8</ns0:ref> describe those for the Amazon dataset. In Tables <ns0:ref type='table' target='#tab_9'>5 and 6</ns0:ref>, which summarize the statistics of the model on the YouTube dataset, we performed the related-samples Wilcoxon signed-rank test on the model results with the null hypothesis that the median of the difference between node1 and node2 equals 0.
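For illustration, the sketch below shows how such a related-samples Wilcoxon signed-rank test can be run with SciPy; it is not the authors' analysis script, and the node1/node2 arrays are toy stand-ins for the paired per-sample results described here.

```python
# A minimal sketch (not the authors' analysis script) of a related-samples
# Wilcoxon signed-rank test such as the one summarized in Tables 5-8, using
# scipy; node1/node2 below are toy stand-ins for the paired samples.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
node1 = rng.normal(loc=0.0, scale=1.0, size=500)           # paired sample 1
node2 = node1 + rng.normal(loc=0.0, scale=0.1, size=500)   # paired sample 2

# H0: the median of the differences node1 - node2 equals 0 (two-sided test)
stat, p_value = wilcoxon(node1, node2, alternative="two-sided")

alpha = 0.05
print(f"statistic={stat:.1f}, p={p_value:.3f}")
print("retain H0" if p_value > alpha else "reject H0")
```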
The total number of samples tested for the YouTube dataset was 262010, the test statistic was 25120694864.000 and the significance level α was 0.05; when P &gt; 0.05 the null hypothesis is retained, otherwise it is rejected. In the testing of the YouTube dataset, the significance P obtained from the test was 1.000 and the asymptotic significance P from the two-sided test was also 1.000, so the null hypothesis was retained. Tables <ns0:ref type='table' target='#tab_11'>7 and 8</ns0:ref> depict the summary statistics of the model on the Amazon dataset, for which we performed the related-samples Wilcoxon signed-rank test on the model results with the same null hypothesis that the median of the difference between node1 and node2 equals 0. The total number of samples tested was 29488, the test statistic was 234912151.00 and the significance level α was 0.05; the significance P obtained from the test is 0 and the asymptotic significance P of the two-sided test is also 0, so the null hypothesis is rejected. In the hypothesis testing of the YouTube and Amazon datasets, we also applied the Bonferroni adjustment with more stringent significance levels of 0.025 and 0.005 to reduce the error in the test results, and the conclusions were the same as those obtained with a significance level of 0.05. It can be seen that our model provides a stronger boost to the recommendations in the processing and analysis of the YouTube dataset than of the Amazon dataset.</ns0:p><ns0:p>Taking the YouTube dataset as an example, we use edge-type and label as the basis to describe the correlation between node1 and node2 through the five interactions they contain and create continuous fields of node1 and node2 that change with the frequency and total number of nodes; the corresponding histograms are shown in Figure6 and Figure7.</ns0:p><ns0:p>Taking the Amazon dataset as an example, we processed the model case by case, as shown in Table <ns0:ref type='table' target='#tab_12'>9</ns0:ref>. During this process, the missing values for both the user and the item are 0. In our model, this is due to the introduction of the attributed multiplex heterogeneous network, which avoids the problem of missing information in the process of obtaining the potential correlation information between nodes and edge types, making the interaction information we obtain more comprehensive and showing the effectiveness of our model. In addition, we describe the estimated distribution parameters of the model, as shown in Table <ns0:ref type='table' target='#tab_13'>10</ns0:ref>.
We also described the normal P-P graph and detrended normal P-P graph of node2 through edge types based on node1 and label, as shown in Figure8 and Figure9.</ns0:p><ns0:p>The data and graphs obtained through the Friedman test show that our model has a very obvious effect on the acquisition of the correlation information between each node and different edge-types in the large and small networks, and more potential related information is obtained; moreover, the more effectively we can make accurate recommendations for users. Taking YouTube as an example, our model can identify and obtain more of the five interaction relationships between users and projects: sharing friends, sharing subscriptions, sharing subscribers, and sharing favourite videos, showing the effectiveness of the recommendation effect of our model.</ns0:p><ns0:p>The Friedman test , on the one hand , shows that our SSN GATNE-T models does not suffer from missing node information due to the variety and number of nodes and edge types it contains when dealing with large web datasets such as Amazon and Youtube , as evidenced by the zero missing user and item values. On the other hand , the summary of the Wilcoxon signed-rank test and the summary of the node1 and node2 case processing , as well as the estimation of the distribution parameters , demonstrate that the model is more comprehensive in obtaining information about the potential interactions between different nodes and different edge types.Both the absence of missing nodes in the user's project and the availability of more comprehensive information about the potential interactions between different nodes and different edge types can improve the effectiveness of the recommendation performance by providing the user with more relevant information for the recommendation.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Ablation experiment</ns0:head><ns0:p>In this research, we conducted many experiments on the YouTube and Amazon datasets and obtained Manuscript to be reviewed From the results of the various ablation experiments shown in Table <ns0:ref type='table' target='#tab_14'>11</ns0:ref>, the attribute multiplexing heterogeneous network and the attention mechanism that introduces the softsign and sigmoid function characteristics, the fully connected layer that adds the softsign function characteristics, and the optimization by the Adam optimizer (SSN GATNE-T) have an important impact on the recommendation model.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The evaluation index ROC-AUC of the model reached 0.8583 and 0.9754 on the YouTube and Amazon datasets, respectively; PR-AUC reached 0.8437 and 0.9717, respectively; F1 reached 0.7812 and 0.9386, respectively. We tested our model on the YouTube dataset and found that compared to only introducing attributed multiplex heterogeneous networks and the original attention mechanism (GATNE-T), our model can better compare the obtained nodes. The relevant information between edge-types is better processed and analysed, so the three evaluation indexes of ROC-AUC, PR-AUC and F1 increased by 1.44%, 2.98% and 1.68%, respectively. Compared to the attribute multiplexing heterogeneous network, we improved the introduction of the attention mechanism (S GATNE-T) with softsign and sigmoid function characteristics.</ns0:p><ns0:p>Our model can better reduce the information mined by the model through the interaction of different node attributes and different edge-types, which can be used for accurate recommendation. 
The potential information loss is considered, and the recommended evaluation indexes have been increased by 0.96%, Manuscript to be reviewed Compared with other models, our SSN GATNE-T model can not only better process and analyse the related information between the obtained nodes and edge types but also reduce the number of users that can be mined for accurate recommendation. The loss of potential information, in addition, can make the model converge better. Through the attribute multiplexing heterogeneous network and our improved attention mechanism, we can obtain the user's potential interests and hobbies for the most suitable recommendations for the user, cooperate with the fully connected layer that introduces the softsign function to process the model, and then optimize the effectiveness of the recommendation model by the Adam optimizer for accurate recommendation effects.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>To better describe the improvement in the recommendation performance of each module of our SSN GATNE-T model in the YouTube and Amazon datasets, we made the experimental results of our model's ablation experiment on the two datasets into Figure10 and Figure11. The line charts of Figure10 show that the progress of each module of our model has greatly improved the recommendation performance of the YouTube dataset. Figure11 shows that in the Amazon dataset recommendation process, although our model has improved the three evaluation indexes of ROC-AUC, PR-AUC and F1, the most obvious improvement effect is F1, and the improvement of each module has been improved very well,</ns0:p><ns0:p>showing the effectiveness of our model in the recommendation system.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>In this article, the SSN GATNE-T model that we propose first reuses heterogeneous networks through attributes that can not only help us deal with a single type of node network model but also help us better obtain tens of billions of nodes and many edges. The information of each attribute node in the Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>2 / 26 PeerJ</ns0:head><ns0:label>226</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>integrates</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>3 / 26 PeerJ</ns0:head><ns0:label>326</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>methods currently available. Compare our SSN GATNE-T model with representative methods such as DeepWalk, metapath2vec, PMNE(c)(Principled multilayer network embedding)[30]MVE (Multiplex Network Embedding)[31]GANTE-T[21]GATNE-I[21]FAME and FAME[39].</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>5 / 26 PeerJ</ns0:head><ns0:label>526</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021) Manuscript to be reviewed Computer Science follows. a i,r = softsign w T r sigmoid (W r U i )</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sexual information. 
Since our model obtains and annotates more potential user information, we have more information to provide users with more accurate recommendations. The lower side of Figure2 depicts the recommendation performance comparison of the SSN GATNE-T model and the GATNE-T and GATNE-I models on the YouTube dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure1</ns0:head><ns0:label /><ns0:figDesc>Figure1 and Figure2 show examples of AMHEN. Figure1 contains 2 node types and 5 edge types.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>6 / 26</ns0:head><ns0:label>626</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Recommendation description structure diagram of the YouTube dataset</ns0:figDesc><ns0:graphic coords='9,141.73,65.57,413.58,314.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Recommendation description structure diagram of the Amazon dataset</ns0:figDesc><ns0:graphic coords='9,141.73,413.03,413.58,291.48' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3. The SSN GATNE-T model schematic diagram SSN GATNE-T uses network structure information for embedding with a particular node on each edge type r using base embedding and edge embedding to complete the overall data embedding, while the base embedding between each node is shared across edge types and the edge embedding of a node is achieved by aggregating information about the interactivity around its node, so the output layer of the heterogeneous jump graph specifies a set of polynomial distributions for each node type in the neighbourhood of the input node V.In this example, V = V 1 &#8746;V 2 &#8746;V 3 and K 1 , K 2 , K 3 specify the size of the neighbourhood of V on each node type separately.</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.57,221.07' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>8 / 26</ns0:head><ns0:label>826</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021) Manuscript to be reviewed Computer Science Algorithm 1: SSN GATNE-T Inputs: Network G = (V, &#958; , A), base embedding dimension d,edge embedding dimension s, learning rate ,coefficients &#945;, &#946; . Output: the overall embedding v i,r of all different nodes v with different edge types r 1 Initialise all model parameters 2 Generate a random node sample v i , v j , r of edge type r 3 Generate training sample v i , v j , r by random samples related to edge type r. 4 When not converged 5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>10 / 26 PeerJ</ns0:head><ns0:label>1026</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021) Manuscript to be reviewed Computer Science settings of other parameters do not change during the process of determining these two parameters, and the determination of these two parameters is to first determine one of them and then determine the other while maintaining the determined items. After many tuning experiments, we finally set the embedding size of the whole and the edge to 200 and 10, respectively; the walking length and times of the node to 10 and 20, respectively; and the window to 5. 
The number of negative samples L of each positive training sample fluctuates in the range of 5-50.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>26 PeerJ</ns0:head><ns0:label>26</ns0:label><ns0:figDesc>compare our model SSN GATNE-T with models including metapath2vec, ANRL, PMNE(n), PMNE(r), PMNE(c), MVE, MNE, GATNE-T GATNE-I , FAME and FANEm. Our model has different effects on different datasets, and each model has different effects on these three evaluation indexes. Compared with other models, our model SSN GATNE-T aggregates neighbour nodes to generate their own edge embedding representations for different types of edge types connected to nodes and then introduces an improved attention mechanism to calculate their respective attention. The coefficient to performs a fusion, thereby obtaining the overall embedding representation of the node. Compared with other 11/Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>26 PeerJ</ns0:head><ns0:label>26</ns0:label><ns0:figDesc>The user's hobbies and interests obtained from different interactions in the YouTube dataset are also different and should be treated differently. Our model SSN GATNE-T will not ignore these node attribute user-item interactions. Compared with the four models (DEEPWalk, metapath2vec, MVE, and PMNE(c)), our model pays more attention to the attributes of nodes and user-item interactions. The five interaction12/Comput. Sci. reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>For large network datasets 26 PeerJ</ns0:head><ns0:label>26</ns0:label><ns0:figDesc>using network embedding methods of the acquisition of correlation information about different nodes and different edge types the models that work relatively well are GATNE-T , GATNE-I , FAME , FANEm and our SSN GATNE-T model , the evaluation indices of each model are shown in Table3. The GATNE-T and GATNE-I models are accomplished by dividing the overall embedding of the model into base embedding and edge embedding , and the GATNE-I model also incorporates attribute embedding . Compare with the GATNE-T and GANTE-I models , the edge embedding of SSN GATNE-T is more powerful because our improved attention mechanism can better aggregate the neighbourhood node information and then obtain more relevant information between the node and edge types for recommendation to improve the accuracy of the recommendation model.Compared to the GATNE-T and GATNE-I models , the SSN GATNE-T model obtained significant improvements in all three evaluation indices , ROC-AUC , PR-AUC , and F 1 , especially the F1 index , for the Amazon and YouTube datasets.On the Amazon dataset , the assessment index F1 improved by +1.07% compared to GATNE-T and by +2.74% compared to GATNE-I ; on the YouTube dataset , the assessment index F1 improved by +1.68% compared to GATNE-T and by +1.68% compared to GATNE-I.The FAME and FAMEm models to achieve fast and effective node representation learning by cleverly integrating spectral graph transformations and node attribute features into a sparse random projection system , and although it takes less time to obtain interaction information on nodes and edge types than the SSN GATNE-T model , the number of interactions it obtains is less than that of the SSN GATNE-T model . 
The ultimate goal of acquiring information is to improve the effectiveness of the recommendation performance, and the SSN GATNE-T model is more capable of acquiring information about potential interactions between different nodes and edge types, which in turn leads to more accurate recommendations. Compared with the FAME and FAMEm models on the Amazon dataset, the SSN GATNE-T model also achieved significant improvements in the ROC-AUC, PR-AUC and F1 evaluation indices, with ROC-AUC improving by 1.71% and 3.88%, PR-AUC by 2.28% and 3.70%, and F1 by 4.29% and 5.82%, respectively. This can be verified from the comparison of the evaluation indices between the models in Table 3. Therefore, compared with the GATNE-T, GATNE-I, FAME and FAMEm models, our SSN GATNE-T model is able, on both the Amazon and YouTube datasets, to aggregate very well the interaction information between the different edge types of each node's neighbouring nodes, owing to its stronger edge embedding capability.</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_16'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. YouTube dataset evaluation results comparison line chart</ns0:figDesc><ns0:graphic coords='17,141.73,66.72,413.58,262.28' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. A histogram of continuous field information for node1 as the frequency and total number of nodes change.</ns0:figDesc><ns0:graphic coords='20,141.73,172.27,413.59,413.59' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Histogram of continuous field information for node1 as the frequency and total number of nodes change.</ns0:figDesc><ns0:graphic coords='21,141.73,172.28,413.58,413.58' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>In the experiments of this article, to obtain more effective information about users and items and to identify more of users' potential interests, we first set the parameters of the entire model. With the other embedding parameters fixed, we tested several overall embedding dimensions, including 50, 100, 150 and 300, and the best result was achieved with an overall embedding dimension of 200; similarly, we tested several edge embedding dimensions, including 5, 10, 20 and 30, and the best result was achieved with an edge embedding dimension of 10. The overall embedding dimension and the edge embedding dimension of the SSN GATNE-T model are therefore set to 200 and 10, respectively. The node walk length and number of walks are set to 10 and 20, respectively, and the window size is set to 5. The number of negative samples L for each positive training sample varies in the range of 5-50. For each edge type r, the coefficients α_r and β_r are set to 1. In addition, the model in this paper is optimized by the Adam optimizer. To prove the validity of the model proposed in this study, we performed an ablation analysis on the model itself and compared it with previously proposed models.
</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_20'><ns0:head>Figure 8 .Figure 9 .</ns0:head><ns0:label>89</ns0:label><ns0:figDesc>Figure 8. Normal P-P diagram of node2</ns0:figDesc><ns0:graphic coords='23,141.73,95.26,413.57,243.35' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>With a batch size of 128, although convergence is faster and training is quicker than with 64, the accuracy is lower and the evaluation results are not satisfactory. Therefore, the hyperparameters epoch and batch size of the SSN GATNE-T model are set to 100 and 64, respectively. In the ablation experiments, we deleted or replaced modules of this model while the epoch and batch size were kept at 100 and 64 and the corresponding parameters were held constant. On these two datasets we evaluated, in turn, the attributed multiplex heterogeneous network with only the original attention mechanism (GATNE-T); the attributed multiplex heterogeneous network with our improved attention mechanism that introduces the softsign and sigmoid function characteristics (S GATNE-T); the same configuration with, in addition, the fully connected layer that adds the softsign function characteristic (SN GATNE-T); and finally the configuration that also adds optimization by the Adam optimizer (SSN GATNE-T), with all other parameter settings unchanged.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. YouTube dataset ablation comparison line chart</ns0:figDesc><ns0:graphic coords='25,141.73,63.78,413.59,272.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. 
Amazon data set ablation comparison line chart</ns0:figDesc><ns0:graphic coords='26,141.73,63.78,413.59,275.05' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Notations to describe</ns0:figDesc><ns0:table><ns0:row><ns0:cell>4/26</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Dataset statistics </ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Dataset Amazon Youtube</ns0:cell></ns0:row><ns0:row><ns0:cell>nodes</ns0:cell><ns0:cell>10116</ns0:cell><ns0:cell>2000</ns0:cell></ns0:row><ns0:row><ns0:cell>edges</ns0:cell><ns0:cell cols='2'>148865 1310617</ns0:cell></ns0:row><ns0:row><ns0:cell>n-types</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>e-types</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 ,</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>the evaluation index of the LINE model is not very good. Compared with the LINE model, ROC-AUC has improved by +33.61%, PR-AUC by +33.39% and F1 by +25.29%.All three assessment indices of the SSN GATNE-T model gained a greater or lesser improvement compared to other models outside the LINE model. Therefore, our model SSN GATNE-T has a good recommendation effect on the YouTube dataset and has a good effect on the three evaluation indexes.The Amazon dataset includes collaborative viewing and collaborative purchasing links between product attributes and products, which are often overlooked. Our model SSN GATNE-T does not ignore these node attribute user-item interactions. Compared with the four models DEEPWalk, metapath2vec, MVE, and PMNE(c), our model pays more attention to the node attributes and user-item interactions. Since the interaction relationship contained in the Amazon data set is only the collaborative viewing and collaborative purchase links between product attributes and products, the four models of DEEPWalk, metapath2vec, MVE, and PMNE(c) have good evaluation results on the Amazon data set. Therefore, its evaluation index is less effective than YouTube, a dataset with five interaction relationships. However, our model is still greatly improved compared to the four models of DEEPWalk, metapath2vec, MVE, and PMNE(c). In comparison with the observations in Table3, the evaluation index of the ANRL model is not very good.</ns0:figDesc><ns0:table /><ns0:note>Its recommended performance is the worst. Compared with the ANRL model, the SSN GATNE-T model has increased ROC-AUC by +36.08%, PR-AUC by +38.21%, and F1 by +38.60%.All three assessment indices of the SSN GATNE-T model received a greater or lesser improvement compared to other models outside the ANRL model. 
Therefore, our model SSN GATNE-T has a good recommendation effect on the Amazon dataset, especially the evaluation index F1.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison of evaluation results of various models</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>YouTube</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Amazon</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepWalk</ns0:cell><ns0:cell>0.7111</ns0:cell><ns0:cell>0.7004</ns0:cell><ns0:cell>0.6552</ns0:cell><ns0:cell>0.942</ns0:cell><ns0:cell>0.9403</ns0:cell><ns0:cell>0.8738</ns0:cell></ns0:row><ns0:row><ns0:cell>node2vec</ns0:cell><ns0:cell>0.7121</ns0:cell><ns0:cell>0.7032</ns0:cell><ns0:cell>0.6536</ns0:cell><ns0:cell>0.9447</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell>0.8788</ns0:cell></ns0:row><ns0:row><ns0:cell>LINE</ns0:cell><ns0:cell>0.6424</ns0:cell><ns0:cell>0.6325</ns0:cell><ns0:cell>0.6235</ns0:cell><ns0:cell>0.8145</ns0:cell><ns0:cell>0.7497</ns0:cell><ns0:cell>0.7635</ns0:cell></ns0:row><ns0:row><ns0:cell>metapath2vec</ns0:cell><ns0:cell>0.7098</ns0:cell><ns0:cell>0.7002</ns0:cell><ns0:cell>0.6534</ns0:cell><ns0:cell>0.9415</ns0:cell><ns0:cell>0.9401</ns0:cell><ns0:cell>0.8748</ns0:cell></ns0:row><ns0:row><ns0:cell>ANRL</ns0:cell><ns0:cell>0.7593</ns0:cell><ns0:cell>0.7321</ns0:cell><ns0:cell>0.7065</ns0:cell><ns0:cell>0.7168</ns0:cell><ns0:cell>0.703</ns0:cell><ns0:cell>0.6772</ns0:cell></ns0:row><ns0:row><ns0:cell>PMNE(n)</ns0:cell><ns0:cell>0.6506</ns0:cell><ns0:cell>0.6359</ns0:cell><ns0:cell>0.6085</ns0:cell><ns0:cell>0.9559</ns0:cell><ns0:cell>0.9548</ns0:cell><ns0:cell>0.8937</ns0:cell></ns0:row><ns0:row><ns0:cell>PMNE(r)</ns0:cell><ns0:cell>0.7061</ns0:cell><ns0:cell>0.6982</ns0:cell><ns0:cell>0.6539</ns0:cell><ns0:cell>0.8838</ns0:cell><ns0:cell>0.8856</ns0:cell><ns0:cell>0.7967</ns0:cell></ns0:row><ns0:row><ns0:cell>PMNE(c)</ns0:cell><ns0:cell>0.7039</ns0:cell><ns0:cell>0.6982</ns0:cell><ns0:cell>0.6539</ns0:cell><ns0:cell>0.9355</ns0:cell><ns0:cell>0.9346</ns0:cell><ns0:cell>0.8642</ns0:cell></ns0:row><ns0:row><ns0:cell>MVE</ns0:cell><ns0:cell>0.7039</ns0:cell><ns0:cell>0.7010</ns0:cell><ns0:cell>0.6510</ns0:cell><ns0:cell>0.9298</ns0:cell><ns0:cell>0.9305</ns0:cell><ns0:cell>0.8780</ns0:cell></ns0:row><ns0:row><ns0:cell>MNE</ns0:cell><ns0:cell>0.8230</ns0:cell><ns0:cell>0.8218</ns0:cell><ns0:cell>0.7503</ns0:cell><ns0:cell>0.9028</ns0:cell><ns0:cell>0.9174</ns0:cell><ns0:cell>0.8325</ns0:cell></ns0:row><ns0:row><ns0:cell>GATNE-T</ns0:cell><ns0:cell>0.8461</ns0:cell><ns0:cell>0.8193</ns0:cell><ns0:cell>0.7683</ns0:cell><ns0:cell>0.9744</ns0:cell><ns0:cell>0.9705</ns0:cell><ns0:cell>0.9287</ns0:cell></ns0:row><ns0:row><ns0:cell>GATNE-I</ns0:cell><ns0:cell>0.8447</ns0:cell><ns0:cell>0.8232</ns0:cell><ns0:cell>0.7683</ns0:cell><ns0:cell>0.9625</ns0:cell><ns0:cell>0.9477</ns0:cell><ns0:cell>0.9136</ns0:cell></ns0:row><ns0:row><ns0:cell>FAMEm</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell>0.887</ns0:cell></ns0:row><ns0:row><ns0:cell>FAME</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.959</ns0:cell><ns0:cell>0.950</ns0:cell><ns0:cell>0.900</ns0:cell></ns0:row><ns0:row><ns0:cell>SSN 
GATNE-T</ns0:cell><ns0:cell>0.8583</ns0:cell><ns0:cell>0.8437</ns0:cell><ns0:cell>0.7812</ns0:cell><ns0:cell>0.9754</ns0:cell><ns0:cell>0.9717</ns0:cell><ns0:cell>0.9386</ns0:cell></ns0:row></ns0:table><ns0:note>of our model , we can obtain more comprehensive The potential interaction information between different nodes and edge types can be more comprehensively captured and used only for recommendation , thus improving the effectiveness of the model's recommendation performance.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of model optimization approaches</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Youtube</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Amazon</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Adagrad</ns0:cell><ns0:cell>0.5549</ns0:cell><ns0:cell>0.5546</ns0:cell><ns0:cell>0.5452</ns0:cell><ns0:cell>0.5849</ns0:cell><ns0:cell>0.5510</ns0:cell><ns0:cell>0.5632</ns0:cell></ns0:row><ns0:row><ns0:cell>Momentum</ns0:cell><ns0:cell>0.8256</ns0:cell><ns0:cell>0.8327</ns0:cell><ns0:cell>0.7677</ns0:cell><ns0:cell>0.9547</ns0:cell><ns0:cell>0.953</ns0:cell><ns0:cell>0.9205</ns0:cell></ns0:row><ns0:row><ns0:cell>RMSprop</ns0:cell><ns0:cell>0.8435</ns0:cell><ns0:cell>0.8365</ns0:cell><ns0:cell>0.7796</ns0:cell><ns0:cell>0.9685</ns0:cell><ns0:cell>0.9657</ns0:cell><ns0:cell>0.9302</ns0:cell></ns0:row><ns0:row><ns0:cell>GradientDescent</ns0:cell><ns0:cell>0.8297</ns0:cell><ns0:cell>0.7977</ns0:cell><ns0:cell>0.7500</ns0:cell><ns0:cell>0.9726</ns0:cell><ns0:cell>0.9675</ns0:cell><ns0:cell>0.9293</ns0:cell></ns0:row><ns0:row><ns0:cell>Adam</ns0:cell><ns0:cell>0.8583</ns0:cell><ns0:cell>0.8437</ns0:cell><ns0:cell>0.7812</ns0:cell><ns0:cell>0.9754</ns0:cell><ns0:cell>0.9717</ns0:cell><ns0:cell>0.9386</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>YouTube dataset assumptions</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Hypothesis test summary</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Original hypothesis</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Signi f icance a,b</ns0:cell><ns0:cell>Decision</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>The median of the difference between node1 is equal to 0. and node2</ns0:cell><ns0:cell>Wilcoxon Signed Samples. Rank Test for Correlated</ns0:cell><ns0:cell>1.000</ns0:cell><ns0:cell>Keep the original hypothesis.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>a. The significance level is 0.050.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>b. 
Shows progressive significance</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Wilcoxon signed rank test for relevant samples of YouTube dataset Wilcoxon signed rank test for relevant samples of YouTube dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>TotalN</ns0:cell><ns0:cell>262010</ns0:cell></ns0:row><ns0:row><ns0:cell>Test statistics</ns0:cell><ns0:cell>25120694864.000</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard error</ns0:cell><ns0:cell>.000</ns0:cell></ns0:row><ns0:row><ns0:cell>Standardized test statistics</ns0:cell><ns0:cell>.000</ns0:cell></ns0:row><ns0:row><ns0:cell>Progressive significance (two-sided test)</ns0:cell><ns0:cell>1.000</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Hypothesis testing of Amazon data set</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Hypothesis test summary</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Original hypothesis</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Signi f icance a,b</ns0:cell><ns0:cell>Decision</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>The median of the difference between node1 is not equal to 0. and node2</ns0:cell><ns0:cell>Wilcoxon Signed Samples. Rank Test for Correlated</ns0:cell><ns0:cell>1.000</ns0:cell><ns0:cell>Reject the original hypothesis.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>a. The significance level is 0.050.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>b. Shows progressive significance</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Wilcoxon signed rank test for relevant samples of Amazon dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Wilcoxon signed rank test for relevant samples of Amazon dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>TotalN</ns0:cell><ns0:cell>29488</ns0:cell></ns0:row><ns0:row><ns0:cell>Test statistics</ns0:cell><ns0:cell>234912151.000</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard error</ns0:cell><ns0:cell>1461801.487</ns0:cell></ns0:row><ns0:row><ns0:cell>Standardized test statistics</ns0:cell><ns0:cell>11.985</ns0:cell></ns0:row><ns0:row><ns0:cell>Progressive significance (two-sided test)</ns0:cell><ns0:cell>.000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>17/26</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:66273:1:0:CHECK 23 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Case handling</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Case handling summary</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>node2</ns0:cell><ns0:cell>label</ns0:cell></ns0:row><ns0:row><ns0:cell>Series or sequence length</ns0:cell><ns0:cell /><ns0:cell cols='2'>29492 29492</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Number of missing values User missing values</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>in the graph</ns0:cell><ns0:cell>System missing value</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>The cases are not weighted.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Estimation of distribution parameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Estimated distribution parameters</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>node2</ns0:cell><ns0:cell>label</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>position</ns0:cell><ns0:cell>332795.29</ns0:cell><ns0:cell>.50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>distribution Scaling 145311.429 .500</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>The cases are not weighted.</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Ablation studies on YouTube and Amazon datasets For the hyper parameters of the SSN GATNE-T model we chose epoch and batch size to tune the accuracy of the model recommendations. When the batch size and other parameters are fixed, the model is tuned by adjusting the epoch to 10, 30, 50, 100, 150 and 200, respectively, to run the model.It wasfound that the model was not optimal at the end of the run for epoches of 10 , 30 and 50 , whereas for epoches of 150 and 200 , the model reached the optimal solution earlier and then stopped , but the model evaluation results was smaller than the optimal solution obtained for epoch 100 . Therefore, if the epoch is chosen to be 100, the final result of the model will be obtained to the optimal solution, neither because the number of iterations is insufficient to reach the optimal solution nor because the model reaches the optimal solution early, which causes the model to be terminated early and the result is not accurate.When epoch was 100 and other parameters were fixed , we determined the size of batch size to be 64 through experiments , and we also chose batch size of 16 , 32 and 128 to adjust the model . 
The results obtained with these batch sizes are not as good as those obtained with a batch size of 64.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Youtube</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Amazon</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell cols='2'>ROC-AUC PR-AUC</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>GATNE-T</ns0:cell><ns0:cell>0.8461</ns0:cell><ns0:cell>0.8193</ns0:cell><ns0:cell>0.7683</ns0:cell><ns0:cell>0.9744</ns0:cell><ns0:cell>0.9705</ns0:cell><ns0:cell>0.9287</ns0:cell></ns0:row><ns0:row><ns0:cell>S GATNE-T</ns0:cell><ns0:cell>0.8501</ns0:cell><ns0:cell>0.8273</ns0:cell><ns0:cell>0.7702</ns0:cell><ns0:cell>0.9692</ns0:cell><ns0:cell>0.9711</ns0:cell><ns0:cell>0.9281</ns0:cell></ns0:row><ns0:row><ns0:cell>SN GATNE-T</ns0:cell><ns0:cell>0.8536</ns0:cell><ns0:cell>0.8356</ns0:cell><ns0:cell>0.7776</ns0:cell><ns0:cell>0.9747</ns0:cell><ns0:cell>0.9698</ns0:cell><ns0:cell>0.9343</ns0:cell></ns0:row><ns0:row><ns0:cell>SSN GATNE-T</ns0:cell><ns0:cell>0.8583</ns0:cell><ns0:cell>0.8437</ns0:cell><ns0:cell>0.7812</ns0:cell><ns0:cell>0.9754</ns0:cell><ns0:cell>0.9717</ns0:cell><ns0:cell>0.9386</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
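To make the attention computation that distinguishes the S/SN/SSN GATNE-T variants in the table above more concrete, the following NumPy sketch applies softsign on top of a sigmoid-transformed projection, mirroring the formula a_{i,r} = softsign(w_r^T sigmoid(W_r U_i)) quoted earlier; the softmax normalisation step, the dimensions and all variable names are illustrative assumptions, not the authors' released implementation.

# Hedged sketch of the softsign/sigmoid attention fusion used to weight
# per-edge-type edge embeddings; shapes and names are illustrative assumptions.
import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_type_attention(U_i, W_r, w_r):
    """U_i: (s, m) edge embeddings of node i over m edge types,
    W_r: (da, s) projection for edge type r, w_r: (da,) context vector."""
    scores = softsign(w_r @ sigmoid(W_r @ U_i))   # (m,) raw attention scores
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                        # assumed softmax-normalised coefficients

# Toy usage: 3 edge types, edge-embedding size 10, attention size 20.
rng = np.random.default_rng(1)
U_i = rng.normal(size=(10, 3))
W_r = rng.normal(size=(20, 10))
w_r = rng.normal(size=20)
print(edge_type_attention(U_i, W_r, w_r))         # weights summing to 1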
"Response to the Comments of the Editors and All Reviewers Dear Editor, We have studied the valuable comments from you and reviewers carefully. Thanks a lot for the reviewers to help us to improve the quality of the manuscript.  Following upon the reviewers' suggestions, we have revised the manuscript.The point to point responds to the reviewer’s comments are listed as following: Response to the reviewer’s comments: Reviewer #1: Basic reporting Comment 1: Novelty of idea is not clear. What is your main impact from the research and results? Make clear presentation also in relation to recent ideas in the field of deep learning. Response: Thank you so much for your careful check.In the field of deep learning,the processing of large network models on billions or even tens of billions of nodes and numerous edge types is still flawed,and the accuracy of recommendations is greatly compromised when large network embeddings are applied to recommendation systems.To solve the problem of inaccurate recommendations caused by processing deficiencies in large networks,this paper combines the attributed multiplex heterogeneous network with the attention mechanism that introduces the softsign and sigmoid function characteristics and derives a new framework SSN_GATNE-T.The framework is well suited to deal with the shortcomings of obtaining accurate interaction information due to the presence of a large number of nodes and edge types in the embedding of large network models, and also proves to be effective in improving the performance of recommendations after addressing the shortcomings of large networks.In addition, we have revised and added the motivation and implications of the model study in the summary and introduction sections. Comment 2: Related models to present: MobileGCN applied to low-dimensional node feature learning, Self-attention negative feedback network for real-time image super-resolution Response: We are very grateful for your valuable comments.In the introduction section we have added a description of the MobileGCN and Self-attention sections and cited the relevant literature. Comment 3: Revise abstract to better show your main achievement and results. Response: Thank you so much for your careful check.For the abstract section we have modified it to describe the current shortcomings of deep learning, added the motivation for the model study, and added the implications of the model findings. Experimental design Comment 1: Proposed in fig. 1 – fig. 2 model is not clear. What is the main input to this model? How do you preprocess info on the input? Response: Thank you so much for your careful check.In the SSN_GATNE-T model , more than 15% of the node pairs in the two datasets we use will have more than one type of edge , and users may have multiple interaction information about the items . Our model accomplishes embedding of node as well as edge types through Base Embedding and Edge Embedding , where Base Embedding is accomplished by sharing between different types of edges contained in different nodes , and Edge Embedding is calculated by aggregating information about their neighbouring nodes through an attention mechanism . 
The base embedding is done by sharing between different types of edges contained in different nodes , while the edge embedding is computed by aggregating the relevant information about its neighbouring nodes through an attention mechanism.The two embedding methods are used to provide information on the interaction between the two types of nodes , user and project , and between the different edge types , and the nodes contain both user and project information . In the Youtube dataset , user input information includes gender , age and address , while item input information includes movie title and movie genre. Comment 2: What features are considered by your model? It is not presented how do you make them from the input. Do you use normalization before processing? Response: We are very grateful for your valuable comments.Our model takes into account the characteristics of both user and project nodes as well as the characteristics of the different edge type between nodes . The Youtube dataset has user input information such as gender , age and address , and item input information such as movie title and movie genre . Each node is embedded in two parts : ' base embedding ' and ' edge embedding ' , where the purpose of base embedding is to isolate the topological features of the network , and edge embedding are to embed a particular node on different edge types . Amazon and Youtube contain 2 and 5 edge types respectively , and then an improved attention mechanism is introduced to fuse them to obtain the overall embedding representation of the nodes . Aggregate the initial embedding of nodes and edge types by means of a mean aggregator or by using a max-pool aggregator. Comment 3: How is the model of graph in fig. 3 constructed? Response: Thank you so much for your careful check.The SSN_GATNE-T model schematic diagram: SSN_GATNE-T uses network structure information for embedding with a particular node on each edge type r using base embedding and edge embedding to complete the overall data embedding, while the base embedding between each node is shared across edge types and the edge embedding of a node is achieved by aggregating information about the interactivity around its node, so the output layer of the heterogeneous jump graph specifies a set of polynomial distributions for each node type in the neighbourhood of the input node V. In this example, and specify the size of the neighbourhood of V on each node type separately. Comment 4: Why Adam was used in your model? Did you test other algorithms? Response: We are very grateful for your valuable comments.Since the Adam optimisation approach allows the model to converge faster and then achieve accurate recommendations faster and better , and is also very helpful for model tuning , we have adopted the Adam optimization approach from to the SSN_GATNE-T model. In optimizing the model , we used not only Adam's optimisation method but also Adagrad , Momentum , RMSprop and GradientDescent to optimize the model , and the evaluation indices of the recommendation system after optimization is shown in Table 4 . Although each optimizer has its own advantages and disadvantages for different datasets , there is still a gap compared to the Adam optimization method , and the network and each parameter of the model are the same when choosing the model optimization method , so the Adam optimization method is chosen for our SSN_GATNE-T model. Validity of the findings Comment 1: It is not necessary to use probability in eq. (15) – eq. (16). 
Do you really use probability in your model? Can you show in which place of model it is used? Response: Thank you so much for your careful check.This section is mainly used to explain the ROC-AUC as an evaluation indicator, which we have modified. Comment 2: Your model needs comparisons to other model and comparisons on other data. Response: We are very grateful for your valuable comments.We have added a comparison of the SSN_GATNE-T model with GATNE-T , GATNE-I , FAME and FANEm , which are the current models with good results of the Amazon and Youtube datasets , in the experimental section . In the first half of the experiment a short description of the advantages of our model compared to the state-of-the-art recommendation models that currently exist is given . In addition , the parametric part of the model is also described in the ablation experiments . Comment 3: It is not clear what is your research hypothesis for amazon or youtube data. We read about results but what actually do you want to verify? Response: Thank you so much for your careful check.We use the Amazon and Youtube datasets to validate the effectiveness of our SSN_GATNE-T model on large networks with different nodes and edge types, and how much the information obtained improves the accuracy of recommendations for users when used in recommendations.The SSN_GATNE-T framework is shown to be able to deal with the deficiencies in obtaining accurate interaction information due to the presence of a large number of nodes and edge types of the embedding of large network models , and is also shown to be effective against improving recommendation performance after addressing the deficiencies in large networks . Additional comments Comment 1: Images need better quality, since resolution is low and will have bad outlook in paper Response: We are very grateful for your valuable comments.We have added a clear image in the supplementary file after increasing the resolution. Reviewer #2: Experimental design Comment 1: I suggest the authors amend related description and depict with experiments. Response: We are very grateful for your valuable comments.We are very grateful for your valuable comments.We have added a comparison of the SSN_GATNE-T model with GATNE-T , GATNE-I , FAME and FANEm , which are the current models with good results of the Amazon and Youtube datasets , in the experimental section . In the first half of the experiment a short description of the advantages of our model compared to the state-of-the-art recommendation models that currently exist is given . In addition , the parametric part of the model is also described in the ablation experiments .For the section on the choice of the optimisation method of the model we have added a comparison experiment of the models with different optimisation methods.In addition, we have added comparisons to other parts of the model's experiments. Validity of the findings Comment 1: The abstract and conclusion need to be improved. The abstract must be a concise yet comprehensive reflection of what is in your paper. Please modify the abstract according to “motivation, description, results and conclusion” parts. I suggest extending the conclusions section to focus on the results you get, the method you propose, and their significance. 
Response:We are very grateful for your valuable comments.We have revised the abstract according to the Motivation, Description, Results and Conclusion sections and have expanded the Conclusion section.Abstract:In the field of deep learning,the processing of large network models on billions or even tens of billions of nodes and numerous edge types is still flawed,and the accuracy of recommendations is greatly compromised when large network embeddings are applied to recommendation systems.To solve the problem of inaccurate recommendations caused by processing deficiencies in large networks,this paper combines the attributed multiplex heterogeneous network with the attention mechanism that introduces the softsign and sigmoid function characteristics and derives a new framework SSN_GATNE-T.The framework is well suited to deal with the shortcomings of obtaining accurate interaction information due to the presence of a large number of nodes and edge types in the embedding of large network models, and also proves to be effective in improving the performance of recommendations after addressing the shortcomings of large networks.In addition, we have revised and added the motivation and implications of the model study in the summary and introduction sections.Conclusion:The experimental results demonstrate the significant advantages of the SSN_GANTE-T model over the current mainstream recommendation algorithms . In addition , the SSN_GATNE-T model is proved to be very good at dealing with the problem of obtaining accurate interaction information when embedding large network models , and is also proved to be effective against improving the recommendation performance after solving the defects of large networks.This shows that the SSN_GATNE-T model does not affect the ability to obtain relevant information about different node and edge types due to the type and number of nodes and edge types , which greatly improves the processing and analysis of large networks with hundreds of millions of nodes , and improves the effectiveness of the recommendation performance as the SSN_GATNE-T model can better obtain potential information on node and edge types of user recommendations. Comment 2: What is the motivation of the proposed method? The details of motivation and innovations are important for potential readers and journals. Please add this detailed description in the last paragraph in section I. Please modify the paragraph according to 'For this paper, the main contributions are as follows: (1) ......' to Section I. Please give the details of motivations. Response: We are very grateful for your valuable comments.In the field of deep learning,the processing of large network models on billions or even tens of billions of nodes and numerous edge types is still flawed,and the accuracy of recommendations is greatly compromised when large network embeddings are applied to recommendation systems.The motivation for the proposed model is to solve the problem of inaccurate recommendations due to processing deficiencies in large networks , and thus to improve the effectiveness of recommendation performance . For this paper, the main contributions are as follows. (1) The SSN_GATNE-T models addresses the shortcomings of network embedding of large network model on hundreds of millions of nodes and edge types that exist on deep learning to obtain information about the interactions between nodes and edge types , and can better handle large network models. 
(2) The problem of large networks acquiring node-edge type interaction defects is solved , allowing the recommendation system to acquire more potential node-edge type interaction information about recommendation of attribute reuse of heterogeneous networks and improved attention mechanism annotation , which in turn improves the effectiveness of recommendation performance. (3) Reduce the loss of potential information about users mined by the model that can be used for accurate recommendations. (4) The introduction to the Adam optimizer allows for faster convergence of the model and is also very helpful in tuning the model. Comment 3: The description of manuscript is very important for potential reader and other researchers. I encourage the authors to have their manuscript proof-edited by a native English speaker to enhance the level of paper presentation. Response: We are very grateful for your valuable comments.We have revised our English manuscript and contacted (copyediting@peerj.com) to make changes to our manuscript. Comment 4: Please update references with recent paper in CVPR, ICCV, ECCV et al and Elsevier, Springer. In your section 1 and section 2, I suggest the authors amend several related literatures and corresponding references in recent years. For example: The improved image inpainting algorithm via encoder and similarity constraint (The Visual Computer); Research on image inpainting algorithm of improved total variation minimization method (Journal of Ambient Intelligence and Humanized Computing); The image annotation algorithm using convolutional features from intermediate layer of deep learning (Multimedia Tools and Applications); Image super-resolution reconstruction based on feature map attention mechanism (Applied Intelligence). Response: We are very grateful for your valuable comments.We have revised the references , citing those published in Springer and Elsevier , and have revised the paper and cited relevant literature through your suggested references . Comment 5: Please check all parameters in the manuscript and amend some related description of primary parameters. In section 3, please write the proposed algorithm in a proper algorithm/pseudocode format with Algorithm. Otherwise, it is very hard to follow. Some examples here: https://tex.stackexchange.com/questions/204592/how-to-format-a-pseudocode-algorithm Response: We are very grateful for your valuable comments.We have added changes in the description of the model section and added a description of the model parameters in the experimental section. A description of the algorithm for the model has been added to the model section. Comment 6: The section 2 is too short, and I suggest the authors amend the details of background and motivation. The main section of manuscript is section 3. I suggest the authors amend related depict of proposed method. Response: We are very grateful for your valuable comments.For Section 2, we have added a description of MobileGCN, a description of large network models, and a brief description of GANTE-T, GATNE-I, FAME and FAME, two models that work well with large network datasets such as Amazon and Youtube.For the third part of the manuscript , we have modified the description of the model by adding a description of the model embedding data , a representation of the embedding method , and a description of the model schematic for the SSN_GATNE-T network structure . 
In addition , we have added a description of the algorithm for the model and revised the description of the optimisation part of the model. Reviewer #3: Basic reporting Comment 1: This article is well organized. The presentation of this paper can be further improved with the help of native speakers of English. Response: We are very grateful for your valuable comments.We have revised our English manuscript and contacted (copyediting@peerj.com) to make changes to our manuscript. Comment 2: All the figures of this article could be replaced with high-resolution ones. Response:Thank you so much for your careful check.We are very grateful for your valuable comments.We have added a clear image in the supplementary file after increasing the resolution. Comment3: In the Introduction section, the motivation of this work is not clear. Why did the work combine HINs and multiplex networks? The authors should point out the background, shortcomings of the existing models, and challenges. Response: We are very grateful for your valuable comments.In the field of deep learning, large network models contain billions of nodes and edge types , but also more than one type of node and edge , and each node contains many different properties . Today's network embedding methods is mainly focused on homogeneous networks , which are characterised by a single type of node and edge.This is not enough for small networks of a wealth of different nodes and edge types , but even more so for large networks with hundreds of millions of different nodes and edge types.To solve the problem of embedding large networks of multiple nodes and multiple edge types , this paper derives a new framework SSN_GATNE-T model by combining to attribute to reuse heterogeneous networks with an attention mechanism that introduces softsign and sigmoid function properties. Comment 4: A few key references of the related work were not included and analyzed in the second section. The authors should carefully review the studies on HIN-based embedding and multiplex network embedding and add those missing works in the revised manuscript. [1] Fenfang Xie, Angyu Zheng, Liang Chen, Zibin Zheng: Attentive Meta-graph Embedding for item Recommendation in heterogeneous information networks. Knowl. Based Syst. 211: 106524 (2021) [2] Léo Pio-Lopez, Alberto Valdeolivas, Laurent Tichit, Élisabeth Remy, Anaïs Baudot: MultiVERSE: a multiplex and multiplex-heterogeneous network embedding approach. Sci. Rep. 11: 8794 (2021) [3] Qiwei Zhong, Yang Liu, Xiang Ao, Binbin Hu, Jinghua Feng, Jiayu Tang, Qing He: Financial Defaulter Detection on Online Credit Payment via Multi-view Attributed Heterogeneous Information Network. WWW 2020: 785-795 [4] Zhijun Liu, Chao Huang, Yanwei Yu, Baode Fan, Junyu Dong: Fast Attributed Multiplex Heterogeneous Network Embedding. CIKM 2020: 995-1004 [5] Yukuo Cen, Xu Zou, Jianwei Zhang, Hongxia Yang, Jingren Zhou, Jie Tang: Representation Learning for Attributed Multiplex Heterogeneous Network. KDD 2019: 1358-1368 [6] Binbin Hu, Zhiqiang Zhang, Chuan Shi, Jun Zhou, Xiaolong Li, Yuan Qi: Cash-Out User Detection Based on Attributed Heterogeneous Information Network with a Hierarchical Attention Mechanism. AAAI 2019: 946-953 [7] Yiming Zhang, Yujie Fan, Yanfang Ye, Liang Zhao, Chuan Shi: Key Player Identification in Underground Forums over Attributed Heterogeneous Information Network Embedding Framework. 
CIKM 2019: 549-558 Response:Thank you so much for your careful check.We have added descriptions of some models in Section 2 and brief descriptions of the PMNE, MVE, MNE, Mvn2vec, GANTE-T, GATNE-I, FAME, and FAMEm models in the Related Knowledge section, with references to the relevant literature. Comment 5: The concept of multiplex networks has been formally defined in the field of network science. However, the authors did not formulate it in this article. How is the attention mechanism applied to the proposed network that is essentially a multiplex network? Response: We are very grateful for your valuable comments.We have described multiple networks and the existence of some advanced correlated network models in the Related Knowledge section , and described the advantages and disadvantages of some correlated models of multiple networks including GANTE-T , GATNE-I , FAME , and FAMEm . We improve the attention mechanism and then influence the entire attribute reuse heterogeneous network model with respect to the correlation information that exists on different nodes and different edge types , thus improving the effectiveness of the recommendation performance by obtaining more correlation information. Experimental design Comment 1: The selected baselines have been proposed for three years. The authors should add more recent approaches (e.g., the above refs. [2], [4], and [5]) based on attributed multiplex heterogeneous networks to compare with the proposed approach. Response: Thank you so much for your careful check.We added the strengths and weaknesses for the GANTE-T , GATNE-I , FAME , FAMEm models from literature 4 and literature 5 and compared the above models with the SSN_GATNE-T model , described the strengths of our model and cited relevant literature.For the FAME and FAMEm models , the literature only mentions the description of the results from / of the Amazon dataset , but not for the Youtube dataset . The results we obtained by replicating the YouTube dataset on the FAME and FAMEm model were not very satisfactory . So we did not add the description of the Youtube dataset on the FAME and FAMEm models Comment 2: In addition to the parameter setting of the proposed approach, the authors should introduce the settings of each baseline’s hyperparameters, which may impact the performance of each baseline. Response: We are very grateful for your valuable comments.In the model, we added two hyperparameters batch_size and epoch to the model.For the hyper parameters of the SSN_GATNE-T model we chose epoch and batch size to tune the accuracy of the model recommendations. When the batch_size and other parameters are fixed, the model is tuned by adjusting the epoch to 10, 30, 50, 100, 150 and 200, respectively, to run the model.It was found that the model was not optimal at the end of the run for epoches of 10 , 30 and 50 , whereas for epoches of 150 and 200 , the model reached the optimal solution earlier and then stopped , but the model evaluation results was smaller than the optimal solution obtained for epoch 100 . 
Therefore, if the epoch is chosen to be 100, the final result of the model will be obtained to the optimal solution, neither because the number of iterations is insufficient to reach the optimal solution nor because the model reaches the optimal solution early, which causes the model to be terminated early and the result is not accurate.When epoch was 100 and other parameters were fixed , we determined the size of batch_size to be 64 through experiments , and we also chose batch size of 16 , 32 and 128 to adjust the model . The results obtained are not as good as those obtained when the batch size of 128 is chosen , although the convergence time is faster and the training times are faster than that of 64 , the accuracy is lower and the evaluation results obtained are not satisfactory . Therefore , the hyper parameters epoch and batch_size of SSN_GATNE-T model are 100 and 64 respectively. Comment 3: The implementation code of the proposed approach is not available for research on the Internet. Response: Thank you so much for your careful check.We have uploaded the dataset and code in the system for validation. Comment 4: In addition to the ablation experiment, the authors should discuss the impacts of some critical hyperparameters (e.g., the dimension of embeddings and the layer number of a multiplex network) on model performance. Response: We are very grateful for your valuable comments.In the model the overall embedding dimension and the edge embedding dimension have a large impact on the model so we describe these two indices.When the other embedding parameters is fixed , for the overall embedding dimension , we choose multiple dimensions including 50 , 100 , 150 and 300 to test the model , and the best result are achieved when the overall embedding dimension is 200 ; similarly , for the edge embedding dimension , we choose multiple dimensions including 5 , 10 , 20 and 30 to test the model , and the best result are achieved when the edge embedding dimension is 10 , so the overall embedding dimension and the edge embedding dimension of SSN_GATNE-T model are 200 and 10 respectively. Validity of the findings Comment 1: The motivation of the Fridman test is not clear. For example, what are the actual meanings of node1 and node2? Why did the Fridman test on the two datasets obtain two different results, i.e., the original hypothesis was retained on the Youtube dataset, while the original hypothesis was rejected on the Amazon dataset. Response: Thank you so much for your careful check.node1 and node2 represent user nodes and project nodes respectively.In order to better describe the effectiveness of the model, we use the Fridman algorithm to analyse the effectiveness of our SSN_GANTE-T model in obtaining relevance information of small and large networks containing multiple types of nodes and multiple types of edges through attributed multiplex heterogeneous network and an improved attention mechanism, and verify the effectiveness of our model in obtaining the missing information on the interaction between different nodes and different edge types in a large network of hundreds of millions of nodes and the effectiveness of obtaining potential information, and then the effectiveness of the obtained information in improving the accuracy of recommendations when used in a recommendation system.Because we have made different assumptions about YouTube and Amazon, we have obtained different results. 
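To illustrate how the non-parametric checks described in this response can be run in practice, the following sketch uses SciPy on synthetic stand-ins for the paired node1/node2 values; the array contents, the third sample required by the Friedman test, and all variable names are assumptions for demonstration only.

# Hedged sketch of the Wilcoxon signed-rank and Friedman checks, assuming
# node1 and node2 are paired 1-D arrays exported from the evaluation run;
# the arrays below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
node1 = rng.normal(loc=0.0, scale=1.0, size=500)
node2 = node1 + rng.normal(loc=0.0, scale=0.1, size=500)

w_stat, w_p = stats.wilcoxon(node1, node2)        # paired Wilcoxon signed-rank test
print(f"Wilcoxon: statistic={w_stat:.3f}, p={w_p:.4f}")

# The Friedman test needs three or more related samples; a third column is
# assumed here purely to show the call.
node3 = node1 + rng.normal(scale=0.1, size=500)
f_stat, f_p = stats.friedmanchisquare(node1, node2, node3)
print(f"Friedman: statistic={f_stat:.3f}, p={f_p:.4f}")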
Comment 2: If possible, the authors should add a subsection to discuss some potential threats to the validity of this work, including construct validity, internal validity, and external validity. Response: We are very grateful for your valuable comments.The Friedman test , on the one hand , shows that our SSN_GATNE-T models does not suffer from missing node information due to the variety and number of nodes and edge types it contains when dealing with large web datasets such as Amazon and Youtube , as evidenced by the zero missing user and item values. On the other hand , the summary of the Wilcoxon signed-rank test and the summary of the node1 and node2 case processing , as well as the estimation of the distribution parameters , demonstrate that the model is more comprehensive in obtaining information about the potential interactions between different nodes and different edge types.Both the absence of missing nodes in the user's project and the availability of more comprehensive information about the potential interactions between different nodes and different edge types can improve the effectiveness of the recommendation performance by providing the user with more relevant information for the recommendation. Jinyong Cheng (Assistant Professor) :cjy@qlu.edu.cn Zhisheng Yang (Student) :yangzhisheng202010@163.com School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences) Jinan,250353, China "
Here is a paper. Please give your review comments after reading it.
319
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Microservice-based Web Systems (MWS), which provide a fundamental infrastructure for constructing large-scale cloud-based Web applications, are designed as a set of independent, small and modular microservices implementing individual tasks and communicating with messages. This microservice-based architecture offers great application scalability, but meanwhile incurs complex and reactive autoscaling actions that are performed dynamically and periodically based on current workloads. However, this problem has thus far remained largely unexplored. In this paper, we formulate a problem of Dynamic Resource Scheduling for Microservice-based Web Systems (DRS-MWS) and propose a similarity-based heuristic scheduling algorithm that aims to quickly find viable scheduling schemes by utilizing solutions to similar problems. The performance superiority of the proposed scheduling solution in comparison with three state-of-the-art algorithms is illustrated by experimental results generated through a well-known microservice benchmark on disparate computing nodes in public clouds.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>As a new computing paradigm, Microservices have been increasingly developed and adopted for various applications in the past years. Driven by this trend, Microservice-based Web Systems (MWS), which have emerged as a prevalent model for distributed computing, are designed as a set of independent, small and modular microservices implementing individual tasks and communicating with messages.</ns0:p><ns0:p>MWS facilitate fast delivery and convenient update of web-based applications, with auto-scalability for the provisioning of virtualized resources, which could be schedule-based, event-triggered, or threshold value-based <ns0:ref type='bibr' target='#b12'>(Guerrero et al., 2018b)</ns0:ref>. These condition-triggered auto-scaling mechanisms are reactive, meaning that the decision on autoscaling actions is made dynamically and periodically based on the current workload. Another important characteristic of MWS is that they are often deployed in a multicloud environment <ns0:ref type='bibr' target='#b7'>(Fazio et al., 2016)</ns0:ref>, this is because modern Web-based applications are often across organizational boundaries. For example, a common on-line shopping scenario typically involves an e-commence company, a manufacturer, and a bank, all of which host their information systems on their own cloud.</ns0:p><ns0:p>Applications deployed on the cloud need to meet performance requirements, such as response time, and also need to reduce the cost of cloud resource usage. Generally, Web-based application service requests are submitted by users and run in cloud service providers' cluster environments. Resource scheduling requires instance scheduling and auto-scaling within a restricted time limit to determine the number of instances for each microservice and decide how to deploy each instance to the appropriate VM, achieving the goal of minimizing resource consumption and best service performance as well as meeting PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science user dynamic requests. 
However, the problem of Dynamic Resource Scheduling for Microservice-based Web Systems (DRS-MWS) is extremely challenging, mainly due to the following factors:</ns0:p><ns0:p>&#8226; NP-hard. The instance scheduling problem in a cloud environment has long been proved to be a typical NP-hard problem <ns0:ref type='bibr' target='#b26'>(Salleh and Zomaya, 1999)</ns0:ref>. Due to the dynamic arrival of microservice requests, resource scheduling algorithms are required to adapt to rapidly changing requirements and environments. Under strict time constraints, it becomes even harder to find an approximately optimal solution.</ns0:p><ns0:p>&#8226; Multi-objective optimization. DRS-MWS is a complex multi-objective optimization problem.</ns0:p><ns0:p>A solution may involve many conflicting and mutually influencing objectives, all of which should be optimized simultaneously as well as possible. For example, resource scheduling algorithms need to provide sufficient service performance for users while maintaining system robustness to prevent system failure.</ns0:p><ns0:p>Existing methods for resource scheduling for MWS include rule-based, heuristic, and learning-based approaches, as discussed in the Related Work section. Among them, evolutionary algorithms (EA), which are heuristic in nature, have recently received a great deal of attention. EA-based approaches are effective in solving complex microservice scheduling problems <ns0:ref type='bibr' target='#b7'>(Fazio et al., 2016)</ns0:ref>, but suffer from inefficiency and thus fail to satisfy the requirement for reactive and dynamic scheduling. This is mainly because they do not consider a priori knowledge about the solution, and often start from a randomly generated initial population. Due to the dynamic nature and the hard time constraint of the microservice scheduling problem, starting with a random population may lead to non-convergence and hence jeopardize the exploration for better solutions <ns0:ref type='bibr' target='#b29'>(Wang et al., 2009)</ns0:ref>.</ns0:p><ns0:p>To address the challenge of dynamic scheduling, we propose Similarity-based Dynamic Resource Scheduling, referred to as Sim-DRS, which aims to quickly find viable scheduling schemes for MWS under a given time constraint. We tackle the problem of Dynamic Resource Scheduling for Microservice-based Web Systems based on one key hypothesis: solutions to similar problems often share certain structures. Therefore, instead of starting from a random population indiscriminately at each initial iteration in a typical EA approach, we focus on finding solutions to similar problems as part of the initial population to improve the quality of population initialization.
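As a rough illustration of this seeding idea, consider the simplified sketch below. The archive of previously solved <workload, solution> pairs, the helper functions, and the top-k selection are stand-ins for the clustered solution database and the roulette-wheel selection described later in the paper, not the actual Sim-DRS code.

import math
import random

def workload_similarity(w1, w2):
    # Two workload vectors are considered similar when their request counts are close.
    return math.exp(-sum((a - b) ** 2 for a, b in zip(w1, w2)))

def random_solution(num_microservices=3, num_vms=3):
    # Placeholder: a random instance-count matrix with illustrative sizes.
    return [[random.randint(0, 2) for _ in range(num_vms)] for _ in range(num_microservices)]

def seeded_population(current_workload, archive, pop_size, seed_ratio=0.4):
    # Seed part of the population with solutions to the most similar past workloads...
    ranked = sorted(archive, key=lambda pair: workload_similarity(current_workload, pair[0]),
                    reverse=True)
    seeds = [solution for _, solution in ranked[:int(seed_ratio * pop_size)]]
    # ...and fill the remainder with random candidates, as a plain EA would.
    return seeds + [random_solution() for _ in range(pop_size - len(seeds))]

archive = [([100, 90, 80], random_solution()), ([10, 5, 2], random_solution())]
print(len(seeded_population([95, 88, 79], archive, pop_size=10)))   # 10 individuals in total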
This strategy has been shown to be powerful for producing better solutions in the literature <ns0:ref type='bibr' target='#b24'>(Rahnamayan et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b29'>Wang et al., 2009)</ns0:ref> and in our practical experiments.</ns0:p><ns0:p>In summary, our work makes the following contributions to the field:</ns0:p><ns0:p>&#8226; We formulate DRS-MWS as a combinatorial optimization problem.</ns0:p><ns0:p>&#8226; We propose Sim-DRS to solve DRS-MWS, which finds promising scheduling schemes by directly utilizing viable solutions to similar problems, hence obviating the need for a fresh start.</ns0:p><ns0:p>&#8226; We evaluate the performance of Sim-DRS through extensive experiments using the well-known microservice benchmark named TeaStore under different scheduling time constraints. We show that Sim-DRS outperforms three state-of-the-art scheduling algorithms by 9.70%-42.77% in terms of three objectives, and achieves more significant improvements under stricter time constraints.</ns0:p><ns0:p>The remainder of this paper is organized as follows. The Related Work section surveys related work and the Problem Statement presents the analytical models of a microservice-based application and formulates the DRS-MWS problem. The Resource Scheduling Algorithm section designs Sim-DRS, a dynamic resource scheduling algorithm based on similarity. The Experiments section describes the experimental setup and evaluates the scheduling algorithm. In the end, the Conclusion and Future Work section presents a discussion of our approach and a sketch of future work.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Resource scheduling for MWS is an active research topic <ns0:ref type='bibr' target='#b7'>(Fazio et al., 2016)</ns0:ref> and has received a great deal of attention from both industry and academia. Previous studies can be classified into three categories: rule-based, heuristic, and learning-based approaches, as discussed below. <ns0:ref type='bibr' target='#b30'>Yan et al. (2017)</ns0:ref> proposed an elastically scalable strategy based on container resource prediction and message queue mapping to reduce the delay of service provisioning. <ns0:ref type='bibr' target='#b14'>Leitner et al. (2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Rule-based Approach</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>a graph-based model for the deployment cost of microservices, which can be used to model the total deployment cost depending on the call patterns between microservices. <ns0:ref type='bibr' target='#b19'>Magalhaes et al. (2017)</ns0:ref> proposed a scheduling architecture consisting of a Web server powered by a soft real-time scheduling engine. <ns0:ref type='bibr' target='#b9'>Gabbrielli et al. (2016)</ns0:ref> proposed JRO (Jolie Redeployment Optimiser) tool to generate a suggested SOA (service-oriented architecture) configuration from a partial and abstract description of a target application. <ns0:ref type='bibr' target='#b8'>Filip et al. (2018)</ns0:ref> proposed a mathematical formulation for describing an architecture that includes heterogeneous machines to handle different microservices. <ns0:ref type='bibr' target='#b32'>Zheng et al. (2019)</ns0:ref> presented SmartVM, a business Service-Level-Agreement (SLA)-aware, microservice-centric deployment framework to handle traffic spikes in a cost-efficient manner. <ns0:ref type='bibr' target='#b6'>Fard et al. 
(2020)</ns0:ref> proposed a general microservice scheduling mechanism and modeled the scheduling problem as a complex variant of the knapsack problem, which can be expanded for various resource requests in queues and solved by multi-objective optimization methods. <ns0:ref type='bibr' target='#b21'>Mirhosseini et al. (2020)</ns0:ref> developed a scheduling framework called Q-Zilla from the perspective of solving the end-to-end queue delay, and the SQD-SITA scheduling algorithm was proposed to minimize the delay caused by microservice distribution.</ns0:p><ns0:p>Rule-based approaches are straightforward and are efficient in simple environments. The scheduling problem can be solved by constructing rules through domain knowledge, using software architecture and simple data modeling theory, which is effective in an environment that meets certain assumptions.</ns0:p><ns0:p>However, they rely heavily on prior domain knowledge, have a low degree of mathematical abstraction, and may be labor-intensive, imprecise, and have poor results in high variability scenarios. <ns0:ref type='bibr' target='#b16'>Li et al. (2018)</ns0:ref> proposed a prediction model for microservice relevance using optimized artificial bee colony algorithm (OABC). Their model takes into account the cluster load and service performance, and has a good convergence rate. <ns0:ref type='bibr' target='#b27'>St&#233;vant et al. (2018)</ns0:ref> used a particle swarm optimization to find the best placement based on the performance of microservices evaluated by the model on different devices to achieve the fastest response time. <ns0:ref type='bibr' target='#b12'>Guerrero et al. (2018b)</ns0:ref> presented an NSGA-II algorithm to reduce service cost, microservice repair time, and microservice network latency overhead. <ns0:ref type='bibr' target='#b0'>Adhikari and Srirama (2019)</ns0:ref> used an accelerated particle swarm optimization (APSO) technique to minimize the overall energy consumption and computational time of tasks with efficient resource utilization with minimum delay. <ns0:ref type='bibr' target='#b17'>Lin et al. (2019)</ns0:ref> proposed an ant colony algorithm that considers not only the utilization of computing and storage resources but also the number of microservice requests and the failure rate of physical nodes. <ns0:ref type='bibr' target='#b11'>Guerrero et al. (2018a)</ns0:ref> proposed an NSGA-II-based approach to optimize system provisioning, system performance, system failure, and network overhead simultaneously. <ns0:ref type='bibr' target='#b4'>Bhamare et al. (2017)</ns0:ref> presented a fair weighted affinity-based scheduling heuristic to reconsider link loads and network delays while minimizing the total turnaround time and the total traffic generated. <ns0:ref type='bibr' target='#b17'>Lin et al. (2019)</ns0:ref> used an ant colony algorithm to solve the scheduling problem. It considered not only the computing and storage resource utilization of physical nodes, but also the number of microservice requests and failure rates of the nodes, and combined multi-objective heuristic information to improve the probability of choosing optimal path. These approaches generally abstract resources scheduling into an optimization problem through appropriate modeling methods and solve the problem in a certain neighborhood. 
And heuristic approaches have been proven to be efficient in finding good scheduling solutions in a high-dimensional space <ns0:ref type='bibr' target='#b11'>(Guerrero et al., 2018a)</ns0:ref>, especially under the circumstances of balancing many conflicting objectives. However, they suffer from low performance and search from a random state, which lead to noneffective use of a priori knowledge of existing good solutions.</ns0:p></ns0:div> <ns0:div><ns0:head>Heuristic Approach</ns0:head></ns0:div> <ns0:div><ns0:head>Learning-based Approach</ns0:head><ns0:p>Alipour and Liu (2017) presented a microservice architecture that adaptively monitors the workload of a microservice and schedules multiple machine learning models to learn the workload pattern online and predict the microservice's workload classification at runtime. <ns0:ref type='bibr' target='#b23'>Nguyen and Nahrstedt (2017)</ns0:ref> proposed MONAD, a self-adaptive microservice-based infrastructure for heterogeneous scientific workflows.</ns0:p><ns0:p>MONAD contains a feedback control-based resource adaptation approach to generate resource allocation decisions without any knowledge of workflow structures in advance. <ns0:ref type='bibr' target='#b10'>Gu et al. (2021)</ns0:ref> proposed a dynamic adaptive learning scheduling algorithm to intelligently sorts, allocates, monitors, and adjusts microservice instances online. <ns0:ref type='bibr' target='#b31'>Yan et al. (2021)</ns0:ref> The learning-based solutions are still in their infancy. The main advantage of these approaches is that they can generate scheduling decisions adaptively and automatically, without any human intervention.</ns0:p><ns0:p>However, these approaches require a considerable number of samples to build a reasonable decision model for a microservice system <ns0:ref type='bibr' target='#b3'>(Alipour and Liu, 2017)</ns0:ref>. The high demand of samples is always challenging because only a very limited set of samples can be acquired during a short time period for resource scheduling in production systems.</ns0:p></ns0:div> <ns0:div><ns0:head>PROBLEM STATEMENT</ns0:head><ns0:p>We study a Dynamic Resource Scheduling problem for Microservice-based Web Systems, referred to as DRS-MWS. As is shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, a microservice-based Web system is often deployed in a multi-cloud environment consisting of a set of interconnected virtual machines. This system acts as a real-time streaming data pipeline that delivers data and messages to microservices. Users can send requests to microservices once deployed according to their own requirements. Each type of microservice provides a unique function, and multiple microservices collectively constitute an integrated service system. Given a batch of dynamic requests at runtime, the goal of DRS-MWS is to find an optimal provisioning policy to improve the system's service quality and robustness while ensuring the high quality. </ns0:p></ns0:div> <ns0:div><ns0:head>System Components</ns0:head><ns0:p>Specifically, DRS-MWS has the following components.</ns0:p><ns0:p>Microservice. 
A microservice-based Web System (MWS) contains many kinds of microservice, and each type provides a specific functionality, which can be modeled as a three tuple ms = C ms , R ms , g , where C ms and R ms represent the normalized CPU and memory resource demand for deploying ms on VMs, respectively; g denotes the full capacity of ms to achieve its functionality.</ns0:p></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_9'>2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Application. Each application, which consists of many microservices and represents a useful business functionality, can implement a corresponding type of requests from users. In this paper, we model an application as a directed acyclic graph (DAG) A = V A , E A , where vertices V A represent a set of</ns0:p><ns0:formula xml:id='formula_0'>m microservices V A = {ms 1 , ms 2 , &#8226; &#8226; &#8226; , ms m }.</ns0:formula><ns0:p>The execution dependency between a pair of adjacent microservices ms i and ms j is denoted by a directed edge (ms i , ms j ) &#8712; E A between them.</ns0:p><ns0:p>Workload. By definition, workload often represents the requests to an application from different users at a given time point t. For clarity, we use microservice workload (short for workload) here instead of application workload because we need to track the details of microservice requests in the DRS-MWS problem. Given a set of m microservices, we model workload as</ns0:p><ns0:formula xml:id='formula_1'>W (t) = {w (t) 1 , w (t) 2 , &#8226; &#8226; &#8226; , w (t) m }, where each integer w (t)</ns0:formula><ns0:p>i denotes the total number of user requests for ms i at t, which can reflect the duration the requests are queued in ms i waiting to be executed.</ns0:p><ns0:p>Running Environment. We consider a set of cloud providers P = {p 1 , p 2 , &#8226; &#8226; &#8226; } hosting n virtual machines (VMs). Typically, different types of VMs with different computing capabilities characterized by the number of virtual CPU cores, CPU frequency, RAM and disk size are provisioned to satisfy different application needs. For simplicity, we define a normalized scalar p = C vm , R vm to describe such computing capability for a given VM, where C vm and R vm represents the virtual CPUs and memory allocated to VM. To model the communication time or latency overhead among different cloud providers, we define a matrix L n&#215;n where the element l i, j represents the latency between the VM v i and v j . Note that we assume that the latency within the same VM is negligible, i.e. l i,i = 0, and the latency within the different cloud providers is larger than in the same provider. As shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, there are three VMs (v 1 , v 2 , and v 3 ), and the latency can be expressed as:</ns0:p><ns0:formula xml:id='formula_2'>L = &#63726; &#63728; 0 50 50 50 0 20 50 20 0 &#63737; &#63739; .<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Microservice Instance. To implement a business function, each microservice needs to be deployed in a container to create a microservice instance. 
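Before turning to how instances are deployed, the components defined so far can be summarized in a small sketch. The concrete values are illustrative, except for the latency matrix, which mirrors the three-VM example of Eq. (1), and the dependency edges, which match the running example of the paper.

from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    cpu: float      # normalized CPU demand (C_ms)
    mem: float      # normalized memory demand (R_ms)
    capacity: int   # requests one instance can serve at full capacity (g)

ms1 = Microservice("ms1", cpu=0.5, mem=0.25, capacity=100)
ms2 = Microservice("ms2", cpu=0.5, mem=0.25, capacity=100)
ms3 = Microservice("ms3", cpu=0.5, mem=0.25, capacity=100)

# An application as a DAG: each edge is an execution dependency between microservices.
application_edges = [("ms1", "ms2"), ("ms2", "ms3")]

# Workload at time t: number of pending requests per microservice (illustrative counts).
workload_t = {"ms1": 120, "ms2": 80, "ms3": 80}

# Inter-VM latency matrix L: zero within a VM, larger across cloud providers.
latency = [
    [0, 50, 50],
    [50, 0, 20],
    [50, 20, 0],
]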
Without loss of generality, we follow the popular 'service instance per container' <ns0:ref type='bibr' target='#b25'>(Richardson, 2020)</ns0:ref> deployment pattern in this paper and use the term microservice instance to denote both the software and the container infrastructure of a specific microservice. Specifically, we use ms i, j to represent the j-th instance of microservice ms i . As illustrated in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, function f 1 is mapped to ms 1,1 and ms 1,2 , which means the 1-st instance of microservice ms 1 and the 1-st instance of microservice ms 2 work cooperatively to implement f 1 .</ns0:p></ns0:div> <ns0:div><ns0:head>Optimization Objectives</ns0:head><ns0:p>At any time point t, we wish to optimize three objectives: i) the resource consumption (C (t) ) for supporting users' requests; ii) the system jitter (J (t) ) due to the deploying adjustment of the microservice instances;</ns0:p><ns0:p>and iii) the invocation expense (E (t) ) for calling different microservice instances along the microservice invocation chain of an application. It is worth mentioning that we treat a microservices as a black-box function, and the latency of queueing and executing requests within the microservice is not considered in this paper.</ns0:p><ns0:p>To achieve this optimization, we define our resource scheduling first by managing the number of instances for each microservice and then deciding how to deploy each microservice instance to an appropriate VM. More specifically, suppose that an MWS has n VMs and m different microservices, and a workload</ns0:p><ns0:formula xml:id='formula_3'>W (t) = {w (t) 1 , w (t) 2 , &#8226; &#8226; &#8226; , w (t)</ns0:formula><ns0:p>m } is generated at a time point t. We define a matrix S m&#215;n to denote our scheduling decision, where each element s i j &#8712; S indicates the number of instances for microservice ms i that will be deployed to VM v j , and . For example, we have three VMs (v 1 , v 2 , and v 3 ) and three microservices (ms 1 , ms 2 , and ms 3 ) in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, and the current resource scheduling at time point t can be defined as:</ns0:p><ns0:formula xml:id='formula_4'>S (t) = &#63726; &#63728; 1 1 0 1 0 1 0 1 1 &#63737; &#63739; .</ns0:formula><ns0:p>(2)</ns0:p><ns0:p>Resource consumption, denoted by C (t) , is measured as the sum of the resource demands for Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>deploying each microservice instance on its target VM:</ns0:p><ns0:formula xml:id='formula_5'>C (t) = m &#8721; i=1 n &#8721; j=1 (s (t) i, j &#8226; ms i .r),<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where s (t) i, j denotes the number of microservice instances ms i deployed on VM v j , and ms i .r represents the normalized resource demand for deploying ms i on a VM.</ns0:p><ns0:p>System jitter, denoted by J (t) , is an important performance metric used to measure the robustness of the system at time point t, which can be further defined as the change degree of microservice instances' deployment for an MWS environment. For example, given any two continuous time points t and t &#8722; 1, J (t) results from subtracting scheduling decisions S (t) and S (t&#8722;1) :</ns0:p><ns0:formula xml:id='formula_6'>J (t) = m &#8721; i=1 n &#8721; j=1 |s (t) i, j &#8722; s (t&#8722;1) i, j | . 
(<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>)</ns0:formula><ns0:p>Invocation expense, denoted by E (t) , is defined as the associated cost for considering both the microservice invocation chains of applications and the latency overhead among different cloud providers.</ns0:p><ns0:p>Because the latency has been defined in the previous section, we need to define the former as microservice invocation expense.</ns0:p><ns0:p>Given a set of applications A = {A 1 , A 2 , &#8226; &#8226; &#8226; } in an MWS, we define the correlation, denoted as cor(ms i , ms j ), between any two microservices ms i and ms j as:</ns0:p><ns0:formula xml:id='formula_8'>cor(ms i , ms j ) = 1 if (ms i , ms j ) &#8712; E A k , &#8704;A k &#8712; A 0 otherwise ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where E A k represents the microservices required to implement A k .</ns0:p><ns0:p>The microservice invocation distance, denoted by a matrix D m&#215;m , is thus defined to indicate the alienation between any two microservices, and each element d i, j in D is defined as:</ns0:p><ns0:formula xml:id='formula_9'>d i, j = e &#8722;cor(ms i ,ms j ) if i = j 0 otherwise .<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, there is only one application A = {ms 1 , ms 2 , ms 3 } that belongs to the MWS, and the microservice distance D is thus represented as:</ns0:p><ns0:formula xml:id='formula_10'>D = &#63726; &#63728; 0 e &#8722;1 1 1 0 e &#8722;1 1 1 0 &#63737; &#63739; .<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Based on the latency L n&#215;n among different VMs, the scheduling decision S (t) m&#215;n at time point t, and the microservice distance D m&#215;m , we now define the invocation expense E (t) as a scalar value:</ns0:p><ns0:formula xml:id='formula_11'>E (t) = U &#8226; (D &#8226; S (t) &#8226; L) &#8226;U T ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where U 1&#215;n is an auxiliary matrix whose elements are all equal to 1. For example, as shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, given L defined in Eq. ( <ns0:ref type='formula' target='#formula_2'>1</ns0:ref>), S (t) defined in Eq. ( <ns0:ref type='formula'>2</ns0:ref>), and D defined in Eq. ( <ns0:ref type='formula' target='#formula_10'>7</ns0:ref>), we have:</ns0:p><ns0:formula xml:id='formula_12'>E (t) = 1 1 1 &#63723; &#63725; &#63726; &#63728; 0 e &#8722;1 1 1 0 e &#8722;1 1 1 0 &#63737; &#63739; &#63726; &#63728; 1 1 0 0 1 1 0 1 1 &#63737; &#63739; &#63726; &#63728; 0 50 50 50 0 20 50 20 0 &#63737; &#63739; &#63734; &#63736; &#63726; &#63728; 1 1 1 &#63737; &#63739; = 280e &#8722;1 + 520 (9)</ns0:formula></ns0:div> <ns0:div><ns0:head>Problem Formulation</ns0:head><ns0:p>We formally define DRS-MWS as a three-objective optimization problem:</ns0:p><ns0:formula xml:id='formula_13'>&#63729; &#63732; &#63732; &#63732; &#63732; &#63732; &#63730; &#63732; &#63732; &#63732; &#63732; &#63732; &#63731; min &#8704;S (t) C (t) (A, MS,V, S (t) )</ns0:formula><ns0:p>min</ns0:p><ns0:formula xml:id='formula_14'>&#8704;S (t) J (t) (A, MS,V, S (t) )</ns0:formula><ns0:p>min Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>&#8704;S (t) E (t) (A, MS,V, S (t) )<ns0:label>(10</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_16'>s.t. 
scheduling time &#8804; &#947; &#8226; &#8710;t (11) m &#8721; j=1 (s (t) i, j &#8226; ms j .r) &#8804; &#948; &#8226; v i .r &#8704;t, i = 1, 2, &#8226; &#8226; &#8226; , n<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>n &#8721; j=1 s (t) i, j = &#8968; w i ms i .g &#8969; (t) &#8704;t, i = 1, 2, &#8226; &#8226; &#8226; , m<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>where Eq. ( <ns0:ref type='formula' target='#formula_15'>10</ns0:ref>) states that at any time point t, given an MWS consisting of a set of applications (A), a set of microservices (MS), and a set of VMs (V ), the goal of DRS-MWS is to find a resource scheduling policy S among all valid policies under workload W (t) to minimize the resource consumption (C (t) ), system jitter (J (t) ), and invocation expense (E (t) ) simultaneously. The constraint Eq. ( <ns0:ref type='formula'>11</ns0:ref>) is that any solution to the problem must terminate after a &#947; &#8226; &#8710;t amount of time. The constraint Eq. ( <ns0:ref type='formula' target='#formula_16'>12</ns0:ref>) states that the resource consumption of any VM for deploying microservices must not exceed a certain proportion of its total resource capacity. Finally, the constraint Eq. ( <ns0:ref type='formula' target='#formula_17'>13</ns0:ref>) states that the workload on each microservice needs to be served appropriately.</ns0:p></ns0:div> <ns0:div><ns0:head>RESOURCE SCHEDULING ALGORITHM</ns0:head><ns0:p>In this section, we introduce Sim-DRS -a similarity-based dynamic resource scheduling algorithm to solve DRS-MWS. Its key idea is to accelerate the convergence of the scheduling algorithm by adopting previously-known good solutions as the initial population whose optimization situation is similar to the current one. We first analyze existing scheduling algorithms and their limitations. We then present similarity estimation, which determines an appropriate initial population for the scheduling algorithm at each time point. Finally, we discuss the details of the Sim-DRS algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Motivation</ns0:head><ns0:p>Evolutionary algorithms (EA) are among the most popular for solving the DRS-MWS problem <ns0:ref type='bibr' target='#b11'>(Guerrero et al., 2018a)</ns0:ref>. EA implementation requires a definition of the solution and the construction of several technical components including initial population, genetic operators such as crossover and mutation, fitness function, selection operator, offspring generation, and execution parameterization <ns0:ref type='bibr' target='#b22'>(Mitchell, 1998)</ns0:ref>.</ns0:p><ns0:p>Most of the previous approaches assume that no a prior information about the solution is available, and often use a random initialization method to generate candidate solutions (i.e., the initial population).</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b24'>(Rahnamayan et al., 2007)</ns0:ref>, population initialization is a crucial step in evolutionary algorithms because it can affect the convergence speed and also the quality of the final solution. 
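For reference, random initialization of a single candidate typically looks like the following sketch, which spreads the required instance count of Eq. (13) over randomly chosen VMs; the VM capacity constraint of Eq. (12) is omitted for brevity and the values are illustrative.

import math
import random

def random_valid_solution(workload, capacities, num_vms):
    # workload[i]: pending requests of microservice i; capacities[i]: capacity g of one instance.
    solution = [[0] * num_vms for _ in workload]
    for i, (w, g) in enumerate(zip(workload, capacities)):
        # Required number of instances from Eq. (13), each placed on a randomly chosen VM.
        for _ in range(math.ceil(w / g)):
            solution[i][random.randrange(num_vms)] += 1
    return solution

print(random_valid_solution(workload=[120, 80, 80], capacities=[100, 100, 100], num_vms=3))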
Due to the dynamics and the hard time constraint in the DRS-MWS problem, previous studies have shown that adopting a random initial population often leads to non-convergence in the optimization process and eventually a low-quality solution <ns0:ref type='bibr' target='#b29'>(Wang et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Instead of using a complete randomization technique for initial population generation, we attempt to leverage existing solutions to similar problems to compose the initial population. The rationale behind this idea is that good solutions to similar problems may share some common structures. Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> illustrates the optimization process of Sim-DRS, which contains two phases: offline training and online scheduling.</ns0:p><ns0:p>In the training phase, it first generates a set of synthetic workloads, and then applies a Non-dominated Sorting Genetic Algorithm II (NSGA-II) to reach the pareto optimality for three optimization objectives mentioned in Eq. ( <ns0:ref type='formula' target='#formula_15'>10</ns0:ref>). It constructs a database of good solutions by adding such one-to-one &lt;workload-solution&gt; pairs. In the scheduling phase, given a real workload W (t) generated by users at each time point t, Sim-DRS first applies a similarity-based algorithm to generate an initial population mixed with random and previously-known similar solutions from the database by calculating the similarities between W (t) and existing synthetic workloads. It then uses a standard NSGA-II algorithm to optimize three objectives defined in Eq. ( <ns0:ref type='formula' target='#formula_15'>10</ns0:ref>) and select the best one.</ns0:p></ns0:div> <ns0:div><ns0:head>7/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>Solution Database Construction</ns0:head></ns0:div> <ns0:div><ns0:head>247</ns0:head><ns0:p>To measure the correlation between any two workloads W i = {w i 1 , w i 2 , &#8226; &#8226; &#8226; , w i m } and W j = {w j 1 , w j 2 , &#8226; &#8226; &#8226; , w j m }, we first define workload similarity (or similarity in short):</ns0:p><ns0:formula xml:id='formula_18'>&#915;(W i ,W j ) = e &#8722; &#8721; m k=1 (w i k &#8722;w j k ) 2 . (<ns0:label>14</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>)</ns0:formula><ns0:p>As shown is Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, we need to construct a solution database containing previously-known good </ns0:p><ns0:formula xml:id='formula_20'>Input: W = {W 1 ,W 2 , &#8226; &#8226; &#8226; ,W n }:</ns0:formula><ns0:p>the workload set consisting of n synthetic workloads; k: the number of clusters. 
Output: SD: the solution database containing k groups.</ns0:p><ns0:p>1: SD &#8592; &#8709;; 2: for i = 1 : n do 3:</ns0:p><ns0:p>Randomly generate an initial population with p different solutions;</ns0:p><ns0:p>4:</ns0:p><ns0:p>Find the optimal solution S i for W i using an NSGA-II algorithm;</ns0:p><ns0:p>5:</ns0:p><ns0:p>SD &#8592; (W i , S i ); 6: end for 7: SD &#8592; k-means(SD, k); 8: return SD;</ns0:p><ns0:p>As shown in Algorithm 1, for every workload W i in the synthetic workload set W , we randomly 251 generate p different solutions as the initial population (line 3), and then apply a Non-dominated Sorting </ns0:p></ns0:div> <ns0:div><ns0:head>Similarity-oriented Initial Population Generation</ns0:head><ns0:p>Given a workload W (t) generated by users at each time point t during the dynamic scheduling phase, the similarity-based initial population generation algorithm in our Sim-DRS approach aims to find good solutions from the solution database according to the distance measurement, as shown in Algorithm 2.</ns0:p><ns0:p>The similarity-based initial population generation algorithm begins with searching for the group G * in the solution database that has the largest similarity with W (t) (line 2), where W c G represents the cluster center of any workload group G. For example, given a workload W (t) in Fig. <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>, we need to calculate t) ,W 5 ) for G 2 , and &#915;(W (t) ,W 7 ) for G 3 , respectively.</ns0:p><ns0:formula xml:id='formula_21'>&#915;(W (t) ,W 3 ) for G 1 , &#915;(W (</ns0:formula><ns0:p>Given G * , we construct a roulette wheel selection (RWS) process, a proportional selection strategy which has a similar selection principle as roulette wheel, to construct a population with good solutions (line 3). More specifically, suppose that group</ns0:p><ns0:formula xml:id='formula_22'>G * contains k different &lt;workload-solution&gt; pairs G * = {a 1 , a 2 , &#8226; &#8226; &#8226; , a k },</ns0:formula><ns0:p>and an individual pair a i =&lt; W i , S i &gt; has the distance value of &#915;(W (t) ,W i ). Then, the probability for a i to be selected is:</ns0:p><ns0:formula xml:id='formula_23'>ps(a i ) = &#915;(W (t) ,W i ) k &#8721; j=1 &#915;(W (t) ,W j ) i = 1, 2, &#8226; &#8226; &#8226; , k. (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>)</ns0:formula><ns0:p>After obtaining the roulette wheel: PS = {ps(a 1 ), ps(a 2 ), &#8226; &#8226; &#8226; , ps(a k )}, we repeatedly select candidate workload-solution pairs p times from G * using the RWS strategy with PS (line 5).</ns0:p><ns0:p>Once a solution S i to the workload W i is selected, we need to adjust it to the current workload W (t) if W (t) = W i (line 6-19). More specifically, for each microservice ms j , we calculate the difference diff between the workload w t j for W (t) and w i j for W i (line 7), and then convert the workload difference (diff ) into the microservice instance difference (adj) (line 8). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 2 Similarity-oriented initial population generation algorithm: GenSimPop(W (t) , SD, p)</ns0:p><ns0:p>Input: W (t) : the real workload arriving at time point t; SD: the solution database; p: the number of desired individuals in the set of good solutions. 
Output: GS: the set of good solutions for composing the initial population.</ns0:p><ns0:p>1: GS &#8592; &#8709;; t) ,W c G ); 3: Construct a roulette wheel selection process PS with G * ; 4: for i = 1 : p do 5: end for 20:</ns0:p><ns0:formula xml:id='formula_25'>2: G * &#8592; argmax G&#8712;SD &#915;(W (</ns0:formula><ns0:formula xml:id='formula_26'>(W i , S i ) &#8592; RWS(G * , PS); 6: for j = 1 : m do 7: diff &#8592; w (t) j &#8722; w i j ; 8: adj &#8592; &#8968; diff ms j .</ns0:formula><ns0:p>GS &#8592; GS &#8746; S i ; 21: end for 22: return GS; GS for the initial population (line 20). The algorithm stops after p times of selections and adjustments, and finally returns the set of good solutions GS.</ns0:p></ns0:div> <ns0:div><ns0:head>Sim-DRS Algorithm</ns0:head><ns0:p>The pseudocode of Sim-DRS is provided in Algorithm 3. Sim-DRS initially generates h s (&#945; &#8226;populationSize) number of good solutions using our similarity-based initial population generation algorithm (line 2), and generates h r ((1 &#8722; &#945;) &#8226; populationSize) number of solutions using the standard random algorithm (line 3). Finally, h s and h r are merged together to form the initial population (line 4).</ns0:p><ns0:p>Our Sim-DRS approach is based on the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) <ns0:ref type='bibr' target='#b5'>(Deb et al., 2002)</ns0:ref> (lines 5-25) as introduced in the previous section. The crossover operation randomly exchanges the same number of rows of two individuals to produce new ones (line 12). To avoid the local minimum value and cover a larger solution space, a mutation operator is also used (line 14). Note that in order to satisfy the constraint stated in Eq. ( <ns0:ref type='formula'>11</ns0:ref>), we need to randomly adjust the values of an individual after applying the crossover and mutation operations (line 16-17).</ns0:p><ns0:p>According to the problem formulation, Sim-DRS considers three objective functions, namely, C (t) , J (t) , and E (t) , as the fitness functions to measure the quality of a solution (lines 5 and 22). It sorts the solutions at Pareto optimal front levels, and all the solutions in the same front level are ordered using the crowding distance <ns0:ref type='bibr'>(lines 22-25)</ns0:ref>. Once all the solutions are sorted, a binary tournament selection operator is applied over the sorted elements (line 11): two solutions are selected randomly, and the first one on the ordered list is finally selected (line 27). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 3 The Sim-DRS Algorithm: SimDRS(W (t) , SD) Input: W (t) : the real workload arriving at time point t; SD: the solution database. 
Output: S: the optimal scheduling strategy.</ns0:p><ns0:p>1: Initialize populationSize, generationNumber, mutationProb; 2: h s &#8592; GenSimPop(W (t) , SD, &#945; &#8226; populationSize); 3: h r &#8592; GenRandomPop((1 &#8722; &#945;) &#8226; populationSize); 4: h &#8592; h s + h r ; 5: fitness &#8592; CalculateFitness(h); 6: fronts &#8592; CalculateFronts(h, fitness); 7: distance &#8592; CalculateCrowd(h, fitness, fronts); 8: for i = 1 : generationNumber do 9:</ns0:p><ns0:formula xml:id='formula_27'>h o f f &#8592; / 0;</ns0:formula><ns0:p>10:</ns0:p><ns0:p>for j = 1 : populationSize do 11:</ns0:p><ns0:formula xml:id='formula_28'>f a 1 , f a 2 &#8592; BinarySelect(h, fitness, distance); 12: ch 1 , ch 2 &#8592; Crossover( f a 1 , f a 2 ); 13:</ns0:formula><ns0:p>if Random() &lt; mutationProb then </ns0:p><ns0:formula xml:id='formula_29'>h o f f &#8592; h o f f &#8746; {ch 1 , ch 2 }; 19:</ns0:formula><ns0:p>end for 20: </ns0:p><ns0:formula xml:id='formula_30'>h o f f &#8592; h o f f &#8746; h; 21: end for 22: fitness &#8592; CalculateFitness(h o f f ); 23: fronts &#8592; CalculateFronts(h o f f , fitness); 24: distance &#8592; CalculateCrowd(h o f f , fitness, fronts); 25: h o f f &#8592; OrderElements(h o f f , fronts, distance); 26: h &#8592; h o f f [1 . . .</ns0:formula></ns0:div> <ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>We implemented our algorithm and conducted extensive experiments under different testing scenarios.</ns0:p><ns0:p>The source code and data can be found in the online public repository 1 . In this section, we first describe our experiment setup, and then present the experimental results to illustrate the efficiency and effectiveness of the proposed approach.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>Benchmark. We choose TeaStore <ns0:ref type='bibr' target='#b28'>(von Kistowski et al., 2018)</ns0:ref> as our benchmark application to evaluate the performance of different algorithms including Sim-DRS. TeaStore is a state-of-the-art microservicebased test and reference application, and has been widely used for performance evaluation of microservicebased applications. It allows evaluating performance modeling and resource management techniques, and also offers instrumented variants to enable extensive run-time analysis.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>, the TeaStore consists of five distinct services and a registry service. All services communicate with the registry for service discovery and load balancing. Additionally, the WebUI service issues calls to the image provider, authentication, persistence and recommender services. The image provider and recommender are both dependent on the persistence service. All services communicate via representational state transfer (RESTful) calls, and are deployed as Web services on the Apache Tomcat Web server.</ns0:p><ns0:p>Workload. To characterize user requests to a production microservice-based web system in a daily cycle, we implemented a workload generator using JMeter 2 to generate requests according to Poisson and random distribution, and each request corresponds to some kind of application calling the corresponding microservices. For each cycle of workload, we use every algorithm to make scheduling decisions for 20 times under a fixed time constraint. 
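As an aside, the shape of such a request pattern can be sketched as follows; this is a conceptual Python approximation with illustrative rates, since the actual generator used in the experiments is built on JMeter.

import numpy as np

rng = np.random.default_rng(42)

def poisson_workload(mean_requests, num_decisions=20):
    # One request count per scheduling decision point in a workload cycle.
    return rng.poisson(lam=mean_requests, size=num_decisions)

def random_workload(low, high, num_decisions=20):
    # Uniformly random request counts in [low, high) as the second workload type.
    return rng.integers(low, high, size=num_decisions)

print(poisson_workload(200))
print(random_workload(50, 400))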
We observed that it usually takes 5 to 10 seconds for Kubernetes to create a new container or destroy an existing container in our experiment environment, so the time constraint should be smaller then 10s in order to minimize the influence on MWS. Based on this observation, we set five different time intervals for resource scheduling, i.e., 2s, 4s, 6s, 8s, and 10s, in our experiment. are used for the deployment of microservice instances. More specifically, v 2 , v 3 and v 4 are deployed in one public cloud, and v 5 is deployed in the other public cloud.</ns0:p><ns0:p>Performance Metrics. We consider three objectives in our experiments for performance evaluation, namely, resource consumption (C (t) ), system jitter (J (t) ), and invocation expense (E (t) ), as formally defined in Eq. ( <ns0:ref type='formula' target='#formula_15'>10</ns0:ref>). The performance improvement of an algorithm over a baseline algorithm in comparison is defined as:</ns0:p><ns0:formula xml:id='formula_31'>Imp(baseline) = P &#8722; P baseline P baseline &#8226; 100%, (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_32'>)</ns0:formula><ns0:p>where P baseline is the performance of the baseline algorithm, and P is that of the algorithm being evaluated. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For each run in our experiments, every algorithm is executed under the same time constraint and stops once the constraint is met. To ensure consistency, we run each workload five times and calculate the average of these five runs. Baseline Algorithms and Hyperparameters. To evaluate the performance of Sim-DRS, we compare it with three state-of-the-art algorithms, namely, Ant Colony Algorithm (ACO) <ns0:ref type='bibr' target='#b20'>(Merkle et al., 2002)</ns0:ref>, Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b13'>(Kumar and Raza, 2015)</ns0:ref>, and Non-dominated Sorting Genetic</ns0:p><ns0:p>Algorithm II (NSGA-II) <ns0:ref type='bibr' target='#b5'>(Deb et al., 2002)</ns0:ref>. Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref> summarizes the hyperparameters for each algorithm (including Sim-DRS). </ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Results</ns0:head><ns0:p>Given fixed time constraints, we run four different scheduling algorithms independently. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>9.70% over ACO, 10.58% over PSO, and 10.80% over NSGA-II. It is worth noting that Sim-DRS shows significant improvements over the other algorithms for the system jitter objective, which indicates that the solutions generated by our algorithm are more stable with a higher level of robustness of the MWS.</ns0:p><ns0:p>Another important observation from Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref> is that Sim-DRS achieves more significant improvements over the other algorithms when the time constraint is stricter (i.e., tighter scheduling time). This is consistent with our similarity assumption stated in the Motivation section. In summary, we can safely draw the conclusion from the experimental results that our algorithm outperforms all other algorithms in terms of three objectives, namely resource consumption, system jitter, and invocation expense, in both workloads that follow Poisson and random distribution. Note that the improvement is much significant in terms of system jitter, which means that our algorithm is more practical in the production WMS. 
This is because our algorithm requires less deployment efforts and is able to make the system more stable.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref> illustrates the objective measurements of different algorithms with the time constraint of 2s for 20 scheduling decisions. We observe from Fig. <ns0:ref type='figure' target='#fig_13'>6</ns0:ref> that Sim-DRS outperforms all other three algorithms under every scheduling decision point, followed by ACO, PSO, and NSGA-II. The difference between these algorithms is not significant at the beginning and end of the decision time points, but the difference at the middle points is. Since the workloads follow the Poisson distribution, such results indicate that Sim-DRS is more stable and robust under different circumstances in comparison with other algorithms.</ns0:p><ns0:p>Finally, Fig. <ns0:ref type='figure'>7</ns0:ref> shows the response time of user requests for different algorithms with the workload following Poisson distribution. We observe from Fig. <ns0:ref type='figure'>7</ns0:ref> that the response time measurements also follow</ns0:p><ns0:p>Poisson distribution for all four algorithms, which is reasonable because there should be a positive correlation between the response time and the number of user requests. We also observed that our algorithm is better at responding to requests when the workload is heavier, which means that Sim-DRS Manuscript to be reviewed can obtain a better performance in terms of three objectives while still ensuring a good response time.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this paper, we proposed Sim-DRS, a similarity-based dynamic resource scheduling algorithm that quickly finds promising scheduling decisions by identifying and incorporating previously-known viable solutions to similar problems as the initial population, hence obviating the need of a fresh start. We conducted extensive experiments on a well-known microservice benchmark application on disparate computing nodes on public clouds. The superiority of Sim-DRS was illustrated with various performance metrics in comparison with three state-of-the-art scheduling algorithms.</ns0:p><ns0:p>It is of our future interest to make Sim-DRS more reactive by learning previous request patterns and supporting automatic adjustments according to future workload prediction. We will also explore the possibility of using reinforcement learning-based algorithms to make more intelligent and adaptive scheduling decisions.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>used the neural network and attention mechanism in deep learning to optimize the passive elastic scaling mechanism of the cloud platform and the active elastic mechanism of microservices by accurately predicting the load of microservices, and finally realized the automatic 3/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021) Manuscript to be reviewed Computer Science scheduling of working nodes. Lv et al. 
(2019) used machine learning methods in resource scheduling in microservice architecture, pre-trained a random forest regression model to predict the requirements for the microservice in the next time window based on the current unload pressure, and the number of instances and their locations were adjusted to balance the system pressure.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. An overview of the DRS-MWS problem.</ns0:figDesc><ns0:graphic coords='5,162.41,324.82,372.22,287.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. An overview of the proposed Sim-DRS approach.</ns0:figDesc><ns0:graphic coords='9,162.41,63.78,372.23,305.29' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>248Algorithm 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>solutions for different workloads in the training phase. The solution database construction (SDC) 249 algorithm is provided in Algorithm 1. 250 Solution database construction algorithm: SDC(W , k).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>252</ns0:head><ns0:label /><ns0:figDesc>Genetic Algorithm II (NSGA-II) to find the optimal solution (line 4). The &lt;workload-solution&gt; pair is253 8/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021) Manuscript to be reviewed Computer Science added to the database as a known fact (line 5). Note that the training phase is conducted offline, which allows an extensive execution of the NSGA-II algorithm without any time constraint. Based on the definition of similarity, we then apply a k-Means clustering algorithm to these pairs by clustering the workloads and generate k different groups, where k, which is often designated based on an empirical study, is used to characterize different workload patterns (line 7).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 illustrates an example of the solution database containing three groups of workloads G 1 , G 2 , and G 3 , each of which consists of three workloads. Three centers, namely W 3 , W 5 , and W 7 in this example, represent their clusters, respectively.</ns0:figDesc><ns0:graphic coords='10,245.13,171.46,206.79,185.14' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. An example of the solution database.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>1 https://github.com/xdbdilab/Sim-DRS 11/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The architecture of TeaStore benchmark.</ns0:figDesc><ns0:graphic coords='13,183.09,63.78,330.88,213.06' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>2 https://jmeter.apache.org/ 12/17 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance comparison of different algorithms under random distribution.</ns0:figDesc><ns0:graphic coords='15,234.79,149.76,227.47,278.42' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Objective measurements of different algorithms with the time constraint of 2s for 20 scheduling decisions.</ns0:figDesc><ns0:graphic coords='16,183.09,63.78,330.87,364.11' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,183.09,63.78,330.87,259.69' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>g &#8969;;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>9:</ns0:cell><ns0:cell>if adj = 0 then</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10:</ns0:cell><ns0:cell>k &#8592; |adj|;</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11:</ns0:cell><ns0:cell cols='2'>while k &gt; 0 do</ns0:cell></ns0:row><ns0:row><ns0:cell>12:</ns0:cell><ns0:cell cols='3'>Randomly select a VM index x &#8712; [1, n];</ns0:cell></ns0:row><ns0:row><ns0:cell>13:</ns0:cell><ns0:cell>if s i j,x +</ns0:cell><ns0:cell cols='2'>|adj| adj &#8805; 0 then</ns0:cell></ns0:row><ns0:row><ns0:cell>14:</ns0:cell><ns0:cell cols='2'>s i j,x &#8592; s i j,x +</ns0:cell><ns0:cell>|adj| adj ;</ns0:cell></ns0:row><ns0:row><ns0:cell>15:</ns0:cell><ns0:cell cols='2'>k &#8592; k &#8722; 1;</ns0:cell></ns0:row><ns0:row><ns0:cell>16:</ns0:cell><ns0:cell>end if</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>17:</ns0:cell><ns0:cell>end while</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>18:</ns0:cell><ns0:cell>end if</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>19:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Specifications of virtual machines. The experiment was carried out on a cluster of VMs provisioned on public clouds, each of which runs CentOS Linux release 7.6.1810 (core) X86-64. The specifications of these five VMs are provided in Table1. Specifically, v 1 deploys the Nginx gateway and the complete set of TeaStore test benchmarks, including the database and the registry. 
The other four virtual machines (v 2 -v 5 )</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>VMs CPU cores Memory size Disk size</ns0:cell></ns0:row><ns0:row><ns0:cell>v 1</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>16GiB</ns0:cell><ns0:cell>30GB</ns0:cell></ns0:row><ns0:row><ns0:cell>v 2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8GiB</ns0:cell><ns0:cell>30GB</ns0:cell></ns0:row><ns0:row><ns0:cell>v 3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8GiB</ns0:cell><ns0:cell>30GB</ns0:cell></ns0:row><ns0:row><ns0:cell>v 4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8GiB</ns0:cell><ns0:cell>30GB</ns0:cell></ns0:row><ns0:row><ns0:cell>v 5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16GiB</ns0:cell><ns0:cell>30GB</ns0:cell></ns0:row><ns0:row><ns0:cell>Execution Environment.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Hyperparameters for each algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Algorithms Parameter name</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pheromone volatilization rate</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pheromone initial concentration</ns0:cell><ns0:cell>700</ns0:cell></ns0:row><ns0:row><ns0:cell>ACO</ns0:cell><ns0:cell>pheromone releasing factor information heuristic factor</ns0:cell><ns0:cell>1 3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>expectation heuristic factor</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>number of ants</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell>PSO</ns0:cell><ns0:cell>iteration</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mutation probability</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>NSGA-II</ns0:cell><ns0:cell>cross probability</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>population</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mutation probability</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>cross probability</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Sim-DRS</ns0:cell><ns0:cell>population</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>the number of clusters</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>the proportion of good solutions (&#945;)</ns0:cell><ns0:cell>0.4</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Objective measurements of different algorithms under Poisson distribution.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Objectives</ns0:cell><ns0:cell>Time (s)</ns0:cell><ns0:cell>ACO (Imp%)</ns0:cell><ns0:cell>PSO (Imp%)</ns0:cell><ns0:cell cols='2'>NSGA-II (Imp%) Sim-DRS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell cols='2'>21.534 (10.58%) 21.890 (12.41%)</ns0:cell><ns0:cell>21.660 (11.23%)</ns0:cell><ns0:cell>19.474</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell cols='2'>21.702 (11.67%) 21.534 (10.75%)</ns0:cell><ns0:cell>21.286 (9.47%)</ns0:cell><ns0:cell>19.444</ns0:cell></ns0:row><ns0:row><ns0:cell>Resource consumption</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='2'>21.732 (12.86%) 21.706 (12.72%)</ns0:cell><ns0:cell>21.674 
(12.56%)</ns0:cell><ns0:cell>19.256</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell cols='2'>21.256 (10.09%) 21.396 (10.81%)</ns0:cell><ns0:cell>21.218 (9.89%)</ns0:cell><ns0:cell>19.308</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>21.018 (9.33%)</ns0:cell><ns0:cell>21.206 (10.31%)</ns0:cell><ns0:cell>21.070 (9.60%)</ns0:cell><ns0:cell>19.224</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell>8.80 (51.72%)</ns0:cell><ns0:cell>8.80 (51.72%)</ns0:cell><ns0:cell>7.60 (31.03%)</ns0:cell><ns0:cell>5.80</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>7.80 (56.00%)</ns0:cell><ns0:cell>7.60 (32.00%)</ns0:cell><ns0:cell>6.40 (28.00%)</ns0:cell><ns0:cell>5.00</ns0:cell></ns0:row><ns0:row><ns0:cell>System jitter</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5.40 (28.57%)</ns0:cell><ns0:cell>6.50 (54.76%)</ns0:cell><ns0:cell>6.40 (52.38%)</ns0:cell><ns0:cell>4.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>5.80 (38.10%)</ns0:cell><ns0:cell>6.00 (42.86%)</ns0:cell><ns0:cell>6.20 (47.62%)</ns0:cell><ns0:cell>4.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>5.30 (39.47%)</ns0:cell><ns0:cell>4.80 (26.32%)</ns0:cell><ns0:cell>5.00 (31.58%)</ns0:cell><ns0:cell>3.80</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell>20.080 (9.25%)</ns0:cell><ns0:cell>20.990 (14.20%)</ns0:cell><ns0:cell>20.340 (10.66%)</ns0:cell><ns0:cell>18.380</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>20.330 (9.30%)</ns0:cell><ns0:cell>20.380 (9.03%)</ns0:cell><ns0:cell>20.730 (11.45%)</ns0:cell><ns0:cell>18.600</ns0:cell></ns0:row><ns0:row><ns0:cell>Invocation expense</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>20.540 (9.61%)</ns0:cell><ns0:cell>20.680 (10.35%)</ns0:cell><ns0:cell>20.700 (10.46%)</ns0:cell><ns0:cell>18.740</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>20.260 (10.47%)</ns0:cell><ns0:cell>20.080 (9.49%)</ns0:cell><ns0:cell>20.140 (9.81%)</ns0:cell><ns0:cell>18.340</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>19.580 (9.88%)</ns0:cell><ns0:cell>19.570 (9.82%)</ns0:cell><ns0:cell>19.890 (11.62%)</ns0:cell><ns0:cell>17.820</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>tabulates the objective values when the workload subjects to the Poisson distribution. As expected, Sim-DRS has better performance than the three comparison algorithms in terms of three objectives. Specifically, in terms of resource consumption, our algorithm achieves an average performance improvement of 10.91% over ACO, 11.40% over PSO, and 10.55% over NSGA-II; in terms of system jitter, our algorithm achieves an average performance improvement of 42.77% over ACO, 41.53% over PSO, and 38.12% over NSGA-II; in terms of invocation expense, our algorithm achieves an average performance improvement of13/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:65158:1:0:CHECK 23 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
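The algorithm fragment preserved in the table above (steps 9-18 of the scheduling routine) appears to be a repair step that absorbs a signed imbalance adj by moving one instance at a time onto randomly chosen VMs while keeping every count non-negative; the per-move increment |adj|/adj is simply sign(adj). A minimal Python sketch of that reading is given below. The names schedule and adj, the 0-based VM indexing, and the termination condition are assumptions reconstructed from the fragment, not the authors' code.

```python
import random

def redistribute(schedule, j, adj, n, rng=random):
    """Spread a signed imbalance `adj` for microservice j across n VMs.

    schedule[j][x] is assumed to hold the number of instances of
    microservice j placed on VM x. Each accepted move shifts one unit
    (sign(adj)) onto a randomly chosen VM, rejecting moves that would
    drive a count below zero, until |adj| moves have been placed.
    """
    if adj == 0:
        return schedule
    step = abs(adj) // adj      # sign(adj): +1 or -1, mirroring |adj|/adj
    k = abs(adj)                # remaining single-unit moves (step 10: k <- |adj|)
    while k > 0:                # step 11
        x = rng.randrange(n)    # step 12: randomly select a VM index
        if schedule[j][x] + step >= 0:   # step 13: keep counts non-negative
            schedule[j][x] += step       # step 14
            k -= 1                       # step 15
    return schedule
```

Read this way, the loop performs exactly |adj| accepted moves; if adj is negative and no VM can give up an instance the loop would not terminate, so a production version would presumably cap the number of random draws.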
"Dear Editors Thank you very much for your generous help with the review process, and we have edited the manuscript to address your concerns. Please find our responses to the comments of the three reviewers in the rebuttal letter below. We believe that this manuscript has now met the requirements for publication in PeerJ Computer Science. Best regards, Wenjing Liu On behalf of all authors. Reviewer 1 (Anonymous) Additional comments Weaknesses: 1. A formal complexity study of the formulated problem “DRS-MWS” is missing We have added the challenges of DRS-MWS problem: NP-hard and multi-objective optimization. Details can be found in the Introduction section in lines: 45-56. 2. The related work section should be better summarized, e.g., the “rule-based approach” and “heuristic approach” contains too much work in a single paragraph and these two approaches may contain some overlap and their distinction is not well described. Also, a complete refresh of the related work with more recent literatures would be appreciated. We are very thankful to the reviewer for the constructive comments. We have partially deleted the related work listed in the two parts of “rule-based approach” and “heuristic approach”, retaining the methods with obvious classification characteristics. We have redescribed the differences between these approaches and supplemented the latest work on rules-based, heuristic, and learning-based approaches. Details can be found in the Related Work section. 3. Eq. 9 seems to be messed up, please correct Following the reviewer’s suggestion, we have corrected the format of Formula 9 in line 206. Reviewer 2 (Ye Yao) Additional comments This paper presents Sim-DRS, a similarity-based heuristic scheduling algorithm for microservice-based web systems that aims to find viable scheduling schemes by utilizing solutions to similar problems. More specifically, the Dynamic Resource Scheduling for Microservice-based Web Systems (DRS-MWS) is formulated as a combinatorial optimization problem, and the authors propose the Sim-DRS to solve the DRS-MWS problem. The experimental results show the superiority of Sim-DRS over three state-of-the-art scheduling algorithms. Overall, the article is well organized and its presentation is good. The interesting idea is to obviate a fresh start in dynamic resource scheduling by incorporating previously-known viable solutions to similar problem. However, some minor issues still need to be improved: 1. The introduction section should firstly provide an idea of the resource scheduling process before the analysis of existing methods. We have added a detailed description of resource scheduling in the Introduction Section, and introduced the goals that resource scheduling should achieve: instance scheduling and auto-scaling, which can be viewed in line 39-45. 2. The words in Figure 1 are not very clear, and the contents in Table 2 and Table 3 are not clearly displayed. Please improve the quality of figures and tables. Following the reviewer’s suggestion, we have re-uploaded Figure 1 with higher resolution in page 4, and modified Tables 2 and 3 to a more understandable format, which can be found in page 12 and page 13. 3. There are some typos and grammar errors. It is recommended that the authors should proof read the manuscript before submission. Thank for the reviewer’s carefully review, we have modified some typos errors such as virtuals->virtual, parametrization-> parameterization and etc.. 
We also have corrected some grammar errors such as from->of, line->lines, using->by, and etc.. The whole manuscript has been carefully reviewed and all spelling and grammatical errors have been corrected. Reviewer 3 (Anonymous) Additional comments To improve the paper, I have some comments though: 1. There are some formatting issues in the paper. For example, the in formula (6) should be left aligned; the result ( ) in formula 9 can try to give the reader. Following the reviewer’s suggestion, we have corrected the format of Formula 6 and improved the calculation result of Formula 9. These modifications can be found in line 206. 2. Some symbols are not clearly defined. For example, the 'X' in equation (8) is unclear. If it is the dot product of the matrix, it is recommended to use '.', if it is the outer product or Hadamard product, it is also recommended to use the corresponding mainstream symbols to express; it is recommended to consider replacing (in line 180) with for more clarity. Similar issues are also recommended to be modified, etc. We are very thankful to the reviewer for the mathematical symbol suggestions. In Formula 8, we use matrix dot product to calculate the invocation expense, so we have modified Formula 8 according to the requirements of reviewers in line 206. We also have checked the use of other symbols to make sure there was no ambiguity. 3. Unify the expression of images in the text, such as 'Figure 3' in line 233 and 'Fig. 5' in line 330. We have carefully checked the template of PeerJ and make sure the current expression of images meet the requirements, so we would not modify them. The reference link for the template is: https://peerj.com/about/author-instructions/cs, we can find images’ cited requirements in the File types section, “When citing use the abbreviation 'Fig.'. When starting a sentence with a citation, use 'Figure 1'.” "
Here is a paper. Please give your review comments after reading it.
320
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Electrocardiogram (ECG) signal classification plays a critical role in the automatic diagnosis of heart abnormalities. While most ECG signal patterns cannot be recognized by human interpreter, can be detected with precision using artificial intelligence approaches, making the ECG a powerful non-invasive biomarker. However, performing rapid and accurate ECG signal classification is difficult due to the low amplitude, complexity, and non-linearity. The widely available and proposed deep learning (DL) method has explored and presented an opportunity to substantially improve the accuracy of automated ECG classification analysis using rhythm or beat feature. Unfortunately, a comprehensive and general evaluation of the specific DL architecture for ECG analysis across a wide variety of rhythm and beat features has not been previously reported. Some previous studies have been concerned with detecting ECG class abnormalities only through rhythm or beat feature separately.</ns0:p><ns0:p>Methods. This study proposes a single architecture based on DL method with one-dimensional convolutional neural network (1D-CNN) architecture, to automatically classify 24 patterns of ECG signals through both features rhythm and beat. To validate the proposed model, five databases which consisted of nine-class of ECG-base rhythm and 15-class of ECG-based beat are utilized in this study. The proposed DL network applied and experimented with varying datasets with different frequency samplings in intra and inter-patient scheme.</ns0:p><ns0:p>Results. Using a 10-fold cross-validation scheme, the performance results had an accuracy of 99.98%, a sensitivity of 99.90%, a specificity of 99.89%, a precision of 99.90%, and an F1-score of 99.99% for ECG rhythm classification. Also, for ECG beat classification, the model obtained an accuracy of 99.87%, a sensitivity of 96.97%, a specificity of 99.89%, a precision of 92.23%, and an F1-score of 94.39%. In conclusion, this study provides clinicians with an advanced methodology for detecting and discriminating heart abnormalities between different ECG rhythm and beat assessments by using one outstanding proposed DL architecture.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Globally, heart abnormality deaths are projected to increase to 23.4 million, comprising 35% of all deaths in 2030 (World Health Organization 2016). In the clinical symptoms, electrocardiogram (ECG) pattern analysis and measurement of important cardiac biomarkers are the current heart diagnostic cornerstone <ns0:ref type='bibr'>(O'Gara et al. 2013)</ns0:ref>. However, such diagnosis is based on the invasive laboratory test and requires specific tools, cost, and infrastructures, such as trained clinical staff for inspecting blood and performing assays and a hematology analyser with biochemical reagents <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020)</ns0:ref>. For this reason, such assessments are difficult to use in remote healthcare monitoring or developing countries <ns0:ref type='bibr' target='#b27'>(Makimoto et al. 2020)</ns0:ref>. Analysis of ECG patterns could help with early detection of life-threatening heart abnormalities and is considered for diagnosing patients' health conditions into specific grades, which can assist clinicians with proper treatment <ns0:ref type='bibr' target='#b37'>(Siontis et al. 2021)</ns0:ref>. 
ECG measures the electrical activity of the heart, and by analyzing each electrical signal it is possible to detect certain abnormalities; under such conditions, the ECG should allow continuous and remote monitoring.</ns0:p><ns0:p>Although the acquisition of ECG recordings is well standardized, human interpretation of ECG recordings varies widely, owing to differences in the level of experience and expertise. To reduce this variability, computer-generated interpretations have been used for many years. However, because these interpretations are based on predetermined rules and limited feature-recognition algorithms, they do not always capture the complexities and nuances contained in the ECG <ns0:ref type='bibr' target='#b37'>(Siontis et al. 2021</ns0:ref>). Consequently, the ECG by itself is often insufficient to diagnose several heart abnormalities. Myocardial infarction (MI), for example, is indicated by ST-segment deviation, yet such deviation may also occur in other conditions such as acute pericarditis, left ventricular hypertrophy, left bundle-branch block, Brugada syndrome, and early repolarization <ns0:ref type='bibr' target='#b43'>(Wang, Asinger, and Marriott 2003)</ns0:ref>. Because of this, automatically diagnosing MI with a rule-based inference system in a conventional ECG machine has low reliability, and in practice cardiologists are unable to diagnose it from the ECG record alone <ns0:ref type='bibr' target='#b9'>(Daly et al. 2012)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020)</ns0:ref>. Furthermore, traditional methods for diagnosing heart abnormalities, specifically from a 12-lead ECG, are difficult to apply in wearable devices <ns0:ref type='bibr' target='#b40'>(Walsh, Topol, and Steinhubl 2014)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020</ns0:ref>), and the wide variability in ECG morphology between patients poses a major challenge.</ns0:p><ns0:p>Heart abnormalities can be analyzed from the ECG signal using either rhythm or beat features, and several methods for doing so have been proposed in previous studies. Deep learning (DL) is one type of artificial intelligence approach that can learn and extract meaningful patterns from complex raw data; it has recently been used widely to analyze ECG signals for diagnosing arrhythmia, heart failure, myocardial infarction, left ventricular hypertrophy, and valvular heart disease, and even to estimate age and sex from the ECG alone, with good results <ns0:ref type='bibr' target='#b27'>(Makimoto et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b2'>(Attia et al. 2019)</ns0:ref> <ns0:ref type='bibr' target='#b14'>(Hannun et al. 2019)</ns0:ref> <ns0:ref type='bibr'>(Kwon et al. 2020(a)</ns0:ref>) <ns0:ref type='bibr'>(Kwon et al. 2020(b)</ns0:ref>) <ns0:ref type='bibr' target='#b22'>(LeCun, Bengio, and Hinton 2015)</ns0:ref>. DL performs excellently after a relatively short training period, and it has a much better capacity for feature
The DL model can extract a hierarchical representation of the raw data automatically and then utilize the last stacking layers to gain knowledge from complex features to the simpler ones <ns0:ref type='bibr' target='#b16'>(Khan and Yairi 2018)</ns0:ref>.</ns0:p><ns0:p>In the previous study, the ECG signal classification based on heart rhythm can be conducted with several features morphology of ECG signal like presenting ST-elevation and depression, T-wave abnormalities, and pathological Q-waves <ns0:ref type='bibr' target='#b1'>(Ansari et al. 2017)</ns0:ref>. Moreover, a variety of ECG rhythm features, such as the RR interval, ST interval, PR interval, and QT interval have been implemented to automatically detect heart abnormalities over the past decade <ns0:ref type='bibr'>(Gopika et al. 2020)</ns0:ref>. Unlike an ECG rhythm, the efficiency classification of the irregular heartbeat , either faster or slower than normal, or even waveform malformation can be improve by using beat feature. <ns0:ref type='bibr' target='#b15'>(Khalaf, Owis, and Yassine 2015)</ns0:ref>. For heartbeat classification, ECG pattern may be similar for different patients who have different heartbeats and may be different for the same patient at different times. ECG-based heartbeat classification is virtually a problem of temporal pattern recognition and classification <ns0:ref type='bibr' target='#b48'>(Zubair, Kim, and Yoon 2016)</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Dong, Wang, and Si 2017)</ns0:ref>. Based on the aforementioned instances, the variety of ECG signals with abnormalities must be handled specifically, either as an ECG rhythm or beat features.</ns0:p><ns0:p>Unfortunately, the challenge in analyzing the pattern of ECG signal is not limited to this. ECG signals have small amplitudes and short durations, measured in millivolts and milliseconds, respectively, and large inter-and intra-observer variability that influences the perceptibility of these signals <ns0:ref type='bibr' target='#b24'>(Lih et al. 2020</ns0:ref>). The analysis of thousands of ECG signals is time-consuming, and the possibility of misreading vital information is high. Automated diagnostic systems can utilize computerized recognition of heart abnormalities based on rhythm or beat to overcome such limitations. This could become the standard procedure by clinicians classifying ECG recordings.</ns0:p><ns0:p>Hence, the present study proposes a single DL architecture for classifying ECG patterns by using both rhythm and heartbeat features. Rather than treating ECG heartbeat and rhythm separately, we process both of them in the same framework. Hence, we only need a single DL architecture to classify the ECG signal with high accuracy. DL-based frameworks mainly include a stacked autoencoder (SAE), long short-term memory (LSTM), a deep belief network (DBN), convolutional neural networks (CNN), and so on. Among DL algorithms, we have generated a one-dimensional CNN (1D-CNN) model and showed promising results in our previous works <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref> <ns0:ref type='bibr'>(Tutuko et al. 2021)</ns0:ref>. In other works, 1D-CNN has also performed well for ECG classification, with overall performances ranging from 93.53% to 97.4% accuracy using rhythm <ns0:ref type='bibr' target='#b0'>(Acharya et al. 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>(Wang 2020)</ns0:ref> and with overall 92.7% to 96.4% accuracy using beat <ns0:ref type='bibr' target='#b48'>(Zubair et al. 
2016)</ns0:ref> <ns0:ref type='bibr' target='#b18'>(Kiranyaz, Ince, and Gabbouj 2015)</ns0:ref>. For the pattern recognition technique, 1D-CNN is well known, as it integrates feature extraction, dimensionality reduction, and classification techniques utilizing several convolution layers, pooling layers, and a fully connected layer. Convoluted optimum features are derived and classified using feed-forward artificial neural networks using a fully connected layer with a learning framework for back propagation <ns0:ref type='bibr' target='#b23'>(Li et al. 2019</ns0:ref>). This study and the proposed approach make the following novel The rest of this paper is organized as follows. Section 2 presents the materials and methods, which comprise ECG raw data and the proposed methodology for ECG rhythm and beat classification using 1D-CNN architecture. Section 3 presents the results and discussion. Finally, the conclusions are presented in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials and Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preparation</ns0:head><ns0:p>In this study, we use the public data set from PhysioNet <ns0:ref type='bibr' target='#b13'>(Goldberger et al. 2000)</ns0:ref>. The ECGs data in the Physionet were collected from healthy volunteers and patients with different heart diseases. This database has already been published online by a third party and unrelated to the study. Consequently, there should be no concerns regarding the ethical disclosure of the information. To process the ECG signal pattern recognition, we utilize two segmentation processes, rhythm and beat. Therefore the experimental databases is divided into two cases, (i) for ECG rhythm classification utilize the PTB Diagnostic ECG (PTB DB) <ns0:ref type='bibr' target='#b4'>(Bousseljot, Kreiseler, and Schnabel 1995)</ns0:ref>, the BIDMC Congestive Heart Failure (CHF) <ns0:ref type='bibr' target='#b3'>(Baim et al. 1986</ns0:ref>), the China Physiological Signal Challenge 2018 <ns0:ref type='bibr' target='#b25'>(Liu et al. 2018)</ns0:ref>, the MIT-BIH Normal Sinus Rhythm <ns0:ref type='bibr' target='#b13'>(Goldberger et al. 2000)</ns0:ref>; and (ii) for ECG beat classification utilize the MIT-BIH Arrhythmia Database <ns0:ref type='bibr' target='#b28'>(Moody and Mark 2001)</ns0:ref>.</ns0:p><ns0:p>A summary of each database is provided as follows:</ns0:p><ns0:p>&#61623; The PTB DB contains 549 records from 290 patients (209 men and 81 women). ECG signals were sampled at 1000 Hz. Each ECG record includes 15 signals measured simultaneously: 12 conventional leads (I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, V6) along with three ECG Frank leads <ns0:ref type='bibr'>(vx, vy, vz)</ns0:ref> in the .xyz file. For this study, only a single lead (lead II) was used. The database provides ECG normal and nine heart abnormalities, such as myocardial infarction, cardiomyopathy, bundle branch block, dysrhythmia, hypertrophy, myocarditis, and valvular heart disease.</ns0:p><ns0:p>for ECG signal classification. and (v) the evaluation stage of the proposed model based on validation and testing data with accuracy, sensitivity, specificity, precision and F1-score.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>Database Selection</ns0:head><ns0:p>We have total of 168,472 rhythm episodes and 110,082 beat episodes as ECG features were used for training, validation, and an testing (as unseen data). 
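To make the data-preparation steps described in this and the following subsections concrete, the sketch below loads a single-lead PhysioNet record, applies a wavelet-denoising pass of the kind used in the pre-processing stage, and cuts the signal into 2700-node rhythm episodes and 252-sample beat windows. It assumes the open-source `wfdb` and `pywt` Python packages and, for brevity, uses MIT-BIH Arrhythmia record 100 for both segmentations, whereas the study draws its rhythm episodes from the other four databases; the symlet order, decomposition level, and thresholding rule are illustrative guesses rather than the authors' exact settings.

```python
import numpy as np
import pywt
import wfdb

FS = 360                 # MIT-BIH Arrhythmia sampling frequency (Hz)
RHYTHM_LEN = 2700        # rhythm episode length in samples (nodes)
BEAT_BEFORE = 0.25       # seconds kept before the R-peak
BEAT_AFTER = 0.45        # seconds kept after the R-peak

def wavelet_denoise(sig, wavelet="sym8", level=4):
    """Soft-threshold DWT denoising; wavelet order and level are guesses."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))                # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def rhythm_segments(sig, length=RHYTHM_LEN):
    """Non-overlapping 2700-node episodes, zero-padding the last one."""
    n_seg = int(np.ceil(len(sig) / length))
    padded = np.zeros(n_seg * length)
    padded[: len(sig)] = sig
    return padded.reshape(n_seg, length)

def beat_segments(sig, r_peaks, fs=FS):
    """Fixed windows of 0.25 s before and 0.45 s after each annotated R-peak."""
    before, after = int(BEAT_BEFORE * fs), int(BEAT_AFTER * fs)  # 90 + 162 = 252 samples
    beats = [sig[r - before: r + after]
             for r in r_peaks if r - before >= 0 and r + after <= len(sig)]
    return np.stack(beats)

# Example: record 100 of the MIT-BIH Arrhythmia Database (channel 0 is lead MLII).
record = wfdb.rdrecord("100", pn_dir="mitdb")
ann = wfdb.rdann("100", "atr", pn_dir="mitdb")
clean = wavelet_denoise(record.p_signal[:, 0])
rhythms = rhythm_segments(clean)              # shape: (n_episodes, 2700)
beats = beat_segments(clean, ann.sample)      # shape: (n_beats, 252)
```

In the study itself the wavelet family was chosen by comparing SNR across symlet, daubechies, haar, bior, and coiflet candidates, and rhythm episodes shorter than 2700 nodes were zero-padded much as in `rhythm_segments` above.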
The 1D-CNN architecture was used to classify the nine-class by using rhythms feature segmentation and 15-class of beats feature segmentation of the ECG signal. The information available from the single-lead ECG standard recordings included different signal lengths and frequency samplings (128, 250, 500, and 1000 Hz).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Pre-processing</ns0:head><ns0:p>The ECG signal can become corrupted during acquisition due to different types of artifacts and interference, such as muscle contraction, baseline drift, electrode contact noise, and power line interference <ns0:ref type='bibr' target='#b36'>(Sameni et al. 2007</ns0:ref>) <ns0:ref type='bibr' target='#b38'>(Tracey and Miller 2012</ns0:ref>) <ns0:ref type='bibr' target='#b41'>(Wang et al. 2015)</ns0:ref>. To achieve an accurate analysis and diagnosis, undesirable noise and signals should be removed or deleted from the ECG by eliminating various kinds of noise and artifacts. This study implemented DWT as a frequently used denoising technique that offers a useful option for denoising ECG signals. This study also implemented some wavelet families for ECG signals, such as symlets, daubechies, haar, bior, and coiflet, to analyze which type of wavelet would obtain the best signal denoising result. Among them, based on the highest the signal noise to ratio (SNR) results, the symlet wavelet was the best DWT parameter and was chosen for ECG signal denoising.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>ECG Signal Segmentation</ns0:head><ns0:p>The aim of ECG segmentation is to divide a signal into many parts with similar statistical properties, such as amplitude, nodes, and frequency. The presence, time, and length of each segment of an ECG signal have diagnostic and biophysical significance, and the various sections of an ECG signal have distinctive physiological meaning <ns0:ref type='bibr' target='#b45'>(Yadav and Ray 2016)</ns0:ref>. ECG signal segmentation may also be accurately analyzed. The process of ECG feature segmentation for rhythm and beat classification can be described as follows:</ns0:p><ns0:p>&#61623; ECG rhythm segmentation is the process to produce the features for the entire ECG signal recordings at 2700 nodes without considering the different frequency sampling (128, 250, 500, and 1000 Hz) for ECG rhythm classification. In our previous work <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref>, we successfully segmented the length of AF episodes to 2700 nodes. Therefore, for this study, we generated the features for nine-class of normalabnormal ECG rhythm. The length of 2700 nodes contained at least two R-R intervals between one and the next beat with different frequency samplings in all records. Furthermore, the 2700-node segmentations might show more than two R-R intervals Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>with a minimum frequency sampling of 128 Hz for the training, validation, and unseen set. As a result, the best ECG episodes were chosen from 2700 nodes for segmentation. The process of ECG rhythm classification is illustrated in Figure <ns0:ref type='figure'>4</ns0:ref>(a). Figure <ns0:ref type='figure'>4</ns0:ref>(a) shows that all lengths of the ECG recordings have been segmented to each episode of 2700 nodes. If the total nodes were less than 2700 nodes, we added the zero-padding technique, which involved extending a signal with zeros. 
&#61623; ECG beat segmentation is the step of intercepting numerous nodes in a signal to discern not only subsequent heart beats but also the waveforms included in each beat <ns0:ref type='bibr' target='#b34'>(Qin et al. 2017)</ns0:ref>. The former refers to the characteristics retrieved from a single beat, which typically only contains one R-peak. The latter, however, refers to features that are dependent on at least two beats. These features include more information than a single R-peak. The waveforms of beat segmentation are presented in Figure <ns0:ref type='figure'>4</ns0:ref>(b). Figure <ns0:ref type='figure'>4</ns0:ref>(b) shows the positions of the P-wave, QRS-complex, and T-wave, which are all intimately connected to the location of the R-peak. According to <ns0:ref type='bibr' target='#b34'>(Qin et al. 2017</ns0:ref>) <ns0:ref type='bibr' target='#b5'>(Chang et al. 2012</ns0:ref>) <ns0:ref type='bibr' target='#b30'>(Nurmaini et al. 2019)</ns0:ref>, the average ECG rhythm frequency is between 60 and 80 beats per minute, the t1 duration is 0.25 seconds before R-peak, and the t2 duration is 0.45 seconds after R-peak, which results in a total length of 0.7 seconds. A total of 0.7 seconds contains 252 nodes, with a sampling frequency of 360 Hz, which covers the P-wave, QRS-complex, and T-wave (one beat).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Feature Extraction and Classification</ns0:head><ns0:p>The 1D-CNN classifier was proposed by <ns0:ref type='bibr'>(Nurmaini et al. 2020</ns0:ref>) for AF detection. By using the architecture, we generalized the model for abnormal-normal rhythm and beat classification. The rectified linear unit (ReLU) function was adopted with 13 convolution layers (64, 128, 256, and 512 filters) and also consisted of five max pooling layers. The 1D-CNN model comprised two fully connected layers with 1000 nodes for each layer and one node for the output layer. The 1D-CNN required a three-dimensional input, which consists of samples, features, and timesteps. The detailed process of 1D-CNN n n architecture for both ECG rhythm and beat classification was as follows:</ns0:p><ns0:p>&#61623; For ECG based on rhythm classification, the input timesteps with the dimension 2700 x 1 were fed into the convolution layer equipped with the ReLU activation function. The first and second convolutional layers produced an output length of 64 with a kernel size of 3. The output of the first and second convolutional layers through the max pooling layer had a kernel size of 2 for the feature reduction. The output of the first max pooling layer as the input for the third and fourth convolutional layers produced 128 feature maps. The convolutional layers were passed onto the fifth and last convolutional layers and produced output lengths of 256 and 512, respectively, with a kernel size of 3. The output of the last convolutional layer was then passed onto two fully connected layers with a total of 1000 nodes. This architecture produced an output of a nine-class ECG rhythm classification. &#61623; Unlike ECG rhythm classification, none of the processes differed from the features interpretation for ECG based on beat classification. The main differences were the input timesteps value of (252, 1) and products of the output size of the 15-class ECG beat classification. The architecture also implemented the ReLU activation function with 64, 128, 256, and 512 filters, with a kernel size of 3. 
For each max pooling layer, a kernel size of 2 was also used for the feature interpretation of the 1D-CNN for ECG beat classification.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Model Evaluation</ns0:head><ns0:p>Classification ECG signal based on rhythm and beat feature is evaluated by using intra and inter patient scheme. Such schemes are conducted to resemble a clinical environment and to ensure the robustness of the proposed model. Five commons metrics used in this study are accuracy, sensitivity, specificity, precision and F1-score. Moreover, two measures are usually considered for evaluating the classification performance, specifically for imbalance data, are receiver-operating characteristic (ROC) and Precision-Recall (P-R) curves. These two-evaluation metrics were added because the overall accuracy was distorted by the majority class results, since the beat type classes are extremely imbalanced in the available dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>For ECG rhythm classification, the proposed 1D-CNN model was tested on an unseen set, but ECG beat classification was only tested on the validation set in this study. All experimentation in the training processes used a 10-fold cross-validation scheme. This scheme divides the collection of observations into groups, or folds, of roughly similar size at random. k The scheme is fitted on the remaining folds, with the initial fold serving as a validation set. 1 k &#61485; For the selected model, the parameters that provided the best cross-validation accuracy, sensitivity, specificity, precision, and F1 score were chosen.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG Rhythm Classification in Validation Model</ns0:head><ns0:p>A total of 2445 records consisted of rhythm episodes of 138,415 training sets, 15,373 validation sets, and 14,684 unseen sets after being segmented by each 2700 nodes (refer to Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>). A total of 168,472 episodes were analyzed for the ECG rhythm classification task. As can be seen, all PTB Diagnostics ECG records were used for the training and validation sets. The rest of the datasets were used for the training, validation, and unseen sets. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> shows the large different ratio between one class and another (imbalanced) class, for example, a total number of MI, HF, and HC classes to H, M, and VHD classes. However, we did not implement the oversampling techniques to overcome such a case in this study.</ns0:p><ns0:p>Without considering the data ratio (total number) of episodes for each class, we validated the proposed 1D-CNN model to the 10-fold cross-validation scheme. Figure <ns0:ref type='figure'>5</ns0:ref> shows the performance results of folds 1 through 10, which were evaluated for accuracy, sensitivity, specificity, precision, and F1 score. The performance results obtained above 99% for accuracy and specificity and ranged from 93 to 99% for sensitivity, precision, and F1 score. The model with the highest accuracy was chosen as the best model for this study out of all the models analyzed. The model had an accuracy of 99.98%, a sensitivity of 98.53%, a specificity of 99.99%, a precision of 99.81%, and an F1 score of 99.15% (fold 6). The results showed that the proposed 1D-CNN was the most accurate predictor, with an accuracy of 99.98%.</ns0:p><ns0:p>Confusion Matrix (CM) for rhythm evaluation result is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. 
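Before turning to the confusion-matrix analysis, the network described in the feature-extraction section can be made concrete. The sketch below is a minimal Keras reconstruction that follows the stated filter counts (64, 128, 256, 512), kernel size 3, five pooling layers of size 2, and two 1000-unit dense layers; the grouping of the 13 convolutions into 2-2-3-3-3 blocks (a VGG-16-style layout consistent with the description), the padding, optimizer, and loss are assumptions, so it should be read as an illustration rather than the authors' exact model.

```python
from tensorflow.keras import layers, models

def build_1d_cnn(input_len, n_classes):
    """VGG-style 1D-CNN: 13 conv layers (64/128/256/512 filters, kernel 3),
    5 max-pooling layers (pool size 2), and two 1000-unit dense layers.
    The 2-2-3-3-3 block layout is an assumption."""
    inp = layers.Input(shape=(input_len, 1))
    x = inp
    for n_filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
        for _ in range(n_convs):
            x = layers.Conv1D(n_filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1000, activation="relu")(x)
    x = layers.Dense(1000, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

rhythm_model = build_1d_cnn(2700, 9)   # nine-class rhythm classifier
beat_model = build_1d_cnn(252, 15)     # 15-class beat classifier
```

The validation-set predictions of such a model are what populate the confusion matrix discussed next.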
This metric is used to capture information about the predicted results from the model respected to the actual label. It can be seen that the BBB class has four prediction errors (predicted as healthy-control class) and two healthy-control rhythms that are predicted as BBB. The prediction error between those arises due to the morphology of these types of rhythms is almost similar. Even so, the overall predictive results of the proposed approach provide a satisfactory evaluation performance. Based on the CM, the classification result of the ECG signal with rhythm feature produce good performance, due to only two class of ECG pattern have misclassified (HC and HF). However, overall result can be state that the classification is close to 100%.</ns0:p><ns0:p>Using the performance value in CM, we can observe the classification result with other views in terms of classification model at all classification thresholds named receiver-operating characteristic (ROC) and precision-recall (PR) curves. This curve plots two parameters true positive rate (sensitivity) and false positive rate (specificity )provide a graphical representation of a classifier's performance across many thresholds, rather than a single value. It is important to understand the trade-off in performance for different threshold values. As shown in Figures <ns0:ref type='figure'>7(a</ns0:ref>) and 7(b). Figure <ns0:ref type='figure'>7</ns0:ref>(a) shows the resulting ROC curve, which compares the nine-class of ECG rhythm characteristics. The comparable value is sensitivity versus specificity. Sensitivity is the ability to correctly identify the true positive class of ECG rhythm, whereas specificity is the ability to correctly identify the true negative rate of ECG rhythm. Therefore, if used in medical data, it will produce a precise and accurate diagnosis. Misclassification between positive class and negative class of ECG rhythm can be dangerous, and the consequences can be as serious as death.</ns0:p><ns0:p>The area under the curve (AUC) is the value analyzed in the ROC by looking at how far the middle value is and whether the area below the curve approaches the value of 1. The lower left point of the graph (0,0) is a value that does not contain errors (no false positives) and does not detect any true positives. On the upper right side of the graph (1,1), the opposite point defines all true positives but with a 100% error rate (rates of false positives). The upper left point (0,1) is the ideal classification that defines all true positives without any mistakes (no false positives or 0 cost). The lower right point (1,0) is the worst classification, where all subjects labeled as positive are simply false positives, without knowing true positives. As shown in Figure <ns0:ref type='figure'>7</ns0:ref>(a), the ROCs of the nine-class normal-abnormal ECG rhythm show excellent performance, as the value of the ROC for the nine-class classification is 1, or the AUC is about 100%. This means that the proposed 1D-CNN can categorize all classes with higher accuracy and precision. However, the ROC cannot be trusted with imbalanced data, and it remains unchanged even after the performance changes. Therefore, the P-R curve is used to describe the classifier performance on imbalanced data (Figure <ns0:ref type='figure'>7(b)</ns0:ref>). The overall performances are also good, as the P-R value is 1.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> lists the performance results for the nine-class of ECG based on rhythm feature. 
As can be seen, the C, D, H, MI, M, and VHD classes obtained 100% for accuracy, sensitivity, specificity, precision, and F1 score. The proposed 1D-CNN model was proven to be robust and had no effect on the imbalanced class problem. For the nine-class classification, the average of all performance metrics achieved above 99% accuracy.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG Beat Classification in Validation Model</ns0:head><ns0:p>For the 15-class of ECG beats, a total of 110,082 beats were trained, validated, and tested (unseen) in this study. The large different ratio between one class and another (imbalanced) class, however in this study we can't conducted the augmentation data. All ECG beats data divided into a ratio of 8: 2 or 80% is used for training data and the remaining for testing. The process of training with 10-fold is selected with randomly. Therefore, about 88,065 beats are used as training data and about 22,017 beats as testing data.</ns0:p><ns0:p>The performance results of folds 1 through 10 using the 10-fold cross-validation scheme are shown in Figure <ns0:ref type='figure'>8</ns0:ref>. As can be seen, the results vary from 0% as the lowest and around 99% as the highest result. Accuracy and specificity achieved above 99%, sensitivity and precision ranged from above 0% to 94%, and the F1 score ranged from 0% to 94%. The model had an accuracy of 99.88%, a sensitivity of 96.98%, a specificity of 99.90%, a precision of 92.24%, and an F1 score of 94.39% (fold 6). Unlike the ECG rhythm results, the performance of the 10-fold was not good enough. There was an outlier of sensitivity, which had a 0 (zero) value in the initial fold. The massive difference between the total number of the normal beat (N) class and the other abnormal beats could be an imbalanced class problem.</ns0:p><ns0:p>To analyze the performance of the 15-class of ECG beats, we also presented the confusion matrix evaluation in Figure <ns0:ref type='figure'>9</ns0:ref>. It can be seen that normal beats have the highest number of true positives (with 7248 data). However, this beat also has the most false-negative and false positive values with 46 and 21 data, respectively compared to other classes. Furthermore, both classes J and e have neither false positive nor false negative errors. Even though the ratio of number data used in this study was imbalance, the atrial escape beat (e), which is proven to be a minority class, was able to classify by the model correctly. However, an imbalanced data problem still requires a particular concern to avoid the model simply predicting the majority class rather than the minority.</ns0:p><ns0:p>To analyze the performance of the 15-class of ECG beats, we also presented the ROC and P-R curve (refer to Figures <ns0:ref type='figure'>9(a</ns0:ref>) and 9(b)). Figure <ns0:ref type='figure'>9</ns0:ref>(a) shows that the perfect classification can be presented in the R, L, j, and P beat classes. However, the other beat classes obtained an AUC value above 75%. Also, Figure <ns0:ref type='figure'>9</ns0:ref>(b) shows the worst classification as the e beat class, with an AUC value above 50%. According to the ratio of number data that was used in this study, the atrial escape beat (e) is proven to be a minority class, as it has limited dataset representation. 
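Because overall accuracy hides exactly this kind of minority-class behaviour, the per-class metrics and one-vs-rest precision-recall curves used throughout this section can be derived from the predictions as in the sketch below (assuming scikit-learn; the function and variable names are illustrative, not taken from the study's code):

```python
import numpy as np
from sklearn.metrics import auc, confusion_matrix, precision_recall_curve

def per_class_metrics(y_true, y_pred, n_classes):
    """Accuracy, sensitivity, specificity, precision, and F1 for every class,
    computed one-vs-rest from the multi-class confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    total = cm.sum()
    results = {}
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = total - tp - fn - fp
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        prec = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
        results[c] = {"acc": (tp + tn) / total, "sens": sens,
                      "spec": spec, "prec": prec, "f1": f1}
    return results

def pr_auc_one_vs_rest(y_true, y_prob, positive_class):
    """Area under the precision-recall curve for one class against the rest;
    y_prob is the (n_samples, n_classes) matrix of predicted probabilities."""
    y_bin = (np.asarray(y_true) == positive_class).astype(int)
    precision, recall, _ = precision_recall_curve(y_bin, y_prob[:, positive_class])
    return auc(recall, precision)
```

Per-class sensitivity and PR-AUC expose weak classes such as the atrial escape beat even when overall accuracy remains above 99%.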
Due to the large imbalanced class, the model tends to perform poorly and requires some modifications to avoid simply predicting the majority class in all cases.</ns0:p><ns0:p>The results for the 15-class ECG beat classification are listed in Table <ns0:ref type='table'>5</ns0:ref>. As can be seen, the results show an above 99% accuracy and specificity for all 15-class of ECG beats. The results presentation is quite good for the ECG beat classification task, although some beats' results (A, a, and j) are not. The A and a beats are related to an atrial premature beat, causing aberrant ventricular conduction. An unexpected beat discharged by an ectopic focus in the atria is termed a premature atrial beat. While some fibers are still refractory, the impulse from the premature beat reaches the His-Purkinje system early. Due to abnormal ventricular conduction, the resultant QRS complex exhibits a right BBB pattern. Also, the j beat is a delayed heartbeat originating from an ectopic focus in the atrioventricular junction. The classification of ECG beats tends to be more challenging because the results are related to the heart beat segmentation process, which will be close to optimal with the QRS detection.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG signal Classification with Inter-patient Data</ns0:head><ns0:p>Tables <ns0:ref type='table'>4 and 5</ns0:ref> list the proposed model result with dataset based on intra-patient scenario. Such conditions where the ECG data from the same patients probably appear in the training and validation set. In this study, we took the precaution to construct and evaluate the classification using rhythm and beat features also from different patients (inter-patient). To test the robustness of the proposed 1D-CNN model, we tested the model on an unseen set (refer to Table <ns0:ref type='table'>6</ns0:ref>). The unseen set sample consisted of five of the 24-class of ECG-based rhythms and beat feature-BBB, HC, HF, V and L class. From the experiment, the performance still achieved outstanding results.</ns0:p></ns0:div> <ns0:div><ns0:head>Benchmarking of The Proposed Model</ns0:head><ns0:p>Prevention of heart abnormalities is one of the most important tasks of any health care system. Therefore, some previous studies have explored the classification of heart abnormalities through rhythm and beat using DL algorithms. Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref> shows the comparison of performance results of state-of-the-art DL for ECG rhythm and beat classification. As Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref> shows, for the classification of the nine-class of ECG rhythm, we achieved above 99% for all performance metrics. Previous studies have also implemented other DL algorithms, gated recurrent unit (GRU) <ns0:ref type='bibr'>(Darmawahyuni et al. 2021)</ns0:ref>, and a combination of 1D-CNN and LSTM <ns0:ref type='bibr' target='#b26'>(Lui and Chow 2018)</ns0:ref>. In our previous work <ns0:ref type='bibr'>(Darmawahyuni et al. 2021)</ns0:ref>, we explored the sequence learning algorithm by using unidirectional and bidirectional recurrent networks (LSTM and GRU) for HC, MI, C, BBB, and D. As the results show, unidirectional GRU had the best performance, with an average accuracy of 98.50%. The aforementioned DL algorithms obtained a good performance; however, our proposed 1D-CNN model outperformed and achieved a better performance under the condition of more classification. 
The recurrent network classifiers lack the feature extraction to recognize the dynamic ECG waveform. 1D-CNN can generate local features of the ECG signal sequence to recognize regional patterns in the convolution window <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>(Kiranyaz et al. 2021)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref> also shows the performance results for the ECG beat classification tasks. Varying DL algorithms have been explored and proposed. Different from other ECG beat analysis algorithms reported earlier, our proposed 1D-CNN model considers 15-class classifications. In our previous work <ns0:ref type='bibr' target='#b30'>(Nurmaini et al. 2019)</ns0:ref>, we classified 10-class arrhythmia of the ECG beat classification by using deep autoencoder (DAE) as a feature extraction and deep neural network (DNN) as a classifier algorithm. DAE is a method that can learn generic features using greedy layer-wise training <ns0:ref type='bibr' target='#b30'>(Nurmaini et al. 2019)</ns0:ref>. However, in our recent study, 1D-CNN outperformed DAE-DNN, with better performance results and with a larger class number of beats. In addition, training a DAE with lower-dimensional internal layers than the input and output layers drives the network to update its weight in such a way that it learns to compress data. Internal layers learn to act as mappings from the original data to a lower-dimensional compressed projection. Unlike DAE, 1D-CNN contains a sliding 'filter,' or kernel, which may be regarded as moving across the input by sharing weights over a local patch function. Because these filters are present, 1D-CNN performs better in areas where local patterns are important for classification. Although the results look promising for ECG rhythm and beat classification, there are some limitations to our study:</ns0:p><ns0:p>&#61623; The pre-processing stage of the ECG signal still needs improvement, specifically in the case of ECG signals that have different sampling frequencies, leads, and various noises.; &#61623; The segmentation of the P, QRS, and T-waves and the HRV measurement before the classification process were not carried out; and &#61623; The proposed model was not validated against the hospital patient data. We only used the available public dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Deep learning has gained a central position in recent years for ECG rhythm and beat classification. It was built on a foundation of significant algorithmic details and generally can be understood in the construction and training of DL architectures. A DL approach based on one 1D-CNN architecture has been presented to automatically learn and classify the nine-class of ECG pattern with rhythms feature and 15-class of ECG pattern with beats feature, which is important for classifying the abnormalities pattern. In this study, the proposed 1D-CNN model, which consisted of 13 convolutional layers and five max-pooling layers, was used. The 1D-CNN has low computational requirements. Thus, it is well-suited for real-time and low-cost applications for ECG devices.</ns0:p><ns0:p>Using the 10-fold cross-validation scheme, the performance results had an accuracy of 99.98%, a sensitivity of 99.90%, a specificity of 99.89%, a precision of 99.90%, and an F1 score of 99.99% for ECG rhythm classification. 
Also, for ECG beat classification, the model obtained an accuracy of 99.87%, a sensitivity of 96.97%, a specificity of 99.89%, a precision of 92.23%, and an F1 score of 94.39%. We realize the performance results of the ECG rhythm are better PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64082:1:1:NEW 14 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science than the ECG beat classification. The selection of an appropriate preprocessing step for QRS detection to accurately find the R-peak to achieve the best model for ECG beat classification is needed to achieve high performance results. In the future, the challenges regarding ECG signals are still many, such as the precision segmentation of P, QRS, and T-waves before the process of rhythm and beat classification. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,268.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,198.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,268.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>ECG rhythm data description</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Label/</ns0:cell><ns0:cell>Records</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Abbreviation</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>PTB Diagnostic ECG</ns0:cell><ns0:cell>Bundle Branch Block</ns0:cell><ns0:cell>BBB</ns0:cell><ns0:cell>17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Cardiomyopathy</ns0:cell><ns0:cell>C</ns0:cell><ns0:cell>17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dysrhythmia</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Health Control</ns0:cell><ns0:cell>HC</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Myocardial Hypertrophy</ns0:cell><ns0:cell>H</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Myocardial Infarction</ns0:cell><ns0:cell>NU</ns0:cell><ns0:cell>368</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Myocarditis</ns0:cell><ns0:cell>M</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Valvural</ns0:cell><ns0:cell>VHD</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>BIDMC Congestive Heart</ns0:cell><ns0:cell cols='2'>Congestive Heart Failure HF</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Failure</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>China Physiological Signal</ns0:cell><ns0:cell cols='2'>Left bundle branch block BBB</ns0:cell><ns0:cell>207</ns0:cell></ns0:row><ns0:row><ns0:cell>Challenge 2018</ns0:cell><ns0:cell>Right bundle branch</ns0:cell><ns0:cell /><ns0:cell>1,695</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>block</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MIT-BIH Normal Sinus</ns0:cell><ns0:cell>Normal sinus (healthy</ns0:cell><ns0:cell>HC</ns0:cell><ns0:cell>18</ns0:cell></ns0:row><ns0:row><ns0:cell>Rhythm</ns0:cell><ns0:cell>control)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 
.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>ECG beat data description</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Total Beats</ns0:cell></ns0:row><ns0:row><ns0:cell>MIT-BIH Arrhythmia</ns0:cell><ns0:cell>Normal Beat (N)</ns0:cell><ns0:cell>75,022</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Atrial Premature Beat (A)</ns0:cell><ns0:cell>2,546</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Premature Ventricular Contraction (V)</ns0:cell><ns0:cell>7,129</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Right Bundle Branch Block Beat (R)</ns0:cell><ns0:cell>7,255</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Left bundle branch block beat (L)</ns0:cell><ns0:cell>8,072</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Aberrated atrial premature beat (a)</ns0:cell><ns0:cell>150</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ventricular flutter wave (!)</ns0:cell><ns0:cell>472</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Fusion of ventricular and normal beat</ns0:cell><ns0:cell>802</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(F)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Fusion of paced and normal beat (f)</ns0:cell><ns0:cell>982</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Nodal (junctional) escape beat (j)</ns0:cell><ns0:cell>229</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Nodal (junctional) premature beat (J)</ns0:cell><ns0:cell>83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Paced beat (/)</ns0:cell><ns0:cell>7,025</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ventricular escape beat (E)</ns0:cell><ns0:cell>106</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Non-conducted P-wave (x)</ns0:cell><ns0:cell>193</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Atrial escape beat (e)</ns0:cell><ns0:cell>16</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The total episodes after segmentation of 2700 nodes</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Total rhythm after segmentation</ns0:cell><ns0:cell>Unseen</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell cols='2'>of 2700 nodes (episode)</ns0:cell><ns0:cell>Set</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Training Set Validation Set</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>PTB Diagnostics ECG</ns0:cell><ns0:cell cols='2'>BBB 230</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>C</ns0:cell><ns0:cell>658</ns0:cell><ns0:cell>73</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>619</ns0:cell><ns0:cell>69</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>HC</ns0:cell><ns0:cell>3,096</ns0:cell><ns0:cell>344</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>H</ns0:cell><ns0:cell>271</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MI</ns0:cell><ns0:cell>14,242</ns0:cell><ns0:cell>1,582</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>M</ns0:cell><ns0:cell>155</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>VHD 232</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>BIDMC Congestive Heart 
Failures</ns0:cell><ns0:cell>HF</ns0:cell><ns0:cell>53,738</ns0:cell><ns0:cell>5,969</ns0:cell><ns0:cell>6,647</ns0:cell></ns0:row><ns0:row><ns0:cell>China Physiological Signal</ns0:cell><ns0:cell>BBB</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Challenge 2018</ns0:cell><ns0:cell cols='2'>BBB 4,651</ns0:cell><ns0:cell>514</ns0:cell><ns0:cell>614</ns0:cell></ns0:row><ns0:row><ns0:cell>MIT-BIH Normal Sinus Rhythm</ns0:cell><ns0:cell>HC</ns0:cell><ns0:cell>60,523</ns0:cell><ns0:cell>6,723</ns0:cell><ns0:cell>7,423</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>138,415</ns0:cell><ns0:cell>15,373</ns0:cell><ns0:cell>14,684</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison results and DL algorithms between the related work and our proposed method in this workPerformance Results (%) SEN, sensitivity; SPE, specificity; PRE, precision; GRU, gated recurrent unit; DULSTM-WS2, deep unidirectional LSTM network-based wavelet sequences 2; FL, focal loss</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Authors</ns0:cell><ns0:cell cols='2'>Classes Method</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ACC</ns0:cell><ns0:cell cols='2'>SEN SPE</ns0:cell><ns0:cell cols='2'>PRE F1-Score</ns0:cell></ns0:row><ns0:row><ns0:cell>ECG Rhythm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lui et al. (Lui and</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>1D-CNN-LSTM -</ns0:cell><ns0:cell cols='3'>92.40 97.70 -</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Chow 2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Darmawahyuni et</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>GRU</ns0:cell><ns0:cell>98.50</ns0:cell><ns0:cell cols='4'>95.54 98.42 89.93 92.31</ns0:cell></ns0:row><ns0:row><ns0:cell>al. (Darmawahyuni</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>et al. 2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Our work</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>1D-CNN</ns0:cell><ns0:cell>99.98</ns0:cell><ns0:cell cols='4'>99.90 99.89 99.90 99.99</ns0:cell></ns0:row><ns0:row><ns0:cell>ECG Beat</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yildirim (Yildirim</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>DULSTM-WS2 99.25</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Oh et al. 
(Oh et al.</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>1D-CNN-LSTM 98.10</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='2'>97.50 98.70</ns0:cell></ns0:row><ns0:row><ns0:cell>2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yildirim et al.</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>99.23</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='2'>99.00 99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>(Yildirim et al.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chen et al. (Chen et</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='2'>1D-CNN-LSTM 99.32</ns0:cell><ns0:cell cols='2'>97.75 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>al. 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gao et al. (Gao et</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>LSTM, FL</ns0:cell><ns0:cell>99.26</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='2'>99.26 99.14</ns0:cell></ns0:row><ns0:row><ns0:cell>al. 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rajkumar et al.</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>1D-CNN</ns0:cell><ns0:cell>93.60</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>(Rajkumar,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ganesan, and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lavanya 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Nurmaini et al.</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>DAE-DNN</ns0:cell><ns0:cell>99.73</ns0:cell><ns0:cell cols='4'>91.20 93.60 99.80 91.80</ns0:cell></ns0:row><ns0:row><ns0:cell>(Nurmaini et al.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Our work</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>1D-CNN</ns0:cell><ns0:cell>99.87</ns0:cell><ns0:cell cols='4'>96.97 99.89 92.23 94.39</ns0:cell></ns0:row><ns0:row><ns0:cell>ACC, accuracy;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64082:1:1:NEW 14 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
" Intelligent System Research Group Universitas Sriwijaya October 3rd. 2021 Dear Editors Thank you for giving us the opportunity to submit a revised draft of our manuscript to PeerJ Computer Science. We would like to thank the reviewers for encouraging comments. It helps us make this work clearer to understand, and the constructive comments helped us improve the quality of the manuscript. We appreciate the time and effort that you have dedicated to providing your valuable feedback on our manuscript. We are grateful to you for your insightful comments on our paper. We have been able to incorporate changes to reflect most of the suggestions provided by the reviewers. We have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewer’ comments and concerns. We are uploading (a) our point-by-point response to the comments (below) (response to Editor and reviewers), (b) an updated manuscript with track changes enabled, and (c) a clean updated manuscript without highlights. Prof. Siti Nurmaini Corresponding Author On behalf of all authors. Editor The article has some good content but the reviewers highlighted some important issues that need to be solved. Please address the points raised by the reviewers and prepare a new version. In the binary classification results section, please include the results measured through the Matthews correlation coefficient (MCC). Author response: Thank you for the concern and suggestion. however, our research is multi-class classification with 24-class of ECG signal from five database. By using 24-class of ECG signal, we separated with two scenarios of analysis, based on rhythm and beat feature. Author action: To justify the experiment that we conduct, the confusion matrix for ECG signal classification with rhythm and beat feature are added in the revised manuscript. Please refer to figure 6 and 9 ECG signal classification with rhythm feature ECG signal classification with beat feature These two figures were used to justify the performance of our proposed method under imbalanced data condition. ROC and P-R Curve were mostly used on binary classification cases. However, what we are doing is not binary classification, but multi-class classification. Hence, we added multi-class Confusion Matrix for the rhythm and beat feature. Reviewer 1 Basic reporting Concern 1# The manuscript is well-written, has good presentation, and thoroughly covers the literature. Thank you for your appreciation. Experimental design Concern 2# The manuscript should be clearer in stating the novelty of the proposed method versus the existing state-of-the-art. Also, the architecture of the proposed network should be more clearly presented. Author response: Thank you for your critical point. The novelty of our proposed method is we present a methodology using single architecture of DL to classify the ECG signal pattern occurred in ECG in the form of beat and rhythm feature. Unlike others methodologies that treat beat and rhythm separately, our approach enables both forms to proceed on a single deep learning architecture. The methodology includes, database selection signal denoising, beat and rhythm segmentation, feature extraction and classification also the proposed model evaluation. By using this approach, the ECG pattern that occurred both in beats and rhythms feature can be detected only using single architecture. Author action: This statement has added into the revised manuscript. 
Please kindly check lines 178-193.

Validity of the findings
Concern 3# Results seem good, and the experiments use plenty of databases. However, the comparison with the state-of-the-art is flawed and does not suffice to assess the relative quality of the proposed method. The most promising literature approaches should be tested in the same conditions as the proposed method to directly compare their performance results.
Author response: We appreciate this concern and have double-checked the comparison studies used in the paper. All of the state-of-the-art studies use the same datasets as we do. It is nevertheless difficult to claim that our evaluation was run under exactly the same conditions as the other studies. First, our study covers more classes, which makes the evaluation more complex and is directly related to our contribution. Moreover, even with the same datasets and the same training/testing ratios, the selected data will differ because different random seeds are used.

Additional comments
Concern 4# The exact contribution of this work is not very clear. Throughout the abstract and introduction, the authors state that the literature is composed of either rhythm or beat-based methods, which leads me to think that this would be a hybrid method combining rhythm and beat features. However, both tasks are addressed separately, which was already studied in (Nurmaini 2020) and (Tutuko 2021). The introduction should state, more clearly, what is the difference between this and the previous works.
Author response: Thank you for this critical question.
Author action: We revised the contributions and the abstract, and updated the methodology to make the contribution clearer; please refer to lines 119-129. This study and the proposed approach make the following novel contributions:
• It proposes a generalisation framework of deep learning for ECG signal classification with high accuracy under intra- and inter-patient schemes;
• It develops a single DL architecture for classifying ECG signal patterns based on both the rhythm and beat features with simple segmentation;
• It validates the proposed framework on five public ECG datasets with different sampling frequencies and massive amounts of data; and
• It experiments with 24 classes of abnormality found in the ECG signal, consisting of nine classes based on the rhythm feature and 15 classes based on the beat feature.
The aim of the present study is therefore to propose a single DL architecture that classifies ECG rhythm and beat abnormalities through a rhythm and beat assessment. Rather than treating ECG beats and rhythms separately, we process both in the same framework, so only a single DL architecture is needed. The major differences from our previous work (Nurmaini 2020) and (Tutuko 2021), besides the framework itself, are the labels: the earlier papers classified three classes (Normal, AF, and Non-AF), whereas this study uses 24 classes of ECG pattern, which increases the complexity. Finally, this study does not combine rhythm and beat features into one input, as that would run counter to clinical practice.

Concern 5# The figure 3, depicting the CNN architecture, needs to be improved.
As it stands, it seems there is a FC neuron (in the bottom) for each convolutional layer, and somehow information flows forward and backward through the model using those neurons. Also, it is not clear what the orange squares represent.
Author response: Thank you for this concern. We have updated Figure 3 so that it illustrates the proposed method more clearly and is easier to understand.
Author action: We revised Figure 3.
[Figures: Figure 3 before revision and Figure 3 after revision]

Concern 6# Were the state-of-the-art methods (in Table 7) evaluated with the same datasets as the proposed method? And the same data train-test splits? If not, and since they even consider different number of target classes, the results are not really comparable. Hence, we do not have a robust way to assess if the proposed method is, in fact, better than the state-of-the-art. Authors should implement a couple of the most promising literature methods and test them in the same conditions as the proposed method.
Author response: Thank you for this concern. We have double-checked the comparison studies used in the paper: all of the state-of-the-art studies in Table 7 use the same datasets as we do. Even so, with the same datasets and the same training/testing ratios, the selected data will differ because of the different random seeds used. Our study also covers more classes than the others, which makes our evaluation more complex and is related to our contribution. In addition, it is difficult to reproduce several studies under exactly the same conditions because of technical differences such as the programming language, libraries, and machine (computer) specifications.

Concern 7# In figure 6, since the AUC is very close to 1, perhaps it would be useful to use log-log scale axes for the ROC curve. Or, there could be a box zooming in on the [1.0, 1.0] area, so readers can see clearly the difference between the curves.
Author response: We appreciate this concern. Because our study is a multi-class classification task, we added confusion matrices, in addition to the ROC and precision-recall curves, to measure classifier performance.
Author action: Please see Figures 6 and 9 for the rhythm and beat confusion matrices, respectively. We also added statements on the confusion-matrix analysis on page 9, lines 317-325 (Figure 6) and page 10, lines 375-383 (Figure 9).
[Figures 6 and 8 reproduced here]

Reviewer 2: Akinori Higaki

Basic reporting
Concern 1# In this manuscript, Darmawahyuni and colleagues evaluated the performance of 1D-CNN model on ECG classification. The authors reported that their model demonstrated high classification performance on both rhythm and beat abnormalities. The reviewer agrees with the clinical importance of simultaneously identifying ECG waveforms and rhythms, but unfortunately the authors' method fails to achieve their goal. The authors stated that they were able to classify 24 heart abnormalities (9 rhythms and 15 beats), but in reality, they only repeated the binary classification 24 times. This is the main drawback of this study.
Author response: We appreciate this concern; however, what we perform is multi-class classification, not a repeated binary classification. The DL model is not trained on a normal/abnormal pair of classes but on all classes of the ECG signals in the five datasets, which are analysed separately through the rhythm and beat features.
The proposed model outputs nine classes when rhythm segmentation is used and 15 classes when beat segmentation is used, and the full classification results can be seen in the confusion matrices.

ECG signal classification based on the rhythm feature: we segmented the ECG signal into episodes of 2,700 nodes and generated the features for the nine rhythm classes. A length of 2,700 nodes contains at least two R-R intervals between consecutive beats in all records, despite their different sampling frequencies, and may contain more than two R-R intervals at the minimum sampling frequency of 128 Hz, for the training/validation (intra-patient) and testing (inter-patient) sets.

ECG signal classification based on the beat feature: single-beat features typically contain only one R-peak, whereas features that depend on at least two beats carry more information than a single R-peak. The beat-segmentation waveform shows the positions of the P-wave, QRS-complex, and T-wave, which are all closely tied to the location of the R-peak. For the 15 beat classes, we segmented each beat with a duration t1 of 0.25 seconds before the R-peak and t2 of 0.45 seconds after it, giving a total length of 0.7 seconds; at a sampling frequency of 360 Hz this corresponds to 252 nodes and covers the P-wave, QRS-complex, and T-wave of one beat.

Author action: Please kindly check the revised section structure as follows:
1. ECG signal classification based on the rhythm feature: lines 224-237
2. ECG signal classification based on the beat feature: lines 238-250
3. Confusion matrix for the rhythm feature: Figure 6
4. Confusion matrix for the beat feature: Figure 9

Concern 2# As a side note, the reviewer believes that confusion matrices, not ROC curves, should be used to evaluate the performance of multiclass classification.
Author response: We apologise for presenting the ROC and precision-recall curves, which are mainly used for binary classification problems; although they are defined for binary problems, in this study we extended them to evaluate the multi-class problem as well.
Author action: We therefore added confusion-matrix figures; please see the rhythm and beat confusion matrices in Figures 6 and 9, respectively. We also added statements on the confusion-matrix analysis on page 9, lines 317-325 (Figure 6) and page 10, lines 375-383 (Figure 9).
[Figures: ECG signal classification based on the rhythm feature; ECG signal classification based on the beat feature; confusion matrices of ECG signal classification based on the rhythm and beat features]
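For concreteness, the short sketch below shows how the five reported metrics can be read off a multi-class confusion matrix in a one-vs-rest fashion. It is only an illustration, not code from the manuscript; the labels, predictions, and class indices are hypothetical.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])   # hypothetical ground-truth classes
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 1])   # hypothetical model predictions

cm = confusion_matrix(y_true, y_pred)          # rows: true class, columns: predicted class

for k in range(cm.shape[0]):                   # one-vs-rest statistics per class
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sen = tp / (tp + fn) if tp + fn else 0.0   # sensitivity (recall)
    spe = tn / (tn + fp) if tn + fp else 0.0   # specificity
    pre = tp / (tp + fp) if tp + fp else 0.0   # precision
    f1 = 2 * pre * sen / (pre + sen) if pre + sen else 0.0
    acc = (tp + tn) / cm.sum()                 # per-class (one-vs-rest) accuracy
    print(f"class {k}: ACC={acc:.3f} SEN={sen:.3f} SPE={spe:.3f} PRE={pre:.3f} F1={f1:.3f}")
```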
Concern 3# The definition of the classifications for cardiac rhythm is also incorrect. According to the authors, the rhythm abnormality includes "Myocardial Infarction", "Myocarditis", "Heart Failure", "Valvular heart disease", "Hypertrophy" and "Cardiomyopathy", none of these are abnormalities in heart rhythm. Furthermore, since myocardial infarction and heart failure often coexist, and valvular disease and cardiac hypertrophy can also coexist, classifying them as separate classes (based on ECG alone) is obviously a wrong approach.
Author response: Thank you for this concern. We have tried to present the aim of the manuscript more clearly; please kindly check the substantial changes to the manuscript. Our research is a multi-class classification of 24 ECG classes from five databases. To classify the 24 classes, we separate the data into two scenarios, based on the rhythm and beat features, for deeper analysis. In the revised paper we no longer name the types of abnormality; we focus only on recognising the ECG pattern based on rhythm and beat, and on the resulting true-positive, true-negative, false-positive, and false-negative values.
Author action: We revised the manuscript; please kindly check lines 55-107. Although the acquisition of ECG recordings is well standardised, human interpretations of ECG recordings vary widely because of differences in experience and expertise. To minimise these constraints, computer-generated interpretations have been used for many years; however, because such interpretations rely on predetermined rules and limited feature-recognition algorithms, they do not always capture the complexities and nuances contained in the ECG (Siontis et al. 2021). The ECG by itself is therefore often insufficient to diagnose several heart abnormalities. Myocardial infarction (MI), for example, is reflected in ST-segment deviation, which may also occur in other conditions such as acute pericarditis, left ventricular hypertrophy, left bundle-branch block, Brugada syndrome, and early repolarisation (Wang, Asinger, and Marriott 2003). Because of this, automatic MI diagnosis by the rule-based inference systems of conventional ECG machines has low reliability, and in practice cardiologists cannot diagnose it from the ECG record alone (Daly et al. 2012)(Cho et al. 2020). Furthermore, traditional methods for diagnosing heart abnormalities from the 12-lead ECG are difficult to apply in wearable devices (Walsh, Topol, and Steinhubl 2014)(Cho et al. 2020), and the wide variability in ECG morphology between patients poses major challenges. Heart-abnormality analysis from the ECG signal can be conducted using rhythm and beat features, and several methods have been proposed in previous studies. Deep learning (DL) is a type of artificial intelligence that can learn and extract meaningful patterns from complex raw data; it has recently been widely used to analyse ECG signals for diagnosing arrhythmia, heart failure, myocardial infarction, left ventricular hypertrophy, valvular heart disease, age, and sex from the ECG alone, with good results (Makimoto et al. 2020)(Attia et al. 2019)(Hannun et al. 2019)(Kwon et al. 2020(a))(Kwon et al. 2020(b))(LeCun, Bengio, and Hinton 2015). DL performs excellently over a relatively short period of time, and its ability to represent features at an abstract level is much better than that of general machine learning: a DL model can automatically extract a hierarchical representation of the raw data and then use the last stacked layers to reduce complex features to simpler ones (Khan and Yairi 2018). In previous studies, ECG classification based on heart rhythm has been conducted with several morphological features of the ECG signal, such as ST-elevation and depression, T-wave abnormalities, and pathological Q-waves (Ansari et al. 2017). Moreover, a variety of ECG rhythm features, such as the RR, ST, PR, and QT intervals, have been used to automatically detect heart abnormalities over the past decade (Gopika et al. 2020).
Unlike rhythm-based classification, the classification of irregular heartbeats, whether faster or slower than normal or with malformed waveforms, can be improved by using the beat feature (Khalaf, Owis, and Yassine 2015). For heartbeat classification, the ECG pattern may be similar for different patients with different heartbeats and may differ for the same patient at different times; ECG-based heartbeat classification is essentially a problem of temporal pattern recognition and classification (Zubair, Kim, and Yoon 2016)(Dong, Wang, and Si 2017). Given these examples, the variety of abnormal ECG signals must be handled specifically, either through rhythm or through beat features. The challenge of analysing ECG patterns does not stop there: ECG signals have small amplitudes and short durations, measured in millivolts and milliseconds respectively, and large inter- and intra-observer variability influences their perceptibility (Lih et al. 2020). Analysing thousands of ECG signals is time-consuming, and the risk of misreading vital information is high. Automated diagnostic systems can use computerised recognition of heart abnormalities based on rhythm or beat to overcome these limitations and could become a standard procedure for clinicians classifying ECG recordings. Hence, the present study proposes a single DL architecture for classifying ECG patterns using both rhythm and heartbeat features; rather than treating beats and rhythms separately, we process both in the same framework, so a single DL architecture classifies the ECG signal with high accuracy.

Concern 4# In such a complex task, it is impossible for all classification models to achieve AUROC 1.0, as shown in Figure 6.
Author response: Thank you for this concern. All datasets used in this paper are publicly published; most come from PhysioNet, which is managed by members of MIT's computational physiology laboratory and is well known as one of the largest providers of ECG records, widely used in leading research worldwide. We used the datasets as they are, without any additions or reductions. We apologise for the mistakes in the abnormality class names; the names we used followed the instruction manuals provided with the datasets.
Author action: To support the ROC and precision-recall results, we added confusion matrices for ECG signal classification with the rhythm and beat features in Figures 6 and 9, respectively; please also refer to lines 317-325. The confusion matrix of the validation model shows that false negatives occur only in the BBB class (4 records), the HC class (2 records), and the HF class (3 records). Relative to the size of the validation set, these few false negatives leave each class's sensitivity above 99%, so the ROC curves hug the top-left corner and the AUC tends to 1, which is why the curves appear essentially perfect.
[Figures: Figure 6 (ROC curves) and Figure 6 (confusion matrix for ECG signal classification based on the rhythm feature)]

Concern 5# The reviewers speculate that these mistakes may be partly due to the lack of clinicians among the authors. This manuscript needs to be radically revised from the research design.
Author response: Thank you for this critical point; we appreciate and understand the concern.
For your information, the research we conduct has always involved cardiac and vascular clinicians from Muhammad Hoesin General Hospital (RSMH) in Indonesia, particularly in preparing data taken from patients at RSMH. In this study, however, we used ECG data from public, third-party datasets that are commonly used in cardiac research, so clinicians were not involved in the data preparation, although they were fully involved in the experiments. We apologise for the incorrect definitions of the beat and rhythm features in this study: we do not classify heart-rhythm or heartbeat abnormalities as such, but classify ECG signal patterns using the rhythm and beat features retrieved from the five databases, since the ECG signal pattern can be recognised and classified through these two features; in this way the ECG signal analysis is more complete within a single architecture. We rearranged and substantially revised the research background and state of the art, but the experiments are unchanged, and the evaluation shows that the proposed model performs well.
Author action: We revised the manuscript.

Experimental design
Concern 6# Experimental design is inappropriate as described above.
Author response: Thank you for this concern. We have tried to address the experimental design; please see our explanations.
Author action: We revised the manuscript; please refer to lines 198-288. In this study we conducted a comprehensive experiment whose methodology can be summarised as follows.
1. Database selection. A total of 168,472 rhythm episodes and 110,082 beat episodes were used as ECG features for training, validation, and testing (as unseen data). The 1D-CNN architecture was used to classify nine classes from rhythm-feature segmentation and 15 classes from beat-feature segmentation of the ECG signal. The available single-lead standard ECG recordings have different signal lengths and sampling frequencies (128, 250, 500, and 1000 Hz).
2. Pre-processing. The ECG signal can be corrupted during acquisition by different types of artifact and interference, such as muscle contraction, baseline drift, electrode contact noise, and power-line interference (Sameni et al. 2007)(Tracey and Miller 2012)(Wang et al. 2015). To achieve accurate analysis and diagnosis, this undesirable noise should be removed from the ECG. This study used the discrete wavelet transform (DWT), a frequently used denoising technique that is a useful option for ECG signals, and tested several wavelet families, such as symlets, Daubechies, Haar, biorthogonal, and coiflets, to determine which wavelet gives the best denoising result. Based on the highest signal-to-noise ratio (SNR), the symlet wavelet was the best DWT parameter and was chosen for ECG signal denoising, as sketched below.
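As an illustration of this denoising step, the sketch below applies wavelet-threshold denoising with PyWavelets and compares wavelet families by an SNR surrogate. It is not the authors' code: the symlet order ('sym8'), the decomposition level, the soft universal threshold, and the way SNR is computed are all assumptions, since the letter only states that a symlet gave the highest SNR.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="sym8", level=6):
    """Wavelet-threshold denoising: decompose, shrink the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest details
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold (assumption)
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def snr_db(raw, denoised):
    """Crude SNR surrogate: denoised-signal power vs. power of the removed component."""
    removed = raw - denoised
    return 10 * np.log10(np.sum(denoised ** 2) / np.sum(removed ** 2))

record = np.random.randn(2700)                               # placeholder for a real ECG episode
families = ["sym8", "db4", "haar", "bior3.5", "coif3"]       # example members of the families tested
best = max(families, key=lambda w: snr_db(record, dwt_denoise(record, w)))
```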
3. ECG signal segmentation. The aim of ECG segmentation is to divide a signal into parts with similar statistical properties, such as amplitude, number of nodes, and frequency. The presence, timing, and length of each segment of an ECG signal have diagnostic and biophysical significance, and the various sections of an ECG signal have distinct physiological meanings (Yadav and Ray 2016); segmenting the signal therefore also allows it to be analysed accurately. The segmentation of the ECG features for rhythm and beat classification proceeds as follows (a sketch of both procedures is given after this list):
• ECG rhythm segmentation produces features from the entire ECG recording in episodes of 2,700 nodes, regardless of the sampling frequency (128, 250, 500, or 1000 Hz). In our previous work (Nurmaini et al. 2020) we successfully segmented AF episodes into 2,700 nodes, so in this study we generated the features for the nine normal/abnormal rhythm classes in the same way. A length of 2,700 nodes contains at least two R-R intervals between consecutive beats in all records, and may contain more than two R-R intervals at the minimum sampling frequency of 128 Hz, for the training, validation, and unseen sets; the best ECG episodes were therefore chosen from the 2,700-node segmentation. The process of ECG rhythm classification is illustrated in Figure 4(a): every ECG recording is segmented into episodes of 2,700 nodes, and if fewer than 2,700 nodes remain, the signal is extended with zeros (zero-padding).
• ECG beat segmentation intercepts a number of nodes in the signal to capture not only consecutive heartbeats but also the waveforms contained in each beat (Qin et al. 2017). Single-beat features typically contain only one R-peak, whereas features that depend on at least two beats carry more information than a single R-peak. The beat-segmentation waveforms are presented in Figure 4(b), which shows the positions of the P-wave, QRS-complex, and T-wave, all closely tied to the location of the R-peak. Following (Qin et al. 2017)(Chang et al. 2012)(Nurmaini et al. 2019), the average heart rate is between 60 and 80 beats per minute, the duration t1 is 0.25 seconds before the R-peak, and t2 is 0.45 seconds after it, for a total length of 0.7 seconds; at a sampling frequency of 360 Hz this amounts to 252 nodes and covers the P-wave, QRS-complex, and T-wave of one beat.
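The sketch below, referenced in the list above, illustrates both segmentations. It is not the authors' code; the R-peak positions, padding convention, and boundary handling are assumptions made only for illustration.

```python
import numpy as np

def rhythm_episodes(signal, length=2700):
    """Cut a record into fixed 2,700-node episodes; the last one is zero-padded."""
    episodes = []
    for start in range(0, len(signal), length):
        ep = signal[start:start + length]
        if len(ep) < length:
            ep = np.pad(ep, (0, length - len(ep)))            # zero-padding, as described above
        episodes.append(ep)
    return np.stack(episodes)

def beat_segments(signal, r_peaks, fs=360, t1=0.25, t2=0.45):
    """Take 0.25 s before and 0.45 s after each R-peak: 90 + 162 = 252 nodes at 360 Hz."""
    before, after = int(t1 * fs), int(t2 * fs)
    beats = [signal[r - before:r + after]
             for r in r_peaks
             if r - before >= 0 and r + after <= len(signal)]  # drop beats too close to the edges
    return np.stack(beats)
```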
4. Feature extraction and classification. The 1D-CNN classifier proposed in (Nurmaini et al. 2020) for AF detection was generalised here for normal/abnormal rhythm and beat classification. It uses the rectified linear unit (ReLU) activation, 13 convolutional layers (64, 128, 256, and 512 filters), and five max-pooling layers, followed by two fully connected layers of 1,000 nodes each and an output layer sized to the target classes. The 1D-CNN takes a three-dimensional input consisting of samples, features, and timesteps. In detail (a sketch of the architecture follows this list):
• For ECG rhythm classification, input timesteps of dimension 2700 x 1 are fed into convolutional layers equipped with the ReLU activation function. The first and second convolutional layers produce 64 feature maps with a kernel size of 3, and their output passes through a max-pooling layer with a kernel size of 2 for feature reduction. The output of the first max-pooling layer feeds the third and fourth convolutional layers, which produce 128 feature maps; the subsequent convolutional layers, up to the fifth group and the last layers, produce 256 and 512 feature maps, respectively, all with a kernel size of 3. The output of the last convolutional layer is then passed to the two fully connected layers of 1,000 nodes, and the architecture produces the nine-class ECG rhythm output.
• ECG beat classification follows the same feature-interpretation process; the only differences are the input timesteps of dimension (252, 1) and the 15-class output. The architecture likewise uses ReLU activation with 64, 128, 256, and 512 filters and a kernel size of 3, and each max-pooling layer uses a kernel size of 2.
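The sketch below, referenced above, is one possible Keras realisation of the described network. The letter specifies 13 convolutional layers with 64/128/256/512 filters, kernel size 3, five max-pooling layers with pool size 2, two dense layers of 1,000 nodes, and nine- or 15-class outputs; the exact 2-2-3-3-3 block layout, the 'same' padding, the softmax head, and the optimiser and loss below are assumptions that merely match those counts and are not stated in the letter.

```python
from tensorflow.keras import layers, models

def build_cnn(input_length, n_classes):
    m = models.Sequential()
    m.add(layers.Input(shape=(input_length, 1)))              # (timesteps, features)
    # Assumed VGG-style layout: 2 + 2 + 3 + 3 + 3 = 13 Conv1D layers, 5 max-pooling layers.
    for filters, repeats in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
        for _ in range(repeats):
            m.add(layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu"))
        m.add(layers.MaxPooling1D(pool_size=2))
    m.add(layers.Flatten())
    m.add(layers.Dense(1000, activation="relu"))
    m.add(layers.Dense(1000, activation="relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))      # 9 rhythm classes or 15 beat classes
    m.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return m

rhythm_model = build_cnn(2700, 9)    # rhythm branch: (2700, 1) input
beat_model = build_cnn(252, 15)      # beat branch: (252, 1) input
```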
5. Model evaluation. Classification of the ECG signal based on the rhythm and beat features is evaluated with intra- and inter-patient schemes, which are used to resemble a clinical environment and to ensure the robustness of the proposed model. Five common metrics are used: accuracy, sensitivity, specificity, precision, and F1-score. In addition, two measures often considered for evaluating classification performance on imbalanced data, the receiver-operating-characteristic (ROC) and precision-recall (P-R) curves, are reported, because overall accuracy is distorted by the majority-class results: the beat-type classes in the available dataset are extremely imbalanced.

Validity of the findings
Concern 7# Findings are unreliable as described above.
Author response: Thank you for this concern; we have addressed it throughout the manuscript. Our validation follows the applicable standards for testing deep-learning models:
1. Classification of the ECG signal based on the rhythm and beat features is evaluated with intra- and inter-patient schemes: intra-patient using cross-fold validation, and inter-patient using a separate dataset not included in the cross-validation process (a brief illustrative sketch of such a split is given at the end of this letter). This scheme resembles a clinical environment and ensures the robustness of the proposed model.
2. The five common metrics used in this study, accuracy, sensitivity, specificity, precision, and F1-score, are all taken from the confusion matrix (please refer to lines 282-283).
3. The ROC and precision-recall curves are also reported, because overall accuracy is distorted by the majority classes given that the beat-type classes are extremely imbalanced in the available dataset (please refer to lines 283-288 and 375-383).
Author action: We revised the manuscript. Please refer to lines 282-288 and 375-383.
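Finally, the sketch below illustrates the intra- versus inter-patient evaluation mentioned in point 1 above. It is only a schematic: the array shapes, record identifiers, fold count, and hold-out ratio are hypothetical, and in the letter the inter-patient test actually uses a separate, unseen dataset rather than a grouped split of a single dataset.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, GroupShuffleSplit

X = np.random.randn(1000, 252)                     # placeholder beat segments
y = np.random.randint(0, 15, size=1000)            # placeholder class labels
record_id = np.random.randint(0, 40, size=1000)    # which recording each segment came from

# Intra-patient: stratified k-fold cross-validation; segments from the same recording
# may appear in both the training and validation folds.
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    pass  # fit and evaluate the model here

# Inter-patient: hold out whole recordings so that tested patients are never seen in training.
train_idx, test_idx = next(
    GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0).split(X, y, groups=record_id)
)
```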